I'm deploying a third-party application in compliance with the 12-factor methodology, one point of which says that application logs should be printed to stdout/stderr: the clustering software can then collect them.
However, the application can only write to files or syslog. How do I print these logs to stdout/stderr instead?
The approach: configure the application so it logs to a file, and continuously tail -f that file.
Luckily, tail accepts --pid PID: it will exit when the specified process exits. We put $$ there: the PID of the current shell.
As a final step, the application is launched with exec, which means the current shell is completely replaced with that application. The PID stays the same, so when the application exits, tail exits with it.
NOTE: by using tail -F we follow files by name, so it will read them even if they only appear later!
Finally, the minimalistic Dockerfile:
FROM ubuntu
ADD run.sh /root/run.sh
CMD ["/root/run.sh"]
Note: to work around some extremely strange tail -f behavior (it complains "has been replaced with a remote file. giving up on this name"), I tried another approach: all known log files are created and truncated on startup. This way I ensure they exist, and only then tail them:
I've just had to solve this problem with apache2 and wrestled with using CustomLog to try redirecting to /proc/1/fd/1, but couldn't get that working. In my implementation apache was not running as PID 1, so kolypto's answer didn't work as-is. Pieter's approach seemed compelling, so I merged the two, and the result works wonderfully:
# Redirect apache log output to docker log collector
RUN ln -sf /proc/1/fd/1 /var/log/apache2/access.log \
&& ln -sf /proc/1/fd/2 /var/log/apache2/error.log
Technically this keeps apache's access.log going to stdout and error.log to stderr as far as the Docker log collector is concerned. It would be great if there were a way to separate the two outside the container, though, like a switch for docker logs that would show only one or the other...
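There's no built-in docker logs switch for that, but since docker logs replays the container's stdout and stderr as separate streams, ordinary shell redirection can isolate one of them (the container name myapp is an assumption):

```shell
# Show only the access log (container stdout); drop stderr.
docker logs myapp 2>/dev/null

# Show only the error log (container stderr); drop stdout.
docker logs myapp >/dev/null
```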