Make a Docker application write to stdout

I'm deploying a third-party application in compliance with the 12-factor methodology. One of its points says that application logs should be printed to stdout/stderr, so that clustering software can collect them.

However, the application can only write to files or syslog. How do I make it print these logs to stdout/stderr instead?

An amazing recipe is given in the nginx Dockerfile:

# forward request and error logs to docker log collector
RUN ln -sf /dev/stdout /var/log/nginx/access.log \
    && ln -sf /dev/stderr /var/log/nginx/error.log

This way, the app can keep writing to what it thinks is a file, while the lines actually end up on stdout and stderr!
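The same trick should work for any app that insists on a log file path; here is a sketch for a hypothetical /var/log/myapp/app.log (the path and directory are assumptions, not from the nginx image):

# hypothetical app: replace its log file with a symlink to stdout
RUN mkdir -p /var/log/myapp \
    && ln -sf /dev/stdout /var/log/myapp/app.log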

For a background process in a Docker container, e.g. when connecting to it with exec and /bin/bash, I was able to use:

echo "test log1" >> /proc/1/fd/1

This sends the output to the stdout of PID 1, which is the one Docker picks up.
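To verify from the host, a line written that way should show up in docker logs (the container name here is just an example):

# inside the container (e.g. via docker exec -it mycontainer /bin/bash):
echo "test log1" >> /proc/1/fd/1

# on the host:
docker logs mycontainer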

In another question, Kill child process when the parent exits, I got a response that helped sort this out.

This way, we configure the application to log to a file, and continuously tail that file. Luckily, tail accepts --pid PID: it will exit when the specified process exits. We pass $$ there: the PID of the current shell.
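A quick way to see the --pid behavior in isolation, with sleep standing in for the real application (file path is arbitrary):

touch /tmp/demo.log
sleep 3 &
# tail exits on its own once the watched process (sleep) is gone
tail --pid $! -F /tmp/demo.log
echo "tail exited together with sleep"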

As a final step, the application is launched with exec, which means the current shell is completely replaced by that application.

The runner script, run.sh, looks like this:

#! /usr/bin/env bash
set -eu

# Start fresh, then follow the log file in the background.
# tail exits on its own when the process with PID $$ exits;
# after the exec below, that PID belongs to the application.
rm -f /var/log/my-application.log
tail --pid $$ -F /var/log/my-application.log &

exec /path/to/my-application --logfile /var/log/my-application.log

NOTE: tail -F follows files by name (and retries), so it keeps reading even if the file does not exist yet or appears later!
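The difference from plain -f shows up when a file is rotated away (file names here are just examples):

tail -F /var/log/app.log &
mv /var/log/app.log /var/log/app.log.1     # rotate the file away
touch /var/log/app.log                     # tail re-opens the new file by name
echo "after rotation" >> /var/log/app.log  # ...and this line is still picked up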

Finally, the minimalistic Dockerfile:

FROM ubuntu
# run.sh must be executable in the build context (chmod +x run.sh)
COPY run.sh /root/run.sh
CMD ["/root/run.sh"]
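Building and running it, assuming the image is tagged my-application:

docker build -t my-application .
docker run -d --name my-application my-application
docker logs -f my-application   # the application's file logs stream here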

Note: to work around some extremely strange tail -f behavior (which says "has been replaced with a remote file. giving up on this name"), I tried another approach: all known log files are created and truncated on start-up. This way I ensure they exist, and only then tail them:

#! /usr/bin/env bash
set -eu

LOGS=/var/log/myapp/

# Create/empty every known log file so they exist before tailing
( umask 0 && truncate -s0 $LOGS/http.{access,error}.log )
tail --pid $$ -n0 -F $LOGS/* &

exec /usr/sbin/apache2 -DFOREGROUND
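For completeness, a Dockerfile that could wire this script up; the apache2 package name and paths are my assumptions, not from the original answer:

FROM ubuntu
RUN apt-get update && apt-get install -y apache2
COPY run.sh /root/run.sh
RUN chmod +x /root/run.sh
CMD ["/root/run.sh"]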

For nginx, you can have nginx.conf point to /dev/stderr and /dev/stdout like this:

user  nginx;
worker_processes  4;
error_log  /dev/stderr;
http {
    access_log  /dev/stdout  main;
...

and your Dockerfile entry should keep nginx in the foreground:

/usr/sbin/nginx -g 'daemon off;'
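Putting it together, a minimal Dockerfile sketch (assuming the nginx.conf above sits next to it; the base image is an assumption):

FROM nginx
COPY nginx.conf /etc/nginx/nginx.conf
CMD ["/usr/sbin/nginx", "-g", "daemon off;"]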

In my case, making a symbolic link to stdout didn't work, so instead I ran the following command:

ln -sf /proc/self/fd/1 /var/log/main.log 
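The difference is that /proc/self resolves to whichever process opens the link, so each writer lands on its own stdout. A quick way to check, using the same log path as above:

ln -sf /proc/self/fd/1 /var/log/main.log
echo "hello" >> /var/log/main.log   # appears on the writing shell's own stdout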

I've just had to solve this problem with apache2, and wrestled with using CustomLog to redirect to /proc/1/fd/1, but couldn't get that working. In my implementation, apache was not running as PID 1, so kolypto's answer didn't work as-is. Pieter's approach seemed compelling, so I merged the two, and the result works wonderfully:

# Redirect apache log output to docker log collector
RUN ln -sf /proc/1/fd/1 /var/log/apache2/access.log \
    && ln -sf /proc/1/fd/2 /var/log/apache2/error.log

Technically this keeps the apache access.log and error.log going to stdout and stderr as far as the docker log collector is concerned, but it'd be great if there were a way to separate the two outside the container, like a switch for docker logs that would show only one or the other...
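As it happens, docker logs replays the container's stdout and stderr on the matching descriptors of its own process, so plain shell redirection can separate them (container name assumed):

docker logs mycontainer 2>/dev/null       # stdout only: the access log
docker logs mycontainer 2>&1 >/dev/null   # stderr only: the error log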

You could have a daemon process use syslog and a front process which prints?

They are already going to syslog. You can just pick them up from there!

@MichaelHampton, seems fine, but Docker runs a single process that can write to stdout, and this sounds like combining two of them?

@qkrijger, good point! Now, if anyone has experience with it … ?