How to forward Docker container logs to ELK?

I would like to know the easiest way to forward my Docker container logs to an ELK server; so far, none of the solutions I found while searching the internet have worked.

Basically, I have a Docker image that I run with docker-compose. The container does not log anything locally (it is composed of several services, but none of them is Logstash or anything similar), yet I can see its output with docker logs -tf containerName or docker-compose logs. Since I am starting the containers with Compose, I cannot make use (or at least I don't know how) of the --log-driver option of docker run.

Could someone enlighten me on how to forward those logs to, for example, an ELK container?

Thanks in advance,

Regards

SOLUTION:

Thanks to madeddie I was able to solve my issue as follows. Note that I used the basic ELK-stack-in-containers setup that madeddie suggested in his answer.

First, I updated the docker-compose.yml file of my application to add the logging entries madeddie described. I included one entry per service; a snippet of my docker-compose.yml looks like this:

version: '2'
services:
  mosquitto:
    image: ansi/mosquitto
    ports:
      - "1883:1883"   # Public access to MQTT
      - "12202:12202" # mapping logs
    logging:
      driver: gelf
      options:
        gelf-address: udp://localhost:12202
  redis:
    image: redis
    command: redis-server --appendonly yes
    ports:
      - "6379:6379"   # No public access to Redis
      - "12203:12203" # mapping logs
    volumes:
      - /home/dockeruser/redis-data:/data
    logging:
      driver: gelf
      options:
        gelf-address: udp://localhost:12203

Secondly, I had to use a different UDP port number per service in order to be able to forward the logs.

Finally, I updated the docker-compose.yml file of my ELK container to map each of the UDP ports I was sending my logs to onto the single port that Logstash listens on:

logstash:
  build: logstash/
  command: logstash -f /etc/logstash/conf.d/logstash.conf
  volumes:
    - ./logstash/config:/etc/logstash/conf.d
  ports:
    - "5000:5000"
    - "12202:12201/udp" #mapping mosquitto logs
    - "12203:12201/udp" #mapping redis logs

This configuration, together with adding a gelf {} entry to the input section of logstash.conf, made it work. It is also important to set up the IP address of the Docker service correctly in each gelf-address.
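
For reference, a logstash.conf along these lines should match this setup (a sketch: the tcp input and the elasticsearch output host follow the docker-elk defaults and are assumptions; the gelf input listens on 12201/udp inside the container by default, which is where the 12202 and 12203 mappings above end up):

input {
  tcp {
    port => 5000
  }
  gelf {
    # GELF input; defaults to 12201/udp inside the container
  }
}

output {
  elasticsearch {
    # assumes the elasticsearch service name from the docker-elk compose file
    hosts => "elasticsearch:9200"
  }
}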

Regards!

Docker Compose has the logging keyword: source

logging: 
  driver: syslog
  options: 
    syslog-address: "tcp://192.168.0.42:123"

So if you know where to go from there, go for it.

If not, I can advise you to look into the gelf logging driver for Docker and the Logstash gelf input plugin.

If you were, for instance, to use this basic ELK-stack-in-containers setup, you would update its docker-compose file and add the port - "12201:12201/udp" to the logstash service.
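
The logstash service in that stack's docker-compose file would then look roughly like this (a sketch; the build path, command and volume mount follow the docker-elk layout and are assumptions, the added UDP port mapping is the only relevant change):

logstash:
  build: logstash/
  command: logstash -f /etc/logstash/conf.d/logstash.conf
  volumes:
    - ./logstash/config:/etc/logstash/conf.d
  ports:
    - "5000:5000"
    - "12201:12201/udp" # GELF input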

Edit the logstash.conf input section to:

input {
    tcp {
        port => 5000
    }
    gelf {
    }
}

Then configure your containers to use logging driver gelf (not syslog) and the option gelf-address=udp://ip_of_logstash:12201 (instead of syslog-address).

The only magic you will have to take care of is how Docker will find the IP address or hostname of the Logstash container. You could solve that through docker-compose naming, Docker links or just manually.
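
One simple way to handle it "manually" is to pass the address in through Compose variable substitution, so nothing is hard-coded in the file (LOGSTASH_HOST is a made-up variable name here; set it in the shell or an .env file before running docker-compose up):

logging:
  driver: gelf
  options:
    gelf-address: "udp://${LOGSTASH_HOST}:12201"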

Docker and ELK are powerful and flexible, but therefore also big and complex beasts. Prepare to put in some serious time reading and experimenting.

Don't be afraid to open new (and preferably very specific) questions about the issues you come across while exploring all this.

Do you configure the 12201 port per service? Because you shouldn't.

I don't completely understand. Do you mean that my logging option should be at the same level as the services tag instead of per container?

The logging keyword doesn't actually open a port to listen on; it configures where Docker connects to. I'm talking about the port mappings you make on each container: you can only open a port once, and you shouldn't open a port for logging on each container, just on the logstash one. The problem lies in your use of "localhost" as the host of the gelf endpoint. Instead of localhost you should use the IP of the machine where the Logstash container is running, or 172.17.0.1 if all the containers are on the same host, or "logstash" if all containers are started with the same compose file.
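
To illustrate, the application compose file could then drop the per-service log ports and point every service at the same address; a sketch assuming everything runs on one host, the ELK stack publishes 12201/udp, and 172.17.0.1 is the default docker0 bridge address mentioned above:

services:
  mosquitto:
    image: ansi/mosquitto
    ports:
      - "1883:1883" # only the application port; no log port needed
    logging:
      driver: gelf
      options:
        gelf-address: udp://172.17.0.1:12201
  redis:
    image: redis
    command: redis-server --appendonly yes
    logging:
      driver: gelf
      options:
        gelf-address: udp://172.17.0.1:12201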

@madeddie I am facing a small issue with that ELK stack: I can see that Kibana or Elasticsearch restarts every 2 hours or so. Have you seen this behaviour as well? If I set my images to the latest ones instead of the built ones, would this stack still work?

I know what you mean now. I think I tried that before and got an error that the port was already in use, but I can give it a second try. Thanks again.

@madeddie In order to put less load on my server, I wanted to change the port forwarding; however, I cannot manage to make it work. I added a line like gelf-address: udp://172.17.0.1:12201 to each service, and I get errors when I start the services because the port is already in use: ERROR: for redis driver failed programming external connectivity on endpoint ttnbackend_redis_1 : Bind for 0.0.0.0:12201 failed: port is already allocated ERROR: for broker driver failed programming external connectivity on endpoint ttnbackend_broker_1 : Bind for 0.0.0.0:12201 failed: port is already allocated

I tried to do so, but while running docker-compose up I saw a complaint that the port was already in use.

Yes, because you open a port for Logstash on all your containers, while you should only open it on the Logstash container. There is no need for all the other containers to listen for Logstash connections; therefore, they don't need to open a port for it.

Nowadays I keep using the "port forwarding" solution…