I would like to know the easiest way to forward my Docker container logs to an ELK server; the solutions I have tried so far after searching the internet haven't worked at all.
Basically I have a Docker image that I run using docker-compose. This container does not log anything locally (it is composed of different services, but none of them is Logstash or anything similar), yet I can see logging through docker logs -tf imageName or docker-compose logs. Since I am starting the containers with Compose, I cannot make use (or at least I don't know how) of the --log-driver option of docker run.
Thus I was wondering if someone could enlighten me a bit on how to forward those logs to an ELK container, for example.
Thanks in advance,
Regards
SOLUTION:
Thanks to madeddie I managed to solve my issue in the following way. Note that I used the basic ELK-stack-in-containers setup which madeddie suggested in his post.
First I updated the docker-compose.yml file of my project to add the logging entries, as madeddie told me; I included one entry per service. A snippet of my docker-compose looks like this:
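(The original snippet did not survive the copy-paste; below is a minimal sketch of what such per-service logging entries look like. The service names, images, and the Logstash address are placeholders, adapt them to your own stack.)

services:
  broker:
    image: my-broker-image                       # placeholder image
    logging:
      driver: gelf                               # send this container's logs via the GELF driver
      options:
        gelf-address: "udp://ip_of_logstash:12201"
  redis:
    image: redis
    logging:
      driver: gelf
      options:
        gelf-address: "udp://ip_of_logstash:12201"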
This configuration, together with adding the gelf {} entry in logstash.conf, made it work. It is also important to set up the IP address of the Logstash endpoint properly.
If you were, for instance, using this basic ELK-stack-in-containers setup, you would update its docker-compose file and add the port mapping - "12201:12201/udp" to the logstash service.
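For that stack, the logstash service would end up looking roughly like this (only the ports section is shown; the build/image line depends on which ELK stack you use, and 5000 is the TCP input this stack already exposes):

  logstash:
    build: logstash/              # or whatever image/build your stack defines
    ports:
      - "5000:5000"               # existing TCP input
      - "12201:12201/udp"         # new GELF input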
Edit the logstash.conf input section to:
input {
  tcp {
    port => 5000
  }
  gelf {
  }
}
Then configure your containers to use logging driver gelf (not syslog) and the option gelf-address=udp://ip_of_logstash:12201 (instead of syslog-address).
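For a container started by hand instead of through Compose, the same configuration can be passed with docker run flags, for example (the image name and the address are placeholders):

docker run \
  --log-driver gelf \
  --log-opt gelf-address=udp://ip_of_logstash:12201 \
  your_image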
The only magic you will have to take care of is how Docker will find the IP address or hostname of the Logstash container. You could solve that through docker-compose naming, Docker links or just manually.
Docker and ELK are powerful and flexible, but therefore also big and complex beasts. Prepare to put in some serious time reading and experimenting.
Don't be afraid to open new (and preferably very specific) questions for anything you come across while exploring all this.
The logging keyword doesn't actually open a port to listen on; it configures where Docker connects to. I'm talking about the port mappings you make on each container: you can only open a given port once, and you shouldn't open a port for logging on each container, just on the logstash one. The problem lies in your use of "localhost" as the host of the gelf endpoint. Instead of localhost you should use the IP of the machine where the Logstash container is running, or 172.17.0.1 if all the containers are on the same host, or "logstash" if all containers are started with the same compose file.
@madeddie I am facing a small issue with that ELK stack: I can see that Kibana or Elasticsearch restarts every 2 hours or so, have you seen this behaviour as well? If I set my images to the latest ones instead of the built ones, would this stack still work?
@madeddie in order to have less load on my server, I wanted to change the port forwarding, but I cannot manage to make it work. I added a line like gelf-address: udp://172.17.0.1:12201 to each service, and now I get errors when starting up the services because the port is already in use:

ERROR: for redis  driver failed programming external connectivity on endpoint ttnbackend_redis_1: Bind for 0.0.0.0:12201 failed: port is already allocated
ERROR: for broker  driver failed programming external connectivity on endpoint ttnbackend_broker_1: Bind for 0.0.0.0:12201 failed: port is already allocated
Yes, because you open a port for Logstash on all your containers, while you should only open it for the logstash container. There is no need for all the other containers to listen for Logstash connections; therefore, they don't need to open a port for it.
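In other words, a minimal sketch (compose v2 style, using the redis service from your stack as an example; adapt names and images): the 12201 port mapping lives only under the logstash service, while the application services only carry a logging section pointing at it.

services:
  logstash:
    # image/build as defined in your ELK stack
    ports:
      - "12201:12201/udp"        # only Logstash listens for GELF

  redis:
    image: redis
    logging:
      driver: gelf
      options:
        gelf-address: "udp://172.17.0.1:12201"
    # no 12201 ports entry here; this service only sends logs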