The State of Logging on Docker

Last week I wrote a post on how to log from Docker containers. In short, I had been playing around with Docker and wanted to figure out how to get log data from my containers into my Logentries account. I tried the following approaches:

  • Docker ‘logs’: Docker has a ‘logs’ command – $ docker logs Container_ID – that will fetch the logs from a container. You can run this via the Docker daemon on your host, and it will capture all the stdout/stderr from the process you’re running. I ran this in conjunction with the Logentries Agent, which was running on the host machine, to send any logs obtained via the docker ‘logs’ command into my Logentries account. The big drawback with the docker ‘logs’ command, however, is that it fetches the entire contents of the container’s log on each invocation – so this approach doesn’t work very well if you want to stream all your Docker logs to a centralized logging service/server.
  • Creating a Docker image with rsyslog installed: As an alternative approach, Chris Mowforth, one of our engineers at Logentries, put together a Docker image with rsyslog preinstalled. This allowed me to ship the logs in real time from the container directly to Logentries – nice!
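To make the first approach concrete, here is a rough sketch of what I was doing (the container name and log path are hypothetical, and the Logentries Agent is assumed to be configured to follow the output file):

```shell
# Fetch everything the container has written to stdout/stderr.
# Note: each invocation returns the full log from the beginning,
# which is why this doesn't scale well for streaming.
docker logs my_app_container > /var/log/docker/my_app.log

# The Logentries Agent running on the host follows this file and
# ships new entries to Logentries.
```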

Interestingly, I got some great feedback from the Docker community on this post, much of it asking why I didn’t run rsyslog outside the container, on the host. To do this, you can bind a volume to the container and then write logs from your process to that mount point.

Logging on Docker

This approach means I do not need to run any other processes inside my container and can capture logs from all containers via a single rsyslog daemon. Binding a volume to the container is described in more detail in a nice post on Tim Gross’ blog. Tim does call out a security concern with this approach, however – “attaching a volume from the host slightly weakens the LXC security advantages. The contained process can now write outside its container and this is a potential attack vector if the contained process is compromised.”
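Concretely, the bind-mount approach looks something like this (the image name and process are placeholders):

```shell
# Share the host's syslog socket with the container, so the contained
# process logs straight to the single rsyslog daemon on the host:
docker run -v /dev/log:/dev/log my_image my_process

# Or bind a host directory and have the process write log files into it,
# where rsyslog (or an agent) on the host can pick them up:
docker run -v /var/log/containers/app1:/var/log my_image my_process
```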

That said, according to the baseimage-docker guys, “there is no technical reason why you should limit yourself to one process – it only makes things harder for you and breaks all kinds of essential system functionality, e.g. syslog.” In fact, baseimage-docker ships with syslog-ng preinstalled so that important system messages do not get lost. I have not had the chance to hook this up to Logentries yet, but I intend to soon, so keep an eye out here!
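For reference, a container based on baseimage-docker runs its own lightweight init process, which in turn starts syslog-ng inside the container; something like the following is enough to see it in action (the tag is illustrative):

```shell
# Start a container from the baseimage-docker image; its init system
# (my_init) brings up syslog-ng inside the container automatically,
# so syslog messages from contained processes are not lost.
docker run -d phusion/baseimage
```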

After looking around further, I also found that another approach is to run a log collector inside a container on your host machine and expose a named pipe as a volume that other containers on that host can mount and write to. There’s a great post on HNWatcher on how to run a logstash-forwarder in a container, which will allow you to collect and forward logs from any other container on the same host.
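This collector-container approach can be sketched as two containers sharing a volume (all names here are hypothetical – the collector image would wrap something like logstash-forwarder or the Logentries agent):

```shell
# 1. Start a collector container that owns a shared log volume.
docker run -d --name log_collector -v /var/log/shared collector_image

# 2. Other containers on the host mount that volume and write into it.
docker run -d --volumes-from log_collector my_app_image

# The collector tails everything under /var/log/shared and forwards it
# to the centralized logging service.
```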

So, in summary, if you are trying to get at your Docker log data right now, you have a number of options:

  • Run syslog in the container (e.g. like with baseimage-docker) and use it to forward your logs to a centralized server/service
  • Run syslog ‘outside’ the container and bind a volume to the container – then write logs from your process to that mount point
  • Have another container take care of log collection, i.e. run a collector inside a container (e.g. the Logentries agent or logstash-forwarder) and expose a volume that other containers on the host can write to.

All that being said, keep your eyes peeled. It looks like there is more coming down the track from the Docker developer community – check out some interesting discussions on the docker-dev mailing list about ideas for improving access to container logs.

3 comments on “The State of Logging on Docker”
  1. Greg Weber says:

    There is another alternative that I use to ship logs from Docker to Logentries which keeps things decoupled and reduces some security concerns: pipe docker run output to logger.

    docker run … | logger

    And use the logentries information to ship syslog. Maybe there is a good reason not to do things this way?

  2. A disadvantage of binding to /dev/log and running syslogd outside the container is that all your Docker log entries will appear to originate from your host. If you are running the same process in more than one Docker container, you can’t tell which one wrote the log entry you’re reading.

    On the other hand, it does work really well.

  3. Option number 3 is certainly the most “docker” way of doing it and my first choice.

    You can keep one container per process (which makes everything simpler to reason about) and you don’t have to deal with the rsyslog dependency in your system (which could be crucial if you’re deploying to something like Kubernetes on GCE or Deis).
