How to Centralize Logs from CoreOS Clusters

Containerization and microservice architectures commonly result in highly distributed systems with large numbers of dynamic, ephemeral instances that autoscale to meet demand. It's not uncommon to see clusters of thousands of container instances: where once there were tens of physical servers, there are now hundreds of (cloud) server instances.

Because containers are extremely lightweight, they allow code to run in isolation from other containers while safely sharing the machine's resources, all without the overhead of a hypervisor. Containers can therefore boot extremely quickly, giving you unprecedented flexibility in managing load across your cluster.

Containerization, however, also brings new challenges for system monitoring. Host machines and container instances are often ephemeral and numerous. Using SSH and grep for log access is completely infeasible in any reasonably sized system (not to mention a really bad practice).

As more and more IT and DevOps teams turn to microservice architectures and highly distributed systems, centralized logging becomes an even more critical requirement, providing visibility into system behavior, including application and operating system performance, error conditions, and security anomalies.

So if you are building your microservice architecture on top of CoreOS, how can you centralize your log data?

A challenge with modern, minimal operating systems like CoreOS is that they are designed to be very lightweight and often do not provide traditional monitoring or logging processes out of the box. For example, CoreOS does not ship with a package manager such as yum or apt, or many of the other common system elements you might expect coming from more desktop-oriented distributions. Instead, CoreOS uses systemd for process management. Systemd also provides a logging system called the journal, which can easily forward logs to Logentries (thanks to Kelsey Hightower at CoreOS).

The journal is a modern logging system that provides some nice capabilities such as JSON export, forward sealing, and indexing for fast querying. Forward sealing, for example, is particularly useful as an extra security measure because it lets you detect any tampering with log data by a potential attacker.
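To see the JSON export in action, you can run `journalctl -o json` on a CoreOS host, which prints one JSON object per line. As a self-contained sketch, the entry below is a hand-written sample (not output from a real host), filtered with a small sed expression:

```shell
# On a CoreOS host, `journalctl -o json -n 5` would print the five most
# recent journal entries, one JSON object per line. We use a sample
# entry here so the pipeline is self-contained.
entry='{"MESSAGE":"Accepted publickey for core","PRIORITY":"6","_SYSTEMD_UNIT":"sshd.service"}'

# Extract the originating systemd unit name from the entry.
echo "$entry" | sed 's/.*"_SYSTEMD_UNIT":"\([^"]*\)".*/\1/'
# prints: sshd.service
```

The `_SYSTEMD_UNIT`, `MESSAGE`, and `PRIORITY` fields are standard journal fields, which makes the exported JSON straightforward to parse downstream.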

journal-2-logentries is a journal extension that allows you to easily send log data from CoreOS to your Logentries account.

To use this extension, run the pre-configured journal-2-logentries Docker image with the docker command. Simply run the following to start the process in a new container:

sudo docker run -d -e 'LOGENTRIES_TOKEN=YOUR_LOG_TOKEN' -v /run/journald.sock:/run/journald.sock \
quay.io/kelseyhightower/journal-2-logentries

In the command above, replace YOUR_LOG_TOKEN with the token for the log in your Logentries account. The command prints the new container's ID and begins forwarding your logs to Logentries.

For a step-by-step guide on configuring journal-2-logentries, check out the Logentries docs page. Note that, to cater for clustered environments, the journal-2-logentries plugin takes advantage of CoreOS's distributed process manager, Fleet, as well as etcd for service discovery and global storage of your configuration settings. This is really useful: Fleet lets you treat your CoreOS cluster as if it shared a single init system, making configuration of journal log forwarding dead simple across your clustered environment.
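As a sketch of how the pieces fit together: you store the token once in etcd, then run the forwarder as a global fleet unit so every machine in the cluster picks it up. The etcd key name and the unit file below are illustrative assumptions, not taken from the official docs; check the Logentries guide for the exact configuration.

```shell
# Store the log token once in etcd so every machine can read it
# (the key name /logentries.com/token is an assumption for illustration).
etcdctl set /logentries.com/token YOUR_LOG_TOKEN

# journal-2-logentries.service -- a sketch of a fleet unit file.
# Global=true under [X-Fleet] asks fleet to run one copy on every machine.
#
#   [Unit]
#   Description=Forward the systemd journal to Logentries
#   Requires=docker.service
#   After=docker.service
#
#   [Service]
#   ExecStart=/usr/bin/bash -c \
#     "/usr/bin/docker run --name journal-2-logentries \
#        -e LOGENTRIES_TOKEN=$(etcdctl get /logentries.com/token) \
#        -v /run/journald.sock:/run/journald.sock \
#        quay.io/kelseyhightower/journal-2-logentries"
#   ExecStop=/usr/bin/docker stop journal-2-logentries
#
#   [X-Fleet]
#   Global=true

# Start the unit across the whole cluster from any one machine.
fleetctl start journal-2-logentries.service
```

Because the token lives in etcd rather than in each unit file, rotating it is a single `etcdctl set` followed by restarting the units.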

More details on configuring Fleet and etcd with journal-2-logentries are available here.

 
