Installing ELK in Docker on Ubuntu 18.04 with ZFS


I’m working on my next project, which is a DMZ Minecraft server, but I want to ensure that I have adequate logging in place. I decided that I’d install the ELK stack on a server so I could easily mine and visualize my logs. For those of you who aren’t familiar, ELK stands for Elasticsearch, Logstash, and Kibana. Together, these tools let you collect, parse, store, and visualize log files from many computers in one place. They’re also handy for spotting when a system you maintain is behaving badly, like the morning after a deployment.

[Image: ELK dashboards]

Since I hadn’t had much experience with it, I decided to try installing it with Docker, which turned out to be a challenging little project. Here are the steps I used.

Install Docker

This part was relatively straightforward and taken from here.

apt install apt-transport-https ca-certificates curl software-properties-common

curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -

sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu bionic stable"

apt update

apt-cache policy docker-ce

apt install docker-ce

 

Then you need to edit /etc/sysctl.conf and add:

vm.max_map_count = 262144

to the end of the file (I did not have an entry for vm.max_map_count). Elasticsearch requires vm.max_map_count to be at least 262144, and it will refuse to start without it.
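You don’t strictly have to reboot just for this; standard sysctl usage lets you apply and check the setting immediately:

sudo sysctl -p                  # re-read /etc/sysctl.conf
sysctl vm.max_map_count         # verify: should print 262144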

I rebooted at this point and ran:

systemctl status docker

and saw that the Docker service was up and running.

[Image: Docker status]

That was the easy part. Once I wanted to do more complex stuff, it got a bit hairy…

 

Moving Docker to Use a ZFS Partition

My server’s main drive is a smallish SSD, and since Docker images can use a lot of space, I wanted to move image storage to my ZFS drive. After a lot of googling (it seems Docker’s storage configuration has changed several times since it was first released), I figured it out.

First, I created a ZFS dataset in my storage ZFS pool (which mounts automatically at /storage/docker):

zfs create storage/docker

then I stopped Docker:

systemctl stop docker

and moved all of the contents of /var/lib/docker:

mv /var/lib/docker/* /storage/docker

then you need to create a file called daemon.json in the /etc/docker folder. This file is automatically read at Docker startup if it exists, but does not seem to be created by default. It should contain the following (see here for the reference):

vi /etc/docker/daemon.json
{
  "storage-driver": "zfs",
  "graph": "/storage/docker"
}
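One caveat: newer Docker releases deprecate the ‘graph’ key in favor of ‘data-root’, so if your Docker version rejects the file above, the equivalent is:

{
  "storage-driver": "zfs",
  "data-root": "/storage/docker"
}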

then:

 service docker start

should start Docker. You can verify it’s running with:

docker info
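In that output, check that the new settings actually took effect. With the daemon.json above, the relevant lines should look roughly like this:

docker info | grep -E 'Storage Driver|Docker Root Dir'
# Storage Driver: zfs
# Docker Root Dir: /storage/docker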

Getting and Installing the ELK Image

I found an ELK image here, and followed those instructions to install it:

 docker pull sebp/elk

 docker run -p 5601:5601 -p 9200:9200 -p 5044:5044 -it --name elk sebp/elk

and that’s it. However, it gets complicated, because the image doesn’t have the geoip plugin installed, which is required for syslog ingestion. This is referenced in the instructions, but getting it working took a bit of effort. I found this site that almost got me there.

To do this, you need to create a derived image from the image you just pulled down, one that includes the geoip feature.

First, you create an empty folder in your docker root folder

mkdir elk-docker-geoip

then add a file called ‘Dockerfile’ there, with these contents (which I got from the instructions):

vi Dockerfile
FROM sebp/elk

ENV ES_HOME /opt/elasticsearch
WORKDIR ${ES_HOME}

# install the ingest-geoip plugin non-interactively, running as the elasticsearch user
RUN yes | CONF_DIR=/etc/elasticsearch gosu elasticsearch bin/elasticsearch-plugin \
    install -b ingest-geoip

Then you need to build it. Docker has two concepts – an image and a container. What I understand happens is this (as explained here):

  • When you execute ‘docker build’, you are making an image.
  • When you execute ‘docker run’ the first time, you are creating and initializing your container.
  • When you execute ‘docker start’, you start a container.

One image can create many containers, which is the heart of scalability.
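You can see both halves of this on your own machine:

docker images     # lists images (the templates)
docker ps -a      # lists containers (the instances), including stopped ones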

So, the next command to run, from your elk-docker-geoip folder, is:

docker build -t elk-geoip .

This will create a new image and give it a name, in this case ‘elk-geoip’. The -t parameter formally sets a name:tag pair, but with no tag supplied it simply names the image (Docker tags it ‘latest’). This command reads the Dockerfile and builds a new image derived from another image on your system (pulling it first if needed).

In this case, as you can see, the Dockerfile builds a new image from ‘sebp/elk’ and runs the commands that follow the ‘FROM’ line.

When you build, you’ll see it execute the commands in the Dockerfile and output a new image.

Finally, you run it the first time, which creates your container and, importantly, sets up your port redirections. Note that the image name at the end is the derived image you just built, not the original sebp/elk:

docker run -p 5601:5601 -p 9200:9200 -p 5044:5044 -it --name elk-geoip elk-geoip

 

From then on, you can start it with ‘docker start elk-geoip’.
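If the stack doesn’t seem to come up, the container’s own output is the first place to look:

docker logs -f elk-geoip     # follow the container's stdout/stderr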

Now you have a running, functioning ELK stack in Docker. You’ll probably also need:

ufw allow 5601/tcp
ufw allow 9200/tcp
ufw allow 5044/tcp

 

to allow access through the firewall.

Last but not least, if you want that container to automatically start on reboot, run:

 docker update --restart always elk-geoip
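You can confirm the policy stuck with docker inspect:

docker inspect --format '{{.HostConfig.RestartPolicy.Name}}' elk-geoip

which should print ‘always’.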

Sending Logs

Finally, on the servers that need to send logs, you use Filebeat to ship them to your ELK server.

Installation was pretty painless with their guide. You need to edit /etc/filebeat/filebeat.yml to change two lines:

In the ‘Kibana’ section, change the ‘host’ entry to your kibana host.

In the ‘Elasticsearch output’ section, change the ‘hosts’ entry to your server as well.
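As a sketch, assuming your ELK server is at 192.168.1.50 (substitute your own address), the two edits look like this:

setup.kibana:
  host: "192.168.1.50:5601"

output.elasticsearch:
  hosts: ["192.168.1.50:9200"]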

Since I’m monitoring standard system logs, I had to run:

filebeat modules enable system

which enables system log reading, then:

chkconfig filebeat on

to set that service to start automatically (it’s a CentOS server; on a pure systemd distro, systemctl enable filebeat does the same thing).

You need to run ‘filebeat setup’ to push the Filebeat dashboards to Kibana,

then you can run:

filebeat -e

which runs it in the foreground, logging to stderr, so you can see whether it works. If it does, you can start it normally with ‘systemctl start filebeat’.
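Recent Filebeat versions also ship with built-in checks that are handy before you start watching logs:

filebeat test config     # validates filebeat.yml
filebeat test output     # checks connectivity to the configured Elasticsearch hosts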

Then it should start to feed Kibana, and you’ll see logs coming in.

[Image: Kibana logs view]

Conclusion

This was very cool and worked pretty well, except for one problem that I don’t think I have the appetite to work around. Since I use Let’s Encrypt, I need to be able to automate certificate installation. However, since certificates renew every 90 days, I strongly suspect I’d need to rebuild the image from my certificate script each time I get a new one. I think this would work, but I’m a little torn on whether to introduce a third layer of certificate management when otherwise it’s just a matter of copying two files. I suspect that, in this case, I’ll just use Docker as a prototyping tool.

 

What I’m listening to as I do this: The Mighty Mighty Bosstones. I am prone, as many are, to periods of nostalgia for the 90s, and I like to go back to the good old days of third-wave ska, and the Bosstones. Takes you right back to the dorm room, and their earlier work that I listened to in high school has some great stuff, like ‘I’ll Drink to That’.