I thought it was about time, as I had some time to spare today, to have a play with one of the new features of the Elastic family of products I have yet to try.
Beats is the platform for single-purpose data shippers. They install as lightweight agents and send data from hundreds or thousands of machines to Logstash or Elasticsearch.
I have used an ELK stack before, but never for metrics; it has always been for more traditional log files, and tools such as Zabbix have been my go-to for metrics.
Before going any further, it should be pointed out that this is in no way a production configuration: there is no high availability, no persistent storage volumes and no thought given to security. It is just a proof of concept.
Launching an Elastic Stack
As I needed somewhere to send my metrics to, I decided to use Docker Machine (Docker again, now there is a surprise) to launch three Docker hosts in DigitalOcean, configure a Swarm and then create Elasticsearch and Kibana services.
To do this, I first launched a manager host;
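Something along these lines launches the manager; the `$DOTOKEN` access-token variable and the `lon1` region are assumptions, so substitute your own:

```shell
# Launch the Swarm manager host in DigitalOcean
# ($DOTOKEN and the region are placeholders - use your own values)
docker-machine create \
  --driver digitalocean \
  --digitalocean-access-token $DOTOKEN \
  --digitalocean-region lon1 \
  manager
```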
Then two workers;
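The workers are launched the same way; again, `$DOTOKEN` and the region are placeholders:

```shell
# Launch the two worker hosts with the same driver settings
for host in worker01 worker02; do
  docker-machine create \
    --driver digitalocean \
    --digitalocean-access-token $DOTOKEN \
    --digitalocean-region lon1 \
    $host
done
```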
Once I had all three Docker hosts online I ran the following to make sure that the Elasticsearch container would launch;
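Elasticsearch 5.x refuses to start unless the host's `vm.max_map_count` is at least 262144, so a sketch of the preparation step would be:

```shell
# Raise vm.max_map_count on the two worker nodes so the
# Elasticsearch container will pass its bootstrap checks
for host in worker01 worker02; do
  docker-machine ssh $host sudo sysctl -w vm.max_map_count=262144
done
```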
Notice that I only ran the commands on the two worker nodes; I am going to keep my Elastic stack on just these two hosts.
Now that my three Docker hosts are available and configured I created the Docker Swarm cluster by running the following commands;
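A sketch of the cluster creation, assuming the manager's IP is advertised and the worker join token is read back from the manager:

```shell
# Initialise the Swarm on the manager, advertising its IP address
docker-machine ssh manager \
  docker swarm init --advertise-addr $(docker-machine ip manager)

# Grab the worker join token, then join the two workers to the cluster
SWARM_TOKEN=$(docker-machine ssh manager docker swarm join-token -q worker)
for host in worker01 worker02; do
  docker-machine ssh $host \
    docker swarm join --token $SWARM_TOKEN $(docker-machine ip manager):2377
done
```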
I checked that all three Docker hosts were correctly in the cluster;
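Listing the nodes from the manager confirms the cluster membership:

```shell
# List the nodes in the Swarm; all three hosts should show as Ready
docker-machine ssh manager docker node ls
```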
Everything was as expected, so it was time to launch the Elasticsearch and Kibana services. I started by creating an overlay network called elk;
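Creating the overlay network is a one-liner:

```shell
# Create an overlay network for the Elastic stack services to share
docker-machine ssh manager docker network create -d overlay elk
```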
Then I created the Elasticsearch service;
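A sketch of the service definition; the untagged official `elasticsearch` image and the worker-only placement constraint are assumptions based on the setup described above:

```shell
# Run Elasticsearch on the worker nodes only, publishing port 9200
# through the Swarm routing mesh
docker-machine ssh manager \
  docker service create --name elasticsearch \
    --network elk \
    --constraint node.role==worker \
    -p 9200:9200 \
    elasticsearch
```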
Followed by the Kibana service;
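Kibana goes on the same overlay network; because both services share the `elk` network, Kibana can reach Elasticsearch by its service name (the image and environment variable here are assumptions):

```shell
# Run Kibana alongside Elasticsearch, resolving it by service name
# over the shared elk overlay network
docker-machine ssh manager \
  docker service create --name kibana \
    --network elk \
    --constraint node.role==worker \
    -p 5601:5601 \
    -e ELASTICSEARCH_URL=http://elasticsearch:9200 \
    kibana
```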
After a minute I checked that the two services were running as expected using;
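The check itself:

```shell
# Both services should report 1/1 replicas once the images have pulled
docker-machine ssh manager docker service ls
```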
I now had my Elastic stack up and running.
Installing Metricbeats
Now that I had my three hosts running and an Elastic stack ready to ingest data, I needed to install some Beats on the hosts. Looking at the available Beats, I decided to go with Metricbeat as it covered all of the basics I wanted;
> System-Level Monitoring, Simplified; Deploy Metricbeat on all your Linux, Windows, and Mac hosts, connect it to Elasticsearch and voila: you get system-level CPU usage, memory, file system, disk IO, and network IO statistics, as well as top-like statistics for every process running on your systems.
As Docker Machine provides an SSH command, I decided to continue to use that to install and configure the service. Before installing I grabbed the IP address of the Manager node so I could use it when configuring Metricbeat, to do this I ran the following;
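Capturing the IP in a variable keeps the later commands tidy:

```shell
# Store the manager's public IP for use in the Metricbeat configuration
SWMIP=$(docker-machine ip manager)
echo $SWMIP
```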
I should be able to use $SWMIP wherever I need the IP address of a host within the Swarm cluster. Remember, as I launched the Elastic stack as a service on an overlay network, I should be able to hit any of my three nodes and be routed to the correct container.
I started by installing Metricbeats on worker01, first of all by downloading and installing the deb package;
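A sketch of the install; the 5.0.0 version number in the download URL is an assumption, so check the Elastic downloads page for the current release:

```shell
# Download the Metricbeat deb from Elastic's artifact repository and
# install it on worker01 (5.0.0 is a placeholder version)
docker-machine ssh worker01 \
  "curl -L -O https://artifacts.elastic.co/downloads/beats/metricbeat/metricbeat-5.0.0-amd64.deb && \
   sudo dpkg -i metricbeat-5.0.0-amd64.deb"
```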
Once I had installed the deb package, I configured Metricbeat by running the following command, which overwrote the default configuration file;
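A sketch of the configuration, written over SSH with a heredoc; the exact metricsets are assumptions based on the Metricbeat 5.x defaults, and $SWMIP is the manager IP captured earlier:

```shell
# Overwrite /etc/metricbeat/metricbeat.yml on worker01; the heredoc is
# unquoted so $SWMIP expands locally before being sent over SSH
docker-machine ssh worker01 'sudo tee /etc/metricbeat/metricbeat.yml' <<EOF
metricbeat.modules:
# system module: host-level CPU, memory, filesystem, network and
# per-process metrics
- module: system
  metricsets: ["cpu", "filesystem", "memory", "network", "process"]
  enabled: true
  period: 10s
  processes: ['.*']
# docker module (experimental): host and container metrics, read via
# the local socket file rather than a network socket
- module: docker
  metricsets: ["cpu", "info", "memory", "network", "diskio", "container"]
  hosts: ["unix:///var/run/docker.sock"]
  enabled: true
  period: 10s
# send everything to the Elasticsearch service via the Swarm routing mesh
output.elasticsearch:
  hosts: ["$SWMIP:9200"]
EOF
```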
The configuration file enables two Metricbeat modules;
- system; this is the default module, which collects host metrics such as the ones listed under "metricsets".
- docker; this is an experimental module which gathers metrics on both the Docker hosts and the containers running on them. Notice that I have configured it to use the socket file rather than connecting to the Docker API over a network socket.
Also, at the end of the configuration, we tell Metricbeat where our Elasticsearch container is accessible so it can send its data.
Once the configuration file was in place I started the service by running;
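Assuming a systemd-based host, starting the service would look like:

```shell
# Enable Metricbeat at boot and start it now
docker-machine ssh worker01 \
  "sudo systemctl enable metricbeat && sudo systemctl start metricbeat"
```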
I then repeated the process on worker02 and manager, replacing worker01 in the docker-machine ssh commands.
Before logging into Kibana there were two more things that I needed to do. First of all, I imported the Metricbeat template; to do this I ran;
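A sketch of the template import; the deb package ships the template file alongside the configuration, so the path here is an assumption based on the 5.x package layout:

```shell
# Load the Metricbeat index template into Elasticsearch so the
# metric fields are mapped correctly
docker-machine ssh worker01 \
  "curl -XPUT http://$SWMIP:9200/_template/metricbeat \
     -d@/etc/metricbeat/metricbeat.template.json"
```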
Then, I ran a script to import the pre-built Kibana Dashboards by running;
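The 5.x packages ship an import_dashboards helper; the path and flag here are assumptions based on that release:

```shell
# Import the sample Metricbeat dashboards into Kibana via Elasticsearch
docker-machine ssh worker01 \
  "/usr/share/metricbeat/scripts/import_dashboards -es http://$SWMIP:9200"
```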
Viewing the Metrics Dashboard
Now that I had installed and configured Metricbeat on all three hosts and readied Elasticsearch and Kibana it was time to open the Kibana dashboard.
To do this, I ran the following command;
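On macOS this is a one-liner (on Linux, xdg-open does the same job):

```shell
# Open Kibana, published on port 5601 by the Swarm routing mesh
open http://$(docker-machine ip manager):5601
```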
Like all other Kibana installations, the first thing I needed to do was configure an index pattern; to do this, I entered metricbeat-* and selected @timestamp from the drop-down list;
Once the index pattern had been configured, clicking on Discover took me to the following view; as you can see, I was receiving metrics from my three hosts;
Clicking on Dashboard, and then selecting Metricbeat-overview gave me the following view;
From there, clicking on Load/CPU and Processes gave me the following dashboards;
So far, so good. Before moving onto the Docker dashboards, I decided to launch a few more services. To do this, I ran the following;
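A sketch of the service, assuming a three-replica deployment of the image named below:

```shell
# Launch three replicas of the demo container, published on port 80
# on all hosts via the routing mesh
docker-machine ssh manager \
  docker service create --name cluster \
    --replicas 3 \
    -p 80:80 \
    russmckendrick/cluster
```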
This created a service which launched three basic containers using the image from russmckendrick/cluster and then made them available on port 80 on all three hosts.
Then I launched a service using manomarks/visualizer, which gives you a visual representation of your Docker Swarm cluster. To do this, I ran;
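The visualizer needs to talk to the Docker API, so a sketch would bind-mount the socket and pin the task to the manager:

```shell
# Run the visualizer on the manager node with access to the Docker socket
docker-machine ssh manager \
  docker service create --name visualizer \
    --constraint node.role==manager \
    --mount type=bind,src=/var/run/docker.sock,dst=/var/run/docker.sock \
    -p 8080:8080 \
    manomarks/visualizer
```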
Running the following showed me my cluster;
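Listing the services shows the new additions alongside the Elastic stack:

```shell
# The cluster and visualizer services should now appear in the list
docker-machine ssh manager docker service ls
```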
Then running;
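Again, a one-liner on macOS:

```shell
# Open the visualizer UI, published on port 8080
open http://$(docker-machine ip manager):8080
```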
This opened my browser and showed me how my Swarm cluster was organised;
Going back to Kibana, I selected the Metricbeat Docker dashboard and was greeted by the following;
The dashboard was exactly what I was expecting to see; selecting a container from the list in the top left shows just the metrics for that container. At this point my time ran out, so I tore down the cluster by running;
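Docker Machine makes the teardown painless:

```shell
# Remove the three DigitalOcean hosts and everything running on them
docker-machine rm -f manager worker01 worker02
```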
After dipping my toe in the water, I think this is something I am going to look into more; there are plenty of other Beats available;
The following talk from OSDC 2016 gives a good idea about the sort of things you can use Beats for;