Installing the ELK Stack on Amazon Web Services

The ELK Stack is a great open-source stack for log aggregation and analytics. It stands for Elasticsearch (a NoSQL database and search server), Logstash (a log shipping and parsing service), and Kibana (a web interface that connects users with the Elasticsearch database and enables visualization and search options for system operation users). With a large open-source community, ELK has become quite popular, and it is a pleasure to work with.

In this article, we will guide you through the simple ELK installation process on Amazon Web Services.

The following instructions will lead you through the steps involved in creating a working sandbox environment. Because a production setup is more comprehensive, we also explain how each component's configuration should be changed to prepare it for use in a production environment.

We’ll start by describing the environment, then we’ll walk through how each component is installed, and finish by configuring our sandbox server to send its system logs to Logstash and view them via Kibana.

Note: All of the ELK components need Java to work, so we will have to install a Java Development Kit (JDK) first.

The AWS Environment

We ran this tutorial on a single AWS Ubuntu 14.04 server (ami-d05e75b8 in the US East region) on an m4.large instance using its local storage. We launched an EC2 instance in the public subnet of a VPC, and then set up its security group (firewall) to allow access from anywhere via SSH and TCP port 5601 (Kibana). Finally, we allocated a new Elastic IP address and associated it with the running instance so that it could be reached from the internet.
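
If you prefer to script this part, the same setup can be done with the AWS CLI. A rough sketch, assuming the CLI is installed and configured, and with sg-xxxxxxxx, i-xxxxxxxx, and eipalloc-xxxxxxxx as placeholders for your own IDs:

  # Open SSH (22) and Kibana (5601) to the world in the instance's security group
  aws ec2 authorize-security-group-ingress --group-id sg-xxxxxxxx --protocol tcp --port 22 --cidr 0.0.0.0/0
  aws ec2 authorize-security-group-ingress --group-id sg-xxxxxxxx --protocol tcp --port 5601 --cidr 0.0.0.0/0

  # Allocate an Elastic IP and associate it with the running instance
  aws ec2 allocate-address --domain vpc
  aws ec2 associate-address --instance-id i-xxxxxxxx --allocation-id eipalloc-xxxxxxxx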

Production tip: A production installation needs at least three EC2 instances — one per component, each with an attached EBS SSD volume.
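
For example, creating and attaching an EBS SSD volume to one of those instances might look like this with the AWS CLI (the size, zone, and IDs are illustrative placeholders):

  # Create a 100 GiB general-purpose SSD volume and attach it to an instance
  aws ec2 create-volume --size 100 --volume-type gp2 --availability-zone us-east-1a
  aws ec2 attach-volume --volume-id vol-xxxxxxxx --instance-id i-xxxxxxxx --device /dev/xvdf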

Step-by-Step ELK Installation

To start, connect to the running server via SSH:

  ssh ubuntu@YOUR_ELASTIC_IP

Package installations

Prepare the system by running:

  sudo apt-get update
  sudo apt-get upgrade

Install OpenJDK

All of the packages we are going to install require Java. Both OpenJDK and Oracle Java are supported, but installing OpenJDK is simpler:

  sudo apt-get install openjdk-7-jre-headless

Verify that Java is installed:

  java -version

If the output of the previous command is similar to this, then you’ll know that you’re heading in the right direction:

  java version "1.7.0_79"
  OpenJDK Runtime Environment (IcedTea 2.5.5) (7u79-2.5.5-0ubuntu0.14.04.2)
  OpenJDK 64-Bit Server VM (build 24.79-b02, mixed mode)

You can set up your own ELK stack using this guide or try out our simple ELK as a Service solution.


Elasticsearch Installation

Elasticsearch is a widely used database and search server, and it’s the main component of the ELK setup.

Elasticsearch’s benefits include:

  • Easy installation and use
  • A powerful internal search technology (Lucene)
  • A RESTful web interface
  • The ability to work with data in schema-free JSON documents (NoSQL)
  • Open source

To begin the process of installing Elasticsearch, add the following repository key:

  wget -qO - https://packages.elastic.co/GPG-KEY-elasticsearch | sudo apt-key add -

Then add the Elasticsearch repository to your APT sources and update:

  echo "deb http://packages.elastic.co/elasticsearch/1.7/debian stable main" | sudo tee -a /etc/apt/sources.list.d/elasticsearch-1.7.list
  sudo apt-get update

Install:

  sudo apt-get install elasticsearch

Start service:

  sudo service elasticsearch restart

Test:

  curl localhost:9200

If the output is similar to this, then you will know that Elasticsearch is running properly:

  {
    "status" : 200,
    "name" : "Jigsaw",
    "cluster_name" : "elasticsearch",
    "version" : {
      "number" : "1.7.1",
      "build_hash" : "b88f43fc40b0bcd7f173a1f9ee2e97816de80b19",
      "build_timestamp" : "2015-07-29T09:54:16Z",
      "build_snapshot" : false,
      "lucene_version" : "4.10.4"
    },
    "tagline" : "You Know, for Search"
  }
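
Since Elasticsearch exposes a RESTful JSON interface, you can also index and fetch a quick test document with curl (the index and field names here are arbitrary examples, not something the stack needs):

  # Store a document...
  curl -XPUT 'localhost:9200/test-index/doc/1' -d '{"message": "hello ELK"}'

  # ...and read it back
  curl 'localhost:9200/test-index/doc/1?pretty'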

To make the service start on boot, run:

  sudo update-rc.d elasticsearch defaults 95 10

Production tip: DO NOT open any other ports, like 9200, to the world! There are many bots that scan for port 9200 and execute Groovy scripts to take over machines.
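
One simple hardening step, assuming the default Debian package layout, is to bind Elasticsearch to the local interface only so that it never listens on a public address; a minimal sketch:

  # Restrict Elasticsearch to localhost (takes effect after a restart)
  echo "network.host: 127.0.0.1" | sudo tee -a /etc/elasticsearch/elasticsearch.yml
  sudo service elasticsearch restart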

Logstash Installation

Logstash is an open-source tool that collects, parses, and stores logs for future use and makes rapid log analysis possible. Logstash is useful for both aggregating logs from multiple sources, like a cluster of Docker instances, and parsing them from text lines into a structured format such as JSON. In the ELK Stack, Logstash uses Elasticsearch to store and index logs.
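
Parsing is typically done with filter plugins. For example, a grok filter like the sketch below (not part of the minimal configuration we use later in this guide) would break a raw syslog line into named fields:

  filter {
    grok {
      # Split a classic syslog line into timestamp, host, program, pid, and message fields
      match => [ "message", "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}" ]
    }
  }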

To begin the process of installing Logstash, add the Logstash repository to your APT sources and update:

  echo "deb http://packages.elasticsearch.org/logstash/1.5/debian stable main" | sudo tee -a /etc/apt/sources.list
  sudo apt-get update

Then install the service, set it to start on boot, and start it:

  sudo apt-get install logstash
  sudo update-rc.d logstash defaults 97 8
  sudo service logstash start

To make sure it runs, execute the following command:

  sudo service logstash status

The output should be:

  logstash is running

Redirect System Logs to Logstash

Create the following file:

/etc/logstash/conf.d/10-syslog.conf

You will have to use sudo to write in this directory:

  input {
    file {
      type => "syslog"
      path => [ "/var/log/messages", "/var/log/*.log" ]
    }
  }
  output {
    stdout {
      codec => rubydebug
    }
    elasticsearch {
      host => "localhost" # Use the internal IP of your Elasticsearch server for production
    }
  }

This file tells Logstash to collect the local ‘/var/log/messages’ file and all of the files under ‘/var/log/*.log’, and to store them inside the Elasticsearch database in a structured way.

The input section specifies which files to collect (path) and how to tag the resulting events (type). The output section uses two outputs – stdout and elasticsearch. The stdout output is used to debug Logstash – you should find nicely formatted log messages under ‘/var/log/logstash/logstash.stdout’. The elasticsearch output is what actually stores the logs in Elasticsearch.

In this example, we are using localhost for the Elasticsearch hostname. In a real production setup, however, the Elasticsearch hostname would be different because Logstash and Elasticsearch should be hosted on different machines.
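
In that case, the elasticsearch output would point at the other machine; a sketch, with 10.0.0.12 standing in as a placeholder for your Elasticsearch instance's private IP:

  output {
    elasticsearch {
      host => "10.0.0.12"  # private IP of the dedicated Elasticsearch server
      protocol => "http"   # talk to the REST API on port 9200
    }
  }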

Production tip: Running Logstash and Elasticsearch on the same machine is a very common pitfall of the ELK Stack and often causes servers to fail in production. You can read more tips on how to install ELK in production.

Finally, restart Logstash to reread its configuration:

  sudo service logstash restart
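
To confirm that events are actually flowing, you can tail the debug output and query Elasticsearch for a freshly indexed syslog event (both paths follow from the configuration above):

  # Watch the rubydebug output written by the stdout plugin
  sudo tail -f /var/log/logstash/logstash.stdout

  # Ask Elasticsearch for one stored syslog event
  curl 'localhost:9200/logstash-*/_search?q=type:syslog&size=1&pretty'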


Kibana Installation

Kibana is an open-source data visualization plugin for Elasticsearch. It provides visualization capabilities on top of the content indexed on an Elasticsearch cluster. Users can create bar, line, and scatter plots; pie charts; and maps on top of large volumes of data.

Among other uses, Kibana makes working with logs easy. Its graphical web interface even lets novice users execute powerful log searches.

To begin the process of installing Kibana, download the following binary with this command:

  wget https://download.elastic.co/kibana/kibana/kibana-4.1.1-linux-x64.tar.gz

Extract it (stay in the download directory, since the next step references the extracted folder by name):

  tar -xzf kibana-4.1.1-linux-x64.tar.gz

Move the files to ‘/opt’, create a service file, and have it start on boot:

  sudo mkdir -p /opt/kibana
  sudo mv kibana-4.1.1-linux-x64/* /opt/kibana
  cd /etc/init.d && sudo wget https://raw.githubusercontent.com/akabdog/scripts/master/kibana4_init -O kibana4
  sudo chmod +x /etc/init.d/kibana4
  sudo update-rc.d kibana4 defaults 96 9
  sudo service kibana4 start

Test: Point your browser to ‘http://YOUR_ELASTIC_IP:5601’ after Kibana is started.
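
If the browser cannot reach the page, it helps to first confirm from the server itself that Kibana is listening; a plain HTTP check (nothing Kibana-specific about it):

  # Should print an HTTP status code such as 200 if Kibana is up
  curl -s -o /dev/null -w "%{http_code}\n" http://localhost:5601/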

You should see a page similar to this:

(Screenshot: the Kibana index pattern configuration page)

Before continuing with the Kibana setup, you must configure an Elasticsearch index pattern.

What does an “index pattern” mean, and why do we have to configure it? Logstash creates a new Elasticsearch index (database) every day. The names of the indices look like this: logstash-YYYY.MM.DD — for example, “logstash-2015.09.10” for the index that was created on September 10, 2015.

Kibana works with these Elasticsearch indices, so it needs to know which ones to use. The setup screen provides a default pattern, ‘logstash-*’, that basically means “Show the logs from all of the dates.”
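
If you want to double-check which indices exist before creating the pattern, you can list them from your SSH session (the _cat API is available in Elasticsearch 1.x):

  curl 'localhost:9200/_cat/indices?v'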

Clicking the “Create” button creates the pattern and enables Kibana to find the logs.

Production tip: In this tutorial, we are accessing Kibana directly through its application server on port 5601, but in a production environment you might want to put a reverse proxy server, like Nginx, in front of it.
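
A minimal sketch of such a proxy, assuming Nginx's default Debian layout (the site file name is illustrative):

  sudo apt-get install nginx

Then create /etc/nginx/sites-enabled/kibana with contents along these lines:

  server {
      listen 80;
      location / {
          # Forward everything to the local Kibana application server
          proxy_pass http://localhost:5601;
      }
  }

and reload it with sudo service nginx restart. A real production proxy would also add authentication and TLS, which are out of scope here.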

To configure Kibana to show the logs:

1. Go to the Kibana configuration page
2. Click on “Create”
3. Click on “Discover” in the navigation bar to find your log

The result should look like this:

(Screenshot: the Kibana Discover page showing the collected logs)

As you can see, creating a whole pipeline of log shipping, storing, and viewing is not such a tough task. In the past, storing and analyzing logs was an arcane art that required the manipulation of huge, unstructured text files. But the future looks much brighter and simpler.