
Setting Up an ELK Stack with Elasticsearch, Kibana and Logstash

Centralized logging, analytics and visualization with Elasticsearch, Filebeat, Kibana and Logstash.

Our ELK Stack will consist of:

- **Elasticsearch**: Stores all of the logs
- **Kibana**: Web interface for searching and visualizing logs
- **Logstash**: The server component that processes incoming logs
- **Filebeat**: Installed on our client servers to ship their logs to Logstash

Let's get started on setting up our environment for our ELK Stack.

**Install Java 8:**

```language-bash
$ cd ~
$ wget --no-cookies --no-check-certificate --header "Cookie: gpw_e24=http%3A%2F%2Fwww.oracle.com%2F; oraclelicense=accept-securebackup-cookie" "http://download.oracle.com/otn-pub/java/jdk/8u65-b17/jdk-8u65-linux-x64.rpm"
$ yum -y localinstall jdk-8u65-linux-x64.rpm
$ rm -f jdk-8u65-linux-x64.rpm
```

**Install Elasticsearch:**

```language-bash
$ rpm --import http://packages.elastic.co/GPG-KEY-elasticsearch
$ vi /etc/yum.repos.d/elasticsearch.repo
```

Add our repo configuration:   

```language-apacheconf
[elasticsearch-2.x]
name=Elasticsearch repository for 2.x packages
baseurl=http://packages.elastic.co/elasticsearch/2.x/centos
gpgcheck=1
gpgkey=http://packages.elastic.co/GPG-KEY-elasticsearch
enabled=1
```
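As an alternative to editing with `vi`, the repo definition can be written non-interactively with a heredoc. A sketch, writing to a temp path for illustration (on a real server the target would be `/etc/yum.repos.d/elasticsearch.repo`):

```language-bash
# Write the repo definition in one shot; REPO_FILE is a temp path here
# purely so the sketch is safe to run anywhere.
REPO_FILE="$(mktemp)"
cat > "$REPO_FILE" <<'EOF'
[elasticsearch-2.x]
name=Elasticsearch repository for 2.x packages
baseurl=http://packages.elastic.co/elasticsearch/2.x/centos
gpgcheck=1
gpgkey=http://packages.elastic.co/GPG-KEY-elasticsearch
enabled=1
EOF
grep '^baseurl=' "$REPO_FILE"
```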

Then install Elasticsearch:

```language-bash
$ yum -y install elasticsearch
```

**Configure Elasticsearch:**

```language-bash
$ vi /etc/elasticsearch/elasticsearch.yml
```

We will set our `network.host` to localhost:

```language-yaml
network.host: localhost
```

Start Elasticsearch and enable it on startup:

```language-bash
$ service elasticsearch start
$ chkconfig elasticsearch on
```

**Install Kibana:**

Create repo configuration for Kibana:

```language-bash
$ vi /etc/yum.repos.d/kibana.repo
```

```language-apacheconf
[kibana-4.4]
name=Kibana repository for 4.4.x packages
baseurl=http://packages.elastic.co/kibana/4.4/centos
gpgcheck=1
gpgkey=http://packages.elastic.co/GPG-KEY-elasticsearch
enabled=1
```

Install Kibana:

```language-bash
$ yum -y install kibana
```

**Configure Kibana:**

```language-bash
$ vi /opt/kibana/config/kibana.yml
```

We will set our `server.host` to localhost:

```language-yaml
server.host: "localhost"
```

Start Kibana and enable it on startup:

```language-bash
$ service kibana start
$ chkconfig kibana on
```

**Install Logstash:**

Create repo configuration for Logstash:

```language-bash
$ vi /etc/yum.repos.d/logstash.repo
```

```language-apacheconf
[logstash-2.2]
name=logstash repository for 2.2 packages
baseurl=http://packages.elasticsearch.org/logstash/2.2/centos
gpgcheck=1
gpgkey=http://packages.elasticsearch.org/GPG-KEY-elasticsearch
enabled=1
```

Install Logstash:

```language-bash
$ yum -y install logstash
```

**Generate SSL Certificates:**

```language-bash
$ vi /etc/pki/tls/openssl.cnf
```

Find the `[ v3_ca ]` section and add the following line, substituting your ELK server's private IP address:

```language-apacheconf
subjectAltName = IP: your_servers_private_ip_here
```
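Hand-editing is fine; for scripted setups the same line can be appended with `sed`. A sketch against a temp stand-in file (the `10.0.0.5` address is a placeholder for your server's private IP; the real file is `/etc/pki/tls/openssl.cnf`):

```language-bash
CNF="$(mktemp)"
# Minimal stand-in for openssl.cnf, for illustration only:
printf '[ req ]\ndefault_bits = 2048\n\n[ v3_ca ]\nbasicConstraints = CA:true\n' > "$CNF"
# Append subjectAltName directly under the [ v3_ca ] section header:
sed -i '/^\[ v3_ca \]/a subjectAltName = IP: 10.0.0.5' "$CNF"
grep 'subjectAltName' "$CNF"
```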

After saving your config, we will generate the SSL Certificate and Private Key:

```language-bash
$ cd /etc/pki/tls
$ openssl req -config /etc/pki/tls/openssl.cnf -x509 -days 3650 -batch -nodes -newkey rsa:2048 -keyout private/logstash-forwarder.key -out certs/logstash-forwarder.crt
```
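To sanity-check the generated pair, `openssl x509` can print the certificate's subject and validity window. A sketch using a throwaway pair in a temp directory (on the server, point at `certs/logstash-forwarder.crt` instead):

```language-bash
TMP="$(mktemp -d)"
# Throwaway self-signed pair for illustration; -subj avoids interactive prompts:
openssl req -x509 -days 3650 -batch -nodes -newkey rsa:2048 \
  -subj "/CN=elk-server" \
  -keyout "$TMP/logstash-forwarder.key" -out "$TMP/logstash-forwarder.crt" 2>/dev/null
# Print the subject and the notBefore/notAfter dates:
openssl x509 -in "$TMP/logstash-forwarder.crt" -noout -subject -dates
```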

Keep in mind that every server that will push logs to Logstash needs a copy of the `logstash-forwarder.crt` certificate.

**Configure Logstash:**

```language-bash
$ vi /etc/logstash/conf.d/02-beats-input.conf
```

```language-yaml
input {
  beats {
    port => 5044
    ssl => true
    ssl_certificate => "/etc/pki/tls/certs/logstash-forwarder.crt"
    ssl_key => "/etc/pki/tls/private/logstash-forwarder.key"
  }
}
```


```language-bash
$ vi /etc/logstash/conf.d/10-syslog-filter.conf
```

```language-yaml
filter {
  if [type] == "syslog" {
    grok {
      match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}" }
      add_field => [ "received_at", "%{@timestamp}" ]
      add_field => [ "received_from", "%{host}" ]
    }
    syslog_pri { }
    date {
      match => [ "syslog_timestamp", "MMM  d HH:mm:ss", "MMM dd HH:mm:ss" ]
    }
  }
}
```
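To see what this grok pattern pulls out of a line, here is a rough shell equivalent of the `syslog_program` and `syslog_pid` captures, run against a made-up sample message:

```language-bash
LINE='Feb 19 12:03:38 elk sshd[1234]: Accepted password for ruan'
# syslog_program: the token after the timestamp and hostname, before the optional [pid]
echo "$LINE" | grep -oP '^\w+ +\d+ \d+:\d+:\d+ \S+ \K[^\[:]+'   # sshd
# syslog_pid: the digits inside the square brackets
echo "$LINE" | grep -oP '\[\K\d+(?=\]:)'                         # 1234
```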


```language-bash
$ vi /etc/logstash/conf.d/30-elasticsearch-output.conf
```

```language-yaml
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    sniffing => true
    manage_template => false
    index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
    document_type => "%{[@metadata][type]}"
  }
}
```

```language-bash
$ service logstash configtest
```

**Note:**   
If you are running Elasticsearch 2+ (and Logstash 2.x), the elasticsearch output takes `hosts` (an array) rather than the older `host` option, as in this minimal debugging configuration:

```language-yaml
output {
  elasticsearch { hosts => ["localhost:9200"] }
  stdout { codec => rubydebug }
}
```

If all is okay, restart logstash:

```language-bash
$ service logstash restart
```

Next we will load the sample Kibana dashboards and Beats index patterns provided by Elastic, which help us get started with Kibana:

```language-bash
$ cd ~
$ curl -L -O https://download.elastic.co/beats/dashboards/beats-dashboards-1.1.0.zip
$ unzip beats-dashboards-*.zip
$ cd beats-dashboards-*
$ ./load.sh
```

These are the index patterns we just loaded:

```language-bash
[packetbeat-]YYYY.MM.DD
[topbeat-]YYYY.MM.DD
[filebeat-]YYYY.MM.DD
[winlogbeat-]YYYY.MM.DD
```
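These patterns match the daily indices Logstash creates from the `%{[@metadata][beat]}-%{+YYYY.MM.dd}` index setting in our output config. For example, today's Filebeat index name can be computed as:

```language-bash
# Beats/Logstash timestamps are UTC, hence date -u:
INDEX="filebeat-$(date -u +%Y.%m.%d)"
echo "$INDEX"
```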

Since we are using Filebeat to ship logs to Elasticsearch, we should load the Filebeat index template:

```language-bash
$ cd ~
$ curl -O https://gist.githubusercontent.com/thisismitch/3429023e8438cc25b86c/raw/d8c479e2a1adcea8b1fe86570e42abab0f10f364/filebeat-index-template.json
```

Now we will load our template into Elasticsearch:

```language-bash
$ curl -XPUT 'http://localhost:9200/_template/filebeat?pretty' -d@filebeat-index-template.json
```

The expected output should look like this:

```language-json
{
  "acknowledged" : true
}
```

**Set Up Filebeat on our Client Servers:**

Copy our certificate onto our client servers:

```language-bash
$ scp /etc/pki/tls/certs/logstash-forwarder.crt user@client_server_private_IP_address:/tmp
```

On our client servers:

```language-bash
[clientserver] $ mkdir -p /etc/pki/tls/certs
[clientserver] $ cp /tmp/logstash-forwarder.crt /etc/pki/tls/certs/
[clientserver] $ rpm --import http://packages.elastic.co/GPG-KEY-elasticsearch
```

Create a repo configuration for Beats on our client servers:

```language-bash
[clientserver] $ vi /etc/yum.repos.d/elastic-beats.repo
```

```language-apacheconf
[beats]
name=Elastic Beats Repository
baseurl=https://packages.elastic.co/beats/yum/el/$basearch
enabled=1
gpgkey=https://packages.elastic.co/GPG-KEY-elasticsearch
gpgcheck=1
```

Install Filebeat on our client servers:

```language-bash
[clientserver] $ yum -y install filebeat
```

**Configure Filebeat:**

```language-bash
[clientserver] $ vi /etc/filebeat/filebeat.yml
```

```language-yaml
...
    -
      paths:
        - /var/log/secure
        - /var/log/messages
        - /var/log/squid/*.log
...
      document_type: syslog
...
  ### Logstash as output
  logstash:
    # The Logstash hosts
    hosts: ["your-ELK-private-ip:5044"]
...
    bulk_max_size: 1024

...
    tls:
      certificate_authorities: ["/etc/pki/tls/certs/logstash-forwarder.crt"]

```
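Putting those fragments together, a minimal Filebeat 1.x configuration might look roughly like this (a sketch; the log paths and the ELK IP are placeholders to adapt):

```language-yaml
filebeat:
  prospectors:
    -
      paths:
        - /var/log/secure
        - /var/log/messages
      document_type: syslog
output:
  logstash:
    hosts: ["your-ELK-private-ip:5044"]
    bulk_max_size: 1024
    tls:
      certificate_authorities: ["/etc/pki/tls/certs/logstash-forwarder.crt"]
```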

```language-bash
[clientserver] $ service filebeat start
```

Test the Filebeat installation by querying Elasticsearch on the ELK server:

```language-bash
$ curl -XGET 'http://localhost:9200/filebeat-*/_search?pretty'
```

Expected output:

```language-json
...
{
      "_index" : "filebeat-2016.02.19",
      "_type" : "log",
      "_id" : "AVL5Dl3hhgXUIOAm_McP",
      "_score" : 1.0,
      "_source" : {
        "@metadata" : {
          "beat" : "filebeat",
          "type" : "log"
        },
        "@timestamp" : "2016-02-19T10:23:33.447Z",
        "beat" : {
          "hostname" : "elk.int.ruanbekker.com",
          "name" : "elk.int.ruanbekker.com"
        },
        "count" : 1,
        "fields" : null,
        "input_type" : "log",
        "message" : "2016/02/19 12:03:38| Loaded Icons.",
        "offset" : 1811,
        "source" : "/var/log/squid/caching.log",
        "type" : "log"
      }
...
```
---
Finally, connect to the Kibana web interface at `http://your_endpoint:5601`