Set Up an ELK Stack with Elasticsearch, Kibana and Logstash
Note: This post is old and is scheduled to be updated.
Centralized logging, analytics and visualization with Elasticsearch, Filebeat, Kibana and Logstash.
Our ELK Stack will consist of:
Elasticsearch: Stores all of the logs
Kibana: Web interface for searching and visualizing logs
Logstash: The server component that processes incoming logs
Filebeat: Installed on our client servers to ship their logs to Logstash
Let's get started on setting up our environment for our ELK Stack.
Install Java 8:
$ cd ~
$ wget --no-cookies --no-check-certificate --header "Cookie: gpw_e24=http%3A%2F%2Fwww.oracle.com%2F; oraclelicense=accept-securebackup-cookie" "http://download.oracle.com/otn-pub/java/jdk/8u65-b17/jdk-8u65-linux-x64.rpm"
$ yum -y localinstall jdk-8u65-linux-x64.rpm
$ rm -f jdk-8u65-linux-x64.rpm
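To confirm that Java installed correctly, check the version (the exact build number in the output may differ):
$ java -version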
Install Elasticsearch:
$ rpm --import http://packages.elastic.co/GPG-KEY-elasticsearch
$ vi /etc/yum.repos.d/elasticsearch.repo
Add our repo configuration:
[elasticsearch-2.x]
name=Elasticsearch repository for 2.x packages
baseurl=http://packages.elastic.co/elasticsearch/2.x/centos
gpgcheck=1
gpgkey=http://packages.elastic.co/GPG-KEY-elasticsearch
enabled=1
Then install Elasticsearch:
$ yum -y install elasticsearch
Configure Elasticsearch:
$ vi /etc/elasticsearch/elasticsearch.yml
We will set network.host to localhost:
network.host: localhost
Start Elasticsearch and enable it on startup:
$ service elasticsearch start
$ chkconfig elasticsearch on
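As a quick sanity check, Elasticsearch should now respond on port 9200 (the name, cluster and version values in the response will differ per install):
$ curl -XGET 'http://localhost:9200/?pretty'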
Install Kibana:
Create repo configuration for Kibana:
$ vi /etc/yum.repos.d/kibana.repo
[kibana-4.4]
name=Kibana repository for 4.4.x packages
baseurl=http://packages.elastic.co/kibana/4.4/centos
gpgcheck=1
gpgkey=http://packages.elastic.co/GPG-KEY-elasticsearch
enabled=1
Install Kibana:
$ yum -y install kibana
Configure Kibana:
$ vi /opt/kibana/config/kibana.yml
We will set server.host to localhost:
server.host: "localhost"
Start Kibana and enable it on startup:
$ service kibana start
$ chkconfig kibana on
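Since server.host is set to localhost, Kibana is only reachable from the server itself on port 5601. A quick local check (it should return an HTTP status code such as 200, or a redirect depending on the Kibana version):
$ curl -s -o /dev/null -w "%{http_code}\n" http://localhost:5601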
Install Logstash:
Create repo configuration for Logstash:
$ vi /etc/yum.repos.d/logstash.repo
[logstash-2.2]
name=logstash repository for 2.2 packages
baseurl=http://packages.elasticsearch.org/logstash/2.2/centos
gpgcheck=1
gpgkey=http://packages.elasticsearch.org/GPG-KEY-elasticsearch
enabled=1
Install Logstash:
$ yum -y install logstash
Generate SSL Certificates:
$ vi /etc/pki/tls/openssl.cnf
Under the [ v3_ca ] section, add the following line, replacing the value with your server's private IP address:
subjectAltName = IP: your_servers_private_ip_here
After saving your config, we will generate the SSL Certificate and Private Key:
$ cd /etc/pki/tls
$ openssl req -config /etc/pki/tls/openssl.cnf -x509 -days 3650 -batch -nodes -newkey rsa:2048 -keyout private/logstash-forwarder.key -out certs/logstash-forwarder.crt
Keep in mind that every server that will push logs to Logstash needs a copy of the logstash-forwarder.crt certificate.
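To double-check that the certificate was generated with the correct IP, we can inspect its Subject Alternative Name with openssl:
$ openssl x509 -in /etc/pki/tls/certs/logstash-forwarder.crt -noout -text | grep -A1 'Subject Alternative Name'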
Configure Logstash:
$ vi /etc/logstash/conf.d/02-beats-input.conf
input {
  beats {
    port => 5044
    ssl => true
    ssl_certificate => "/etc/pki/tls/certs/logstash-forwarder.crt"
    ssl_key => "/etc/pki/tls/private/logstash-forwarder.key"
  }
}
$ vi /etc/logstash/conf.d/10-syslog-filter.conf
filter {
  if [type] == "syslog" {
    grok {
      match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}" }
      add_field => [ "received_at", "%{@timestamp}" ]
      add_field => [ "received_from", "%{host}" ]
    }
    syslog_pri { }
    date {
      match => [ "syslog_timestamp", "MMM  d HH:mm:ss", "MMM dd HH:mm:ss" ]
    }
  }
}
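To illustrate what this filter does, take a hypothetical syslog line such as "Feb 19 12:03:38 elk squid[1234]: Loaded Icons.": the grok pattern extracts syslog_timestamp ("Feb 19 12:03:38"), syslog_hostname ("elk"), syslog_program ("squid"), syslog_pid ("1234") and syslog_message ("Loaded Icons."), after which the date filter parses syslog_timestamp to set the event's @timestamp.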
$ vi /etc/logstash/conf.d/30-elasticsearch-output.conf
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    sniffing => true
    manage_template => false
    index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
    document_type => "%{[@metadata][type]}"
  }
}
$ service logstash configtest
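If the configuration files are valid, configtest should report something like:
Configuration OK
Otherwise, the syntax error will be printed so it can be fixed before restarting.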
Note: If you are using Elasticsearch 2+, the elasticsearch output expects hosts instead of the older host option, as in the following configuration:
output {
  elasticsearch { hosts => ["localhost"] }
  stdout { codec => rubydebug }
}
If all is okay, restart Logstash:
$ service logstash restart
We will load the Kibana dashboards and Beats index patterns provided by Elastic, which can help us get started with Kibana:
$ cd ~
$ curl -L -O https://download.elastic.co/beats/dashboards/beats-dashboards-1.1.0.zip
$ unzip beats-dashboards-*.zip
$ cd beats-dashboards-*
$ ./load.sh
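To confirm that the dashboards were loaded, we can list the indices in Elasticsearch; a .kibana index should now be present (exact document counts and sizes will differ):
$ curl -XGET 'http://localhost:9200/_cat/indices?v'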
These are our Index Patterns that we just loaded:
[packetbeat-]YYYY.MM.DD
[topbeat-]YYYY.MM.DD
[filebeat-]YYYY.MM.DD
[winlogbeat-]YYYY.MM.DD
Since we are using Filebeat to ship logs to Elasticsearch, we should load a Filebeat index template:
$ cd ~
$ curl -O https://gist.githubusercontent.com/thisismitch/3429023e8438cc25b86c/raw/d8c479e2a1adcea8b1fe86570e42abab0f10f364/filebeat-index-template.json
Now we will load our template into Elasticsearch:
$ curl -XPUT 'http://localhost:9200/_template/filebeat?pretty' -d@filebeat-index-template.json
The expected output should look like this:
{
"acknowledged" : true
}
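We can also verify that the template was registered:
$ curl -XGET 'http://localhost:9200/_template/filebeat?pretty'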
Set Up Filebeat on our Client Servers:
Copy our certificate onto our client servers:
$ scp /etc/pki/tls/certs/logstash-forwarder.crt user@client_server_private_IP_address:/tmp
On our client servers:
[clientserver] $ mkdir -p /etc/pki/tls/certs
[clientserver] $ cp /tmp/logstash-forwarder.crt /etc/pki/tls/certs/
[clientserver] $ rpm --import http://packages.elastic.co/GPG-KEY-elasticsearch
Create repo configuration for beats on our client servers:
[clientserver] $ vi /etc/yum.repos.d/elastic-beats.repo
[beats]
name=Elastic Beats Repository
baseurl=https://packages.elastic.co/beats/yum/el/$basearch
enabled=1
gpgkey=https://packages.elastic.co/GPG-KEY-elasticsearch
gpgcheck=1
Install Filebeat on our client servers:
[clientserver] $ yum -y install filebeat
Configure Filebeat:
[clientserver] $ vi /etc/filebeat/filebeat.yml
...
    -
      paths:
        - /var/log/secure
        - /var/log/messages
        - /var/log/squid/*.log
...
      document_type: syslog
...
  ### Logstash as output
  logstash:
    # The Logstash hosts
    hosts: ["your-ELK-private-ip:5044"]
...
    bulk_max_size: 1024
...
    tls:
      certificate_authorities: ["/etc/pki/tls/certs/logstash-forwarder.crt"]
[clientserver] $ service filebeat start
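As with the other services, we can enable Filebeat on startup as well (assuming a sysvinit-based system such as CentOS 6):
[clientserver] $ chkconfig filebeat on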
Test the Filebeat installation by querying Elasticsearch on the ELK server:
$ curl -XGET 'http://localhost:9200/filebeat-*/_search?pretty'
Expected output:
...
{
  "_index" : "filebeat-2016.02.19",
  "_type" : "log",
  "_id" : "AVL5Dl3hhgXUIOAm_McP",
  "_score" : 1.0,
  "_source" : {
    "@metadata" : {
      "beat" : "filebeat",
      "type" : "log"
    },
    "@timestamp" : "2016-02-19T10:23:33.447Z",
    "beat" : {
      "hostname" : "elk.int.ruanbekker.com",
      "name" : "elk.int.ruanbekker.com"
    },
    "count" : 1,
    "fields" : null,
    "input_type" : "log",
    "message" : "2016/02/19 12:03:38| Loaded Icons.",
    "offset" : 1811,
    "source" : "/var/log/squid/caching.log",
    "type" : "log"
  }
}
...
Finally, connect to the Kibana web interface at http://your_endpoint:5601