I personally believe that databases should reside outside your swarm, but for testing purposes we will build an Elasticsearch container from an Alpine base image to keep our image small, which is also great for a testing environment.

The resulting Elasticsearch 5.6 image comes down to about 184MB, and the 2.4 image to about 171MB.

Our Dockerfile:

As mentioned, we will build our image from the Alpine base image, as you can see in the Dockerfile below.

Please note: this image is prepared for Docker Swarm, as I have set the discovery.zen.ping.unicast.hosts value to a hostname, which will be the same as my master Elasticsearch service's name and will therefore be resolvable on the overlay network.

If you are using this without Swarm, you can simply remove that last parameter.

FROM alpine:latest

RUN apk update \
    && apk upgrade \
    && apk add curl wget bash openssl openjdk8 \
    && rm -rf /var/cache/apk/*

WORKDIR /root/

RUN wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-5.6.0.tar.gz -O elasticsearch-5.6.0.tar.gz

RUN tar -xf  elasticsearch-5.6*.tar.gz -C /usr/local/ \
    && mv /usr/local/elasticsearch-5.6* /usr/local/elasticsearch \
    && mkdir /usr/local/elasticsearch/logs \
    && mkdir /usr/local/elasticsearch/data \
    && echo '-Xms512m' > /usr/local/elasticsearch/config/jvm.options \
    && echo '-Xmx512m' >> /usr/local/elasticsearch/config/jvm.options \
    && adduser -D -u 1000 -h /usr/local/elasticsearch elasticsearch \
    && chown -R elasticsearch /usr/local/elasticsearch

USER elasticsearch

CMD ["/usr/local/elasticsearch/bin/elasticsearch", "-Ecluster.name=es-cluster", "-Enode.name=${HOSTNAME}", "-Epath.data=/usr/local/elasticsearch/data", "-Epath.logs=/usr/local/elasticsearch/logs", "-Enetwork.host=0.0.0.0", "-Ediscovery.zen.ping.unicast.hosts=es-master"]

EXPOSE 9200 9300
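If you prefer a configuration file over command-line flags, the same settings can be expressed in config/elasticsearch.yml. This is a sketch that mirrors the -E flags in the CMD above; the 0.0.0.0 bind address is an assumption you may want to narrow down:

```yaml
# config/elasticsearch.yml — equivalent of the -E flags in the CMD above
cluster.name: es-cluster
node.name: ${HOSTNAME}
path.data: /usr/local/elasticsearch/data
path.logs: /usr/local/elasticsearch/logs
network.host: 0.0.0.0   # bind all interfaces (assumption; adjust for your network)
discovery.zen.ping.unicast.hosts: ["es-master"]
```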

Building our Image:

While we have our Dockerfile in our current working directory, we can run the following:

$ docker build -t 'es:5.6' .

Pushing your Image:

You can then push your image to any registry of choice, for this example let's push it to Docker Hub:

$ docker login
$ docker tag es:5.6 <username>/es:5.6
$ docker push <username>/es:5.6

Creating your Elasticsearch Cluster:

Create the Overlay Network:

$ docker network create --driver=overlay appnet

Let's create the master (aka the exposed entrypoint). Its name needs to match the hostname we set in discovery.zen.ping.unicast.hosts earlier:

$ docker service create --name es-master -p 9200:9200 --network appnet --replicas 1  --with-registry-auth <username>/es:5.6

Wait until the service has reached its desired replica count, which in this case is 1. Then we can continue and create three more data nodes:

$ docker service create --name es-data-1  --network appnet --replicas 1 --with-registry-auth <username>/es:5.6

$ docker service create --name es-data-2  --network appnet --replicas 1 --with-registry-auth <username>/es:5.6

$ docker service create --name es-data-3  --network appnet --replicas 1 --with-registry-auth <username>/es:5.6

After some time, we should see output more or less like the following:

$ docker service ls -f name=es
ID                  NAME                MODE                REPLICAS            IMAGE               PORTS
mf80ezthqw5b        es-data-1           replicated          1/1                 es:5.6
j54v3rzs15km        es-data-2           replicated          1/1                 es:5.6
n02029dtosr9        es-data-3           replicated          1/1                 es:5.6
r2mfhekkvv6b        es-master           replicated          1/1                 es:5.6              *:9200->9200/tcp
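As an alternative to the individual docker service create commands above, the same topology can be sketched as a version 3 compose file and deployed with docker stack deploy. The file name es-stack.yml is just an example, and the image name mirrors what we pushed earlier:

```yaml
# es-stack.yml — one exposed master service plus three data services,
# all attached to the pre-created appnet overlay network
version: "3"
services:
  es-master:
    image: <username>/es:5.6
    ports:
      - "9200:9200"
    networks:
      - appnet
    deploy:
      replicas: 1
  es-data-1:
    image: <username>/es:5.6
    networks:
      - appnet
    deploy:
      replicas: 1
  es-data-2:
    image: <username>/es:5.6
    networks:
      - appnet
    deploy:
      replicas: 1
  es-data-3:
    image: <username>/es:5.6
    networks:
      - appnet
    deploy:
      replicas: 1
networks:
  appnet:
    external: true
```

You would then deploy it with `docker stack deploy -c es-stack.yml --with-registry-auth es`.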

Query Our Elasticsearch Cluster:

Let's have a look at our Cluster Health API:

$ curl -XGET http://localhost:9200/_cluster/health?pretty
{
  "cluster_name" : "es-cluster",
  "status" : "green",
  "timed_out" : false,
  "number_of_nodes" : 4,
  "number_of_data_nodes" : 4,
  "active_primary_shards" : 0,
  "active_shards" : 0,
  "relocating_shards" : 0,
  "initializing_shards" : 0,
  "unassigned_shards" : 0,
  "delayed_unassigned_shards" : 0,
  "number_of_pending_tasks" : 0,
  "number_of_in_flight_fetch" : 0,
  "task_max_waiting_in_queue_millis" : 0,
  "active_shards_percent_as_number" : 100.0
}
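Since the cluster can take a while to settle, a small shell helper that pulls the status field out of the health response is handy for scripting. This is a sketch; es_status and wait_for_green are my own helper names, not part of Elasticsearch:

```shell
#!/bin/sh
# extract the "status" field ("green", "yellow" or "red") from a
# cluster-health JSON response passed as the first argument
es_status() {
  echo "$1" | sed -n 's/.*"status" *: *"\([a-z]*\)".*/\1/p'
}

# poll the given base URL every 5 seconds until the cluster reports green
wait_for_green() {
  until [ "$(es_status "$(curl -s "$1/_cluster/health")")" = "green" ]; do
    sleep 5
  done
}

# usage against a live cluster:
#   es_status "$(curl -s http://localhost:9200/_cluster/health)"
#   wait_for_green http://localhost:9200
```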

And also our Nodes API:

$ curl -XGET http://localhost:9200/_cat/nodes?v
ip        heap.percent ram.percent cpu load_1m load_5m load_15m node.role master name
                    31          76   7    0.33    0.34     0.55 mdi       -      1caebab2d4a4
                    31          76   7    0.33    0.34     0.55 mdi       -      8b494325c714
                    35          76   7    0.33    0.34     0.55 mdi       -      0f64c27257eb
                    33          76   7    0.33    0.34     0.55 mdi       *      01e903680e00
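If you want to check programmatically that all four nodes joined, you can count the data rows in the _cat/nodes output. count_nodes is a hypothetical helper name of my own:

```shell
#!/bin/sh
# count the node rows in `_cat/nodes?v` output, skipping the header line
count_nodes() {
  echo "$1" | awk 'NR > 1 && NF > 0 { n++ } END { print n+0 }'
}

# usage against a live cluster:
#   count_nodes "$(curl -s http://localhost:9200/_cat/nodes?v)"
```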

Further Actions:

I have a WIP cheatsheet with some Elasticsearch examples, which you can find in my Elasticsearch Cheatsheet Gists.

That's it for now :D