Running a Terracotta cluster within Docker containers

This wiki article describes how to build and run Docker images for Terracotta Server OSS (Open Source) version 4.3.1.

Introduction

The Terracotta 4.x OSS offering includes the following:

  • Ehcache 2.x compatibility
  • Distributed In-Memory Data Management with fault-tolerance via Terracotta Server (1 stripe – active with optional mirror)
  • In-memory off-heap storage - take advantage of all the RAM in your server

Tools to have installed on your laptop

You will need to install the Docker Toolbox on your laptop (version 1.9.1 minimum).

If you already have a Docker host created, for example:

 $ docker-machine ls
 NAME          ACTIVE   DRIVER       STATE     URL   
 dev           -        virtualbox   Running   tcp://192.168.99.103:2376

In this example, I have one Docker host available from Docker Machine; to make sure it has enough RAM to run a Terracotta server node, I suggest you edit this file:

 ~/.docker/machine/machines/dev/config.json

and replace:

 "Memory":1024

with:

 "Memory":2048

and then restart your host:

 $ docker-machine restart dev
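
The same Memory edit can be scripted; here is a minimal sketch assuming the stock Docker Machine config.json layout (the `bump_memory` helper name is mine, not part of Docker Toolbox):

```shell
# Rewrite the "Memory" setting of a Docker Machine host's config.json in place.
# bump_memory is a hypothetical helper for illustration only.
bump_memory() {
  config_file="$1"   # e.g. ~/.docker/machine/machines/dev/config.json
  new_mb="$2"        # e.g. 2048
  # Replace whatever number follows "Memory": with the new value;
  # -i.bak keeps a backup and works with both GNU and BSD sed.
  sed -i.bak "s/\"Memory\":[0-9]*/\"Memory\":${new_mb}/" "$config_file"
}

# Usage (then restart the host so the change takes effect):
# bump_memory ~/.docker/machine/machines/dev/config.json 2048
# docker-machine restart dev
```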

Starting one server and one client on the same Docker host

Start the server:

 $ docker run --name tc-server -d anthonydahanne/terracotta-server-oss:4.3.1

and then start a client:

 $ docker run -d --name petclinic -p 9966:9966 --link tc-server:tsa anthonydahanne/spring-petclinic-clustered:4.3.1

The client is based on the Spring petclinic app, but uses a clustered cache instead of a standalone one.

You can have a look at the logs using:

 $ docker logs -f petclinic

If everything goes well, you should see:

 INFO - Connection successfully established to server at 192.168.59.103:9510

At this point, go to http://DOCKER_HOST:9966/petclinic/ (where DOCKER_HOST is your Docker host's IP address) to interact with the client webapp. Click on Veterinarians; you should see the list of Veterinarians stored in the cluster.
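
If you are unsure what to use for DOCKER_HOST, it is the IP reported by `docker-machine ip`; a small sketch (the `petclinic_url` helper name is mine, for illustration):

```shell
# Build the petclinic URL for a given Docker host IP.
# petclinic_url is a hypothetical helper, not part of any tool used here.
petclinic_url() {
  echo "http://$1:9966/petclinic/"
}

# Usage, assuming the host created earlier is named "dev":
# petclinic_url "$(docker-machine ip dev)"
```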

Congratulations! You successfully started a clustered cache application in a Docker container!

Getting serious: one active and one passive - multi-host networking

Setting up the hosts

We will use the Docker overlay network plugin, introduced in Docker 1.9, to make several Docker hosts join the same virtual network. We will use hosts based on VirtualBox, but the beauty of this tutorial is that you could use EC2, Google Cloud, or DigitalOcean hosts instead.

Follow the official overlay network driver documentation up to step 4 (do not apply step 4).

If everything went fine, you should have a small cluster of Docker hosts at your disposal:

 $ docker-machine ls
 NAME          ACTIVE   STATE     URL                         SWARM
 dev           -        Running   tcp://192.168.99.103:2376
 mh-keystore   *        Running   tcp://192.168.99.104:2376
 mhs-demo0     -        Running   tcp://192.168.99.105:2376   mhs-demo0 (master)
 mhs-demo1     -        Running   tcp://192.168.99.106:2376   mhs-demo0

And also an overlay network ready to be used:

 $ docker network ls
 NETWORK ID          NAME                DRIVER
 [...]
 87fde954a83b        my-net              overlay
 [...]

First, we need to stop mhs-demo0 and mhs-demo1 to give them some more RAM:

 $ docker-machine stop mhs-demo0 mhs-demo1

And then, in

 ~/.docker/machine/machines/mhs-demo0/config.json

and

 ~/.docker/machine/machines/mhs-demo1/config.json

Adjust from

 "Memory":1024

to

 "Memory":2048

Now you can restart those two hosts:

 $ docker-machine start mhs-demo0 mhs-demo1

And also create and start another one:

 $ docker-machine create -d virtualbox \
    --virtualbox-memory "2048" \
    --swarm \
    --swarm-discovery="consul://$(docker-machine ip mh-keystore):8500" \
    --engine-opt="cluster-store=consul://$(docker-machine ip mh-keystore):8500" \
    --engine-opt="cluster-advertise=eth1:2376" \
  mhs-demo2

Now we should have a setup like this:

 $ docker-machine ls
 NAME          ACTIVE   STATE     URL                         SWARM
 dev           -        Running   tcp://192.168.99.103:2376
 mh-keystore   -        Running   tcp://192.168.99.104:2376
 mhs-demo0     *        Running   tcp://192.168.99.105:2376   mhs-demo0 (master)
 mhs-demo1     -        Running   tcp://192.168.99.106:2376   mhs-demo0
 mhs-demo2     -        Running   tcp://192.168.99.107:2376   mhs-demo0

Starting the containers!

 $ docker run --hostname tsa --name tsa -d -e TC_SERVER1=tsa -e TC_SERVER2=tsa2 --net=my-net --env="constraint:node==mhs-demo1" anthonydahanne/terracotta-server-oss:4.3.1
 $ docker run --hostname tsa2 --name tsa2 -d -e TC_SERVER2=tsa2 -e TC_SERVER1=tsa --net=my-net --env="constraint:node==mhs-demo2" anthonydahanne/terracotta-server-oss:4.3.1
 $ docker run --name petclinic -d --net=my-net --env="constraint:node==mhs-demo0" -p 9966:9966 anthonydahanne/spring-petclinic-clustered:4.3.1
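
For context, the TC_SERVER1 and TC_SERVER2 environment variables let the image template its tc-config.xml so that each node knows both members of the stripe. Below is a sketch of what such a two-server Terracotta 4.x configuration might look like; the element names follow the Terracotta 4.x schema, but the exact file generated by the image is an assumption, not taken from it:

```xml
<!-- Hypothetical tc-config.xml for an active/mirror pair.
     The ports match those exposed by the containers (9510/9530),
     but this is not the image's actual generated file. -->
<tc-config xmlns="http://www.terracotta.org/config">
  <servers>
    <server host="tsa" name="tsa">
      <tsa-port>9510</tsa-port>
      <tsa-group-port>9530</tsa-group-port>
    </server>
    <server host="tsa2" name="tsa2">
      <tsa-port>9510</tsa-port>
      <tsa-group-port>9530</tsa-group-port>
    </server>
  </servers>
</tc-config>
```

With both servers listed in the same stripe, whichever starts first becomes active and the other joins as its mirror.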

You should end up with something similar to:

 $ docker ps
 IMAGE                               PORTS                           NAMES
 spring-petclinic-clustered:4.3.1    192.168.99.109:9966->9966/tcp   mhs-demo0/petclinic
 terracotta-server-oss:4.3.1         9510/tcp, 9530/tcp, 9540/tcp    mhs-demo1/tsa
 terracotta-server-oss:4.3.1         9510/tcp, 9530/tcp, 9540/tcp    mhs-demo2/tsa2

Now go to http://192.168.99.109:9966/petclinic/

Want to make sure it's really a cluster? How about restarting a server?

 $ docker restart tsa

and then:

 $ docker logs -f tsa

    04:51:08,998  INFO console:90 - Moved to State[ PASSIVE-STANDBY ]
    [TC] 2015-12-15 04:51:08,998 INFO - Moved to State[ PASSIVE-STANDBY ]
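
The restarted node rejoining as PASSIVE-STANDBY means the other node took over as active. To see which node currently holds which role, you can grep those same log lines on both servers; a minimal sketch (the `tc_state` helper name is mine, and it only parses the "Moved to State" lines shown above):

```shell
# Print the last cluster state a Terracotta server reported, reading its log
# from stdin. tc_state is a hypothetical helper; feed it `docker logs <name>`.
tc_state() {
  grep 'Moved to State' | tail -n 1 | sed 's/.*Moved to State\[ \(.*\) \].*/\1/'
}

# Usage:
# docker logs tsa  | tc_state   # e.g. PASSIVE-STANDBY after the restart
# docker logs tsa2 | tc_state   # the other node should now report ACTIVE-COORDINATOR
```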

Congratulations! You successfully started an active/passive Terracotta cluster with one client!

Extending this example

You can have a look at the image pages on Docker Hub: anthonydahanne/terracotta-server-oss and anthonydahanne/spring-petclinic-clustered.

And you can of course have a look at the Dockerfiles used for those examples.