TerracottaDB(TCStore) Integration Deployment Scenario

This deployment scenario uses two TerracottaDB servers, two Terracotta servers, two IS servers, and a load balancer.

Multiple Integration Server instances can be connected through a load balancer and clustered using a Terracotta Server Array.

A TerracottaDB cluster with two or more TerracottaDB servers can also be used.
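Purely as an orientation aid (hostnames and counts below are illustrative placeholders, not a substitute for the original deployment diagram), the topology can be sketched as:

    Clients
       |
    Apache Load Balancer (e.g. port 7200)
       |                 |
    IS Server 1       IS Server 2      <- IS cluster, backed by a Terracotta Server Array (BigMemory, active + passive)
       \                 /
    TerracottaDB (TCStore) servers      <- active + passive, one or more groups/stripes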

 

Configuration steps:

Terracotta (BigMemory 4.x) Configuration for IS Clustering

Terracotta Startup

1. Make sure Integration Server is also installed.
2. Copy the Terracotta (BigMemory) license file to <SAG_Install_DIR>/common/conf/.
3. Run <SAG_Install_DIR>/Terracotta/server/bin/start-tc-server.bat (or .sh on Unix).
4. Terracotta should start successfully on the default port 9510 (check the console output).

Terracotta BigMemory Clustering setup

1) Open C:\<SAG_DIR>\Terracotta\server\wrapper\conf\tc-config.xml on all VMs and replace localhost with the respective VM hostname.

2) Add all other Terracotta servers as <server> nodes under the parent <servers> node.

Sample: Terracotta tc-config.xml Nodes:

 <servers>
  <server host="VM_01" name="S1_ActiveServer1">
   <data>logs/terracotta/server-%h-1/S1_ActiveServer1_server-data</data>
   <logs>logs/terracotta/server-%h-1/S1_ActiveServer1_server-logs</logs>
   <index>logs/terracotta/server-%h-1/S1_ActiveServer1_server-index</index>
   <tsa-port>9510</tsa-port>
   <jmx-port>9520</jmx-port>
   <tsa-group-port>9530</tsa-group-port>
   <offheap>
    <enabled>true</enabled>
    <maxDataSize>512m</maxDataSize>
   </offheap>
  </server>
  ...
  <server host="VM_N" name="S1_PassiveServer1">
   <data>logs/terracotta/server-%h-1/S1_PassiveServer1_server-data</data>
   <logs>logs/terracotta/server-%h-1/S1_PassiveServer1_server-logs</logs>
   <index>logs/terracotta/server-%h-1/S1_PassiveServer1_server-index</index>
   <tsa-port>9510</tsa-port>
   <jmx-port>9520</jmx-port>
   <tsa-group-port>9530</tsa-group-port>
   <offheap>
    <enabled>true</enabled>
    <maxDataSize>512m</maxDataSize>
   </offheap>
  </server>
 </servers>

 

3) Copy the attached tc-config.xml file to C:\<SAG_DIR>\Terracotta\server\bin on all VMs and modify the server hostnames and ports.

4) Start the Terracotta server on all machines using C:\<SAG_DIR>\Terracotta\server\bin\start-tc-server.bat and check the status: one VM will be in the ACTIVE-COORDINATOR state and the others will be in PASSIVE-STANDBY (see the sketch below).
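If your BigMemory kit includes the server-stat utility (an assumption; availability and option names vary by release), the state of each server can also be checked from the command line, for example:

    C:\<SAG_DIR>\Terracotta\server\bin\server-stat.bat -s VM_01,VM_N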

IS Cluster Configuration

IS Clustering setup

1) Copy the IS license file to C:\<SAG_DIR>\common\conf on all VMs.

2) Start IS on all VMs and perform the cluster setup:

    i)   Navigate to Settings -> Clustering in the left-side panel of IS.

    ii)  Click Edit Cluster Settings -> Enable Cluster and provide the details below:

                Cluster Name: IS_Cluster (must be the same on all IS instances and must not contain ".")

                Action on Startup Error: Start as Stand-Alone Integration Server

                Terracotta Server Array URLs: <VM_1>:<Terracotta_Port>,...,<VM_N>:<Terracotta_Port>

                            e.g.: vmritk01:9510,vmritk02:9510

    iii) The session timeout is 60 minutes by default.

    iv)  Save the cluster settings and restart the VMs.

3) After the restart, make sure you can see the cluster hosts on the Clustering page; this confirms that clustering has been configured successfully.

Terracotta DB Server Configuration 

TerracottaDB Setup

1. Make sure Integration Server and TCStore are installed.
2. Optional: Copy the TerracottaDB license file to <SAG_Install_DIR>\TerracottaDB\tools\cluster-tool\conf\license.xml.

3. Start the TCStore server and execute the below command on the VM:

<SAG_INSTALL_DIR>\TerracottaDB\tools\cluster-tool\bin\cluster-tool.bat configure -n <CLUSTER_NAME> -l <TC Store License> <TC-Config File>

e.g.

<SAG_INSTALL_DIR>\TerracottaDB\tools\cluster-tool\bin\cluster-tool.bat configure -n SingleMachine -l <SAG_INSTALL_DIR>\TerracottaDB\tools\cluster-tool\conf\license.xml <SAG_INSTALL_DIR>\TerracottaDB\server\conf\tc-config.xml
4. Run the bat file <SAG_Install_DIR>/TerracottaDB/server/bin/start-tc-server.bat (or .sh on Unix).
5. TerracottaDB should start successfully on the default port 9410 (check the console output).


TerracottaDB  Cluster setup

1. Open <SAG_DIR>\TerracottaDB\server\conf\tc-config.xml on all VMs and replace localhost with respective VM hostname.


2. Add each TCStore server node's details under the <servers> tag in <SAG_DIR>\TerracottaDB\server\conf\tc-config.xml on all VMs/TC server instances.

     e.g.

    <server host="HostName_VM_1" name="VM_1">

      <logs>%H/terracotta-logs</logs>

      <tsa-port>9410</tsa-port>

      <tsa-group-port>9430</tsa-group-port>

    </server>

Add all other nodes under the <servers> tag in the same way; a complete example is shown below.
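For illustration, a complete <servers> block for a two-node TCStore stripe could look like the following (hostnames, server names, and the client-reconnect-window value are placeholders taken from the other samples in this document, not a verified production configuration):

    <servers>
      <server host="HostName_VM_1" name="VM_1">
        <logs>%H/terracotta-logs</logs>
        <tsa-port>9410</tsa-port>
        <tsa-group-port>9430</tsa-group-port>
      </server>
      <server host="HostName_VM_2" name="VM_2">
        <logs>%H/terracotta-logs</logs>
        <tsa-port>9410</tsa-port>
        <tsa-group-port>9430</tsa-group-port>
      </server>
      <client-reconnect-window>120</client-reconnect-window>
    </servers>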

3. Start the TCStore server and execute the below command on the VM where the TCStore server is running:

<SAG_INSTALL_DIR>\TerracottaDB\tools\cluster-tool\bin\cluster-tool.bat configure -n <CLUSTER_NAME> -l <TC Store License> <TC-Config File>

e.g.

<SAG_INSTALL_DIR>\TerracottaDB\tools\cluster-tool\bin\cluster-tool.bat configure -n TCS_Cluster -l <SAG_INSTALL_DIR>\TerracottaDB\tools\cluster-tool\conf\license.xml <SAG_INSTALL_DIR>\TerracottaDB\server\conf\tc-config.xml

4. Start TCStore on the other VMs and check the status: one VM will be in the ACTIVE-COORDINATOR state and the others will be in PASSIVE-STANDBY (see the command sketch below).
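One way to verify the state from the command line is the cluster-tool status command (option names may vary by TerracottaDB release, so treat this as a sketch; the cluster name and host placeholder are the ones used earlier in this section):

    <SAG_INSTALL_DIR>\TerracottaDB\tools\cluster-tool\bin\cluster-tool.bat status -n TCS_Cluster -s HostName_VM_1:9410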


5. TMC Console http://<VMNAME>:9480/ can be used to check VM status in cluster/stripes

6. Use a DB server URL of the form terracotta://VM_1:<TCDB_PORT>,VM_2:<TCDB_PORT>,...,VM_N:<TCDB_PORT>, for example:
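As a purely illustrative value (reusing the hypothetical hostnames from the IS clustering example above and the default TCDB port):

    terracotta://vmritk01:9410,vmritk02:9410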

This URL is entered, for example, in the TCDB Adapter connection configuration.

TerracottaDB  distributed setup

For a distributed setup, at least two Terracotta servers must be in the active state. In total, four TC servers are used: each of the two active servers is backed by a passive server, resulting in two groups/stripes. The grouping is defined by the tc-config files (one file per group), as shown below.

 

1. Open <SAG_DIR>\TerracottaDB\server\conf\tc-config.xml on all VMs and replace localhost with respective VM hostname.

2. Group config files:

TC-Config-1.xml for group1

    <servers>
      <server host="HostName_VM_1" name="server-1">
        <logs>%H/terracotta-logs</logs>
        <tsa-port>9410</tsa-port>
        <tsa-group-port>9430</tsa-group-port>
      </server>
      <server host="HostName_VM_2" name="server-2">
        <logs>logs</logs>
        <tsa-port>9410</tsa-port>
        <tsa-group-port>9430</tsa-group-port>
      </server>
      <client-reconnect-window>120</client-reconnect-window>
    </servers>

 

TC-Config-2.xml for group2

    <servers>
      <server host="HostName_VM_3" name="server-3">
        <logs>%H/terracotta-logs</logs>
        <tsa-port>9410</tsa-port>
        <tsa-group-port>9431</tsa-group-port>
      </server>
      <server host="HostName_VM_4" name="server-4">
        <logs>logs</logs>
        <tsa-port>9410</tsa-port>
        <tsa-group-port>9431</tsa-group-port>
      </server>
      <client-reconnect-window>120</client-reconnect-window>
    </servers>

 

3. Execute the below command on all VMs (config file names are separated by spaces):

<SAG_INSTALL_DIR>\TerracottaDB\tools\cluster-tool\bin\cluster-tool.bat configure -n <CLUSTER_NAME> -l <TC Store License> <TC-Config File1> ... <TC-Config FileN>

e.g.

<SAG_INSTALL_DIR>\TerracottaDB\tools\cluster-tool\bin\cluster-tool.bat configure -n TCS_Cluster -l <SAG_INSTALL_DIR>\TerracottaDB\tools\cluster-tool\conf\license.xml <SAG_INSTALL_DIR>\TerracottaDB\server\conf\tc-configGroup1.xml <SAG_INSTALL_DIR>\TerracottaDB\server\conf\tc-configGroup2.xml

4. Start TCStore on all VMs and check the status: one VM per group/stripe will be in the ACTIVE-COORDINATOR state and the others will be in PASSIVE-STANDBY (see the sketch below).
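A sketch of how each server could be started against its own group's config file, assuming the -f (config file) and -n (server name) options of start-tc-server are available in your TerracottaDB release (verify against your installation); server and file names match the samples above:

    On HostName_VM_1: <SAG_INSTALL_DIR>\TerracottaDB\server\bin\start-tc-server.bat -f <path-to>\TC-Config-1.xml -n server-1
    On HostName_VM_2: <SAG_INSTALL_DIR>\TerracottaDB\server\bin\start-tc-server.bat -f <path-to>\TC-Config-1.xml -n server-2
    On HostName_VM_3: <SAG_INSTALL_DIR>\TerracottaDB\server\bin\start-tc-server.bat -f <path-to>\TC-Config-2.xml -n server-3
    On HostName_VM_4: <SAG_INSTALL_DIR>\TerracottaDB\server\bin\start-tc-server.bat -f <path-to>\TC-Config-2.xml -n server-4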


5. TMC Console http://<VMNAME>:9480/ can be used to check VM status in cluster/stripes

6. Use a DB server URL of the form terracotta://VM_1:<TCDB_PORT>,VM_2:<TCDB_PORT>,...,VM_N:<TCDB_PORT>

This URL is used, for example, in the TCDB Adapter connection configuration.


Apama Configuration for TC Store

TCDB Configuration in Apama
  1. Create an Apama project.
  2. On the Select Required Bundle Instance page, select Distributed MemoryStore and click Finish.
  3. Under the Distributed MemoryStore instance, create a new instance.
  4. Select TCStore as the store provider, provide the store name, and click Finish.
  5. Provide the basic configuration.

    If the TC server is clustered/distributed, provide the URL as:
    VM_1:<TCDB_PORT>,VM_2:<TCDB_PORT>,...,VM_N:<TCDB_PORT>

Note: To configure the TC server in cluster and/or distributed mode, refer to the TC server clustering and distributed setup sections above.

MashZone Configuration for TC Store

Connection Configuration
  1. Navigate to the MashZone NextGen URL: http://hostName:8080/mashzone
  2. Key in the credentials.
  3. Open the Admin Console.
  4. From the left-side navigation, under Server, select TerracottaDB.
  5. Click Register New Terracotta DB Connection,

    key in the TerracottaDB alias (the TerracottaDB connection name), and use a DB server URL like terracotta://VM_1:<TCDB_PORT>,VM_2:<TCDB_PORT>,...,VM_N:<TCDB_PORT>
  6. Click the Add this connection button to save the connection.
  7. Click the icon on the TerracottaDB connection page to verify the connection.
Creating a Dashboard
  1. Go to the MashZone NextGen Administration home page and click 'Create a Dashboard'.
  2. From the left widgets panel, add any widget to the working area.
  3. Select the newly added widget and click Assign Data.
  4. From the Data Source list, select Terracotta DB.
  5. From the Dataset Alias drop-down, select the required data set. The listing format is <ConnectionName>.<DatasetName>.

 

Load Balancer Configuration: A custom load balancer can be used for the TCDB Adapter cluster. Here we have used the Apache load balancer. The load balancer should be configured for the Integration Server external ports where the API Gateways are hosted. The load balancer's httpd.conf file should be configured as follows:

 

Listen 7200

<VirtualHost *:7200>
        # Create a load balancer named "DBPISbalancer-clustermanager"
        <Proxy balancer://DBPISbalancer-clustermanager>
                # Add the two nodes as load balancer members
                BalancerMember http://<IS_01>:<IS_PORT>
                BalancerMember http://<IS_02>:<IS_PORT>
        </Proxy>

        # Define the paths for which requests should not be sent to the balancer
        ProxyPass /server-status !
        ProxyPass /balancer-manager !

        # Send all remaining requests to the balancer named "DBPISbalancer-clustermanager"
        # Possible values for lbmethod are "byrequests", "bytraffic" or "bybusyness"
        ProxyPass / balancer://DBPISbalancer-clustermanager/ stickysession=JSESSIONID|jsessionid nofailover=On lbmethod=byrequests
</VirtualHost>
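As a supporting note (not part of the original configuration), the proxy and balancer modules must be loaded for the directives above to work. On Apache HTTP Server 2.4 this typically means lines such as the following, assuming the standard module layout of your Apache installation:

LoadModule proxy_module modules/mod_proxy.so
LoadModule proxy_http_module modules/mod_proxy_http.so
LoadModule proxy_balancer_module modules/mod_proxy_balancer.so
LoadModule lbmethod_byrequests_module modules/mod_lbmethod_byrequests.so
LoadModule slotmem_shm_module modules/mod_slotmem_shm.so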