Issue 1, 2016
Clustering approaches
Customers were introduced to Terracotta, Software AG’s in-memory caching and clustering solution, with webMethods 8.2.2. Since then, usage of Terracotta across the webMethods suite has expanded, and several new versions of Terracotta have been released. How does webMethods work with Terracotta? What architectural changes have occurred now that we are using Terracotta instead of Oracle® Coherence? How do you install, upgrade and configure it? What clustering options are available? And are there any tips for troubleshooting? We’ll explore these questions in this series of articles.
Introduction
You learned about architecture, version compatibility, installation and fix handling in our first webMethods and Terracotta article. You saw the advantages of using Terracotta within the webMethods suite and opportunities for additional functionality.
Our second article focused on configuring, deploying and troubleshooting Terracotta with webMethods.
In this third and final article of this series, you’ll learn about options for clustering webMethods Integration Servers with the Terracotta Server Array to achieve scalability, reliability and high availability.
Why cluster?
There are many reasons you might consider clustering your Integration Servers. For example, you may look to clustering to deliver:
- Scalability when you have more traffic coming through and want more Integration Server (IS) nodes to handle it
- Reliability when one of the IS nodes goes down or is unable to handle the load and you want other nodes to pick up the load seamlessly
- High availability or failover when you want to ensure your IS app is available 24/7, so if one node goes down, there will always be another one that is set up and operates exactly the same way
But what type of IS cluster do you need? Do you even need a cluster to achieve your goal? Let’s walk through a number of common use cases and look at recommended architectures for each of them.
Clustering approaches
To determine your best approach to clustering Integration Servers, consider these use cases:
Stateless services
Some webMethods Integration Server deployments may not require a cluster. A load balancer in combination with multiple webMethods Integration Servers provides scalability, reliability and high availability. If all services are stateless, you may be able to run in a stateless architecture that does not require a Terracotta Server Array (TSA). Instead, a load balancer can direct traffic to your different servers using standard techniques such as round robin or least connections.
Figure 1: Stateless Clustering
This “stateless clustering” solution, as shown in Figure 1, does not require your Integration Servers to be clustered at all. However, all services must be stateless. That is, each request must be treated as an independent transaction that is unrelated to any previous request.
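As a concrete illustration, here is a minimal load-balancer sketch using NGINX (any comparable load balancer works the same way). The host names is-node1 and is-node2 are hypothetical, and port 5555 is Integration Server's default primary port:

    upstream integration_servers {
        # least_conn routes each request to the node with the fewest active
        # connections; remove this line for NGINX's default round robin
        least_conn;
        server is-node1.example.com:5555;
        server is-node2.example.com:5555;
    }

    server {
        listen 80;
        location / {
            proxy_pass http://integration_servers;
        }
    }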
Clustering in place
Implementing an Integration Server cluster with the TSA can provide scalability, reliability and high availability while maintaining stateful services. The Terracotta Server Array runs as its own process, either on the same machine as webMethods Integration Server or on a separate machine. When a service is set to stateful, Integration Server establishes a session and passes the session ID to the client in the session cookie. After the service invocation, Integration Server keeps the session object in memory for future calls. See “Stateless and Stateful Services in Integration Server” on the Tech Community wiki for more information.
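You can observe this stateful behavior on the wire by watching the session cookie. Here is a quick sketch with curl against a hypothetical stateful service myapp.orders:createOrder, using Integration Server's default port and administrator account; the service and host names are assumptions for illustration:

    # First call: Integration Server creates a session and returns its ID
    # in a session cookie (ssnid); -c saves the cookies to a file
    curl -c session.txt -u Administrator:manage \
         "http://is-node1:5555/invoke/myapp.orders/createOrder"

    # Later calls: -b sends the saved cookie back, so Integration Server
    # reuses the same in-memory session object
    curl -b session.txt -u Administrator:manage \
         "http://is-node1:5555/invoke/myapp.orders/addOrderLine"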
You can cluster Integration Servers by connecting them to a single TSA stripe. A single TSA stripe consists of two TSA nodes, active and mirror, that provide failover. Figure 2 represents “clustering in place,” an architecture for clustering two Integration Servers.
Figure 2: Clustering in Place on Existing Hardware
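A single stripe is defined in the TSA's tc-config.xml file. The minimal sketch below assumes Terracotta/BigMemory 4.x, with hypothetical host names; 9510 and 9530 are the conventional defaults for the tsa-port and tsa-group-port:

    <tc:tc-config xmlns:tc="http://www.terracotta.org/config">
      <servers>
        <!-- The first server to start up becomes the active node -->
        <server host="is-host1" name="tsa-active">
          <tsa-port>9510</tsa-port>
          <tsa-group-port>9530</tsa-group-port>
        </server>
        <!-- The second server joins as the mirror and takes over on failure -->
        <server host="is-host2" name="tsa-mirror">
          <tsa-port>9510</tsa-port>
          <tsa-group-port>9530</tsa-group-port>
        </server>
      </servers>
    </tc:tc-config>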
While this solution does not require additional hardware, it may be constrained by the resources available on your existing machines. Sizing is especially important when Terracotta shares hardware with Integration Server, to ensure that memory and disk space are sufficient for both applications.
In addition, if there are custom distributed caches on the TSA, these will consume more resources. Note that custom distributed caches also require additional BigMemory licensing.
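For readers working with such custom caches, the sketch below shows what a distributed cache looks like from Java, using the Ehcache 2.x API that ships with BigMemory Max. The cache name, host names and sizing are assumptions for illustration, not a prescribed configuration:

    import net.sf.ehcache.Cache;
    import net.sf.ehcache.CacheManager;
    import net.sf.ehcache.Element;
    import net.sf.ehcache.config.CacheConfiguration;
    import net.sf.ehcache.config.Configuration;
    import net.sf.ehcache.config.TerracottaClientConfiguration;
    import net.sf.ehcache.config.TerracottaConfiguration;

    public class DistributedCacheSketch {
        public static void main(String[] args) {
            // Point the cache manager at the TSA stripe (hypothetical hosts)
            Configuration config = new Configuration()
                .terracotta(new TerracottaClientConfiguration()
                    .url("is-host1:9510,is-host2:9510"))
                // A cache marked terracotta() is distributed: its entries are
                // stored in the TSA and visible to every connected client
                .cache(new CacheConfiguration("orders", 1000)
                    .terracotta(new TerracottaConfiguration()));

            CacheManager manager = new CacheManager(config);
            Cache orders = manager.getCache("orders");
            orders.put(new Element("order-42", "example payload"));
            System.out.println(orders.get("order-42").getObjectValue());
            manager.shutdown();
        }
    }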
Clustering on a separate server
To isolate Terracotta's resource requirements from those of Integration Server and to limit the impact of a single hardware failure, you should install the TSA on a separate server, as shown in Figure 3.
Figure 3: Clustering with the TSA Stripe on Separate Hardware
You can implement this solution with the TSA stripe on a single server or with each TSA node on a separate server. If the TSA nodes are on separate servers, the latency between them must be low in order to maintain data integrity.
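On the Integration Server side, you point the cluster at the TSA under Settings > Clustering in Integration Server Administrator. The corresponding entries in server.cnf look roughly like the sketch below; the property names follow the Integration Server Clustering Guide, the host names are hypothetical, and you should verify the exact keys against your release:

    watt.server.cluster.aware=true
    watt.server.cluster.name=ISCluster
    watt.server.cluster.tsaURLs=tsa-host1:9510,tsa-host2:9510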
Considerations for multiple data centers
If you are running Integration Servers in multiple data centers, you may wish to provide failover or disaster recovery in case you lose your entire data center.
Stateless services
As mentioned previously, some deployments may not require a cluster. A load balancer in combination with multiple Integration Servers also provides scalability, reliability and failover. If all your services are stateless, the simplest and most appropriate architecture is to load balance across your data centers without clustering. This approach does not require a TSA in your architecture.
One-way data replication for application data
If you want to provide for disaster recovery or failover, you may want to replicate application data from one data center to another. This use case assumes that custom applications are storing and retrieving distributed data in the TSA. Note that this capability does require separate Terracotta licensing in addition to what comes with the webMethods platform.
The Terracotta WAN replication module is designed to provide disaster recovery and data replication for applications between data centers. In this architecture, each data center has its own TSA stripe (active and mirror). The application data is replicated from one data center to the other. Integration Servers are not clustered across data centers, but any applications using the TSA for caching can take advantage of the data replication functionality.
Figure 4: One-Way Data Replication for Application Data
The WAN module is not intended for clustering Integration Servers across multiple data centers, but purely for replicating application data. Replication requires the WAN Replication Service (an added component to BigMemory Max), which is an additional licensing cost. The WAN module is available with Terracotta 4.1.4 and higher, so only customers running webMethods 9.7 or later will have this option.
Approaches not recommended
We do not recommend or support use cases that involve deploying a TSA stripe across two data centers, with the active TSA in one data center and the mirror in another. This architecture can cause data loss or a “split brain” scenario in the TSA. Similarly, we do not recommend you use a single Integration Server cluster spanning more than one data center.
Figure 5: Clustering Across Data Centers with a Cross-Data-Center Stripe (Not Recommended)
Further information
To learn more about the topics discussed in this article, check out these resources:
- Integration Server Clustering
- Terracotta Server Array 4.1 Architecture
- Terracotta Server Array 4.3 Architecture
- Terracotta BigMemory WAN Replication Service
- webMethods and Terracotta Compatibility Matrix
- Getting Started with webMethods and Terracotta
- Integration Server Clustering Guide – Search on “Terracotta”
- Integration Server Administrator’s Guide – “Configuring Ehcache on Integration Server”
- Integration Server Built-In Services Reference – “Cache Folder”
- TECHniques articles: “webMethods and Terracotta” and “webMethods and Terracotta Part 2”
Continue the conversation
Join the Terracotta community and use the Terracotta discussion group to ask questions, share experiences and learn about new ideas from other customers.