webMethods and Terracotta, Part 2

Issue 3, 2015


Configuring and troubleshooting: The fundamentals
(Second in a series of three)

With webMethods 8.2.2, customers were introduced to Terracotta, Software AG’s in-memory caching and clustering solution. Since then, usage of Terracotta across the webMethods suite has expanded and several new versions of Terracotta have been released. How does webMethods work with Terracotta? What architectural changes have occurred now that we are using Terracotta instead of Oracle® Coherence? How do you install, upgrade and configure it? What clustering options are available? And are there any tips for troubleshooting? This is the second of a trilogy of articles in which we will explore these questions.

Introduction

In the first article of this series, we discussed the basics of webMethods and Terracotta: architecture, version compatibility, installation, and fix handling. We also reviewed the advantages of using Terracotta within the webMethods suite, and opportunities for additional functionality.

This second article focuses on configuring, deploying and troubleshooting Terracotta with webMethods.

Understanding Terracotta caching

Before installing Terracotta for the first time, it’s important to understand the basics of how Terracotta works and how the components of the webMethods product suite use it; this will help you avoid common pitfalls.

Terracotta Ehcache is a standards-based caching API that is used by Integration Server. A cache on Integration Server by default occupies a portion of the Java Virtual Machine (JVM®) as an “on-heap cache.” You can optionally extend a cache beyond the heap to the following locations, or tiers:

  • Local disk storage. This is a designated directory to which cached objects are written. It can be configured either as an overflow for the on-heap cache or to persist all cached objects to disk. Note that writing to disk doesn’t guarantee persistence; only using the Terracotta Server Array (TSA) will provide guaranteed persistence of a cache.
     
  • Local off-heap or BigMemory (requires a Terracotta license). BigMemory enables a portion of the cache to reside within the JVM process memory, but off-heap. The off-heap cache is not subject to the JVM garbage collection process and performs more predictably. It can also be much larger. An additional license must be purchased by webMethods customers for this configuration.
     
  • Terracotta Server Array (TSA). A distributed cache can be created on a Terracotta Server Array. Clustered Integration Servers require a distributed cache to share data. A distributed cache resides completely on the TSA, which consists of one or more Terracotta servers. The data is spread across the servers using a technique called “striping.”
!
For webMethods customers, a single stripe consisting of an Active and a Mirror server for each cluster is included with the license. Adding more stripes to expand the capacity of the Terracotta Server Array, or more mirrors for replication, requires additional licensing and may affect behavior and performance.

This tiered approach enables you to extend a cache beyond the size constraints of the heap. When the cache is extended to the Terracotta Server Array, it can be shared with other Integration Servers.
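As a sketch of how these tiers map onto Ehcache configuration, the fragment below shows one cache per tier in Ehcache 2.x-style XML. The cache names, sizes and TSA address are illustrative assumptions, and the off-heap and distributed tiers require the corresponding Terracotta licenses:

<ehcache>
  <!-- Address of the Terracotta Server Array (host and port assumed) -->
  <terracottaConfig url="tsa-host:9510"/>

  <!-- On-heap cache that overflows to local disk storage -->
  <cache name="localDiskCache"
         maxBytesLocalHeap="64m"
         overflowToDisk="true"/>

  <!-- On-heap cache extended with a BigMemory (off-heap) tier -->
  <cache name="bigMemoryCache"
         maxBytesLocalHeap="64m"
         maxBytesLocalOffHeap="1g"/>

  <!-- Distributed cache that resides on the Terracotta Server Array -->
  <cache name="distributedCache"
         maxBytesLocalHeap="64m">
    <terracotta/>
  </cache>
</ehcache>

For Integration Server system caches, these settings are managed for you; a hand-written file like this would apply only to custom application caches.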

Custom application caches

To create and manage a custom local, BigMemory or distributed cache, review the “Configuring Ehcache” chapter in the Integration Server Administrator’s Guide. Note that custom local on-heap and on-disk caching is covered by the default licensing for webMethods customers, while custom BigMemory and distributed caches require additional Terracotta licensing. Deploying, tuning and managing these caches won’t be discussed here.

System caches

webMethods Integration Server and other Software AG products use Ehcache for system caches in many of their internal processes. For a list of all system caches used by webMethods products, see Getting Started with the webMethods Product Suite and Terracotta. This document includes descriptions of the system caches and provides sizing information critical for deployment.

Each cache has a cache manager. A cache manager serves as a container for grouping a set of caches into a manageable unit. Each cache manager has an associated XML file that specifies configuration parameters. For webMethods Integration Server and products that run on Integration Server, the system cache parameters can be edited via the Administrator user interface. The actual configuration files, of the form SoftwareAG*.xml, can be found in the following directory:

<IntegrationServerDirectory>\instances\instance_name\config\Caching
!
The only time you should manually edit these configuration files is to set parameters that are not exposed in the Integration Server Administration User Interface.
 

More guidelines on these configuration files can be found in the “System Caches” section of the Integration Server Administrator’s Guide.

Distributed caching

A distributed cache resides on the Terracotta Server Array and can be shared with other Integration Servers. The behavior of the Terracotta Server Array is controlled by the tc-config.xml file on the TSA server. This includes information about which servers constitute the array, whether they are mirrored, the server-client connectivity parameters and other options. This file is located in the following directory by default:

<TerracottaHome>\server\bin
!
When two Terracotta servers are deployed in a single stripe, their tc-config.xml configuration files must be identical.
 

After you set all the parameters in tc-config.xml and start Integration Server, Integration Server downloads the file to obtain the information necessary to connect to the TSA and use distributed caches. More guidelines on configuring this file can be found in the “Configuring tc-config.xml on the Terracotta Server Array” section of the Integration Server Administrator’s Guide. A sample with a recommended default configuration is available in the Getting Started with the webMethods Product Suite and Terracotta document.
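As an illustrative sketch only (host names, ports and paths below are assumptions; consult the guides above for the authoritative schema for your Terracotta version), a tc-config.xml for a single stripe might contain:

<tc:tc-config xmlns:tc="http://www.terracotta.org/config">
  <servers>
    <!-- Two servers declared together form a single stripe:
         one is elected Active, the other runs as its Mirror -->
    <server host="host1" name="Server1">
      <tsa-port>9510</tsa-port>
      <data>/opt/terracotta/data</data>
    </server>
    <server host="host2" name="Server2">
      <tsa-port>9510</tsa-port>
      <data>/opt/terracotta/data</data>
    </server>
    <!-- Enables cache persistence and client rejoin -->
    <restartable enabled="true"/>
  </servers>
</tc:tc-config>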

Configuring Terracotta Server Array for webMethods products

With Terracotta 4.1 and above, the license provided with webMethods products requires users to run Terracotta in Hybrid mode. In this mode, all cached data is stored on disk on the TSA server. However, some off-heap storage is still needed on the TSA server to hold the keys associated with cache entries for performance.

!
Hybrid mode is required by the Terracotta license provided with webMethods products. Removing the hybrid flag will cause Terracotta to use memory, not disk, for data storage.
 

The Terracotta footprint for webMethods is minimal. When setting up the initial configuration for Terracotta, the recommended sizes are 20 GB of data storage and 2 GB of off-heap memory. Data storage, off-heap and hybrid mode are specified as part of the dataStorage section in the tc-config.xml file:

<dataStorage size="20g">
  <offheap size="2g" />
  <hybrid />
</dataStorage>

The data storage and off-heap parameters can be adjusted depending on which webMethods products are connected to the Terracotta Server Array and what they are being used for. Many webMethods products cache data for their own internal processes. These system caches, including the purpose and size of each cached element, are described in the Getting Started with the webMethods Product Suite and Terracotta document.

If the Terracotta Server Array is shared by several webMethods products, take care to evaluate cache elements, usage patterns, the size of each element, and eviction policies (time-based, size-based, count-based) in order to properly size heap, off-heap and data storage (disk) needs. After this sizing exercise is complete, allow a buffer of at least 20 percent for storage.

!
Testing with the proper load prior to going to production is critical to ensure sizing and configuration is accurate and effective.
 

Version compatibility and fix management

The Terracotta Server Array must run a version of Terracotta that is compatible with the Terracotta client libraries installed with your webMethods products. Version compatibility and fixes are documented in the webMethods and Terracotta Compatibility Matrix. The installation and fix application process varies by Terracotta version.

Terracotta 3.7.10 and earlier

Installation: Terracotta is downloaded directly from the Software Download Center by navigating down the Software AG product tree.

Fixes: Each point release includes all prior fixes. There are no separate fixes to apply.

Terracotta 4.1.x

Installation: Terracotta 4.1.4 can be installed using the Software AG Installer. However, Terracotta 4.1.5 must be downloaded from the Software Download Center; it is found by navigating down the Software AG product tree.

Fixes: Fixes for 4.1.5 can be found on the Software Download Center Fix Explorer under the Terracotta product tree. Only the latest fix is shown, as it includes all prior fixes.

Terracotta 4.3.x and later

Installation: Terracotta 4.3.x can be installed using the Software AG Installer.

Fixes: Terracotta fixes for 4.3.0 and later are available through the Software Update Manager (SUM).

Operational considerations

High availability for distributed caching

One of the goals of using Terracotta for clustering is high availability. The rejoin behavior of a distributed cache enables a cache manager to automatically reconnect to a Terracotta Server Array from which it has been disconnected. The Integration Server cache managers are rejoin-enabled to handle this contingency, and you cannot disable this option. However, the restart parameter must be set in tc-config.xml to take advantage of the rejoin functionality:

<restartable enabled="true"/>

Another high availability feature of the Terracotta Server Array is the “Enable High Availability” parameter, which places the distributed cache in nonstop mode. This mode enables the client to take a prescribed action when an operation on the distributed cache doesn’t complete within a specific timeout. The Integration Server distributed caches operate in this mode; it cannot be disabled.

Because Integration Server distributed caches operate in nonstop mode, you must configure the related timeout parameters when you create a distributed cache. When Integration Server starts up, it will wait for a connection to the TSA for the amount of time specified by the watt.server.cachemanager.connectTimeout parameter. The Integration Server Clustering Guide has a section on how to update this parameter and the related parameters that determine the action taken on timeout.
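For example, the connect timeout can be adjusted as an extended setting (Settings > Extended in the Integration Server Administrator); the value below is an illustrative assumption, so check the Clustering Guide for the units and default:

watt.server.cachemanager.connectTimeout=120000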

!
Setting the proper connection timeouts is dependent on network speed, connectivity, and stability. Thorough testing is recommended in the target environment.
 

Stopping and starting

When stopping and starting two servers in a Terracotta stripe, it is always recommended to stop the Mirror server before the Active, and start the Active server first. This ensures the Mirror server doesn’t unintentionally become the Active server.

When Integration Server clients come into the picture, the stopping and starting sequence for an entire cluster can be a little more complex. The steps to shut down an Integration Server cluster for scheduled maintenance, for example, might be as follows:

  1. Make a backup of the data if persistence is enabled
  2. Shut down all Integration Servers in the cluster
  3. Shut down the Terracotta Mirror server using the stop-tc-server script on the TSA
  4. Shut down the Terracotta Active server using the stop-tc-server script on the TSA
  5. Perform scheduled maintenance
  6. Start the Terracotta Active server
  7. Start the Terracotta Mirror server
  8. Start the Integration Server(s) and ensure they reconnect to the TSA

Troubleshooting issues

Cache issues can cause a variety of unusual behaviors in the related applications. Symptoms such as running out of memory, being unable to connect to the cache, disk space issues and more can indicate a configuration or sizing issue. The first step in troubleshooting is usually to review the log files.

Ehcache logs its messages to the console. Integration Server redirects the Ehcache log messages to the following location by default:

<Integration Server_directory>\instances\<instance_name>\logs\ehcache.log

Cache manager activity against the TSA is logged by the Terracotta client libraries on the Integration Server host. Ehcache creates these additional log files when 1) an Integration Server in a cluster starts up and connects to the TSA; or 2) a public cache manager containing distributed caches reloads or starts up. By default, these logs are written to:

<Integration Server_directory>\instances\<instance_name>\logs\tc-client-logs

For more information

To learn more about the topics discussed in this article, check out these resources:

Coming soon

Look for our final article in the series on Terracotta and webMethods fundamentals in the next issue of TECHniques where we will review clustering options.