Issue 3, 2013
JCache, being specified in JSR107, provides a common way for Java® programs to interact with caches. Terracotta has played the leading role in JSR107 development, acting as specification lead, and final approval is expected later this year. BigMemory, Terracotta’s flagship product for managing big fast data, will be fully compliant with the specification early next year. Stay up to date on the latest news for BigMemory by visiting http://techcommunity.softwareag.com/terracotta
Why Create a Specification?
Open source caching projects and commercial caching vendors have been around for over a decade. The distributed kind, often called a Distributed Cache, has entered wide adoption. Each product exposes a very similar map-like API for basic storage and retrieval, yet each vendor's API is proprietary.
With the introduction of the JSR107 specification, developers can program to a standard API instead of being tied to a single vendor, eliminating a major inhibitor to the mass adoption of in-memory technology. Other areas of Java have solved this problem through standards. Some successful examples are JDBC®, JPA® and JMS®.
Indeed, the analyst firm Gartner reported last year that the lack of a standard in this area was the single biggest inhibitor to mass adoption.
A cache is a place where you put a copy of data that is intended to be used multiple times. Caching implementations, being in-memory, are much faster than the original source of the content. This means you get a performance benefit from using a cache instead of the original source. You also offload the resource that the original data came from.
Caching is an important technique used within the Big Fast Data family. Because caches are key value stores held in memory, cache operations are lightning fast. They are also very simple and consequently have far fewer features than a database.
To be effective, cached data needs to be read multiple times. There is no value in caching data that is written once and then never read, or read only once. The efficiency of a cache is measured by its hit ratio, defined as the number of cache hits divided by the number of cache requests.
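To make the hit ratio concrete, here is a minimal sketch of a map-backed lookup that tracks its own statistics. The class and counter names are illustrative, not part of any API; JCache implementations expose comparable statistics through their own management interfaces.

```java
import java.util.HashMap;
import java.util.Map;

// Minimal sketch: a map-backed cache that tracks hits and requests
// so the hit ratio (hits / requests) can be computed.
public class HitRatioExample {
    private final Map<String, Integer> cache = new HashMap<>();
    private long requests;
    private long hits;

    Integer get(String key) {
        requests++;
        Integer value = cache.get(key);
        if (value != null) {
            hits++;  // the lookup was answered from the cache
        }
        return value;
    }

    void put(String key, Integer value) {
        cache.put(key, value);
    }

    double hitRatio() {
        return requests == 0 ? 0.0 : (double) hits / requests;
    }

    public static void main(String[] args) {
        HitRatioExample c = new HitRatioExample();
        c.put("a", 1);
        c.get("a");  // hit
        c.get("b");  // miss
        System.out.println(c.hitRatio()); // prints 0.5
    }
}
```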
To provide the maximum offload, caches need to be distributed so that the work done by one application server gives benefit to the others and eliminates any duplicate requests for the same data to the underlying resource.
Finally, the affordability of servers with memory capacities of 1 terabyte (TB) and higher, combined with vendor innovation to utilize that memory for cache storage, is resulting in a new trend where the cache has increased operational significance. Instead of just caching part of a dataset, the entire dataset is placed in cache and is used as an authoritative source of information—the cache in essence becomes the operational store for the application. In this use case, the cache is often referred to as a Data Grid.
Each of these areas has requirements that the standard must address.
Where Can I Use It?
JCache will work with Java SE 6 and higher and will run in Java EE 6 and higher, Spring and Guice enterprise environments.
From a design point of view, the basic concept is that a CacheManager holds and controls a collection of caches, and those caches in turn hold entries consisting of keys and values. The API can be thought of as map-like, with the following additional features:
- atomic operations, similar to java.util.ConcurrentMap
- read-through caching
- write-through caching
- cache event listeners
- caching annotations
- full generics API for compile time safety
- storage by reference (applicable to on-heap caches only) and storage by value
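The atomic operations in the list above mirror java.util.concurrent.ConcurrentMap. A sketch of putIfAbsent and the compare-and-set form of replace, assuming a JSR107 implementation is on the classpath (the cache name "atomicCache" is illustrative):

```java
import javax.cache.Cache;
import javax.cache.CacheManager;
import javax.cache.Caching;
import javax.cache.configuration.MutableConfiguration;

public class AtomicOpsExample {
    public static void main(String[] args) {
        // Requires a JCache provider on the classpath at runtime
        CacheManager cacheManager = Caching.getCachingProvider().getCacheManager();
        Cache<String, Integer> cache = cacheManager.createCache(
            "atomicCache", new MutableConfiguration<String, Integer>());

        // putIfAbsent succeeds only when no mapping exists yet
        boolean added = cache.putIfAbsent("counter", 1);    // true
        added = cache.putIfAbsent("counter", 2);            // false, value stays 1

        // replace(key, oldValue, newValue) succeeds only when the
        // current value equals oldValue, like ConcurrentMap.replace
        boolean replaced = cache.replace("counter", 1, 2);  // true
    }
}
```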
There are also a number of optional features that implementations are not required to provide, such as:
- storeByReference (storeByValue is the default)
To determine whether an optional feature is present, ask the CachingProvider via its isSupported method.
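A sketch of such a check, assuming a JSR107 implementation is available at runtime:

```java
import javax.cache.Caching;
import javax.cache.configuration.OptionalFeature;
import javax.cache.spi.CachingProvider;

public class OptionalFeatureCheck {
    public static void main(String[] args) {
        // Resolves the default provider; requires an implementation
        // on the classpath at runtime
        CachingProvider provider = Caching.getCachingProvider();

        // true only if this implementation supports store-by-reference
        boolean supported =
            provider.isSupported(OptionalFeature.STORE_BY_REFERENCE);
        System.out.println("storeByReference supported: " + supported);
    }
}
```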
The Java Caching API defines five core interfaces: CachingProvider, CacheManager, Cache, Entry and ExpiryPolicy.
A CachingProvider defines the mechanism to establish, configure, acquire, manage and control zero or more CacheManagers. An application may access and use zero or more CachingProviders at runtime.
A CacheManager defines the mechanism to establish, configure, acquire, manage and control zero or more uniquely named Caches, all within the context of that CacheManager. A CacheManager is owned by a single CachingProvider.
A Cache is a map-like data structure that permits the temporary storage of key-based values. A Cache is owned by a single CacheManager.
An entry is a single key-value pair stored by a Cache.
Each entry stored by a cache has a defined duration, called the Expiry Duration, during which it may be accessed, updated and removed. Once this duration has passed, the entry is said to be expired. Expired entries can no longer be accessed, updated or removed, just as if they had never existed in the cache. The function that defines the expiry duration for entries is called an Expiry Policy.
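An expiry policy is supplied as a factory when a cache is configured. A sketch using the built-in CreatedExpiryPolicy, which expires entries a fixed duration after creation (a configuration fragment; the five-minute duration is illustrative):

```java
import java.util.concurrent.TimeUnit;
import javax.cache.configuration.MutableConfiguration;
import javax.cache.expiry.CreatedExpiryPolicy;
import javax.cache.expiry.Duration;

public class ExpiryConfigExample {
    public static void main(String[] args) {
        // Entries expire five minutes after they are created
        MutableConfiguration<String, Integer> config =
            new MutableConfiguration<String, Integer>()
                .setExpiryPolicyFactory(
                    CreatedExpiryPolicy.factoryOf(
                        new Duration(TimeUnit.MINUTES, 5)));
    }
}
```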
A Simple Example
This simple example creates a default CacheManager, configures a cache on it named “simpleCache” with String keys, Integer values and a one-hour expiry, and then performs a put and a get.
//resolve a cache manager
CachingProvider cachingProvider = Caching.getCachingProvider();
CacheManager cacheManager = cachingProvider.getCacheManager();

//configure the cache
MutableConfiguration<String, Integer> config =
    new MutableConfiguration<String, Integer>()
        .setTypes(String.class, Integer.class)
        .setExpiryPolicyFactory(AccessedExpiryPolicy.factoryOf(Duration.ONE_HOUR));

//create the cache
Cache<String, Integer> cache = cacheManager.createCache("simpleCache", config);

//cache operations
String key = "key";
Integer value1 = 1;
cache.put(key, value1);
Integer value2 = cache.get(key);
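The cache event listeners listed earlier are also registered through the configuration. A sketch of a listener that reacts to newly created entries, assuming a JSR107 implementation is present (the class name, cache name and println are illustrative):

```java
import javax.cache.Cache;
import javax.cache.CacheManager;
import javax.cache.Caching;
import javax.cache.configuration.FactoryBuilder;
import javax.cache.configuration.MutableCacheEntryListenerConfiguration;
import javax.cache.configuration.MutableConfiguration;
import javax.cache.event.CacheEntryCreatedListener;
import javax.cache.event.CacheEntryEvent;
import javax.cache.event.CacheEntryListenerException;

public class ListenerExample {
    // Invoked whenever a new entry is created in the cache
    public static class CreatedListener
            implements CacheEntryCreatedListener<String, Integer> {
        @Override
        public void onCreated(
                Iterable<CacheEntryEvent<? extends String, ? extends Integer>> events)
                throws CacheEntryListenerException {
            for (CacheEntryEvent<? extends String, ? extends Integer> event : events) {
                System.out.println("created: " + event.getKey());
            }
        }
    }

    public static void main(String[] args) {
        MutableConfiguration<String, Integer> config =
            new MutableConfiguration<String, Integer>()
                .addCacheEntryListenerConfiguration(
                    new MutableCacheEntryListenerConfiguration<String, Integer>(
                        FactoryBuilder.factoryOf(CreatedListener.class),
                        null,    // no event filter
                        false,   // old value not required
                        true));  // deliver events synchronously

        CacheManager cacheManager = Caching.getCachingProvider().getCacheManager();
        Cache<String, Integer> cache =
            cacheManager.createCache("listenerCache", config);
        cache.put("key", 1); // triggers onCreated
    }
}
```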
To learn more, visit JSR107’s project home page.
To obtain a copy of the full specification, visit http://jcp.org/en/jsr/detail?id=107.