Most efficient caching solution

Hi there,

We currently have several Java services in place that fetch common data from SAP as follows:

  • if the data is not found in hashtable A, the SAP FM z_xxx is called to repopulate it; otherwise the data is read directly from hashtable A.
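A minimal sketch of that pattern in plain Java (the class name, `fetchFromSap`, and the returned value are placeholders; the real lookup would call the SAP FM z_xxx via RFC):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class SapLookupCache {
    // "Hashtable A" from the description; ConcurrentHashMap avoids the
    // coarse per-table lock of java.util.Hashtable.
    private final Map<String, String> cache = new ConcurrentHashMap<>();

    public String get(String key) {
        // If the key is missing, call out to SAP to repopulate it;
        // otherwise serve it directly from the in-heap map.
        return cache.computeIfAbsent(key, this::fetchFromSap);
    }

    // Placeholder for the RFC call to z_xxx.
    protected String fetchFromSap(String key) {
        return "value-for-" + key;
    }
}
```

This is the pattern that works fine functionally but keeps all the data on the heap, which is what causes the long GC cycles.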

The problem is that these heap-backed Java hashtables hold a huge amount of data, causing long GC cycles that have a big impact on performance.

What would be a wiser approach when many services require the same data from SAP?

Thank you in advance,

The Terracotta distributed cache will fit your requirement very well; have a look at it.

There are OOTB services to put/get key-value pairs to/from the cache.


I will give it a try.
So, basically I should set up a public cache manager, add the Terracotta server arrays, and add as many caches as I need with the Distributed flag enabled.
Is there detailed documentation out there on Empower so I know exactly which settings I have to make to suit my needs?
Of particular interest to me is what value to set for ‘Maximum Elements in Memory’. I would also like to know what happens if I set up a scheduled job to update the values in the cache at specific hours (as the data in SAP changes constantly).
And I would also like to know what happens if, for some reason, a server leaves the cluster (will a service that reads data from the distributed cache fail?).


Refer to the docs below:

Integration Server Administrator’s Guide
Integration Server Built-In Services Reference

For refreshing the cache with new values, you need to write custom logic using the OOTB services that pulls the data from the data source (the source of truth) and loads it into the cache.
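A hedged sketch of such refresh logic in plain Java (`loadFromSource` is a placeholder for the pull from SAP; on Integration Server you would typically schedule a flow service that calls the pub.cache put services instead of running your own executor):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class CacheRefresher {
    private final Map<String, String> cache = new ConcurrentHashMap<>();
    private final ScheduledExecutorService scheduler =
            Executors.newSingleThreadScheduledExecutor();

    // Reload the whole cache from the source of truth every hour.
    public void start() {
        scheduler.scheduleAtFixedRate(this::refresh, 0, 1, TimeUnit.HOURS);
    }

    public void refresh() {
        // Pull fresh values, drop entries that disappeared from the
        // source, then overwrite the rest.
        Map<String, String> fresh = loadFromSource();
        cache.keySet().retainAll(fresh.keySet());
        cache.putAll(fresh);
    }

    // Placeholder for the pull from SAP (the source of truth).
    protected Map<String, String> loadFromSource() {
        return Map.of("k1", "v1");
    }

    public String get(String key) {
        return cache.get(key);
    }
}
```

The retain-then-put order keeps the cache consistent with the source at each refresh without ever clearing it completely, so readers never see an empty cache mid-refresh.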

I would suggest doing a PoC and letting us know if you have any issues.

Note that Terracotta does require a separate license.


I started to test the services.
Unfortunately we do not have a license for BigMemory; only the distributed cache is working.
With caches whose key and value are both of type String, I was able to use the services flawlessly.
Now, with a cache whose key is a String and whose value is a String table (lookup table), it is not working as intended: when using the pub.cache:get service I get only part of the String table as the value, which is not good.
Is there any workaround for this type of cache (one-to-many relationship) - key: String and value: lookup table (String table) - String[][]?

I can look into putting the String table into the cache, but what about using an IData (IS document), defining the field names and field values, and then putting it into the cache as an object? Would that work?
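For reference, one way to sidestep this, assuming the cache round-trips plain Strings reliably, is to flatten the String[][] into a single delimited String before the put and parse it back after the get. A minimal sketch (the separator characters are an assumption and must never occur in the data):

```java
import java.util.Arrays;
import java.util.stream.Collectors;

public class LookupTableCodec {
    // Row and column separators; assumed never to appear in the data.
    private static final String ROW_SEP = "\u001E"; // ASCII record separator
    private static final String COL_SEP = "\u001F"; // ASCII unit separator

    // Flatten a String[][] lookup table into one String for caching.
    public static String encode(String[][] table) {
        return Arrays.stream(table)
                .map(row -> String.join(COL_SEP, row))
                .collect(Collectors.joining(ROW_SEP));
    }

    // Rebuild the String[][] from the cached String.
    public static String[][] decode(String s) {
        return Arrays.stream(s.split(ROW_SEP, -1))
                .map(row -> row.split(COL_SEP, -1))
                .toArray(String[][]::new);
    }
}
```

The `-1` limit on `split` preserves empty trailing cells, so empty values in the table survive the round trip.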


As a workaround, I already passed the value as JSON.
It also works as an IData. Afterwards, I created a Java service to convert the IData to a lookup table.
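In outline, that conversion can be sketched like this (using a plain Map as a stand-in for iterating the IData’s field name/value pairs; a real IS Java service would walk the document with an IDataCursor instead):

```java
import java.util.Map;

public class DocToTable {
    // Turn the document's name/value pairs into a two-column
    // String[][] lookup table: column 0 = field name, column 1 = value.
    public static String[][] toTable(Map<String, String> doc) {
        String[][] table = new String[doc.size()][2];
        int i = 0;
        for (Map.Entry<String, String> e : doc.entrySet()) {
            table[i][0] = e.getKey();
            table[i][1] = e.getValue();
            i++;
        }
        return table;
    }
}
```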

Now I am struggling to understand how to use the search service, as I am afraid there are no clear tutorials provided by SAG for this.
Let’s say I have a key-value pair of type String:IData (IS document).
The document has the structure below:

And let’s say I want to search the cache for A=

On search attributes I have:
Expression: valueOfA and Expression: value.documentA.A (is this correct?).

What should I pass to the search service as input besides cacheManagerName, cacheName, includeAttributes, includeKey, includeValue, and maxResults?
Is the criteria (valueOfA=%value%) correct?

I tried to run the search service several times but always end up with the result list output, which is of a type I can't work with. How can I retrieve the output in documentList form, if that's possible?

I managed to find a tutorial on Empower: Pub.cache_Search.pdf

I have followed it, but no matter what I try I still get the error "Search attribute referenced in Query.addCriteria unknown" (yes, the attribute is defined in the cache as specified in the tutorial).
I tried restarting the server and reloading the cache manager, but got the same result…

Update: We managed to use the search service by creating a new cache; neither clearing the old cache nor reloading the cache manager worked. From my point of view this seems to be a bug and should be forwarded to SAG for investigation.
Thank you for your support!
Now I have a clear overview of the wM caching logic.


Hi there,

I’m trying to use pub.cache:search for the first time, but without luck.

I’ve configured the cache to be searchable (you can refer to the screenshot).

When I tried these search criteria:-

  1. OP1 = name
    OPR = eq
    OP2 =

ERROR returned:-
Could not run ‘searchCache’ Search attribute referenced in Query.addCriteria unknown: name


  2. OP1 =
    OPR = eq
    OP2 =

ERROR returned:-

Could not run ‘searchCache’ Search attribute referenced in Query.addCriteria unknown:

Any advice?


I am new to caching implementation.
Could you post the solution you used for the IData caching? Do you have some steps I can follow to implement the same?