I am working on a project where I need to augment the data in the document with data retrieved from an Oracle table. For performance reasons, I have created an integration that loads the required data from the table into an in-memory hash table. Once the data is in the hash table I don’t need additional database reads to get the data, which should speed up processing times. The problem I’m having is that when I try to load all the table data I get an “out of memory” Java exception. When I load only a subset it works fine. I am trying to load approximately 1.5MB of data into the hash table (15,000 rows, 100 bytes each). I have increased the Java heap size to 64MB and still run out of memory. I have monitored the adapter process and see that it grows from approx. 10MB of memory to 74MB before the memory exception occurs. Any ideas on why the Oracle adapter is using so much memory, or how I can resolve this problem? Thanks in advance for any assistance.
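For context, the pattern I’m describing is roughly the following plain-JDBC sketch (table name, column names, and pre-size are made up for illustration; the actual integration runs through the adapter, not hand-written code):

```java
import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;
import java.util.HashMap;
import java.util.Map;

public class LookupCache {
    // Hypothetical table and column names; the real lookup table will differ.
    private static final String QUERY =
        "SELECT lookup_key, lookup_value FROM lookup_table";

    public static Map<String, String> load(Connection conn) throws SQLException {
        // Pre-size for ~15,000 entries to avoid repeated rehashing.
        Map<String, String> cache = new HashMap<>(20_000);
        try (Statement stmt = conn.createStatement()) {
            stmt.setFetchSize(500); // fetch in chunks rather than buffering everything
            try (ResultSet rs = stmt.executeQuery(QUERY)) {
                while (rs.next()) {
                    cache.put(rs.getString("lookup_key"), rs.getString("lookup_value"));
                }
            }
        }
        return cache;
    }
}
```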
If the maximum heap size is already being fully utilized, then there is no other way to control the out-of-memory error. You need to split the data into a number of smaller batches for processing; see the sketch below.
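One way to batch it, assuming Oracle 12c or later (older versions would need a ROWNUM subquery instead of OFFSET/FETCH), and with made-up table and column names:

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.util.HashMap;
import java.util.Map;

public class BatchedLoad {
    // Oracle 12c+ pagination syntax; hypothetical table and column names.
    private static final String PAGE_QUERY =
        "SELECT lookup_key, lookup_value FROM lookup_table "
        + "ORDER BY lookup_key OFFSET ? ROWS FETCH NEXT ? ROWS ONLY";

    public static void processInBatches(Connection conn, int batchSize) throws SQLException {
        int offset = 0;
        while (true) {
            Map<String, String> batch = new HashMap<>(batchSize * 2);
            try (PreparedStatement ps = conn.prepareStatement(PAGE_QUERY)) {
                ps.setInt(1, offset);
                ps.setInt(2, batchSize);
                try (ResultSet rs = ps.executeQuery()) {
                    while (rs.next()) {
                        batch.put(rs.getString("lookup_key"), rs.getString("lookup_value"));
                    }
                }
            }
            if (batch.isEmpty()) {
                break; // no more rows
            }
            process(batch); // handle one batch, then let it be garbage collected
            offset += batchSize;
        }
    }

    private static void process(Map<String, String> batch) {
        // placeholder for the per-batch augmentation logic
    }
}
```

Each batch goes out of scope before the next one is loaded, so the memory footprint stays bounded by the batch size rather than the full table.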
Increase the heap size in the adapter.
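It’s also worth confirming the setting actually reached the JVM the adapter runs in; a quick sanity check (the exact flag and where it goes depend on how the adapter is started, so treat this as a sketch):

```java
public class HeapCheck {
    public static void main(String[] args) {
        // Run with e.g. -Xmx256m to raise the ceiling; for the adapter, the
        // setting usually lives in its startup script or server configuration.
        long maxMb = Runtime.getRuntime().maxMemory() / (1024 * 1024);
        System.out.println("Max heap available to this JVM: " + maxMb + " MB");
    }
}
```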
Have you tried the integration without the hash table? It’s logical that preloading the data would make the integration faster, but I’ve been burned more than once by such assumptions. The database and the adapter may be able to keep up just fine with your integration, and the bottleneck, if there is one, may lie elsewhere. Pre-optimizing without testing (perhaps you have tested) is a crapshoot: you never know what you’ll get.
Case in point: the current “optimization” reduces integration throughput to zero, since it doesn’t work. Chewing up the adapter’s memory with a hash table may also adversely affect its performance. Measure, measure, measure.
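Even a crude timing wrapper is better than guessing; something like this (names are mine, not part of the adapter) is enough to compare the cached lookup path against a direct database read under a realistic load:

```java
import java.util.function.Supplier;

public class Timer {
    // Runs the given work, prints the elapsed wall-clock time, and returns the result.
    public static <T> T timed(String label, Supplier<T> work) {
        long start = System.nanoTime();
        T result = work.get();
        long elapsedMs = (System.nanoTime() - start) / 1_000_000;
        System.out.println(label + " took " + elapsedMs + " ms");
        return result;
    }
}
```

Usage would be along the lines of `Timer.timed("cache lookup", () -> cache.get(key))` versus `Timer.timed("db read", () -> readFromDatabase(key))`.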
Can you please tell me what a hash table is?
You might try using Google, or your favorite search engine, to search for information on the topic of hash tables.
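In short, a hash table maps keys to values with (on average) constant-time lookups. In Java the standard implementation is java.util.HashMap (or the older java.util.Hashtable), for example:

```java
import java.util.HashMap;
import java.util.Map;

public class HashTableIntro {
    public static void main(String[] args) {
        Map<String, String> table = new HashMap<>();
        table.put("A100", "Widget");
        table.put("B200", "Gadget");
        System.out.println(table.get("A100")); // prints "Widget"
    }
}
```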