Setting Java heap to 384m doesn't always work

I have an adapter that reads a file from a directory. The file can be anywhere from 21MB to 29MB. I've been able to successfully pull the file with the adapter's Java heap size set to 384m. However, on occasion, I still get an out of memory error even when the file is around 21MB.

This is on Solaris, so I've been watching system resources with the 'top' program. The box has 4GB of RAM and 4 CPUs. Only about 500-900MB of that is free, though, and free swap space is about 3804MB.

When the failure occurs, I can watch the adapter memory utilization climb to about 250MB, and then the adapter shuts down.

I can't understand why it's shutting down before it reaches the max of 384MB, or why it happens sometimes and not others. If I bump the size up to 430MB, I still get the same behavior.
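One sanity check worth doing (not something from the original post): confirm the heap limit the adapter JVM actually received. This is a minimal, generic Java sketch using the standard Runtime API; the class name and output format are illustrative, not webMethods-specific.

```java
// Minimal sketch: log the heap limits the running JVM actually has.
public class HeapCheck {
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        long mb = 1024L * 1024L;
        // maxMemory() reflects the maximum heap (-Xmx) the process was started with.
        System.out.println("Max heap:   " + (rt.maxMemory() / mb) + " MB");
        // totalMemory() is the heap the JVM has currently reserved from the OS.
        System.out.println("Total heap: " + (rt.totalMemory() / mb) + " MB");
        // freeMemory() is the unused portion of the currently reserved heap.
        System.out.println("Free heap:  " + (rt.freeMemory() / mb) + " MB");
    }
}
```

If the reported max heap is well below 384MB, the setting isn't reaching the adapter process at all.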

Can anyone offer any insight, or has anyone experienced the same problem?

Increasing the heap size does not always solve the problem.

I suggest modifying the integration to read the file in pieces, say 5000 at a time. This helps ensure you don't hit an OutOfMemoryError and also avoids tying up memory resources.
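As a rough illustration of that idea in plain Java (not an actual IS flow service; the batch size, file name, and processBatch step are placeholders): read the file line by line, accumulate a fixed number of records, hand off the batch, then clear it so the whole file never sits in memory at once.

```java
import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

public class BatchedFileReader {
    private static final int BATCH_SIZE = 5000; // placeholder batch size

    public static void main(String[] args) throws IOException {
        List<String> batch = new ArrayList<>(BATCH_SIZE);
        try (BufferedReader reader = new BufferedReader(new FileReader("input.dat"))) {
            String line;
            while ((line = reader.readLine()) != null) {
                batch.add(line);
                if (batch.size() == BATCH_SIZE) {
                    processBatch(batch); // hand off one chunk at a time
                    batch.clear();       // release the records before reading more
                }
            }
            if (!batch.isEmpty()) {
                processBatch(batch);     // flush the final partial batch
            }
        }
    }

    // Placeholder for whatever mapping/transformation the integration does per chunk.
    private static void processBatch(List<String> records) {
        System.out.println("Processing " + records.size() + " records");
    }
}
```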

How do you modify the IS configuration to break up a document into smaller pieces? I've had similar problems, but I've always resorted to increasing the heap size. Also, if we do break a message into smaller fragments, is there a noticeable overhead in total processing time for the document, or is the difference minimal?

In terms of a theoretical maximum, can anyone share the largest file they have ever processed successfully on a reverse-invoke setup using HTTPS transport through Trading Networks? I'm curious about this.

“How do you modify the IS configuration to break up a document into smaller pieces?”

Can someone please help with this? It will help us too.

TN, the EDI module, and the RosettaNet module all support large document handling. Each of these provides documentation on configuration and development.

Unfortunately, handling large documents isn't simply a matter of configuration. As several people have suggested in various threads, increasing the memory allocation to the JVM can sometimes address the problem, depending on a variety of factors (physical memory, garbage collection, document size, server load, etc.). But increasing the JVM memory only moves the out-of-memory error to a higher threshold, and at some point you'll likely run into the problem again. So…

The basic approach to handling large documents is to stream them to disk as they come in, rather than holding them in memory, and then pass an InputStream object to a service for processing. TN does this based on document size. The EDI module does this based on content type. This means that the normal stringToDocument, documentToRecord, map to another record, recordToDocument approach isn't sufficient, because that technique loads the entire document into memory.
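A rough sketch of that spool-to-disk pattern in plain Java (this is not the actual TN or EDI module implementation, and the temp file naming is a placeholder): copy the incoming bytes to a file as they arrive, then hand downstream code a stream over that file instead of a String or byte array.

```java
import java.io.File;
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

public class SpoolToDisk {
    // Copies the incoming stream to a temp file and returns a stream over it,
    // so the full document is never held in memory all at once.
    public static InputStream spool(InputStream incoming) throws IOException {
        File temp = File.createTempFile("largedoc", ".tmp");
        temp.deleteOnExit();
        byte[] buffer = new byte[8192];
        try (OutputStream out = new FileOutputStream(temp)) {
            int read;
            while ((read = incoming.read(buffer)) != -1) {
                out.write(buffer, 0, read);
            }
        }
        return new FileInputStream(temp);
    }
}
```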

So you have to process the stream in chunks. There are built-in services to help (node iterators, content part reads, etc.). Basically, you'll need to read in a block of data, convert that block as needed, and stream it to some location (there's no sense in creating the target entirely in memory, or you'll run into the same problem you're trying to avoid on the input side). Needless to say, this makes mapping a bit more complex than the normal “small” document handling.
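A minimal sketch of that read-convert-write loop in plain Java, with convertBlock standing in as a placeholder for whatever per-block mapping the node iterators or content-part reads would feed:

```java
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

public class ChunkedProcessor {
    private static final int BLOCK_SIZE = 8192; // placeholder block size

    // Reads one block at a time, converts it, and streams the result out,
    // so neither the source nor the target is ever built entirely in memory.
    public static void process(InputStream in, OutputStream out) throws IOException {
        byte[] block = new byte[BLOCK_SIZE];
        int read;
        while ((read = in.read(block)) != -1) {
            byte[] converted = convertBlock(block, read);
            out.write(converted);
        }
        out.flush();
    }

    // Placeholder for the per-block mapping/transformation logic.
    private static byte[] convertBlock(byte[] block, int length) {
        byte[] copy = new byte[length];
        System.arraycopy(block, 0, copy, 0, length);
        return copy;
    }
}
```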

The wM docs are a good starting point for understanding how to do the work. It's not trivial, and you may end up needing to rework a good portion of your existing integration code to accommodate large documents.

HTH