I keep getting errors in adapter_errors. The adapter throws two errors for a single event:
One is a system error: "Unable to process a publication event type, 'AdapterName : Operations:MyNotification'",
and the other is an out-of-memory exception while processing "MyNotification": java.lang.OutOfMemoryError.
The buffer table created by the dbadapter for this notification currently has more than 10K rows, and because of these errors the rows can't be processed. Could the errors be caused by the high volume of data in the queue?
Any advice on how to solve this problem? Thanks.
How are you trying to process the notification information in the buffer table: one row at a time, or all at once?
What does the notification event consist of? Just the identifying information or all the information? How large is the notification event?
What is your process once the notification adapter publishes a notification event?
I strongly recommend processing notifications one at a time: the ES is not a batch-processing tool but an asynchronous, event-driven integration tool. You probably already know this.
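As a hypothetical sketch of that one-at-a-time approach (the queue, the row format, and the handle() method are my assumptions for illustration, not the adapter's real API): draining the buffer a single row at a time keeps memory usage bounded no matter how many rows have piled up, because only one row is ever held at once.

```java
import java.util.ArrayDeque;
import java.util.Queue;

public class OneAtATime {

    // Drain the buffer one row at a time instead of materializing
    // all 10K+ rows in memory at once. Returns the number processed.
    static int processAll(Queue<String> buffer) {
        int processed = 0;
        while (!buffer.isEmpty()) {
            String row = buffer.poll(); // take a single row
            handle(row);                // process it, then let it be GC'd
            processed++;
        }
        return processed;
    }

    static void handle(String row) {
        // placeholder for the real per-notification work
    }

    public static void main(String[] args) {
        Queue<String> buffer = new ArrayDeque<>();
        for (int i = 0; i < 10_000; i++) {
            buffer.add("row-" + i);
        }
        System.out.println(processAll(buffer)); // prints 10000
    }
}
```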
The quick solution, though probably the wrong one, is simply to increase the memory allocated to the JVM with the -Xmx command-line parameter. This is configurable for the adapter.
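For reference, -Xmx is a standard JVM flag (e.g. java -Xmx512m ...); where exactly it gets set for the adapter depends on its startup script or configuration, which I'm not assuming here. A small sketch to verify what heap ceiling the running JVM actually got:

```java
public class HeapInfo {
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        // maxMemory() reflects the -Xmx ceiling the JVM was started with
        long maxMb   = rt.maxMemory()   / (1024 * 1024);
        long totalMb = rt.totalMemory() / (1024 * 1024);
        long freeMb  = rt.freeMemory()  / (1024 * 1024);
        System.out.println("max=" + maxMb + "MB total=" + totalMb
                + "MB free=" + freeMb + "MB");
    }
}
```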
The quick solution, but probably *the wrong one*, is to just
Andreas… it is the right one
~Formerly Known As Technical Support Dude~
I will stand by my statement that it is probably the wrong solution unless you really understand what is going on.
Why are you eating up so much memory? You don't want to leave that question unanswered as you move to production, or you risk running out of memory again when you deal with larger sets of production data. Of course, you can avoid that with proper load/performance testing before going to production. You do have time allocated for load/performance testing, don't you?
You can increase the heap size of the adapter, and it should then be able to handle this case.
I agree with Andreas... increasing the heap size is a quick but dirty solution. We still need to get to the root of the problem and determine what is causing the out-of-memory exception, because if it is indeed due to a bug, we cannot guarantee that the increased heap size will suffice the next time.
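One way to chase that root cause (my suggestion, not something the adapter does for you): start the JVM with the standard flag -XX:+HeapDumpOnOutOfMemoryError so you get a heap dump to analyze after a crash, and log heap usage while reproducing the load to see whether it climbs steadily (a leak) or spikes when the large notification batch arrives. A minimal sketch using the standard java.lang.management API:

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryUsage;

public class HeapSnapshot {
    public static void main(String[] args) {
        // Snapshot of current heap usage; call this periodically while
        // driving the adapter under load to watch the growth pattern.
        MemoryUsage heap =
                ManagementFactory.getMemoryMXBean().getHeapMemoryUsage();
        System.out.println("used=" + heap.getUsed() / (1024 * 1024)
                + "MB max=" + heap.getMax() / (1024 * 1024) + "MB");
    }
}
```

A steady upward trend in "used" across runs with the same load usually points at a leak; a one-off spike points at a single oversized batch.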