Insufficient system resources broker error

We are sending a large document through Enterprise Broker 5.0.1 and are receiving this error:

The network I/O operation ReadFile failed. -- Insufficient system resources exist to complete the requested service.

The broker crashes and the monitor restarts it. The adapters lose their connection to the broker and are restarted as well.

We have two documents that have caused this error in our testing. One is about 14MB in size; I'm not sure of the exact size of the other, but it may be larger. The first document is created by an Oracle adapter that reads a batch of records and puts them into an array in the document. A B2B adapter picks it up and tries to send it to an Integration Server. The second document is created by an IO adapter and is likewise published to a B2B adapter and an Integration Server.

We have tried this in our test environment and in our QA/Certification environment with the same results.

Both are being sent to IS to create XML files/documents that will be sent to other packages/companies. We have just upgraded our B2B servers to IS 4.6 and our brokers to Enterprise 5.0.1 (with service packs). We have not yet upgraded our adapters, which are still version 4.1 and run on a different server.

We have increased the heap size in the adapters to eliminate OutOfMemory errors. The servers have at least 1GB of RAM and at least 2GB of paging file space. We set the guaranteed storage size to 1GB when installing the new broker. We noticed that when the adapters were processing these documents the memory they were using was approaching 1GB.
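In case it helps anyone else chasing this, here is a minimal sketch of how to confirm what heap ceiling the adapter JVM actually got and how close a large document pushes it. It uses only the standard java.lang.Runtime API; the 90% warning threshold is just illustrative, not a documented webMethods limit.

// Minimal heap sanity check using only java.lang.Runtime.
// The 90% threshold below is illustrative, not a documented limit.
public class HeapCheck {
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        long maxMb   = rt.maxMemory() / (1024 * 1024);   // the -Xmx ceiling
        long totalMb = rt.totalMemory() / (1024 * 1024); // currently committed
        long freeMb  = rt.freeMemory() / (1024 * 1024);  // free within committed
        long usedMb  = totalMb - freeMb;

        System.out.println("max=" + maxMb + "MB committed=" + totalMb
                + "MB used=" + usedMb + "MB");
        if (usedMb > maxMb * 0.9) {
            System.out.println("WARNING: heap usage above 90% of -Xmx;"
                    + " one more large document may push this over the edge.");
        }
    }
}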

We've read about the new 1GB document limit and the related memory settings, but these documents are much smaller than 1GB.

Anyone have information on document size limitations other than those listed in the documentation?

Based on the documentation and our settings, I should be able to send a document of up to 1GB through.

I don’t have any of the information you are looking for, but I do have a suggestion.

[Warning: Ascending Soap Box]

Do try to avoid implementing a batch solution on top of an asynchronous event-driven architecture.

If you ever need to persist the 14MB of data in a database, you will have to lock it for the entire duration, which can cause problems. If you don't lock it and only half of the records are written to the db, then what is your current state, and how do you recover?
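To make that concrete, here is a rough sketch of the bookkeeping a batch load forces on you if you want to recover from a half-written one. The table and column names (BATCH_LOAD, BATCH_ID, RECORD_SEQ, PAYLOAD) are made up for illustration; the point is that every record needs a batch id and sequence number just so a restarted load can find out where the failed one stopped.

import java.sql.*;
import java.util.List;

public class BatchRecovery {
    // Hypothetical schema: BATCH_LOAD(BATCH_ID, RECORD_SEQ, PAYLOAD)
    // with a unique key on (BATCH_ID, RECORD_SEQ).
    static void resumeLoad(Connection con, String batchId, List<String> records)
            throws SQLException {
        // Find how far the previous (failed) load got.
        int lastSeq = -1;
        try (PreparedStatement q = con.prepareStatement(
                "SELECT MAX(RECORD_SEQ) FROM BATCH_LOAD WHERE BATCH_ID = ?")) {
            q.setString(1, batchId);
            try (ResultSet rs = q.executeQuery()) {
                if (rs.next() && rs.getObject(1) != null) {
                    lastSeq = rs.getInt(1);
                }
            }
        }
        // Write the remaining records, committing per record so a crash
        // never leaves more than one record in doubt.
        con.setAutoCommit(false);
        try (PreparedStatement ins = con.prepareStatement(
                "INSERT INTO BATCH_LOAD (BATCH_ID, RECORD_SEQ, PAYLOAD)"
                        + " VALUES (?, ?, ?)")) {
            for (int seq = lastSeq + 1; seq < records.size(); seq++) {
                ins.setString(1, batchId);
                ins.setInt(2, seq);
                ins.setString(3, records.get(seq));
                ins.executeUpdate();
                con.commit();
            }
        }
    }
}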

Your situation may not run into this kind of scenario yet, but I can almost guarantee that you will later on or in the next phase. It would be better to break the information into individual records, as in the sketch below.
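For example, instead of building one document carrying an array of every record, publish one small event per record. The Publisher interface here is a stand-in for whatever client API your adapter uses (I'm not quoting the actual Broker API), and the event type name is invented; the shape of the loop is what matters.

import java.util.List;

public class PerRecordPublish {
    // Stand-in for the real broker client API; hypothetical, for illustration.
    interface Publisher {
        void publish(String eventType, String payload);
    }

    // Publish each record as its own small event instead of one huge
    // document carrying an array of all the records.
    static void publishRecords(Publisher broker, List<String> records) {
        for (String record : records) {
            broker.publish("Customer::RecordUpdated", record);
        }
    }
}

Each event then stands on its own: it fits comfortably under any size limit, can be persisted or retried individually, and a failure part-way through leaves you with a well-defined set of already-published records rather than an ambiguous half-processed batch.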

Even webMethods made this mistake in an earlier version of their tools, in combination with the old engine: they bundled several documents into one for efficiency, ended up breaking their own document size limit, and caused a lot of trouble, even for people trying to avoid batch implementations.

[Climbing down from soap box]

Rgs,
Andreas Amundin
www.amundin.com