Multi Publish Issue

Hi,

Recently we migrated the Enterprise Server from v3 to v4. After that migration we came to know there is a broker limit on publishing multiple events: no more than 2560 at a time.

Now one integration that polls a buffer table using an Oracle standard DB adapter is facing an issue. The buffer table contains more than 5000 records, but each poll publishes only around 1000 records and then exits with the following error. Worse, it deletes all records from the buffer table even though only about 1000 were published.

06/11/2002 16:27:48.161 MC000004_AS_PRIME_FSS_Oracle8_ST_01 (oracle) - Broker Error:
#1-34 Could not publish event of type "Canonical::JournalEntry":
Timeout (112-1450): The request timed out. (awPublishEvent awbroker.c:2247)

Here are my questions.

  1. Is this related to the multi publish limit?
  2. How can I control the DB adapter so it publishes only one batch per poll? I have tried a ROWNUM <= 250 condition; it publishes 250 records but still deletes all records from the buffer table.

Any ideas and suggestions would be most helpful.

Regards
Jeyaraman

Jeyaraman,

I have not heard of a limit on the number of events published. What I do know is that there is a limit on the size of guaranteed events. You have not included enough information to accurately determine the problem you are running into. Some good questions to answer would be:

  • Have you configured the adapter to publish all the information for the records or only identifying information?
  • Are you using the intelligent adapter that has a built-in notification operation or are you using the older notification adapter?
  • Is the event/document that you are publishing from the db adapter guaranteed, persistent or volatile?

I am including a writeup of the 8MB limitation that might shed some light on your problem.


"8MB Maximum Transaction Limitation

‘Maximum document size of a guaranteed document type is restricted to the size of the Broker-guar.log. The guaranteed log file is 8 MB.’ – Administration and Analysis Tools, p107.

The implications of the 8 MB transaction limit on guaranteed events are significant. In addition to preventing single events larger than 8 MB, the limit can also cause problems when many small events are combined into one larger event for performance reasons and the combined size exceeds 8 MB. The practice of combining several small events into one large one is referred to as buffering, and the webMethods tools themselves make frequent use of buffering.

The import process uses event buffering and can encounter the 8 MB limit when importing large adl files. Adl files larger than 8 MB run the risk of hitting the limit, yet adl files as large as 15 MB or greater have been known to import successfully. The content of an adl file seems to play a part in whether an event buffer greater than 8 MB will be created and published during the import process.

For brokers whose contents exceed 8 MB, this means broker_save and broker_load cannot be reliably used for backups and migrations. The only workaround is to break up the adl file by manually exporting it in pieces. For more information about migration, please refer to the Migration Technical Brief. Backups can still be accomplished by backing up the data directory.

The 8 MB limit can also be encountered when publishing from adapters. When an adapter is configured to publish events using the work flow, those events are automatically buffered, which means the total size of all events published in one Integration Component must be less than 8 MB.

The same is true for ATC Blueprints configured using the BI/ATE/Blueprint Editor. A simple workaround, with its own implications, is to select the 'publish immediate' option, which avoids the buffering. Please note that 'publish immediate' in reality means publish as soon as possible, not literally immediately."


One recommendation is to publish only one event per record in the database. The Enterprise Server architecture is event driven and is not really meant to support a batch process. Breaking the information out into individual events will most likely avoid the 8 MB limitation for guaranteed events. Note, however, that making this change can have drastic implications for the rest of your implementation.

Another recommendation would be to change the event storage type from guaranteed to persistent, though of course that might be an unacceptable business risk.

A last recommendation, which addresses your second question: you can modify the stored procedures that are created by the adapter. This would make it possible to prevent it from deleting all records and instead delete only the 250 that were published. However, you have to be very careful to delete the correct 250 records from the buffer table, and a word of warning: be very careful whenever you modify the adapter-generated SQL triggers and stored procedures.
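
To make the idea concrete, here is a minimal PL/SQL sketch of a batched poll-and-delete. It is only an illustration under assumed names: buffer_table and journal_id are hypothetical placeholders, not the names the adapter actually generates, and the publish step is left as a comment.

    DECLARE
      -- Grab at most 250 rows per poll cycle
      -- (buffer_table / journal_id are hypothetical names).
      CURSOR c_batch IS
        SELECT journal_id FROM buffer_table WHERE ROWNUM <= 250;
    BEGIN
      FOR r IN c_batch LOOP
        -- ... publish the event for r.journal_id here ...

        -- Delete only the row that was actually published,
        -- instead of clearing the whole buffer table.
        DELETE FROM buffer_table WHERE journal_id = r.journal_id;
      END LOOP;
      COMMIT;
    END;
    /

The key point is that the DELETE is keyed to the rows actually selected and published in this cycle, so unpublished rows survive in the buffer table until the next poll.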

Jeyaraman,

The problem is somewhat related to the limit. The adapter was attempting to publish a large number of events, and at around 1000 the call timed out. If the limit were smaller, say 256, that number of events would publish before the timeout hit.

That the adapter subsequently cleared the entire buffer table anyway is a bug you should report to webMethods.

As for the second question: I think that between the adapter configuration and the document plugin there are settable parameters for the maximum number of notifications and for the timeout value. I don't have the Oracle Classic Adapter at hand, so I suggest scanning all the tabs of the adapter configuration and the document plugin.

If my memory is wrong, or if those settings give insufficient control for your scenario, there is another approach to consider. I notice you are publishing a canonical event, which may carry many fields. Perhaps try another pattern: have the adapter publish notification events that are as light as possible, ideally just the key value from the journal. This lets the adapter process and publish faster, perhaps without hitting the timeout. Those notifications are then subscribed to by an ILA, intelligent ILA, or ATC. That agent goes back and runs the more complex query for each notification, forms the canonicals, and publishes them. A rough sketch of the two queries follows below.
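
In SQL terms the split looks roughly like the following sketch. All names here (journal_buffer, journal, journal_id, and the canonical columns) are hypothetical stand-ins for your actual tables and fields:

    -- Step 1: the adapter's notification carries only the key,
    -- keeping each published event as light as possible.
    SELECT journal_id
      FROM journal_buffer;

    -- Step 2: the subscribing ILA/ATC takes the key from each
    -- notification and runs the heavier query to assemble the
    -- full canonical event.
    SELECT j.journal_id,
           j.account_no,     -- hypothetical canonical fields
           j.amount,
           j.posting_date
      FROM journal j
     WHERE j.journal_id = :notified_journal_id;  -- bound from the notification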

Best wishes,

Mark

Jeyaraman,

I would agree with Mark on this one; I was probably just leading you on a wild goose chase. I looked up my old error notes, and the 8 MB problem is indicated by a "Maximum transaction size exceeded" error message, not the timeout you are seeing.

You still have to pay attention to some of the 8MB comments if you decide to go the ATC route as Mark suggests.

Rgs,
Andreas Amundin
www.amundin.com