Redelivery of documents in Broker 6

Hi All

We have a trigger set up which triggers on a document and calls a flow
service. When this flow service fails an exception is raised. However,
even though the trigger is set to continue to redeliver the document every 10 seconds we do not see this trigger firing again.

We have contacted webMethods about this and they tell us that this is the correct behavior. The document is considered delivered as soon as it is passed to the service - whether the service is successful or not. webMethods have recommended we turn on auditing and that we use the monitor to resubmit the documents manually.

Obviously the manual solution is not acceptable to us - this worked perfectly well in our existing implementation based on Enterprise server, and we do not wish to introduce a manual process where one was not needed before.

The real-life scenario for this is that the database connection the service uses may be down - often this is a transient problem, and a retry has a good chance of later succeeding.

This seems to be a huge oversight by webMethods, and we can’t believe that such a fundamental change has been made moving to version 6. Does anyone else out there have the same problem as us?

Are there any work-arounds (not involving manual intervention) that anyone can suggest?

Many Thanks in advance
Steve Dalton

In v5 (and I assume it hasn’t changed in v6), if your integration throws a normal exception, the incoming document will be considered delivered and removed from the broker. But if you trap this exception and throw an error in the following manner, the document should be redelivered to your integration:

    AdapterException newEx = new AdapterException(ex); 
    throw newEx;

Just be careful you don’t get into an infinite loop: if an error can’t be corrected by a redelivery, each failure causes another redelivery, and because nothing changes, the process never stops.

Thanks for that, this is similar to another solution that has been given by webMethods (and on the Advantage forums): catch the exception and rethrow it as an ISRuntimeException.
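For anyone searching later, the rethrow pattern looks roughly like the sketch below inside a Java service (or a catch-all wrapper service). This is an illustration only - the exact ISRuntimeException constructor available may vary by IS version, so check your Integration Server Java API docs:

```java
import com.wm.app.b2b.server.ISRuntimeException;

// ... inside the service's catch block ...
try {
    // the call that may fail transiently, e.g. the database insert
} catch (Exception ex) {
    // Rethrowing as ISRuntimeException signals the trigger that the
    // failure is transient, so the document is redelivered rather
    // than treated as consumed.
    throw new ISRuntimeException(ex);
}
```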

This causes the same behavior.

While it causes extra load on the system, the infinite loop scenario is actually ok for us - our support team constantly monitors our error logs and the brokers for this scenario, and once the problem is fixed, delivery continues automatically.


I know this is an old thread, but I would be careful with this solution. That infinite loop can persist even after a restart of the server instances. The only way to stop it is to clear the queue on the broker, which may cause data loss. It can work, but as Michael says, be careful.

Or have some sort of “reprocessing strategy” built into the solution.

We had a similar integration designed, where on a failure, we mark the reprocess flag to Y. A separate DB scheduler job scans all the rows at some defined time interval and retriggers the same document.

The other benefit we got from this approach was to send alert emails on crossing some reprocessing error threshold (say after 5 times it is reprocessed, send alert email to XYZ).
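The reprocess-flag and alert-threshold logic described above can be sketched as below. This is a minimal illustration, not the poster's actual job: the `FailedDoc` row shape, the threshold of 5, and the `retrigger` stub are all assumptions standing in for the real table, scheduler, and re-publish call:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical stand-in for a row in the failed-documents table.
class FailedDoc {
    String docId;
    int attempts;            // how many times we have reprocessed it
    boolean reprocess = true; // the "reprocess = Y" flag
    FailedDoc(String docId) { this.docId = docId; }
}

class ReprocessJob {
    static final int ALERT_THRESHOLD = 5; // e.g. alert XYZ after 5 attempts

    // One scheduler pass: retrigger every flagged row and return the ids
    // that crossed the alert threshold (caller sends the alert emails).
    static List<String> scan(List<FailedDoc> rows) {
        List<String> alerts = new ArrayList<>();
        for (FailedDoc row : rows) {
            if (!row.reprocess) continue;
            row.attempts++;
            boolean ok = retrigger(row);   // re-publish / re-invoke
            if (ok) {
                row.reprocess = false;     // clear the flag on success
            } else if (row.attempts >= ALERT_THRESHOLD) {
                alerts.add(row.docId);     // threshold crossed: notify
            }
        }
        return alerts;
    }

    // Stub: the real job would re-publish the document or re-invoke
    // the flow service here.
    static boolean retrigger(FailedDoc row) { return false; }
}
```

With a low-volume integration, a full-table scan at a fixed interval like this is cheap, and the attempt counter doubles as the alert trigger.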

And yes, the volume for this integration was quite low, which was a major factor in this choice of design.


Can you explain the details of your reprocessing strategy?