I have an integration scenario where an IS trigger processes documents off a broker queue.
Some documents on the queue may have data issues that cause the trigger service to fail. (Let's call them 'bad' documents.) If the trigger hits a 'bad' document, I would like it to move ahead and process the other 'good' documents, but keep the 'bad' document queued on the broker. This is so we can later browse/export/delete/reinsert bad documents at leisure (through the MWS broker management interface), while not holding up processing of 'good' docs, even if IS is down.
For example, assuming this initial scenario:
[b]Broker Queue [/b] (good1,good2,bad3,good4) [b]----> IS [/b]
… I want the trigger to process all the 'good' docs but keep the 'bad' document in the broker queue:
[b]Broker Queue[/b] (bad3) [b]---->IS[/b] (good1,good2,good4)
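To make the desired end state concrete, here is a plain-Java simulation of the skip-and-retain behavior I'm after. This is not Broker or IS API code; the class, the `isBad` check, and the document names are all made up for illustration:

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;

public class SkipAndRetain {
    // Hypothetical stand-in for "the trigger service fails on this document"
    static boolean isBad(String doc) {
        return doc.startsWith("bad");
    }

    /** One pass over the queue: deliver good docs, requeue bad ones so they stay behind. */
    static List<String> process(Deque<String> queue) {
        List<String> delivered = new ArrayList<>();
        int n = queue.size();
        for (int i = 0; i < n; i++) {
            String doc = queue.poll();
            if (isBad(doc)) {
                queue.offer(doc);      // keep the bad doc on the queue for later handling
            } else {
                delivered.add(doc);    // good doc is processed and acknowledged
            }
        }
        return delivered;
    }

    public static void main(String[] args) {
        Deque<String> queue = new ArrayDeque<>(List.of("good1", "good2", "bad3", "good4"));
        System.out.println("processed=" + process(queue) + " queued=" + queue);
        // processed=[good1, good2, good4] queued=[bad3]
    }
}
```

The point of the sketch is that processing of good3/good4 is never blocked by bad3; bad3 simply stays queued.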
However, the trigger properties offer only two behaviors on trigger 'retry failure' (i.e., the trigger service exhausts all retries and still fails), and neither gives the end result above:
[b]Throw service exception[/b]
The "bad document" is removed from the broker queue and is essentially lost. The pub/sub developer guide refers to the possibility of persisting the pipeline and using Monitor for restoring/reprocessing the bad document. However, I am not sure how MWS fits into this (I understand there is also a DB performance impact with using MWS in this manner).
[b]Suspend and retry later[/b]
This is the best option for guaranteed delivery, but the problem is that any "bad document" freezes processing of the entire queue (since the trigger is suspended). Until that document is cleared, the good documents are held up.
The documents do not have to be processed in order.
Can anyone suggest a good solution for this issue?
Side note: this new webMethods integration replaces a legacy integration that would 'move ahead' and process the 'good' docs, but automatically keep the 'bad' docs in a separate 'bad-doc' queue. It would automatically empty the contents of the 'bad-doc' queue back into the main queue the next day.
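For clarity, the legacy pattern can be sketched as a plain-Java simulation (again, hypothetical names and an `isBad` stand-in for the real failure condition, not Broker code):

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;

public class BadDocQueue {
    // Hypothetical stand-in for "processing this document fails"
    static boolean isBad(String doc) { return doc.startsWith("bad"); }

    final Deque<String> mainQueue = new ArrayDeque<>();
    final Deque<String> badQueue  = new ArrayDeque<>();   // separate 'bad-doc' queue

    /** Process everything on the main queue; divert failures to the bad-doc queue. */
    List<String> processMainQueue() {
        List<String> delivered = new ArrayList<>();
        while (!mainQueue.isEmpty()) {
            String doc = mainQueue.poll();
            if (isBad(doc)) {
                badQueue.offer(doc);   // park the bad doc instead of blocking the queue
            } else {
                delivered.add(doc);
            }
        }
        return delivered;
    }

    /** Next-day job: empty the bad-doc queue back into the main queue for a retry. */
    void flushBadQueue() {
        while (!badQueue.isEmpty()) {
            mainQueue.offer(badQueue.poll());
        }
    }
}
```

The key property is that a failure never suspends the main queue; bad docs are retried on the next daily flush instead.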