Can a trigger keep 'bad' documents on the broker queue and process others?

I have an integration scenario where an IS trigger processes documents off a broker queue.

Some documents on the queue may have data issues that cause the trigger service to fail. (Let’s call them ‘bad’ documents.) If the trigger hits a ‘bad’ document, I would like it to move ahead and process the other ‘good’ documents, but keep the ‘bad’ document queued on the broker. This is so we can later browse/export/delete/reinsert bad documents at leisure (through the MWS broker management interface) without holding up processing of ‘good’ docs, even if IS is down.

For example, assuming this initial scenario:


[b]Broker Queue[/b] (good1, good2, bad3, good4) [b]----> IS[/b]

… I want the trigger to process all the ‘good’ docs but keep the ‘bad’ document on the broker queue:


[b]Broker Queue[/b] (bad3) [b]----> IS[/b] (good1, good2, good4)

However, the trigger properties only offer two behaviors for ‘retry failure’ (i.e. the trigger service exhausts all retries and still fails), and neither gives the end result above:

[b]Throw service exception[/b]
The ‘bad’ document is removed from the broker queue and is essentially lost. The pub/sub developer guide refers to the possibility of persisting the pipeline and using Monitor for restoring/reprocessing the bad document. However, I am not sure how MWS fits into this (I understand there is also a DB performance impact to using MWS in this manner).

[b]Suspend and retry later[/b]
This is the best option for guaranteed delivery, but the problem is that any ‘bad’ document freezes processing of the entire queue (since the trigger is suspended). Until that document is cleared, the good documents are held up.

The documents do not have to be processed in order.

Can anyone suggest a good solution for this issue?

Side note: this new webMethods integration replaces a legacy integration that would ‘move ahead’ and process the ‘good’ docs, but automatically keep the ‘bad’ docs in a separate ‘bad-doc’ queue. It would then empty the contents of the ‘bad-doc’ queue back into the main queue the next day.
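
One way to mimic that legacy pattern in IS would be to catch the failure inside the trigger service and re-publish the offending document to a side document type with its own subscribing queue. The sketch below is only an illustration of the idea, not our actual integration: the document type myApp:BadOrder, the pipeline key orderDoc, and the processing step are all placeholders.

[code]
// Sketch of an IS Java trigger service body (class wrapper generated by Developer/Designer).
import com.wm.data.IData;
import com.wm.data.IDataCursor;
import com.wm.data.IDataFactory;
import com.wm.data.IDataUtil;
import com.wm.app.b2b.server.Service;
import com.wm.app.b2b.server.ServiceException;

public static final void processOrder(IData pipeline) throws ServiceException {
    IDataCursor cursor = pipeline.getCursor();
    IData orderDoc = IDataUtil.getIData(cursor, "orderDoc"); // the published document (placeholder key)
    cursor.destroy();

    try {
        // ... normal processing of a 'good' document goes here ...
    } catch (Exception e) {
        // A 'bad' document: instead of letting the trigger fail (which would
        // either discard the doc or suspend the whole queue), re-publish it to
        // a side document type that a separate 'bad-doc' queue subscribes to.
        IData pubInput = IDataFactory.create();
        IDataCursor pc = pubInput.getCursor();
        IDataUtil.put(pc, "documentTypeName", "myApp:BadOrder"); // placeholder type
        IDataUtil.put(pc, "document", orderDoc);
        pc.destroy();
        try {
            Service.doInvoke("pub.publish", "publish", pubInput);
        } catch (Exception pubFailure) {
            // If even the re-publish fails, surface the original error so the
            // trigger's normal retry/suspend behaviour still applies.
            throw new ServiceException("Could not re-publish bad document: " + e.getMessage());
        }
    }
}
[/code]

With this approach the trigger service never ‘fails’, so the main queue keeps draining, while the bad documents accumulate on the side queue (e.g. one whose trigger is left disabled) where they can be browsed/exported/resubmitted from MWS later.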

I think your best bet would be to turn on service auditing (you can turn on document logging as well) with the save-pipeline option. When your service throws an error, it will save the input pipeline (the message) and then allow you to edit, resubmit, or save it to a file, all via MWS.

Choosing the error-only option would reduce the amount of saved data, i.e. it would only save the input pipeline when you throw a service exception. You didn’t say which version you are on, but I’ll assume 6.5. MWS in its 6.5 version is not the best piece of software :mad: . It is much improved in the 7.1 release.
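
To make that concrete: all the trigger service has to do is fail loudly on a bad document so the error-only audit option captures its input pipeline. Below is a minimal sketch of such a Java trigger service; the pipeline keys (orderDoc, orderId) are made up for illustration, and the service’s Audit properties would be set to log on error and include the pipeline on errors only.

[code]
// Sketch of an IS Java trigger service body -- pipeline keys are illustrative only.
import com.wm.data.IData;
import com.wm.data.IDataCursor;
import com.wm.data.IDataUtil;
import com.wm.app.b2b.server.ServiceException;

public static final void handleOrder(IData pipeline) throws ServiceException {
    IDataCursor cursor = pipeline.getCursor();
    IData orderDoc = IDataUtil.getIData(cursor, "orderDoc"); // the published document (placeholder key)
    cursor.destroy();

    String orderId = null;
    if (orderDoc != null) {
        IDataCursor dc = orderDoc.getCursor();
        orderId = IDataUtil.getString(dc, "orderId"); // placeholder field
        dc.destroy();
    }

    if (orderId == null || orderId.length() == 0) {
        // With Audit set to "Log on: Error only" and "Include pipeline: On errors
        // only", this exception makes IS persist the input pipeline so the bad
        // document can be viewed, edited and resubmitted from MWS / Monitor.
        throw new ServiceException("Bad document: missing orderId");
    }
    // ... normal processing of the good document ...
}
[/code]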

If you need long-term storage of both good and bad data, then my personal opinion is that MWS (or rather the auditing tables that MWS gives you access to) is not the way to go.

Thanks Mark. Yes, that’s what I’d been told as well - MWS isn’t that great at storing pipeline data.

The MWS I’m working on now is from Fabric 7.0 (not 7.1) - any thoughts on advantages of MWS 7.1 over 7.0?

Hi All,
Not sure if this is the right thread, but since it is an MWS issue…

We are using IS and MWS version 7.1. Some of the documents are not logged properly, i.e. the document ID is incorrect. Instead of the unique document ID we see some weird string, and because of this we are unable to search for documents by doc ID.

The interesting thing is that this happens only for documents logged from a few particular IS instances, and even then inconsistently. Other IS instances are always OK. All fixes, extended settings, etc. appear to be the same across them.

Is there a specific setting we might be missing? Any suggestions on how to investigate this?

Thanks