Warning: Simple question about "Guaranteed Delivery"

[we are running webMethods product suite 6.5.1]
I have this simple requirement:

“Provide a service that processes incoming documents in a reliable manner, while preserving the order in which they arrive.”

If you think this sounds simple, please read the following and let me know where I go wrong…

I tried the following:

  • defined a publishable document (guaranteed) with a single field "input"
  • defined a trigger with the following settings:
    … process mode = serial
    … suspend on error = true
    … deliver until = max attempts reached
    … max attempts = 2
  • a trigger service that fires when a document arrives and:
    … logs the value of the "input" field in the document
    … throws "exceptionForRetry" when the value is <= 10
    … finishes successfully when the value is > 10
    … audit properties set to: always
Then I published two documents. One with value 5 and one with value 15.

This is what I expect to happen: the document with value 5 should cause the trigger service to fail and the trigger to suspend after 2 retries. The document with value 15 should be held back until 5 has gone through successfully. I expect the trigger to remain suspended, and that I will need to fix the service before the document with value 5 can go through successfully.

This is what actually happened, according to the log:

  • the service attempts to process document 5 three times (the initial attempt + 2 retries)
  • then the trigger is suspended
    so far so good…
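To make the observed behavior concrete, here is a toy simulation in plain Python (no webMethods APIs; all names are made up for illustration) of a serial trigger with "max attempts reached", under the assumption, confirmed later in this thread, that a document is acknowledged off the queue once its retries are exhausted:

```python
from collections import deque

MAX_RETRIES = 2  # retries after the initial attempt, matching the trigger settings


def service(doc):
    """Toy trigger service: fails with 'exceptionForRetry' for values <= 10."""
    if doc["input"] <= 10:
        raise RuntimeError("exceptionForRetry")
    return "done"


def run_trigger(queue, suspended=False):
    """Serial trigger: processes one document at a time, in arrival order."""
    log = []
    while queue and not suspended:
        doc = queue[0]
        for _attempt in range(1 + MAX_RETRIES):  # initial attempt + retries
            try:
                service(doc)
                log.append(("processed", doc["input"]))
                break
            except RuntimeError:
                log.append(("failed", doc["input"]))
        else:
            # Max attempts reached: the document is ack'd off the broker
            # and the trigger suspends -- which is why value 5 "disappears".
            suspended = True
        queue.popleft()
    return log, suspended


queue = deque([{"input": 5}, {"input": 15}])
log, suspended = run_trigger(queue)
# value 5 fails three times, the trigger suspends, value 15 stays queued
```

Reactivating the trigger here (calling `run_trigger(queue)` again) processes the document with value 15, mirroring what the log showed.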

I then went to Broker admin and reactivated the trigger. The log showed me that it processed the document with value 15.

Questions:

  1. What happened to the document with value 5 that caused the trigger to be suspended?
  2. I haven't got My webMethods set up yet, but I understand that I can use it to resubmit documents back to the broker. However, if I resubmit the failed documents back into the queue, how do I preserve the order in which the incoming documents are processed? It looks like as soon as I reactivate the trigger, it processes the document with value 15, which arrived later than the one with value 5 but is now processed earlier.

Any response would be much appreciated.

I think you are close; good detail in the question, by the way. Look at your trigger properties: what do you have Retry failure behavior set to? It should be Suspend and retry later. The suspend on error property does not do the same thing as the Retry failure behavior property.

I don’t have anything to add to what Mark provided, but given the relative rarity of posts that provide the right detail when asking for help, I had to jump in and say that this is an excellent example of how to ask for help in a way that maximizes useful responses.

Remember: I don’t have the option of “Suspend and retry later”… We are using 6.5.1, not 6.5.2.

Also, if I were to use the “Suspend and retry later” option, would that give me the opportunity to look at the message that causes the problems and possibly “change” it so it is processed successfully?

What are the options with 7.1.1? We are in the middle of an upgrade (probably 2 months away) to an HA environment based on 7.1.1. Does 7.1.1 give me anything extra besides “Suspend and retry later”?

Okay, since you have a service exception, the way you have it set up now will work. Have it suspend on error, which means the message that causes the error will be ack’d back to the broker and deleted. However, you can resubmit (and edit) it through MWS. You can do that before you re-enable the trigger, so your document ordering will be preserved.

In order to do that you must have auditing turned on with the save pipeline option enabled. Resubmission of an audited service through MWS does not go through the broker, so your order will be good to go.
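The order-preserving recovery described above can be sketched as follows. This is plain Python, not webMethods code; the `audit_store`, `broker_queue`, and `process` names are invented for illustration, but the sequence is the one recommended: resubmit the failed document from the audit store (bypassing the broker) first, and only then re-enable the trigger so the queued documents drain:

```python
from collections import deque


def process(doc):
    """The 'fixed' trigger service: now succeeds for all values."""
    return f"applied {doc['input']}"


# State after the suspension: value 5 sits in the audit store (saved
# pipeline); value 15 is still queued on the broker.
audit_store = [{"input": 5}]
broker_queue = deque([{"input": 15}])

processed = []

# Step 1: resubmit from the audit store directly to the service.
# MWS resubmission bypasses the broker entirely.
for doc in audit_store:
    processed.append(process(doc))
audit_store.clear()

# Step 2: only now re-enable the trigger, letting queued documents drain
# in arrival order.
while broker_queue:
    processed.append(process(broker_queue.popleft()))
```

Because step 1 completes before step 2 begins, the document with value 5 is applied before the document with value 15, preserving arrival order.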

Ok, thanks, that is the bit that I missed… Since I haven’t got MWS running properly here yet, I must have misinterpreted “resubmit”: I thought it meant “resubmit to the broker”, but it just means re-executing the same trigger service without having to enable the trigger.

Ok, that will work

What is the chance that once the message is sent to the service and the service fails, it won’t be written to the audit log? It is not on the broker anymore at that point, so that means we lose it, right?

Is this possible?

It will always write to the audit data store, assuming you have the flow service and auditing set up correctly. You can also turn on document logging, although that adds a lot of overhead in my experience.

You haven’t gone into details about the integration solution you are putting in, but I gather that it is sensitive to message ordering. There are other patterns that might be more appropriate depending on what you are trying to do.

The requirements are simple:

  • we get delta change messages from one system and have to apply them to another system.
  • timing is not as critical as the fact that the messages are applied (guaranteed) and in order of arrival.

In my last post, I was worried about the following scenario:
the broker invokes the trigger service, retries it up to the max retry count, then the document is off the broker and it is up to the service to write it to the audit log. What if this fails (a problem with the IS repository)? What are the chances that the message is lost between the broker and the IS audit store?

The IS is responsible for writing to the audit sub-system, and it is pretty robust. If you are using a database-backed repository, the IS writes to local disk first and then an async thread picks it up and commits it to the database.
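The write-ahead pattern described here (durable local write first, asynchronous database commit second) can be sketched in a few lines of plain Python. This is not webMethods code; the `spool`, `audit`, and `committer` names are invented, and the "database" is just a list, but the structure shows why the synchronous disk write protects the message even if the database commit is delayed:

```python
import json
import os
import queue
import tempfile
import threading

db = []                    # stands in for the database-backed audit store
pending = queue.Queue()    # hand-off between the caller and the committer


def committer():
    """Background thread: commits spooled records to the 'database' in FIFO order."""
    while True:
        path = pending.get()
        if path is None:  # shutdown sentinel
            break
        with open(path) as f:
            db.append(json.load(f))
        os.remove(path)  # safe to delete only once committed


def audit(record, spool_dir):
    """Synchronous, durable step: persist to local disk, then hand off."""
    path = os.path.join(spool_dir, f"{record['id']}.json")
    with open(path, "w") as f:
        json.dump(record, f)
    pending.put(path)  # async step: the committer picks it up later


spool = tempfile.mkdtemp()
t = threading.Thread(target=committer)
t.start()
audit({"id": 1, "input": 5}, spool)
audit({"id": 2, "input": 15}, spool)
pending.put(None)
t.join()
```

A single committer thread draining a FIFO queue also keeps the commit order identical to the audit order, which matters for the ordering requirement in this thread.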

You can also turn on document logging, which gives you two separate mechanisms for storing that message, although the first should be enough.