Issue concerning a message from a source system that goes through webMethods Integration Server 6 and on, guaranteed, to Broker 6. My understanding is that IS publishes to the outbound IS document store (in memory?) and the document is then persisted to the Broker.
Consider the case where IS has sent the document to the IS document store, but then the Broker and IS shut down before a) the message gets to the Broker and b) the acknowledgement gets back to the source system. I presume that, because the store is in memory, we lose our message? The ack doesn’t get back to the source, so the source may send a duplicate? This assumes the source doesn’t queue its messages.
I have seen workarounds where the initial service on IS writes the message to a file or database before calling the publish service (which writes to the document store and then to the Broker). Assuming the source isn’t queuing its messages, is there webMethods 6 functionality I’m missing that would ensure we neither lose our message nor get duplicates?
I have looked at the GEAR Transaction Analysis white paper, but it assumes the source system maintains a document queue of its own.
One assumption we can possibly make here is that there will be a few micro- or milliseconds’ difference between the Broker shutdown and the IS shutdown.
From my limited understanding:
a) If the Broker goes down first and the execution of ‘pub.publish:publish’ completes successfully, the document will be persisted in the IS outbound store (this is a file-system-based store, not a memory-based store). The document will be sent to the Broker once the connection between the Broker and the IS is re-established.
b) If the IS goes down before the ack is received from the Broker, you should be able to catch this exception in your IS flow and handle it using custom procedures. I am not sure this necessarily results in a duplicate document.
c) If the Broker goes down before the ack is received by the IS (and after the document is published), the IS flow will probably not receive an exception; the IS will assume that the document was published to the Broker successfully.
d) Doing duplicate detection on the target side (using client-side inbound queuing or a custom mechanism) should potentially resolve issues with duplicate messages.
- Only thorough testing of the different “micro” scenarios can accurately answer some of these questions.
- These are questions for which webMethods support may be able to give a more “webMethods-internal” response.
You can get a copy of a useful technical note called “Optimizing Publish / Subscribe Solutions” from Advantage at this link: http://advantage.webmethods.com/bookshelf/Best_Practices/DetailedProdTechInfo/TechNotes/TNote_PubSubOptomization.pdf
This document states that “The Integration Server continuously monitors its connection to the Broker and alters its publishing behavior if it senses that the Broker is not connected.”
It goes on to say: “If a publishing service publishes a guaranteed document while the Broker is unavailable, the dispatcher routes the document to the outbound store (volatile documents are discarded). When the connection is re-established, the Integration Server automatically sends documents (in batches) from the outbound store to the Broker.”
Note that volatile documents are not routed to outbound storage and that the outbound store (disk-based) is not used when the broker connection exists.
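As a rough illustration, the fallback behavior the tech note describes might be sketched like this. All class and method names here are my own invention, not webMethods internals; this is just a model of the decision logic under the stated assumptions:

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Hypothetical sketch of the publishing behavior described in the tech note:
// guaranteed documents fall back to a disk-based outbound store when the
// Broker is disconnected, while volatile documents are simply discarded.
public class DispatcherSketch {
    public enum StorageType { GUARANTEED, VOLATILE }

    private boolean brokerConnected;
    // Stands in for the file-system-based outbound store.
    private final Deque<String> outboundStore = new ArrayDeque<>();

    public DispatcherSketch(boolean brokerConnected) {
        this.brokerConnected = brokerConnected;
    }

    /** Returns a status string describing what happened to the document. */
    public String publish(String document, StorageType type) {
        if (brokerConnected) {
            return "sent to Broker";         // normal path: outbound store is bypassed
        }
        if (type == StorageType.GUARANTEED) {
            outboundStore.addLast(document); // persisted until the connection returns
            return "written to outbound store";
        }
        return "discarded";                  // volatile documents are not stored
    }

    /** On reconnect, drain the outbound store to the Broker (in batches, per the note). */
    public int reconnectAndDrain() {
        brokerConnected = true;
        int sent = outboundStore.size();
        outboundStore.clear();               // all stored documents forwarded
        return sent;
    }
}
```

The key point the sketch captures is that the outbound store only comes into play while the Broker connection is down, and only for guaranteed documents.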
I think the correct solution depends somewhat on how your external application (client) sends requests to the IS. If it uses a fire-and-forget approach, assuming that IS will never lose a request it sends, then it must also deal with the situation that arises when IS goes down at the instant the request is sent. A better approach is for the client to wait for a functional acknowledgement of some sort to be returned by IS, and only then to assume that the request has been safely received.
In your situation above, this functional ack would only be returned after the broker had acknowledged receipt of the guaranteed document (or the doc had been written to the outbound store).
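A minimal client-side sketch of that functional-ack pattern might look like the following. All names and the retry policy are illustrative assumptions, not a webMethods API; the important detail is that the same message ID is reused across resends, so the receiver can detect duplicates:

```java
import java.util.UUID;
import java.util.function.Function;

// Hypothetical client that keeps resending the *same* request (same message
// ID) until IS returns a functional acknowledgement.
public class AckingClient {
    private final Function<String, Boolean> sendToIs; // true = functional ack received
    private final int maxAttempts;

    public AckingClient(Function<String, Boolean> sendToIs, int maxAttempts) {
        this.sendToIs = sendToIs;
        this.maxAttempts = maxAttempts;
    }

    /** Returns the number of attempts it took, or -1 if no ack was ever received. */
    public int sendUntilAcked(String payload) {
        String messageId = UUID.randomUUID().toString(); // reused across retries
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            if (sendToIs.apply(messageId + "|" + payload)) {
                return attempt; // safe to assume the request was received
            }
        }
        return -1; // give up: the request may or may not have been received
    }
}
```

Because retries carry the same ID, a resend after a lost ack becomes a duplicate the receiving side can recognize rather than a silently different request.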
I think the issue now on the table is the one in which the Broker is unavailable, the guaranteed document is written to the outbound store, but the IS fails before a functional ack can be successfully delivered to the external client.
In this scenario, it is possible for the external client to resend a request which could lead to a duplicate document being sent to the broker. I think there are numerous approaches to dealing with this unlikely event including implementing some form of duplicate checking for the inbound client request based on a transaction ID or message ID.
To arrive at the correct solution you should consider both the risk (or cost) of processing a duplicate message as well as the effort required to create an adequate duplicate detection capability for inbound requests. You should also carefully consider the negative performance impacts of any solution that includes the client-side queuing and duplicate document detection features of IS.
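For what it’s worth, the core of an ID-based duplicate check is small; the cost lies in making the store of seen IDs durable, shared, and expirable. A hypothetical in-memory sketch (in practice the seen-ID set would live in a database table with an expiry policy):

```java
import java.util.HashSet;
import java.util.Set;

// Hypothetical duplicate detector for inbound client requests, keyed on a
// client-supplied message/transaction ID. Names are illustrative only.
public class DuplicateDetector {
    private final Set<String> seenIds = new HashSet<>();

    /** Returns true if the request should be processed, false if it is a duplicate. */
    public boolean accept(String messageId) {
        return seenIds.add(messageId); // Set.add() is false when the ID was already present
    }
}
```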