Is it possible to execute some database operations and publish a document to the Broker within a single transaction? I’d like to read a few records from the DB, process them in IS, publish them to the Broker, and then change their status in the DB. But I need to be sure that if the commit fails, the documents are not delivered to subscribers.
If such functionality is unavailable, how can I avoid publishing duplicate records? As far as I understand, the Notification Adapter utilizes the trackId field in the document’s envelope. Unfortunately, this field is unavailable to developers. Are there any other solutions?
If you publish a document and then attempt the commit, I do not see how you can prevent the subscribers from receiving that document. What you can do instead is: if the commit is successful, then publish the document.
It would be quite simple if the Broker were compliant with JMS 1.1 and supported JTA. Thanks to its hub-and-spoke architecture, it would be easy to hold a document back from distribution to subscribers until the global transaction is committed. As it stands, there is always a hazard window between the commit and publish operations. The only solution I see is to set the delayUntilServiceSuccess property of the publish function and handle errors carefully.
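Since no global transaction spans both the database and the Broker, the safest ordering is the one suggested earlier in the thread: commit first, publish afterwards, and treat a failed publish as something to retry rather than lose. A minimal sketch in plain Java, where `Db` and `Publisher` are hypothetical stand-ins for the JDBC commit and the Broker publish (they are not wM APIs):

```java
import java.util.ArrayList;
import java.util.List;

public class CommitThenPublish {

    // Hypothetical stand-ins for the DB commit and the Broker publish.
    interface Db { void commit() throws Exception; }
    interface Publisher { void publish(String doc) throws Exception; }

    /** Documents whose publish failed after a successful commit. */
    static List<String> retryQueue = new ArrayList<>();

    /**
     * Commit the DB work first; publish only once the commit succeeded.
     * If the publish itself fails, record the document for a later retry
     * sweep instead of losing it. This gives at-least-once publication
     * and never publishes before the commit is durable.
     */
    static boolean commitThenPublish(Db db, Publisher broker, String doc) {
        try {
            db.commit();                 // step 1: make the DB state durable
        } catch (Exception e) {
            return false;                // nothing published, nothing to undo
        }
        try {
            broker.publish(doc);         // step 2: publish only after commit
        } catch (Exception e) {
            retryQueue.add(doc);         // hazard window: queue for retry
        }
        return true;
    }
}
```

Note the trade-off: this ordering closes the "subscribers see an uncommitted document" hazard, but a crash between commit and publish means the retry sweep (and subscriber-side deduplication) must pick up the slack.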
The second problem is the elimination of duplicate documents (each published once). IS can eliminate them by storing the trackIDs of processed documents and checking every new document against the stored history. This is how the Notification Adapter works. However, I am not able to set the trackID manually, because it is always overwritten by the IS publish function. I wonder if there are any other solutions to this issue.
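One workaround for the duplicate problem is to ignore the envelope trackId entirely and deduplicate on a business key carried in the document body itself. A minimal subscriber-side sketch (an in-memory set stands in for what would be a DB table in practice; the names are illustrative, not wM APIs):

```java
import java.util.HashSet;
import java.util.Set;

public class DuplicateFilter {
    // In production this would be a DB table keyed by the business key,
    // so the history survives an IS restart; a set keeps the sketch simple.
    private final Set<String> processed = new HashSet<>();

    /**
     * Returns true if the document with this business key should be
     * processed, false if it duplicates one already handled. The key is
     * carried in the document body (e.g. an order number), since the
     * envelope trackId cannot be set from IS.
     */
    public boolean firstTime(String businessKey) {
        return processed.add(businessKey);  // Set.add() is false on duplicates
    }
}
```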
What you are looking for is an exactly-once guaranteed-processing design.
The wM platform as it stands today provides guaranteed delivery in
all cases - that is, the messaging system will deliver a message to the
intended service for processing at least once. From a processing
perspective, the IS guarantees that at least one attempt to execute the
service associated with the document will be made. Under certain
conditions, further attempts will be made if the original attempt fails.
However, in many cases coding/development will be required on the part of the user/developer in order to obtain the desired guaranteed processing behavior. That being said, let’s get on to your questions…
Are you looking for

Case 1:
1) Do some DB operations.
2) Publish the document.
3) The trigger subscribes to the document.
4) The processing service for the document runs.
If it fails -> roll back to step 1.
If it succeeds -> commit from step 1.
or
Case 2:
1) Do some DB operations.
2) Publish the document.
3) The trigger subscribes to the document.
4) The processing service of the document executes successfully.
5) Do some DB operations.
If anything fails -> roll back to step 1.
If it succeeds -> commit from step 1.
Both of these cases involve custom coding to handle your transactions.
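The "roll back till 1" behavior in both cases can be hand-coded as compensation logic: each completed step registers an undo action, and a failure runs the undos in reverse order. This is a generic pattern, not a built-in wM facility; the sketch below assumes you can supply an undo for every step:

```java
import java.util.ArrayDeque;
import java.util.Deque;

public class CompensatingFlow {
    interface Step { void run() throws Exception; }

    /**
     * Runs steps in order; if any step fails, runs the compensators of
     * the steps that already completed, in reverse ("roll back till 1").
     * Returns true only on full success ("commit from step 1").
     */
    static boolean run(Step[] steps, Step[] undo) {
        Deque<Step> done = new ArrayDeque<>();
        for (int i = 0; i < steps.length; i++) {
            try {
                steps[i].run();
                done.push(undo[i]);          // remember how to undo this step
            } catch (Exception e) {
                while (!done.isEmpty()) {    // compensate in reverse order
                    try { done.pop().run(); } catch (Exception ignore) {}
                }
                return false;
            }
        }
        return true;
    }
}
```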
Case 1:
Once the document is published successfully, if the subscribing service exits successfully there is no problem: the trigger dispatcher acknowledges the message in the trigger queue, which then acknowledges the Broker (if the document was guaranteed), and the document is removed from the Broker queue.
a) If the service fails, the trigger dispatcher will look at the type of exception associated with the service failure. In IS 6, there are two kinds of exceptions that can be returned from a service failure. The first is the default Service Exception (e.g., using the Flow “Exit” operation with a value of $failure). This exception is thrown whenever something is functionally wrong. After logging the error, the trigger dispatcher acknowledges the trigger queue, which causes the Broker to be acknowledged, so the document is not redelivered.
b) The Runtime Exception is signaled by the IS 6 adapter runtime in conjunction with adapters that are able to determine that an adapter service failed due to a transient error - an error that is temporal in nature, such as the unavailability of a resource due to network issues - and that may be resolved if the service is retried later. When a runtime exception is received, the trigger dispatcher negatively acknowledges the trigger queue, so the document is redelivered. This Adapter Runtime Exception cannot be signaled from flow services unless you use a custom adapter built with the ADK.
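The dispatcher behavior in a) and b) boils down to: retry on a transient failure, acknowledge and stop on a functional failure. A generic sketch of that decision logic in plain Java (`TransientException` is a hypothetical marker class, not a wM type):

```java
public class RetryDispatcher {

    /** Hypothetical marker for a temporal error (resource down, network). */
    static class TransientException extends RuntimeException {
        TransientException(String msg) { super(msg); }
    }

    interface Handler { void handle(String doc); }

    /**
     * Mirrors the trigger dispatcher's logic: a transient error causes a
     * negative ack and a later retry; any other failure is logged and the
     * document is acknowledged (removed) so it is not redelivered.
     * Returns the number of attempts actually made.
     */
    static int dispatch(Handler service, String doc, int maxRetries) {
        int attempts = 0;
        while (true) {
            attempts++;
            try {
                service.handle(doc);
                return attempts;                         // success: ack
            } catch (TransientException te) {
                if (attempts > maxRetries) return attempts;  // give up
                // nack: document stays queued, loop retries it
            } catch (RuntimeException re) {
                return attempts;   // functional failure: ack, no retry
            }
        }
    }
}
```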
Based on these signals, we can roll back or commit the work of the original flow service.
Case 2: It requires some coding at both the subscriber end and the publishing end to achieve exactly-once guaranteed processing.
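On the publishing end, that coding can be as simple as stamping each document with your own idempotency key before publishing (since the envelope trackId cannot be set), so the subscriber has something stable to deduplicate on. The field name `dedupKey` below is purely illustrative:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.UUID;

public class KeyedPublisher {
    /**
     * Returns a copy of the document with a dedupKey field added if one
     * is not already present. A natural business key (e.g. an order
     * number) is preferable when available; a UUID is the fallback.
     */
    static Map<String, Object> withKey(Map<String, Object> doc) {
        Map<String, Object> out = new HashMap<>(doc);
        out.putIfAbsent("dedupKey", UUID.randomUUID().toString());
        return out;
    }
}
```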