pub.publish:publish flow of execution


We were trying to understand what exactly happens when pub.publish:publish is called in a flow service. If one publishes to the Broker, does the following happen:

  • IS hands the document to the dispatcher
  • The flow from which it was published continues executing its next step
  • The dispatcher then asynchronously sends the document to the Broker

Or is the next step in the flow service not executed until the document has been sent to the Broker and the Broker has sent back an ack?



Once you invoke the pub.publish:publish service in the flow, the publishable document will be pushed to the Broker, and whoever subscribes to this document will get triggered. Of course, the next step in the flow will be executed as long as the publish service does not throw any errors.

Sorry if I understood your question wrongly.


I think you can control when exactly it will be published on the Broker by setting the delayUntilServiceSuccess flag appropriately. This is one of the inputs to the pub.publish:publish service. If you set it to “true”, it will delay publishing until after the top-level service executes successfully. Look in the Built-In Services guide for more info.


Actually, what we think is happening when we call pub:publish is that it hands the document to the dispatcher and continues executing the next step in the flow without waiting for the dispatcher to successfully send it to the Broker and get back an ack.

So the flow service may execute and exit successfully, but if there is a transient error in IS or the dispatcher, the document is lost (even if we use guaranteed documents).

In short, we wanted to ensure in our flow service (calling pub:publish) that the document has been sent to and persisted on the Broker before exiting, but it seems that is not the case. Anyway, we would prefer to believe that the above is incorrect.


Sorry, I understood the question wrongly.
You have a valid point there.

IS uses a singleton class called ControlledPublishOnSuccess that takes care of publish-on-success. Whenever publish-on-success is used, the document is “queued” into this class by calling its addDocument method. The setting watt.server.control.maxPublishOnSuccess controls the maximum number of documents that can be in this “queue”. These “queues” are not actually Broker queues.

So IMHO there is a chance that the published document could be “lost” before the Broker is “aware” of its existence. I think it does not matter whether the document is of “persistent” storage type. I can think of many things that could go wrong (like out-of-memory exceptions) while the document is handled by these classes before it is actually published on the Broker.
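The bounded in-memory queue described above can be sketched roughly as follows. This is a hypothetical Java model for illustration only, not the actual ControlledPublishOnSuccess implementation; the maxDocuments limit (standing in for watt.server.control.maxPublishOnSuccess) and the addDocument name simply mirror the thread's description:

```java
import java.util.ArrayDeque;
import java.util.Queue;

// Hypothetical, simplified model of a publish-on-success queue.
// NOT the real webMethods code; it only mirrors the behavior described above.
public class PublishOnSuccessQueue {
    private final int maxDocuments; // analogous to watt.server.control.maxPublishOnSuccess
    private final Queue<String> pending = new ArrayDeque<>();

    public PublishOnSuccessQueue(int maxDocuments) {
        this.maxDocuments = maxDocuments;
    }

    // Queue a document to be published after the top-level service succeeds.
    public synchronized boolean addDocument(String doc) {
        if (pending.size() >= maxDocuments) {
            return false; // queue full: the publish is rejected, not silently dropped
        }
        return pending.offer(doc);
    }

    // Called when the top-level service completes successfully:
    // drain the queue, handing every pending document to the broker.
    public synchronized int publishAll() {
        int published = 0;
        while (!pending.isEmpty()) {
            pending.poll(); // in the real server, this is the hand-off to the broker
            published++;
        }
        return published;
    }
}
```

Note that everything in `pending` lives only in memory until publishAll() runs, which is exactly the window in which a crash could lose documents the Broker never saw.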

Nice catch though… it kind of makes you think twice about terms like persistent storage, guaranteed delivery, etc.


Just to make sure: does your explanation below

“there is a chance that the published document could be “lost” before the broker is “aware” of the existence of the published document”

also apply when delayUntilServiceSuccess is set to its default of “false”?

Your explanation has led me to question the entire design of our code, where we receive documents in a flow service, call pub:publish, and return an ack signifying that the document was successfully received and persisted. But per the explanation, calling pub:publish and the document actually being sent to and persisted on the Broker is an asynchronous activity.

I guess if you set delayUntilServiceSuccess to false, it actually does the publish before it starts executing the next statement in the flow. Here is what led me to believe that.

I decompiled Dispatcher.class (unjarred from server.jar) and looked at the code. In the code, if delayUntilServiceSuccess is false, it actually publishes before returning. So if something were to go wrong at this point in the dispatcher before it is able to publish, it would throw an exception. If there is no exception by the time pub.publish:publish finishes, I think it is safe to say that the document has actually reached the Broker. I may be wrong; I am just going by how much I could dig into the decompiled code… but it is interesting to see what actually goes on under the wraps.
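The behavior described here — publish happens before the call returns, and failures surface as exceptions in the calling flow — can be sketched as follows. This is a simplified, hypothetical model, not the decompiled Dispatcher code; BrokerConnection and its send method are invented names for illustration:

```java
// Hypothetical sketch of the synchronous path described above:
// when delayUntilServiceSuccess is false, publish() does the hand-off
// before returning, so any failure surfaces in the calling flow.
public class SyncPublishSketch {
    // Invented interface standing in for the IS-to-Broker connection.
    public interface BrokerConnection {
        void send(String doc) throws Exception; // throws if the broker is unreachable
    }

    public static void publish(BrokerConnection broker, String doc) throws Exception {
        broker.send(doc); // blocks until the hand-off completes; exceptions propagate
        // Only reached if the document was accepted: the next flow
        // step can safely assume the hand-off actually happened.
    }

    public static boolean tryPublish(BrokerConnection broker, String doc) {
        try {
            publish(broker, doc);
            return true;  // safe to run the next flow step
        } catch (Exception e) {
            return false; // the flow sees the error instead of silently losing the doc
        }
    }
}
```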

Dear all,
Just to make my point: if the publish service is successful, it is guaranteed that the message will be delivered to the Broker even if the Broker is down or IS crashes just after the publishing service returns OK (of course, only for guaranteed documents). This is implemented using IS-level document storage, which ensures that even if the Broker is down, the document is written to disk and then an ack is returned to the publishing service; otherwise an exception is thrown. So we are sure that guaranteed documents are safe (almost).

Atri, that description supports my understanding of IS/Broker interaction as well. wM refers to this as “client-side queueing” (though I think the feature is misnamed: the “real” client is the system that talks to IS, and if IS is down, that system needs to implement some sort of queueing as well).

The IS server uses a process called the dispatcher to handle interaction with the Broker. The dispatcher works differently depending on how you have configured your Integration Server. It handles putting documents into Broker queues and pulling documents out of them. If you have client-side queueing turned on (you really don't need it if you are using the Broker, and it doesn't work with clustering), then the document is persisted to disk on the IS side after it is retrieved from the Broker. I don't see a lot of value in client-side queueing when using the Broker: you are, in effect, maintaining persistent queues in two different places. Overhead, overhead, overhead.

As far as publishing the document to the Broker goes, the IS server will still persist the document to disk (the outbound document store) if it cannot contact the Broker (regardless of the client-side queue setting). It will then preserve first-in/first-out order when the Broker comes back up by pulling from the outbound document store before processing new requests.
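A minimal sketch of that outbound-store behavior, using a purely in-memory stand-in for the on-disk store (the class and method names here are invented for illustration, not the webMethods API):

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;

// Hypothetical model of the outbound-document-store behavior described above:
// if the broker is down, documents are persisted locally; when it comes back,
// the backlog is drained in FIFO order before new documents are sent directly.
public class OutboundStoreSketch {
    private final Deque<String> outboundStore = new ArrayDeque<>(); // stands in for the on-disk store
    private final List<String> delivered = new ArrayList<>();       // what the broker has received
    private boolean brokerUp = true;

    public void setBrokerUp(boolean up) { brokerUp = up; }

    public void publish(String doc) {
        // Queue to the store if the broker is down, OR if a backlog exists
        // (new documents must not jump ahead of the backlog: FIFO).
        if (!brokerUp || !outboundStore.isEmpty()) {
            outboundStore.addLast(doc);
            return;
        }
        delivered.add(doc); // broker reachable and no backlog: send directly
    }

    // Called when the broker comes back: drain the backlog first, in order.
    public void drain() {
        if (!brokerUp) return;
        while (!outboundStore.isEmpty()) {
            delivered.add(outboundStore.pollFirst());
        }
    }

    public List<String> delivered() { return delivered; }
}
```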

The original question brings up a critical element of designing integrations. Failure points exist in many places; knowing where they are and how to handle them is the job of the architect, not the software. Some are riskier than others, and these have to be weighed against cost and probability. I'll give you the following example: when the IS server dispatcher retrieves a document from a Broker queue, it hands it off to an IS trigger, which in turn invokes a service. After that service successfully executes, an acknowledgment is sent back to the Broker via the dispatcher to delete the document from the Broker queue. It is possible (it doesn't happen often, or perhaps at all, but it could) for the service to complete successfully and then have the IS server crash before the acknowledgment is sent back to the Broker.

In this case, when the IS server comes back up, the document will be reprocessed. This could be bad depending on how your integration is architected. Numerous examples of this can be found throughout typical integrations, including issues with your sources and targets. Knowing where all of these situations are in a given integration, and the risk associated with them, is a critical component of the integration design.
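One common defense against this redelivery window is to make the triggered service idempotent: record the IDs of documents already processed, so a redelivered document is detected and skipped. The sketch below is a generic pattern, not a webMethods API; in production the processed-ID set would live in durable storage rather than memory:

```java
import java.util.HashSet;
import java.util.Set;

// Hypothetical sketch of an idempotent trigger service: a redelivered
// document (same ID) is recognized and skipped instead of reprocessed.
public class IdempotentTriggerSketch {
    private final Set<String> processedIds = new HashSet<>(); // in production: durable storage

    // Returns true if the document was processed, false if it was a duplicate.
    public boolean handle(String documentId) {
        if (!processedIds.add(documentId)) {
            return false; // already seen: skip side effects, just re-acknowledge
        }
        // ... perform the real integration work here ...
        return true;
    }
}
```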


I misunderstood the facility. I thought “client-side queueing” referred to persisting documents to be published, not documents that have been retrieved from the Broker. I was wrong. Apologies for the misinformation. Looks like the facility has been eliminated in 6.1.

Hi Mark,
That is a very good explanation.

Hi folks,

I have a problem when setting the “delayUntilServiceSuccess” variable to true.
Apparently IS 6.1 ignores it, and the document is published instantaneously to the Broker (or to the document dispatcher and then the Broker).

Has anybody ever tested this?
my test case is like this
|----------serviceA (some service)
|----------serviceB (some service)
| (delayUntilServiceSuccess == true)
|----------serviceC (some service)

I expect the document to be published when the whole sequence completes successfully (upon successful completion of serviceC).
I checked this with both documentTracker and a simple trigger; the result was the same.

Please advise.


Hi Masoud

Did you find a solution for your question? I’ve noticed the same situation in 6.1 - where the delayUntilServiceSuccess is set to true but it publishes before the service is complete.



What happens if the disk on the Broker server is full? I have code like the following: when IS publishes the document and the disk on the Broker server is full, I don't catch any exception, and serviceB is processed anyway.

serviceA (some service) (delayUntilServiceSuccess == true)
serviceB (some service)

Any hints on that?