publish one document and wait for multiple replies

I have a scenario where I would like to publish one document and wait (async=true) for multiple replies (with different documents). This is a federated-query type of requirement. I've been through the Publish-Subscribe guide and the Built-In Services reference.

We thought of using doThreadInvoke, but that would force us to use some DB caching scheme, with the parent service polling it in a loop. We found that the many DB calls (over 470 of them before all the results arrive) add up to expensive I/O.
We have SLAs we need to consider. Unless there is another way for the threaded services to communicate back to the parent service, we are starting to look at the Broker capabilities.


When you publish a document, the publisher service waits for a single reply document. When there is a reply, the service completes its processing. It will not wait for any more documents.

If you have one reply but with different types of data, then you can probably create a reply document like the one below:

doc1 (optional)
doc2 (optional)
doc3 (optional)
doc4 (optional)

Here, each doc acts as a different data structure.

If you expect more than one reply for a single publish, it can't be achieved using the publishAndWait service. Create a trigger that subscribes to multiple document types using OR and invokes a service, then handle your logic inside that service.

The constraint you’re encountering is with IS. IS explicitly discards replies received after the first. From the Publish-Subscribe Developer’s Guide:

“A single publish and wait request might receive many response documents. The Integration Server that published the request uses only the first reply document it receives from the Broker. (If provided, the document must be of the type specified in the receiveDocumentTypeName field of the pub.publish:publishAndWait service.) The Integration Server discards all other replies. First is arbitrarily defined. There is no guarantee provided for the order in which the Broker processes incoming replies. If you need a reply document from a specific client, use the pub.publish:deliverAndWait service instead.”

It appears that you may need to resort to developing a custom Broker client, possibly one that could be hosted in IS. Tech support may be able to provide guidance.

Guys, thanks for the response. Like I said in my post, “I've been through the Publish-Subscribe guide and the Built-In Services reference.” So I knew that waitForReply only gets one document. That's why I made the post: to see if there was any way to accomplish this with the Broker. I've been looking at deliverAndWait and doing something like this:
deliverAndWait – to client 1 {prefix_triggername1}, saving tag1
deliverAndWait – to client 2 {prefix_triggername2}, saving tag2
deliverAndWait – to client 3 {prefix_triggername3}, saving tag3
then mapping each tag and calling waitForReply, but I'm having timeout problems.

If I do a doThreadInvoke, I believe I'll get better performance, but is there a way to notify the main service thread so it can return all the results?
Think of this as a federated query to 10 repositories simultaneously, where the results need to return to the main calling service and be displayed in real time in some UI.

Thanks for your help guys

Sorry if the reference from the Pub-Sub guide was redundant. You’d be surprised at how many people say they’ve read the documentation but then have apparently missed big sections. :slight_smile:

I think you’re fighting an uphill battle trying to do this with built-in IS facilities that were never intended to handle multiple replies.

Using deliverAndWait is interesting but is probably ill-advised since a primary premise of pub/sub is that publishers don’t know who the subscribers are. But that might be workable if the responders rarely change.

As for doThreadInvoke, it returns a ServiceThread object. You call getIData() on that object to force a join. I assume you're calling deliverAndWait in each doThreadInvoke? That might work, though you may still end up with timeout issues. How long are you waiting, and what is the time-to-live (TTL) value on the response document types?
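To make the join/timeout tradeoff concrete, here is a sketch in plain Java (not IS code) of the doThreadInvoke-then-join pattern, with a bounded wait on each join. The partner names and timeout values are invented for illustration, and the real ServiceThread/getIData() API may not offer this kind of per-join timeout control, which is part of the concern being discussed.

```java
import java.util.*;
import java.util.concurrent.*;

// Analogy for doThreadInvoke + getIData(): each "service thread" is a Future,
// and each join gets its own bounded wait. Names and timings are illustrative.
public class PerJoinTimeout {
    static Map<String, String> queryAll(Map<String, Callable<String>> partners,
                                        long perJoinMillis) throws InterruptedException {
        ExecutorService pool = Executors.newFixedThreadPool(partners.size());
        Map<String, Future<String>> futures = new LinkedHashMap<>();
        partners.forEach((name, task) -> futures.put(name, pool.submit(task)));

        Map<String, String> results = new LinkedHashMap<>();
        for (Map.Entry<String, Future<String>> e : futures.entrySet()) {
            try {
                // The equivalent of the getIData() join, but with a deadline.
                results.put(e.getKey(),
                            e.getValue().get(perJoinMillis, TimeUnit.MILLISECONDS));
            } catch (TimeoutException | ExecutionException ex) {
                // A late or failed partner yields a placeholder instead of blocking.
                results.put(e.getKey(), "TIMED_OUT_OR_FAILED");
            }
        }
        pool.shutdownNow();
        return results;
    }

    public static void main(String[] args) throws Exception {
        Map<String, Callable<String>> partners = new LinkedHashMap<>();
        partners.put("fast", () -> "rows-from-fast");
        partners.put("slow", () -> { Thread.sleep(5000); return "rows-from-slow"; });
        System.out.println(queryAll(partners, 500));
    }
}
```

Note that because the joins run sequentially, the worst case is N times the per-join timeout, which is exactly why a single shared budget (discussed later in the thread) may be a better fit for an SLA.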

Perhaps another tack is appropriate here. One of the items in the How to get help at wMUsers post is to describe the goal instead of the implementation step. In this case, backing up to describe what you need to do, rather than how you're trying to do it, might lead us to another approach. Might be worth a shot, though the nature of forums like this can make that difficult.

Okay, sounds like a plan.
But I have to be generic in the explanation because of the sensitivity of the project.
Here are some critical requirements:

  1. This is a federated-query type of web service that will query up to 14 different partner-owned systems and data sources.
  2. Some may have different interfaces than others. These systems do not use wM technologies, and we have no control over them.
  3. We will be developing a UI that will capture the most common criteria to query all the source systems.
  4. Since it's real time, the SLA in place states that we will try to meet an average response time of around 8 seconds for any portal or web site application.
  5. The UI will capture the search criteria and call a web service.
  6. The web service will invoke a federated search, in the form of an ESB-type implementation, to gather all the results into a single aggregated result set, sort of like an index listing.
  7. The user will then select from the list to get more details.

I know you didn’t want to talk about implementation, but I thought you should know that we’ve done stuff like this before, though only with about 3 to 5 backend sources at a time, and we tried different approaches depending on the technical constraints. Our friend Mark Carlson was even on site to help out with our very first implementation of another federated query. In that one we used doThreadInvoke and had all the threads cache their results to a database, which the main application then queried to get the results for a result page. It was very fast, but the problem was that not all the results were in when the UI was quickly directed to the results page. We had to put a delay on the page, plus a Refresh button in case some of the results were still pending.

Our users didn’t really like having to click the extra Refresh button. So I’m trying to figure out another way to get the dispatched queries to return all their results back to the calling parent service, to save the users a step. You said something about using the returned thread object to call getIData() and force a join. Maybe we can expand on that idea.

Thanks so much for any ideas you may have.
Maybe Mark C can jump in here too.

F. Caloiaro
JNET Architect
JNET Office (Justice Network)

So, there’s your first mistake.

Great description!

So the web service in step 5 is hosted on IS, correct?

When the request is published, I assume there is an IS-hosted subscribing service for each of the partner systems. That subscriber then does whatever it needs to do to communicate with the partner and get the data, and then replies. Is that accurate?

If that’s the case, and you end up needing to use deliverAndWait, then the pub/sub approach isn’t providing any benefit, since the main service now needs to know about all the targets and deliver to each of them.

Two alternatives come to mind.

  1. The Java API of the Broker can be used to do this. You’ll need to write some Java code and be a direct Broker client rather than relying on IS facilities. If you want to host this within IS, which is doable, you’ll need to avoid using the built-in services to do the work. You might be able to leverage the IS connection to the Broker, but you may also need to manage a connection yourself.

The upside is that it preserves the intent of pub/sub by keeping the main service unaware of all the partners. It just knows it’s going to get multiple responses, and it doesn’t care where they come from. You’ll have one component managing the request (via publish, not deliver) and all the replies, instead of managing multiple threads.

The downside is it won’t be a “normal” IS-Broker interaction which might confuse others down the road.
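For a sense of what that single reply-managing component would do, here is a plain-Java sketch of the gather loop, with a BlockingQueue standing in for the Broker client’s incoming-event stream (the real Broker Java API has its own client and event classes, not shown here). The expected-reply count and the deadline are illustrative assumptions.

```java
import java.util.*;
import java.util.concurrent.*;

// Models the reply-gathering loop a custom Broker client would run:
// publish once, then drain replies until either the expected count
// arrives or the overall deadline expires. The queue is a stand-in
// for the Broker event stream; partial results remain usable.
public class ReplyGatherer {
    static List<String> gather(BlockingQueue<String> replies,
                               int expected, long deadlineMillis) throws InterruptedException {
        List<String> collected = new ArrayList<>();
        long deadline = System.currentTimeMillis() + deadlineMillis;
        while (collected.size() < expected) {
            long remaining = deadline - System.currentTimeMillis();
            if (remaining <= 0) break;                       // SLA budget exhausted
            String reply = replies.poll(remaining, TimeUnit.MILLISECONDS);
            if (reply == null) break;                        // timed out waiting
            collected.add(reply);                            // keep whatever arrived
        }
        return collected;
    }
}
```

The key property is that the main service sees one call that returns whatever replies arrived within the budget, without ever knowing which partners produced them.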

  2. Another option is to have the main service call all the partner services directly, rather than via the Broker with deliverAndWait. If you have to increase the coupling by using deliverAndWait, you might as well go the tiny bit further and just call the IS services directly (if my assumption about the interaction with each partner being done by an IS service is right). You’d use the threaded approach here, with doThreadInvoke of each service and a join on each later. My concern is managing the timeout of each join. I’m not sure you can control each individually, and one slow partner could hold up the whole works; if multiple partners are late, your SLA will be toast.

The upside is that this would be a more “normal” IS implementation. The downside is the thread-management funkiness that might torpedo the whole thing.
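If the threaded route is taken, one way to mitigate the slow-partner risk is to give all the joins one shared budget rather than a timeout per join. Here is a plain-Java sketch (again not the IS doThreadInvoke API, which may not offer this control) using ExecutorService.invokeAll with a timeout; the 8-second SLA figure above is what the budget would model, though the numbers here are made up.

```java
import java.util.*;
import java.util.concurrent.*;

// One shared deadline across all partner calls: invokeAll cancels any
// task still running when the budget expires, and the caller keeps
// whatever completed in time. Tasks here are stand-ins for partner queries.
public class SharedDeadline {
    static List<String> queryWithBudget(List<Callable<String>> partners,
                                        long budgetMillis) throws InterruptedException {
        ExecutorService pool = Executors.newFixedThreadPool(partners.size());
        List<Future<String>> futures =
            pool.invokeAll(partners, budgetMillis, TimeUnit.MILLISECONDS);
        List<String> results = new ArrayList<>();
        for (Future<String> f : futures) {
            if (!f.isCancelled()) {
                try { results.add(f.get()); }
                catch (ExecutionException ignored) { }  // a failed partner is skipped
            }
        }
        pool.shutdown();
        return results;
    }
}
```

With this shape, one slow partner costs only its own result, not the whole response, which is the behavior the SLA seems to demand.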

Hope this helps!

I have to look at the Broker API and see how we could accomplish option 1. My guess is that we could create a Java service and, hopefully, manage the connection and the multiple replies there. Thanks for the input.

Any more thoughts you have, please message me.

“My guess is that we could create a Java Service…”

Be careful that you don’t underestimate the effort. It’s not a huge undertaking, but it’s unlikely that a robust, IS-friendly approach will fit in just one Java service. If you’ve never written a Broker client before, you may want to engage someone who has to help guide the effort.

Sorry for going back to the initial description.
Is it not possible to implement a paging concept in your GUI, instead of the Refresh button that users didn’t like to use?


I assume the paging concept would require more work to manage session state and would again force the use of a database, where we would also have to manage cursor paging. I’d have to go back to the requirements, but I think they want a scroll bar, not a paging mechanism. This data could be delivered to mobile devices as well, so navigation in the GUI needs to be kept simple.

What about writing flow services that gather the criteria, validate it, and then publish to the Broker? After the publish, call a custom Java service, named something like getAllResults, that makes its own connection to the Broker using the API and tries to collect all the results from the triggered services. It can then return to the main service with a document list of the replies, and the main service can handle how those replies are aggregated and sent back to the client. Just an idea…
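For what the main service’s aggregation step might look like once a getAllResults-style service hands back its document list, here is a plain-Java sketch. The field names (source, summary) and the shape of the replies are invented; a real IS flow would work with IData documents instead of Maps.

```java
import java.util.*;

// Flattens per-source reply lists into one index-style result set,
// tagging each row with its origin so the UI can show where it came
// from. Field names are hypothetical stand-ins for IS document fields.
public class ResultAggregator {
    static List<Map<String, String>> aggregate(Map<String, List<String>> repliesBySource) {
        List<Map<String, String>> index = new ArrayList<>();
        repliesBySource.forEach((source, rows) -> {
            for (String row : rows) {
                Map<String, String> entry = new LinkedHashMap<>();
                entry.put("source", source);   // which partner system answered
                entry.put("summary", row);     // index-listing line for the UI
                index.add(entry);
            }
        });
        return index;
    }
}
```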

Any thoughts???

I wouldn’t split the handling of the publish and the gathering of the replies. I’d put those into a single component.

Thanks everyone for the input. I’m looking into the Broker Java client and Threaded approaches.