I have implemented the asynchronous deliverAndWait using the default IS client ID, and all my triggers (3 at this time) pick up the document as I expect. Each of these triggers invokes a service that replies. My idea was to use a simple repeat in the delivering service containing a waitForReply to collect each reply as it arrives, until I find the successful one, at which point I exit the loop.
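For reference, this is roughly the pattern I am attempting, sketched here as an IS Java service rather than flow. I'm assuming the documented inputs/outputs of pub.publish:deliverAndWait (async=true returning a tag) and pub.publish:waitForReply (tag in, receivedDocument out); the "status" field is just a stand-in for whatever marks a reply as the good one in your canonical reply type.

[code]
import com.wm.data.*;
import com.wm.app.b2b.server.Service;

public final class DeliverAndCollectReplies {

    public static IData deliverAndCollect(String docTypeName, IData document,
                                          String destId) throws Exception {
        // Deliver asynchronously and keep the tag that identifies the request.
        IData in = IDataFactory.create();
        IDataCursor c = in.getCursor();
        IDataUtil.put(c, "documentTypeName", docTypeName);
        IDataUtil.put(c, "document", document);
        IDataUtil.put(c, "destId", destId);
        IDataUtil.put(c, "async", "true");
        IDataUtil.put(c, "waitTime", "30000");                  // ms; placeholder value
        c.destroy();

        IData out = Service.doInvoke("pub.publish", "deliverAndWait", in);
        IDataCursor oc = out.getCursor();
        String tag = IDataUtil.getString(oc, "tag");
        oc.destroy();

        // Keep calling waitForReply until a reply is flagged as successful.
        // (The second call here is exactly where I hit the "No waiting thread" error.)
        for (int i = 0; i < 3; i++) {                           // one reply expected per trigger
            IData waitIn = IDataFactory.create();
            IDataCursor wc = waitIn.getCursor();
            IDataUtil.put(wc, "tag", tag);
            wc.destroy();

            IData waitOut = Service.doInvoke("pub.publish", "waitForReply", waitIn);
            IDataCursor rc = waitOut.getCursor();
            IData reply = IDataUtil.getIData(rc, "receivedDocument");
            rc.destroy();

            if (reply == null) {
                break;                                          // timed out or request dropped
            }
            IDataCursor pc = reply.getCursor();
            String status = IDataUtil.getString(pc, "status");  // placeholder success flag
            pc.destroy();
            if ("SUCCESS".equals(status)) {
                return reply;                                   // found the one good reply
            }
        }
        return null;                                            // no successful reply received
    }
}
[/code]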
However, the behavior I see is that the first reply comes back fine, but if I call waitForReply again I just get NULL and an exception:
[2049]07-09-06 14:54:51:0647 [ISS.0098.0036E] webM IS RequestReplyHandler encountered Transport Exception: com.wm.app.b2b.server.dispatcher.exceptions.CommException: [ISS.0098.9010] No waiting thread for Document Id: 39. Requestor might have timed out.
It seems that once the first reply is received, the rest are dropped, which is exactly the behavior I was trying to avoid by using deliverAndWait instead of publishAndWait. Does anyone have any ideas on how I can receive multiple replies in this scenario?
Wow. I had no idea they had dorked up the request/reply and deliver/reply facilities this way (IS acts on the first reply, tosses the rest). Talk about limiting. It’s rare that an integration needs multiple replies but when it does, it looks like they have made it impossible to do so using out-of-the-box services. Yikes.
If you’re after a reply from a specific trigger, use the client ID for that trigger in your deliverAndWait call, not the default IS client ID. The client ID has the format clientPrefix_triggerName, where clientPrefix is the Broker client prefix configured for the IS and triggerName is the name of the trigger of interest. You can view the list of clients using Broker Administrator.
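For example, from a Java service the call would look something like the sketch below (in flow you would simply map destId in deliverAndWait). The Broker client prefix, trigger name, and document type are made up, so substitute your own.

[code]
// Sketch only: deliver to one specific trigger's Broker client rather than the
// default IS client. "IS01" (the Broker client prefix) and the trigger name
// are placeholders.
import com.wm.data.*;
import com.wm.app.b2b.server.Service;

public final class DeliverToTrigger {
    public static IData deliver(IData requestDoc) throws Exception {
        IData in = IDataFactory.create();
        IDataCursor c = in.getCursor();
        IDataUtil.put(c, "documentTypeName", "myApp.docs:requestDoc");  // placeholder doc type
        IDataUtil.put(c, "document", requestDoc);
        IDataUtil.put(c, "destId", "IS01_myApp.triggers:handlerA");     // clientPrefix_triggerName
        IDataUtil.put(c, "async", "false");
        IDataUtil.put(c, "waitTime", "30000");                          // ms
        c.destroy();
        // Synchronous call: the reply from that one trigger comes back
        // in receivedDocument on the output pipeline.
        return Service.doInvoke("pub.publish", "deliverAndWait", in);
    }
}
[/code]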
publishAndWait definitely only picks up the first reply and drops everything else. The documentation and the SR I referenced led me to believe that deliverAndWait would allow me to do this, but it seems to behave the same as publishAndWait.
Unfortunately, in this specific scenario I don't know which trigger will return the result I want, or indeed whether any of them will return a valid result, so I need to see all responses, valid or not, until I find the one that is valid, at which point I don't care about the remaining replies.
This is not a show-stopper; there are always multiple ways to do this sort of thing. However, the other methods I have thought of are not as clean and logical (in my mind at least).
This sounds to me like something better suited to BPM (Modeler). Within the model, you publish the document, and wait for replies to come in. You will have to:
* Create a [i]correlation service[/i] to manage the [u]correlationId[/u], so that all the replies are identified as belonging to the same process (or conversation, if you will); a rough sketch follows this list.
* Use a join to wait for the pre-determined number of replies, or model a loop to handle each reply as it comes in.
* Add a process-level timeout in case the desired reply doesn't arrive within the expected time.
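As a very rough idea of the correlation piece: the service just needs to derive a stable business key from each incoming reply so that the Process Engine routes them all to the same process instance. The real signature must follow the pub.prt:CorrelationService spec in the WmPRT package; the input/output names below are only placeholders.

[code]
// Idea only: extract a business key from the reply and return it as the
// correlation ID. Field names here are placeholders; align them with the
// pub.prt:CorrelationService spec and your own reply document type.
import com.wm.data.*;

public final class ReplyCorrelation {
    public static IData correlate(IData pipeline) {
        IDataCursor pc = pipeline.getCursor();
        IData doc = IDataUtil.getIData(pc, "documentData");        // placeholder input name
        String correlationId = null;
        if (doc != null) {
            IDataCursor dc = doc.getCursor();
            correlationId = IDataUtil.getString(dc, "requestId");  // placeholder business key
            dc.destroy();
        }
        IDataUtil.put(pc, "correlationID", correlationId);         // placeholder output name
        pc.destroy();
        return pipeline;
    }
}
[/code]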
Of course, creating a model will open up a whole can of worms if you aren’t already using models. Also, handling many thousands of processes a day requires a bit of planning, and maybe some performance tuning. If it’s something you need, though, it’s more flexible and more tunable than what the publish-and-subscribe mechanism provides.
On a side note, one of my colleagues has done something quite similar. He had to use models, but the requirement was such that the system only processes a dozen such messages a day.
Sorry, I should have been clearer. Success is a data-level decision. I publish a document via deliverAndWait, which is picked up by a number of triggers. They all reply (with the same reply document type), and one and only one of those replies will contain the success data. The others will be failures, but I need to know about them as well for some boundary cases. The real issue is that I can't seem to get all the responses when using deliverAndWait.
ychang,
Modeler could definitely do this; however, in this case I have some non-functional requirements that mean I can't afford the overhead the BPM stack adds. This is a real-time service that will be called a LOT. I am just interested to see whether anyone else has hit this issue as well, or if it is just me thinking outside the commonly accepted square. :)