We have a situation with a process model where the data is lost in the transition between two steps.
We have a receive step for an IS document and a direct transition to an IS service step. If we execute this part of the process multiple times, the behavior is:
In the receive step, the data is always correct: the last data sent/received (the IS document) is available.
In the second step, the IS service step, the data sent the first time is always available.
Additional info: the second step, the IS service step, is a join with an “OR” condition.
We made sure we are dropping the data in question from the pipeline.
Does anyone have any idea where to look for the problem?
Also, we added an additional step before the IS service step, and there the data is still correct. Unless we find the cause of the problem, we will change the implementation to this.
Your description is a little short to get a clear picture, but when you say you drop the document in question, this may be the problem. All services called in a process operate on a common pipeline. So if you drop a document in the service pipeline which is named like a document that is supposed to go from one process step to the next, this document will be dropped too.
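To illustrate the mechanics, here is a minimal Java-service sketch; the document name "receivedDoc" is just an assumed example. The flow "drop" corresponds to removing the key from the shared pipeline, and every later step of the process instance sees that removal:

import com.wm.data.IData;
import com.wm.data.IDataCursor;
import com.wm.data.IDataUtil;
import com.wm.app.b2b.server.ServiceException;

public static final void dropReceivedDoc(IData pipeline) throws ServiceException {
    // Removing a key from the pipeline is the Java equivalent of a flow "drop".
    // Since all steps of a process instance share this one pipeline, the
    // document is gone for every step that runs afterwards, too.
    IDataCursor cursor = pipeline.getCursor();
    IDataUtil.remove(cursor, "receivedDoc"); // "receivedDoc" is an assumed name
    cursor.destroy();
}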
I am aware of the behavior you mentioned. In our situation, the document is dropped in the second step, after the document should have reached its destination.
Maybe this makes it clearer. This is a “picture” of the process model:
The document “ONE” (ONE meaning the first set of values) is published to the Broker:
In the service executed in “IS Service Step” we make sure to drop the document “ONE” from the pipeline.
The document “TWO” (second set of values) is published and here is the problem:
“Receive Step” ------------------- > “IS Service Step”
doc “TWO” present ------------------- > doc “ONE” is still present - the new values never reach this step
When I say the document “x” is present, I mean we saved the pipeline and saw the values.
So after the first transition/execution, in the “IS Service Step” we always have the initial data, doc “ONE” in my example.
And, as mentioned in the original post, if we add an extra step between the two, the data is still correct there:
If I understood correctly, you are talking about a completely new instance of the process, which got triggered by document 2. If this is the case, I would check (it's probably not that simple, but that's always my first check in those situations) whether there are any restore pipelines in the flow code, or whether the values of the document are set/changed by a map step in the code.
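As a minimal sketch of the restore-pipeline pitfall (the file name is assumed; in flow these would simply be invoke steps on pub.flow:savePipelineToFile and pub.flow:restorePipelineFromFile): a restore call left over from debugging silently replaces the live pipeline with the saved snapshot, which looks exactly like the first document's values coming back.

import com.wm.data.IData;
import com.wm.data.IDataCursor;
import com.wm.data.IDataUtil;
import com.wm.app.b2b.server.Service;
import com.wm.app.b2b.server.ServiceException;

public static final void debugSnapshot(IData pipeline) throws ServiceException {
    IDataCursor c = pipeline.getCursor();
    IDataUtil.put(c, "fileName", "./pipeline/isServiceStep.xml"); // assumed path
    c.destroy();
    try {
        // Saving a snapshot is harmless...
        Service.doInvoke("pub.flow", "savePipelineToFile", pipeline);
        // ...but a forgotten restore overwrites the current pipeline values
        // with whatever was captured in the earlier snapshot.
        Service.doInvoke("pub.flow", "restorePipelineFromFile", pipeline);
    } catch (Exception e) {
        throw new ServiceException(e.getMessage());
    }
}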
First a clarification. We are talking about the same process instance. The “Receive” step in question is not the main receive step which triggers the process instance. This is a “secondary” receive step for which the option “Allow this step to start a process instance” is disabled.
The published document reaches the correct process instance via the process instance correlation ID (the default process correlation functionality). And yes, we are talking about a circular flow, i.e., multiple documents will be received and this flow will be executed multiple times until the received document holds the success values.
Regarding your suggestion, that was the very first thing we verified: forgotten restore pipelines or bad mappings. Everything looks good. We are sure our implementation (our flow code) does not overwrite or remove the values.
We also received a suggestion that the process cannot correlate to the correct step instance since we have multiple step instances. We are analyzing this possibility as well.
You should be able to check easily whether the document is correlated correctly by inserting a log step into the correlation service.
One thing about correlation IDs: they must be unique across ALL process models. webMethods does not distinguish correlation IDs between different models, so if a CID is used again (for a different process), it will try to correlate into the already existing instance and will probably fail. We use the process model ID as part of our correlation IDs.
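A sketch of both points together, assuming the usual correlation-service contract (the service implements the pub.prt:CorrelationService specification and returns a correlationID string); the business field "orderNumber" and the model-ID prefix are made-up examples:

import com.wm.data.IData;
import com.wm.data.IDataCursor;
import com.wm.data.IDataFactory;
import com.wm.data.IDataUtil;
import com.wm.app.b2b.server.Service;
import com.wm.app.b2b.server.ServiceException;

public static final void correlate(IData pipeline) throws ServiceException {
    IDataCursor cursor = pipeline.getCursor();
    // Assumed business key carried inside the published document:
    String orderNumber = IDataUtil.getString(cursor, "orderNumber");

    // Prefix with the process model ID so CIDs cannot collide across models.
    String correlationID = "OrderProcess/" + orderNumber;
    IDataUtil.put(cursor, "correlationID", correlationID);
    cursor.destroy();

    // Log the computed CID so a misrouted document shows up in the server log.
    IData logInput = IDataFactory.create();
    IDataCursor lc = logInput.getCursor();
    IDataUtil.put(lc, "message", "Correlating to CID " + correlationID);
    IDataUtil.put(lc, "function", "correlation");
    IDataUtil.put(lc, "level", "Info");
    lc.destroy();
    try {
        Service.doInvoke("pub.flow", "debugLog", logInput);
    } catch (Exception e) {
        throw new ServiceException(e.getMessage());
    }
}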
We verified the correlations and they look correct.
In the end we were still not able to find the reason for this behavior, and due to new requirements, we completely changed the implementation.
With the new flow, this problem is no longer possible.
Thank you Martin for your involvement in this topic.