I’ve got an odd situation. I have two documents, with the same basic structure, and sometimes I want the info from one, sometimes from the other.
Let's call the documents A and B. There are two situations that can occur: A and B both arrive at around the same time (within a few seconds, anyway), or B arrives by itself.
In the first situation, I want to use document A, but in the second situation, I obviously don’t have A to use, so I want to use B. The trick is - how can I recognize this situation via triggers? Any thoughts?
I think what I want is a “NOT” on my filter, which isn’t supplied (for obvious timing-related reasons).
Thanks in advance!
Looks like one trigger with an AND join and a second with an XOR join should do the trick (at least conceptually). The XOR trigger will ignore both documents if they come at the same time, and the AND trigger will pick them up.
I have not tried this myself. Just a thought.
You mention that the documents are similar in basic structure. However, for the trigger to work properly, you will need to use identical documents and then a filter.
The other option, is to create two triggers, one for each document. Then on the input signature to the receiving flow service from the trigger, make both optional.
In the receiving flow service, create a branch statement that evaluates each document using evaluate labels == true.
Then, in either a map or sequence step you can continue processing, or you can exit the flow with no processing. This moves the logic from the trigger to the flow service and makes it easier to change criteria.
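To make the two-trigger idea concrete, here is a platform-neutral sketch in plain Python of the branch logic described above. In Flow this would be a BRANCH step with “evaluate labels” set to true; the function and field names here are hypothetical stand-ins, not webMethods APIs.

```python
# Sketch only: in IS this is a BRANCH with evaluate labels == true on a
# service whose input signature declares both documents as optional.

def handle(pipeline):
    """Receive either document; exactly one is present per invocation."""
    doc_a = pipeline.get("docA")   # populated only when the A trigger fired
    doc_b = pipeline.get("docB")   # populated only when the B trigger fired

    if doc_a is not None:          # label: %docA% != $null
        return process(doc_a)
    if doc_b is not None:          # label: %docB% != $null
        return process(doc_b)
    return None                    # default branch: exit with no processing

def process(doc):
    # Placeholder for the real downstream processing.
    return {"processed": doc["id"]}
```

This keeps the routing criteria in one flow service, so changing which document wins later means editing one BRANCH rather than redeploying trigger filters.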
I think of it like this: The Broker is like the postal service, a delivery mechanism; the trigger is the postman who actually delivers the doc by subscribing and referring the document to an underlying service.
This should meet your criteria and allow you to use multiple documents against a single flow service.
Both ideas sound good - my only question is with timing.
How much time does the IS give before deciding that the XOR is a go?
The two documents could arrive within seconds of each other, but obviously if the IS has already processed the first document, then it will process the second as well (deciding that the two didn’t arrive at the same time).
With Ray’s solution, it’s a similar problem, but this time it’ll wind up processing both documents, which we don’t want.
The situation is that document A is always correct; when it arrives, we want to use its data. Document B is slightly broken (but better than nothing), so when A and B both arrive, we definitely don’t want to process B, but if there’s no A, then processing B is much better than nothing.
I think the solution we’re going to go with is to try to “fix” document B’s data with direct database calls, and always use it, to avoid the timing issues. Ideally, we wouldn’t be dealing with broken documents in the first place, but unfortunately we don’t have full control over the originating system.
Thanks for the suggestions!
a) You can specify a join time out in your trigger settings.
b) As I think more about what I suggested, I see a lot of loopholes in it. XOR essentially invokes the service (immediately) when it sees one of the two documents, then waits for the ‘timeout’ period to make sure that the second document (if it comes during that interval) gets ignored.
c) Activation IDs must be the same for A and B for the join manager to recognize that they should be AND’ed. So that adds more confusion and more source-side processing.
There are four scenarios here:
1. DocB comes before DocA, and DocA is not published during the join interval.
2. DocA comes before DocB, and DocB is not published during the join interval.
3. DocB comes before DocA, and DocA is published during the join interval.
4. DocA comes before DocB, and DocB is published during the join interval.
My earlier suggestion (and any other suggestions based on the join manager) will probably not be able to meet your requirement for all these scenarios.
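The policy being asked for can be written down directly, which makes it clearer why the join manager struggles with it. Here is a hypothetical Python sketch of the desired selection rule (not how the Broker actually behaves): if A ever arrives, use A; suppress B only when the two arrive within the window of each other.

```python
# Desired policy, sketched as a pure function. `window` plays the role of
# the trigger's join time-out; names and semantics here are assumptions
# for illustration, not Broker behavior.

def choose(a_time, b_time, window):
    """Return the list of documents to process, given arrival times
    (None means the document never arrived)."""
    if a_time is None:
        return ["B"]               # B alone: better than nothing
    if b_time is None:
        return ["A"]
    if abs(a_time - b_time) <= window:
        return ["A"]               # arrived together: B is suppressed
    return ["A", "B"]              # far apart: treated as unrelated events
```

The sticking point is the third scenario above: if B arrives first, an XOR join has typically already dispatched B before A shows up inside the window, so B cannot be un-processed after the fact.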
Now that you have given us more to chew on in your scenario, I can think of additional steps or considerations.
You still haven’t divulged whether doc A and doc B are the exact same document type (by namespace). In either case, you will use triggers and submit to a flow service for processing.
I would log the documents to a db table if possible and then as documents are processed, check the table for prior processing.
You could take it a step further by persisting the document contents to a table (or tables); when the next document arrives, you can do a full compare of both, select the document of choice, or introspect both documents and generate a single document for processing.
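The “log to a table and check for prior processing” idea can be sketched with an in-memory SQLite table. The table and column names are made up for illustration; in IS you would do this through JDBC adapter services, and the correlation key would be whatever business key documents A and B share.

```python
# Dedup-table sketch: process A unconditionally; skip B once A has been
# seen for the same business key. Hypothetical schema, not a real IS API.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE doc_log (
    business_key TEXT PRIMARY KEY,   -- key shared by matching A and B docs
    doc_type     TEXT NOT NULL,      -- 'A' or 'B'
    payload      TEXT NOT NULL)""")

def prior_doc_type(key):
    row = conn.execute(
        "SELECT doc_type FROM doc_log WHERE business_key = ?",
        (key,)).fetchone()
    return row[0] if row else None

def log_and_decide(key, doc_type, payload):
    """Return True if this document should be processed."""
    if doc_type == "B" and prior_doc_type(key) == "A":
        return False                 # A already covered this key; drop B
    conn.execute(                    # A overwrites any earlier B record
        "INSERT OR REPLACE INTO doc_log VALUES (?, ?, ?)",
        (key, doc_type, payload))
    return True
```

Note this handles scenario 4 (A first, B later) cleanly, but for scenario 3 (B first) the B document has already been processed by the time A arrives, so you would still need compensating logic downstream.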
Without knowing anything further about your situation, I still think you need to separate the logic of filtering away from the triggers and place it where you have more choices to work with.
Just my very cheap two cents worth.
They’re virtually the same document, but not quite. There are a couple of slight structural changes. Right now, we’ve got a single process set up to process document B only.
We recently realized that document B is missing a key piece of data that’s only available on document A, or by a sloppy process of inferring it from various pieces of information on document B.
Instead, what we’ve decided to do is to treat document B as a trigger, ignoring all of its contents, and going back to the data source to get the absolutely correct data at that point. From my perspective, it’s cleaner in that all the data is now controlled by my (webMethods) processes, so I’m freed from the restrictions of the originating platform (and, more specifically, from my lack of knowledge of that platform).
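That final approach reduces to a very small pattern, sketched here in plain Python. Every name (`on_document_b`, `order_id`, `fetch_from_source`) is hypothetical; the point is that B carries only a key, and everything else is re-queried fresh from the system of record.

```python
# Document B as a pure notification: trust only its correlation key,
# then rebuild the record from the authoritative source (in IS, this
# would be a JDBC adapter call rather than a passed-in function).

def on_document_b(doc_b, fetch_from_source):
    """Ignore B's (possibly broken) payload; return authoritative data."""
    key = doc_b["order_id"]            # the only field taken from B
    return fetch_from_source(key)      # fresh query against the source DB
```

This sidesteps the timing problem entirely: there is no join, no window, and no dependence on whether document A ever arrives.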
Thanks again for all your thoughts on this strange problem.