My entire process involves publishing the document to the Broker.
A subscribing service will process the document. The subscription service will place the document on the MQ system (via the MQ adapter, using a PUT adapter service).
I have a scenario where MQ is down for some reason, so the subscription service fails. We can continue publishing the documents, but while MQ is down our subscription service keeps failing.
My question is how to persist the document so that it can be resubmitted once MQ is back up and running, because the customer doesn't want to resend those documents.
Is there any trigger configuration so that the document can be persisted?
Can we persist the document in the Broker? If so, how?
Is there any other method of persisting the documents so that once MQ is up we can reprocess all the failed documents?
The components I am using are IS 6.5, Broker, and MQ Adapter 6.0.
One approach would be to configure the service that gets invoked by the trigger so that it logs the pipeline to the Monitor on errors. Once MQ was back up, you could then log in to the Monitor and resubmit the failed documents from there.
HOWEVER, as you can imagine, this approach can become a nightmare depending on the amount of data you have flowing through that service. If there’s a lot of volume going through, then you could potentially bring your system to its knees with the amount of data that would get logged to the Monitor’s database. Not to mention that having to manually go back to the Monitor to reprocess all the failed documents can be a pain.
A better approach would be to configure your trigger with Deliver until set to Successful. Once your service detects that MQ is down, it can invoke pub.flow:throwExceptionForRetry, which causes the document to be redelivered and the service to be retried.
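The pattern can be sketched in plain Python (illustrative only — `RetryableError`, `put_on_mq`, and the dispatcher are stand-ins for the IS trigger machinery, not webMethods APIs): a transient resource failure is re-thrown as a retryable error so the dispatcher redelivers the document instead of discarding it.

```python
class RetryableError(Exception):
    """Signals the dispatcher to redeliver the document later."""

def put_on_mq(doc, mq_available):
    # Hypothetical stand-in for the MQ adapter's PUT service.
    if not mq_available:
        raise ConnectionError("MQ queue manager unreachable")
    return "delivered"

def subscription_service(doc, mq_available):
    try:
        return put_on_mq(doc, mq_available)
    except ConnectionError as e:
        # Resource is down: ask for a retry instead of failing the
        # document (the role pub.flow:throwExceptionForRetry plays in IS).
        raise RetryableError(str(e))

def dispatch(doc, resource_up_per_attempt):
    # Simple dispatcher: redeliver on RetryableError until it succeeds.
    for mq_up in resource_up_per_attempt:
        try:
            return subscription_service(doc, mq_up)
        except RetryableError:
            continue  # document stays queued; try again next round
    raise RuntimeError("retries exhausted")
```

Here `dispatch(doc, [False, False, True])` fails twice while MQ is "down" and succeeds on the third delivery attempt, without the document ever being lost.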
NOW, just recently webMethods introduced a new retry feature for triggers in 6.1 that would be very handy in the scenario you're describing. I am guessing they also made this feature available in 6.5, but you would have to check.
Anyway, the feature that was introduced by SP2 (for 6.1) allows you to configure the trigger so that after it has retried X number of times, the trigger gets suspended and a resource monitoring service gets kicked off. The monitoring service runs at a configurable interval and it checks to see if the resources (in your case, MQ) are available again. Once the resources become available again, the monitoring service returns a flag indicating this and the trigger is resumed.
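A minimal sketch of that suspend-and-monitor behavior, in plain Python with made-up names (in IS this is configured on the trigger, not coded — the sketch only shows the control flow):

```python
class Trigger:
    def __init__(self):
        self.suspended = False
        self.polls = []

def monitor_and_resume(trigger, resource_states):
    """Resource monitoring loop: poll at each interval until the
    resource reports available, then resume the suspended trigger.
    resource_states yields the availability seen at each poll."""
    for available in resource_states:
        trigger.polls.append(available)
        if available:
            trigger.suspended = False   # resource back: resume trigger
            return True
    return False

def on_retry_exhausted(trigger, resource_states):
    # "Retry failure behavior": after X failed retries, suspend the
    # trigger and hand off to the resource monitoring service.
    trigger.suspended = True
    return monitor_and_resume(trigger, resource_states)
```

The key point the feature gives you is the same as in the sketch: while suspended, the trigger stops pulling documents (so nothing fails), and processing resumes automatically once the monitoring service reports the resource is available.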
It’s on the Trigger Properties in Developer. Under the “Retries on error” section, you will see that webMethods added two new properties: “Retry failure behavior” and “Resource monitoring service”
Another cool thing that came with SP2 is that now, if you go to the Administrator page and click on Resources, there’s a new link called “Trigger Management”. There, we can do things like suspending processing and/or retrieval for all, or individual, triggers among other things.
Is it possible to enable/disable a trigger from a scheduled service?
The integration we are trying to develop publishes documents to the Broker, and we would like the Broker to queue up the documents until a specific time (the end of the day, or something like that). We then want to schedule the service (trigger) to process the Broker documents at the end of the day and at the end of the week. I researched Advantage and didn't find any good responses on how to enable/disable/resume triggers.
Does WM 6.1 with SP2 provide this feature, or can we not queue (persist) the documents on the Broker?
Yes, depending on your IS version. But from what you describe, you’ll want to suspend, not disable. A suspended trigger keeps its subscription on the Broker. A disabled trigger does not.
Research the services that are available for trigger management.
That said, I think the reason you didn’t find info about processing in batches is that Broker isn’t really intended for this. Triggers aren’t meant to be constantly started and suspended. The whole point of integration tools is to enable near real-time integration, not provide yet another batching mechanism. If it were me, I’d reevaluate the desire to be using Broker for this particular integration. It doesn’t seem to be a good fit. Of course, I don’t have all the info so my assessment may be off-base.
Look for the folder pub.trigger in the WmPublic package. This folder contains services that allow you to suspend and resume the triggers. I’m assuming it exists in IS 6.5.
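For illustration, the decision a scheduled service would make might look like this in plain Python (the window times and function names are assumptions, not webMethods APIs — in IS you would call the WmPublic trigger-management services from two scheduled tasks):

```python
from datetime import time

# Assumed nightly batch window.
BATCH_START, BATCH_END = time(22, 0), time(23, 30)

def in_batch_window(now):
    return BATCH_START <= now <= BATCH_END

def desired_trigger_state(now):
    """Return the suspended-flag the scheduler should set at 'now'."""
    if in_batch_window(now):
        return False   # resume: drain the documents queued on the Broker
    return True        # suspend: let documents accumulate on the Broker
```

In practice you would schedule one task at the window start that resumes the trigger and one at the window end that suspends it again.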
Now, having said that, I must say I agree with Rob in that the Broker isn’t really intended for this. Please let us know what problem you’re trying to solve and we may be able to help you arrive at a better solution.
First, thanks to both of you for the quick responses.
I will try to give a brief overview of the process…
Whenever a user wants to send data (a batch), the user inserts a row or rows into a database table. webMethods picks up the document via an Insert Notification; the invoked service then applies some business logic and publishes the document to the Broker queue. Finally, the trigger service is scheduled to send it to the destination (at the end of the day, or any given time).
The reason we went with this process is that if this interface ever changes to real time, we won't have to make huge changes to the code: as we get the documents in real time, we send them to the destination (without suspending the triggers).
The other reason is that we need to deliver the document with the most recent update.
I will try to give an example that I hope makes this clear.
I insert a PO (PO # = 001, Amount = $12) into the trigger table. The webMethods notification service gets kicked off and processes the document. After applying the business rules, we publish it to the Broker queue (to be sent to the destination at a particular time).
Now there is a chance that the user inserts the same PO again with a different amount (PO # = 001, Amount = $20). webMethods processes that document and publishes it to the Broker as well.
Now we have two documents on the Broker queue. My requirement is to send only the most recent document to the destination, so I need to discard the first PO (i.e., the one with amount $12). My understanding is that with webMethods we can inspect the contents of the queue and decide to send only the most recent document.
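The "keep only the most recent version of each PO" logic being described can be sketched in plain Python (illustrative only — field names are assumptions): collapsing a batch in arrival order so later versions overwrite earlier ones.

```python
def latest_per_po(documents):
    """documents: list of dicts in arrival order; a later entry for the
    same PO number replaces the earlier one."""
    latest = {}
    for doc in documents:
        latest[doc["po_number"]] = doc   # overwrite the older version
    return list(latest.values())

batch = [
    {"po_number": "001", "amount": 12},
    {"po_number": "002", "amount": 50},
    {"po_number": "001", "amount": 20},  # newer version of PO 001
]
# latest_per_po(batch) keeps PO 002 and only the $20 version of PO 001
```

Note that this only works where the whole batch is visible at once, which (as the replies below the original posts point out) is not how the Broker is meant to be used.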
Do you think this is an ideal approach? Also, they don't want to use a database or a flat file to track the most recent update.
If you have any better recommendations please let me know.
In this case, if it is the target that needs things in groups, then the target process/adapter should do the queueing/batching. The source process should have no idea that it is happening. As far as it’s concerned, the PO is sent. The Broker’s job is to get published documents to the subscribers as quickly as possible. Don’t make it hold on to documents for an extended period of time. Such an approach is not scalable.
IMO, this is a broken process. If the PO isn’t ready to go immediately to the end points, it shouldn’t be published. Holding on to multiple versions and trying to sort out what was the last one within the integration layer is asking for trouble. IMO, you need to convince the right people to not do things this way.
A couple of questions: What are the “business rules?” Generally speaking, business rules in the integration layer should be avoided. Why are you publishing the PO? Are there multiple subscribers? I’m not a big fan of pub/sub for most business processes.
Since the source system (or its users) apparently still operates in a batch mode, instead of having the source publish things right away don’t publish anything until the “batch window” begins. Then have the source system insert into the buffer table only the current record for each PO.
Do the users actually insert row(s) into the tables? If so, you may need to try to get them away from doing that. They shouldn't do anything that means "do integration now." They need to be in an "application mindset": they edit the POs and such to their hearts' content, and at some point they mark them "done" or "committed" or something. To change one, they need to indicate a "cancel" or "modify" or some other action. These state changes should have significant meaning to the application itself. Integration can then monitor for these state changes and do the right thing. Implementing "oh, I changed my mind on that PO" within the integration layer is a bad, bad idea.
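The buffer-table idea above can be sketched with SQLite as a stand-in for the source database (table and column names are assumptions): an upsert keyed on PO number means the table holds only the current record for each PO when the batch window opens.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE po_buffer (po_number TEXT PRIMARY KEY, amount REAL)"
)

def stage_po(po_number, amount):
    # INSERT OR REPLACE keeps exactly one (the latest) row per PO,
    # so "I changed my mind" never reaches the integration layer.
    conn.execute(
        "INSERT OR REPLACE INTO po_buffer (po_number, amount) VALUES (?, ?)",
        (po_number, amount),
    )

stage_po("001", 12.0)
stage_po("001", 20.0)   # user changed the PO before the batch window
rows = conn.execute("SELECT po_number, amount FROM po_buffer").fetchall()
# rows -> [("001", 20.0)]
```

When the batch window begins, the notification (or a scheduled extract) reads this table and publishes each PO exactly once, in its final state.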
Thanks Rob !
You were right that queuing the documents on the Broker is not a good practice.
The reason we were planning to go with that approach was to inspect the documents to find the most recent one. It seems the Broker is not meant for that purpose.
The other thing is that I was under the impression that the guaranteed delivery webMethods promises means that if the target system is down, webMethods will send the documents as soon as the system is back up and running (without any manual process).
But according to webMethods, once the subscription service is invoked, webMethods considers its job done, whether the subscription service succeeds or fails.
We talked with webMethods PS, and they are going to add this feature in newer versions of webMethods.
The business rules I was mentioning: for each PO that comes in, we need to convert one unit of measure to another (fields like the dimensions and lengths of the orders).
I also know that an interface program is not supposed to do that, but it is a requirement, as the target system is unable to interpret that unit of measure.
We will also be looking into different options for this process.
webMethods IS services that are invoked by a trigger will retry if configured correctly. The issue is whether the exception that is encountered is interpreted as a system or a service exception by the IS server, and whether or not you are catching the errors.
If the flow service tries to connect to or use an external resource that is not available, it will usually throw a system exception. This exception will trigger a retry if you have set up the properties correctly for the document and trigger. If, however, you are catching the exception, as most people do through the normal try/catch sequence, then the service will not be retried, because the exception has been caught.
To work around this limitation, you can throw a runtime (system) exception outside of the try/catch, and the service will be retried. So if the database you need is down, or MQ, or HTTP, or whatever: get the error it returns, make sure it is a system exception, and then throw it outside the try/catch.
The type of error you get back from the offending resource will vary, so you will have to interpret it. The IS server does this for you with some adapters, like JDBC, but not with others, like HTTP or WmDB. You have to make sure it is interpreted correctly.
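A plain-Python sketch of that catch-interpret-rethrow pattern (the exception classes and error markers here are illustrative, not IS APIs): only errors that look like transient resource failures are re-raised as a retryable "system" exception; everything else stays caught and handled.

```python
class SystemException(Exception):
    """Retryable: resource unavailable."""

class ServiceException(Exception):
    """Not retryable: bad data or logic error."""

# Assumed substrings identifying transient resource failures.
TRANSIENT_MARKERS = ("connection refused", "queue manager not available")

def classify(err_text):
    if any(m in err_text.lower() for m in TRANSIENT_MARKERS):
        return SystemException(err_text)
    return ServiceException(err_text)

def handle_adapter_error(err_text):
    exc = classify(err_text)
    if isinstance(exc, SystemException):
        # Re-thrown "outside the try/catch": lets the trigger retry fire.
        raise exc
    # Data errors are handled in place; retrying them would never help.
    return f"handled: {err_text}"
```

The design point is the same one made above: swallowing every exception in a catch block silently disables the trigger's retry mechanism, so the catch block must deliberately re-throw the transient ones.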
Not entirely accurate. If the IS service fails it can indeed send a ‘nak’ to the Broker indicating (more or less) to try again. The Broker’s job is to get it to the subscribing clients successfully–if the client doesn’t accept the document for some reason, Broker cannot throw it away.
This is confusing because the IS/Broker interaction already allows for the “hold the document until the end-point is available” capability.
The reason I asked is because many times things are labelled “business rules” when they are not. Changing UOM is not a business rule, IMO. It’s a representation/translation issue, which is squarely within the realm of the integration layer. A “business rule” is more along the lines of “when the PO exceeds this amount and includes these line items then do X.” It is this type of logic processing that should be avoided in the integration layer if at all possible.
There is an issue with the configuration of the Broker.
I click Setup -> Monitored Broker Server.
In the install directory I enter the path where the Broker is installed, i.e. webMethods\broker.
In the install data directory I enter the path where the Broker data directory is, i.e. webMethods\broker\data.
After that it gives an error:
failed to connect to broker server
ERROR: wbm2006 Error connecting to server
Unable to open connection to host: Computer name
Error: java.net.ConnectException: Connection refused
Connect was reported by the socket call
Error getting broker list
Does anyone know the solution for this?
Thanks in advance