Is there a way to automate the pausing of a trigger so that if the target system is down for scheduled maintenance, the documents will queue up on the Broker? I noticed on a webMethods forum that you can set the trigger document store capacity to 0 via the node file and reload, but that doesn’t seem like a very elegant solution.
In theory, the Source application is ignorant of the Target application. If you implement a “trigger pause”, you could paint yourself into an architectural corner if there is ever a second, third, or nth Target application.
If the Source application’s adapter publishes a Notification document, it should be classified as Guaranteed. If the Target application’s adapter is down, the Broker will store the document until it can be delivered.
Along with your scheduled maintenance, be sure to bring down the adapter, too. When the adapter is restarted, the queued Notification documents will be delivered.
Hope this helps.
If I disable the adapter, the JDBC Adapter for example, won’t that cause my flow to fail and generate a large number of error messages? Since the Document Trigger is enabled, it’s going to pull the doc off the Broker and send it to my flow, which uses the adapter. If I could disable a flow’s Document Trigger, wouldn’t that allow the docs to be queued on the Broker?
An example of what I have is the mainframe sending customer info that gets sent to a few other apps and our Data Warehouse. Our DW has a defined maintenance window that is different from the other apps’. When the mainframe sends this info and we publish it via MIS, I’d like all the other flows to receive it via their Document Triggers while the DW flow leaves its copy in the Broker queue until the maintenance window has passed.
What happens if a Trigger is disabled? Will newly published documents for this Trigger still be queued or are they thrown away by the Broker because the Trigger is disabled?
What is the best practice to temporarily halt document delivery for maintenance while still queuing new documents (so that no documents get lost)?
When a trigger is disabled, documents stop being queued for it, so you will lose new documents coming through your Broker.
If you disable the trigger, the Broker client created on the Broker for this trigger is not destroyed, only disconnected from the Broker. This is because the client group used by IS has an explicit-destroy life cycle.
This means that documents will continue to accumulate on the Broker, and when you enable the trigger it will pick them all up. They will not be lost.
Disabling the trigger does not destroy the client, but it removes the subscription to the document type. Documents already in the queue should stay there, but new incoming documents will not be added to the queue.
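The distinction above can be sketched with a toy model (plain Python, not webMethods code; class and method names are invented for illustration): disabling a trigger removes the subscription, so new documents are no longer routed to its queue, but the queue itself (the Broker client) survives and keeps whatever was already in it.

```python
# Toy model of the described Broker behavior (NOT the webMethods API):
# each trigger owns a client queue; an active subscription routes new
# published documents into that queue.
from collections import deque

class Broker:
    def __init__(self):
        self.queues = {}            # client_id -> deque of documents
        self.subscriptions = set()  # client_ids with an active subscription

    def create_trigger(self, client_id):
        self.queues[client_id] = deque()  # explicit-destroy: queue persists
        self.subscriptions.add(client_id)

    def disable_trigger(self, client_id):
        # Removes only the subscription; the client queue is NOT destroyed.
        self.subscriptions.discard(client_id)

    def enable_trigger(self, client_id):
        self.subscriptions.add(client_id)

    def publish(self, doc):
        # A document is queued only for clients with an active subscription.
        for client_id in self.subscriptions:
            self.queues[client_id].append(doc)

broker = Broker()
broker.create_trigger("dwTrigger")
broker.publish("doc1")               # queued: subscription is active
broker.disable_trigger("dwTrigger")
broker.publish("doc2")               # lost for dwTrigger: no subscription
broker.enable_trigger("dwTrigger")
print(list(broker.queues["dwTrigger"]))  # ['doc1'] - doc2 was never queued
```

This matches the posts above: the already-queued document survives the disable/enable cycle, but the document published while disabled is gone.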
Does anyone know of a way to pause de-queuing?
I experienced a similar problem. The only thing I did was bring the Integration Server down and back up again. The Broker Server then also went down and came back up automatically with the Integration Server. After that, I could not see the docs in the queue. I don’t know whether it is the right way, but it solved my problem.
I am talking about Integration Server 6.0.1
Thanks for your reply, Sherry.
Unfortunately, in our case, shutting down the IS is not a viable solution since we have some processes that must remain active. I thought about disabling only the package that contains the trigger to see what that does. It still seems a little unclean…
I know that bringing down the IS and Broker is not a best practice. But I had to get rid of the docs in queue and did not find any other way. I tried disabling the package but it did not help me much.
Well, let us see whether we can find any solution.
Sherry, in your case, you are trying to stop the trigger and get rid of the documents currently in the queue?
Is that right?
Yes it is.
We can also do it this way: delete the trigger, create a new trigger, restart the server and Broker, and then subscribe to the document type again. The docs in the queue are lost and the subscription is made by the new trigger. I tried something similar some time back and it worked.
Hope this helps
To pause the trigger, you can lock its queue on the Broker.
Create an admin client using the Broker classes (COM.activesw.api.client.*):
BrokerAdminClient adminClient = new BrokerAdminClient(broker_host, broker_name, null, client_group, app_name, null);
Then lock the trigger’s queue on the Broker by its client ID:
qlock = adminClient.getClientQueueLock(clientId);
From this moment the trigger behaves as if its queue were empty, but newly published documents will still be queued.
For unlocking the queue you can do:
boolean flag = qlock.releaseLock();
Or, if your admin client was created with a volatile life cycle, you can simply disconnect the client (e.g. adminClient.disconnect();).
From this moment the trigger will work normally.
I hope this helps.
I’m also interested to know whether this works in a clustered environment; maybe somebody has information on that.
webMethods 6.1 now has a fix available that allows pausing a single trigger.
I haven’t tested the solution yet, but it seems that the trigger control that works in 6.0 for all triggers can now be applied to a single trigger (pausing retrieval and pausing execution).
Should be fine
Can you please elaborate on this option in 6.1 fix. Are these documented anywhere?
This fix is also available for IS 6.0.1 SP2 (fix 144), and is included in SP3. It provides “built-in services that you can use to suspend/resume document retrieval or document processing for specific triggers”. The services are under pub.trigger.
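In the same toy-model spirit (plain Python, not the pub.trigger services themselves; the flag and method names are invented), here is what suspending retrieval buys you compared with disabling the trigger: the subscription stays intact, so documents keep queueing while delivery is paused, and they are all delivered when retrieval is resumed.

```python
# Toy model of "suspend/resume document retrieval" for one trigger
# (NOT the actual pub.trigger services): while retrieval is suspended,
# the subscription remains and documents still accumulate in the queue.
from collections import deque

class Trigger:
    def __init__(self, service):
        self.queue = deque()
        self.retrieval_suspended = False
        self.service = service  # callback invoked per delivered document

    def on_publish(self, doc):
        # Suspension does NOT remove the subscription: docs still queue up.
        self.queue.append(doc)

    def deliver(self):
        # Drain the queue into the service unless retrieval is suspended.
        while self.queue and not self.retrieval_suspended:
            self.service(self.queue.popleft())

processed = []
trig = Trigger(processed.append)

trig.retrieval_suspended = True   # analogous to suspending before maintenance
trig.on_publish("cust1")
trig.on_publish("cust2")
trig.deliver()                    # nothing delivered while suspended
assert processed == []

trig.retrieval_suspended = False  # analogous to resuming after the window
trig.deliver()
print(processed)                  # ['cust1', 'cust2'] - nothing was lost
```

This is exactly the behavior the original poster wanted for the Data Warehouse flow: pause only that trigger during its maintenance window while its documents continue to queue.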
Well, I am very new to webMethods but have faced a similar problem, and the workaround I can think of is to stop the ES-IS listener through a flow service.
This will stop the execution of your flow service.
I hope this addresses your problem.
I am fairly new to webMethods, so I apologize in advance if my question is a bit juvenile. Nevertheless, I am looking into using the scheduler to kick off triggers that will then invoke a service. Currently, we have the scheduler invoking the service directly, but that tends to put a bit of a strain on the system. While I don’t see how this could work, it was suggested that we use a thin client wrapper to publish a document that will then invoke a service. Any insight would be appreciated.
You can certainly use a trigger, which will subscribe to the document published to the Broker and invoke a service that takes the published document as input. This way, no scheduler is required, and the system will be less strained.
Kindly set the trigger properties such as “Capacity” and “Refill Level” as per your requirements.