JDBC Adapter - Error Handling

Hi All,

I have two servers. I am publishing from one and subscribing on the other. Once I publish a document, and if the transaction is successfully processed, I get an ack from the subscribing server that in turn updates my staging table. My question is…

If my adapter is down, I don't want the ack to be lost from the Broker. Is there a way I can configure the Broker, trigger, or JDBC adapter so that I don't lose my ack from the subscribing server?



Is the interaction currently a synchronous request/reply interaction? If so, change it to be async. Server A publishes the doc using data from the staging table. Table is updated to “published.” Server A subscribes to “ack” events from Server B. Server B gets the doc A published, does its work and publishes an ack. Server A gets the ack and updates the staging table with “done.”
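The async exchange above can be sketched as status transitions on the staging table. This is a minimal illustration only; the table name, column names, and status values ("new", "published", "done") are assumptions for the sketch, not actual webMethods artifacts:

```python
import sqlite3

# Hypothetical staging table on Server A's side.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE staging (
    doc_id TEXT PRIMARY KEY,
    status TEXT NOT NULL)""")

def publish(doc_id):
    # Server A publishes the document and marks the row "published".
    conn.execute("UPDATE staging SET status = 'published' WHERE doc_id = ?",
                 (doc_id,))

def on_ack(doc_id):
    # Server A's trigger fires on the ack from Server B
    # and marks the row "done".
    conn.execute("UPDATE staging SET status = 'done' WHERE doc_id = ?",
                 (doc_id,))

conn.execute("INSERT INTO staging (doc_id, status) VALUES ('doc-1', 'new')")
publish("doc-1")
on_ack("doc-1")
print(conn.execute(
    "SELECT status FROM staging WHERE doc_id = 'doc-1'").fetchone()[0])
# prints "done"
```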

Monitor the staging table for “stuck” documents, i.e. those sitting in “published” status for longer than expected.

Hi Reamon,

  The problem I'm facing is that I am trying to avoid any manual intervention where we check for records that are stuck in a hold state. The same would hold good for inbound records coming in from Server B. I was thinking of publishing the ack to the Broker again if my adapter fails to update the staging table, but then my trigger would be in an infinite loop, which I don't want to happen. Is there a way out of this? Or can you please suggest any other solution?



Perhaps a higher-level description of what you’re trying to do would lead us to a useful approach. We started at “how can I make sure I don’t lose the ack” when perhaps we need to be reviewing the solution approach.

With the info so far, this is an example of ill-fit for pub/sub. You’re trying to track what the subscriber does with the event. If you care that the subscriber is successful, just call it. Don’t publish the document. Pub/sub shines in a “fire and forget” mode. It is less compelling when the process must track what happens to a document.

Checking for stuck records can be automated. Just query the table for the right status and last update > acceptable period of time. Reset the results to “started” or whatever to have them picked up again.
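A sketch of that automated check, assuming hypothetical table and column names and a 15-minute staleness threshold (tune both to your schema and SLA):

```python
import sqlite3
import time

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE staging (
    doc_id TEXT PRIMARY KEY,
    status TEXT NOT NULL,
    last_update REAL NOT NULL)""")

STUCK_AFTER_SECS = 15 * 60  # acceptable wait before a doc counts as stuck

def requeue_stuck(now=None):
    """Find docs stuck in 'published' too long and reset them to 'started'
    so they are picked up and republished; returns the affected doc ids."""
    now = time.time() if now is None else now
    cutoff = now - STUCK_AFTER_SECS
    stuck = [row[0] for row in conn.execute(
        "SELECT doc_id FROM staging "
        "WHERE status = 'published' AND last_update < ?", (cutoff,))]
    conn.executemany(
        "UPDATE staging SET status = 'started', last_update = ? "
        "WHERE doc_id = ?", [(now, d) for d in stuck])
    return stuck

# One doc stuck for an hour, one freshly published.
now = time.time()
conn.execute("INSERT INTO staging VALUES ('old-doc', 'published', ?)",
             (now - 3600,))
conn.execute("INSERT INTO staging VALUES ('new-doc', 'published', ?)",
             (now,))
print(requeue_stuck(now))  # ['old-doc']
```

In IS you would run the equivalent query from a scheduled service rather than a Python script; the point is that no human has to go looking for stuck rows.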

In order not to fall into a long running loop, you can configure the trigger so that Deliver until is set to Max attempts reached instead of Successful. Then, set Max attempts to the number of times you want the document to be retried.
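The difference between the two settings is just bounded versus unbounded retry. A generic sketch of the bounded case (the function and variable names are illustrative, not trigger properties):

```python
import time

MAX_ATTEMPTS = 3          # analogous to the trigger's "Max attempts" setting
RETRY_INTERVAL_SECS = 0   # keep 0 here so the sketch runs instantly

def deliver_with_retries(service, doc):
    """Call `service` at most MAX_ATTEMPTS times, then give up
    ("Deliver until: Max attempts reached") instead of looping forever
    ("Deliver until: Successful")."""
    for attempt in range(1, MAX_ATTEMPTS + 1):
        try:
            return service(doc)
        except Exception:
            if attempt == MAX_ATTEMPTS:
                raise  # exhausted: surface the failure for audit/resubmit
            time.sleep(RETRY_INTERVAL_SECS)

# A service that fails twice (database down), then succeeds.
calls = []
def flaky_update(doc):
    calls.append(doc)
    if len(calls) < 3:
        raise RuntimeError("database down")
    return "updated"

result = deliver_with_retries(flaky_update, "ack-1")
print(result)  # "updated", on the third attempt
```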

If the service being executed by the trigger is configured properly for auditing, you could then use the Monitor to resubmit the failed acks.

Another option for reprocessing the failed document(s): starting with IS 6.1 SP2, you can set the Retry failure behavior of the trigger to Suspend and retry later. This means that once the document is retried Max attempts, the trigger will actually be suspended. Once the database is back up, you can then resume the trigger to process the documents where it had left off.

You can resume the trigger manually via the Trigger Management page of the Administrator console or by executing the resume services in WmPublic/pub.trigger; OR you could set up a Resource monitoring service to do it automatically (although one could argue that using a monitoring service is not a whole lot different than the long running loop you were trying to avoid to begin with).
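The suspend-and-resume pattern can be sketched generically. This is not the webMethods API, just an illustration of the control flow: exhaust the retries, suspend instead of spinning, hold pending documents, and let a resource monitor resume delivery once the database is back:

```python
class Trigger:
    """Toy model of a trigger with 'Suspend and retry later' behavior."""

    def __init__(self, max_attempts):
        self.max_attempts = max_attempts
        self.suspended = False
        self.pending = []

    def deliver(self, doc, service):
        if self.suspended:
            self.pending.append(doc)   # hold the doc; nothing is lost
            return
        for _attempt in range(self.max_attempts):
            try:
                service(doc)
                return
            except Exception:
                pass
        # Max attempts exhausted: suspend instead of looping forever.
        self.suspended = True
        self.pending.append(doc)

    def resume(self, service):
        # Called by the resource monitor once the database is back up
        # (in IS, via the resume services under pub.trigger).
        self.suspended = False
        while self.pending:
            service(self.pending.pop(0))

db_up = False
processed = []
def update_staging(doc):
    if not db_up:
        raise RuntimeError("database down")
    processed.append(doc)

t = Trigger(max_attempts=3)
t.deliver("ack-1", update_staging)  # fails 3 times -> trigger suspends
t.deliver("ack-2", update_staging)  # queued while suspended
db_up = True                        # "resource monitor" sees DB is back
t.resume(update_staging)
print(processed)  # ['ack-1', 'ack-2']
```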

  • Percio

Sorry guys… I should have explained what I am looking for in a bit more detail…

I will try the approaches that were suggested…