JDBC Adapter Polling Loop

I am using a JDBC adapter to poll a DB2 table on an AS/400 system. The table is polled with the Basic Notification operation, which begins the process of inserting the data carried in the document into a Siebel database. The table serves as a temporary (staging) table: once the adapter picks up the data, the component checks the XRef table (SQL Server) and publishes the document, and the row is deleted from the temporary table. The problem is what happens when an error occurs within the component that polls the AS/400 table: the row still gets deleted from the temporary table, but the document is never published, and the component gets stuck in a loop because of the error.
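To make the failure mode concrete, here is a rough Java/JDBC sketch of the flow; StagingPoller, STAGING, lookupXRef, and publishDocument are hypothetical stand-ins for the actual adapter services. The notification behaves like a destructive read, so any error after pickup loses the row:

    import java.sql.*;

    public class StagingPoller {
        // The Basic Notification effectively does a destructive read: the row is
        // deleted as part of being picked up, regardless of what happens later.
        String pollAndDelete(Connection db2, long rowId) throws SQLException {
            String payload = null;
            try (PreparedStatement sel = db2.prepareStatement(
                    "SELECT PAYLOAD FROM STAGING WHERE ROW_ID = ?")) {
                sel.setLong(1, rowId);
                try (ResultSet rs = sel.executeQuery()) {
                    if (rs.next()) payload = rs.getString("PAYLOAD");
                }
            }
            try (PreparedStatement del = db2.prepareStatement(
                    "DELETE FROM STAGING WHERE ROW_ID = ?")) {
                del.setLong(1, rowId);
                del.executeUpdate();
            }
            return payload;
        }

        void onPoll(Connection db2, Connection sqlServer, long rowId) throws SQLException {
            String payload = pollAndDelete(db2, rowId);   // row is now gone from STAGING
            String xref = lookupXRef(sqlServer, payload); // an error here, or in the
            publishDocument(xref, payload);               // publish, loses the data
        }

        String lookupXRef(Connection c, String p) { /* XRef lookup */ return p; }
        void publishDocument(String xref, String payload) { /* broker publish */ }
    }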

Has anyone come across an issue similar to this? If so, what was done to resolve it?

Right now, if an error occurs, I remove the row from the temporary table with a delete operation, but I don't have a good way to store the data so it can be sent to Siebel later. If anyone can offer some good advice, that'd be great.

We approached this problem by not deleting the row from the temporary table; instead we set a flag, and if the flag is set, the adapter's next poll will not pick the row up. Our main reason for doing this was that the application groups that owned the data did not want it deleted.

So what happens is that the integration component that polls the data sets the flag just before publishing the document. If an error does occur, the flag doesn't get set and the row is picked up again on the next poll.
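In rough JDBC terms the ordering looks like this; STAGING, PROCESSED_FLAG, and the two helper methods are hypothetical stand-ins for our actual services:

    import java.sql.*;

    public class FlagThenPublish {
        // Sets the processed flag just before publishing; names are hypothetical.
        void processRow(Connection db2, long rowId, String payload) throws SQLException {
            String xref = lookupXRef(payload);  // an error here leaves the flag clear,
                                                // so the row is retried on the next poll
            try (PreparedStatement upd = db2.prepareStatement(
                    "UPDATE STAGING SET PROCESSED_FLAG = 1 WHERE ROW_ID = ?")) {
                upd.setLong(1, rowId);
                upd.executeUpdate();
            }
            // Once the flag is set, a failure in the publish itself is not retried;
            // that trade-off favors skipping a row over publishing it twice.
            publishDocument(xref, payload);
        }
        String lookupXRef(String payload) { /* SQL Server XRef lookup */ return payload; }
        void publishDocument(String xref, String payload) { /* broker publish */ }
    }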

We had a couple of issues to address with this scenario. First, the JDBC adapter Basic Notification does not allow you to specify a WHERE clause (unless a newer version does now), so there was no way to filter out already-processed rows. We built a custom adapter to fire a trigger event every X seconds; that in turn initiates a normal select via the JDBC adapter, where we could specify a WHERE clause and bypass previously processed rows.
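The select that the timed trigger kicks off is roughly the following, again with hypothetical table and column names:

    import java.sql.*;

    public class FilteredPoll {
        // Filtered poll initiated by the timed trigger; STAGING and PROCESSED_FLAG
        // are hypothetical names for the staging table and its processed flag.
        void poll(Connection db2) throws SQLException {
            try (PreparedStatement sel = db2.prepareStatement(
                    "SELECT ROW_ID, PAYLOAD FROM STAGING WHERE PROCESSED_FLAG = 0");
                 ResultSet rs = sel.executeQuery()) {
                while (rs.next()) {
                    handleRow(rs.getLong("ROW_ID"), rs.getString("PAYLOAD"));
                }
            }
        }
        void handleRow(long rowId, String payload) { /* set flag, then publish */ }
    }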

Second, we had to perform some kind of cleanup on the processed rows in the temporary table. That is basically a repeat of the previous setup, except it deletes rows that have the flag set and are older than Y days.
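The cleanup delete amounts to something like this; PROCESSED_TS is a hypothetical timestamp column recording when the flag was set, and computing the cutoff on the Java side keeps the SQL portable:

    import java.sql.*;
    import java.time.Instant;
    import java.time.temporal.ChronoUnit;

    public class StagingCleanup {
        // Deletes rows already processed (flag set) and older than Y days.
        // STAGING, PROCESSED_FLAG, and PROCESSED_TS are hypothetical names.
        void cleanup(Connection db2, int retentionDays) throws SQLException {
            Timestamp cutoff = Timestamp.from(
                    Instant.now().minus(retentionDays, ChronoUnit.DAYS));
            try (PreparedStatement del = db2.prepareStatement(
                    "DELETE FROM STAGING WHERE PROCESSED_FLAG = 1 AND PROCESSED_TS < ?")) {
                del.setTimestamp(1, cutoff);
                del.executeUpdate();
            }
        }
    }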

In the long run you end up creating four integration components to manage all of this, but it fully meets our needs.