An Oracle database is populated with 50 new rows of data at a time. The data is inserted by an application over which I have no control. After all 50 rows have been successfully inserted, a commit is performed. My goal is to publish all 50 rows as a single UDM.
The Database Adapter notification events seem to publish only one row at a time. Is there a simple way to gather each of the 50 individual notifications and publish them as one UDM?
Source table: The table containing the real data that we want to process. The trigger will be written against this table.
Buffer table: Interim table containing key data for a group of rows in the source table. The Basic Notification operation will operate on this table.
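A minimal sketch of what the buffer table might look like (the table name, the extra timestamp column, and the primary key are assumptions; only the GROUP_ID key comes from the approach described here):

```sql
-- Hypothetical buffer table: one row per group of source rows.
-- The Basic Notification operation would be configured against this table.
CREATE TABLE group_buffer (
  GROUP_ID   NUMBER NOT NULL,          -- same group key used in the source table
  CREATED_AT DATE   DEFAULT SYSDATE,   -- optional: when the group was first seen
  CONSTRAINT pk_group_buffer PRIMARY KEY (GROUP_ID)
);
```

Making GROUP_ID the primary key also gives you a safety net: a duplicate insert for the same group would fail rather than produce two notifications.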
Operation:
When writing the rows to the source table, write them all in a single transaction. You stated that the app already does this so that’s good.
If there is a field that can be used to identify the group of rows, great. If not, then add a new column to hold this group key. You can use a sequence to populate it. Make sure all rows in the group have the same group id.
If you can’t add a new column to the table, which I imagine is probably the case, then we’ll have to try another strategy.
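If you can add the group-key column, a sketch of populating it with a sequence might look like the following (table, column, and bind-variable names are assumptions for illustration):

```sql
-- Hypothetical sequence for generating group ids.
CREATE SEQUENCE group_id_seq;

-- The inserting application (or a wrapper around it) would fetch ONE value
-- per batch of 50 rows:
--   SELECT group_id_seq.NEXTVAL INTO :batch_group_id FROM dual;
-- ...and then stamp every row in that batch with the same value:
INSERT INTO source_table (GROUP_ID, some_col, other_col)
VALUES (:batch_group_id, :val1, :val2);
```

The key point is that NEXTVAL is taken once per batch, not once per row, so all 50 rows share the same GROUP_ID.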
Write a trigger against the source table. The trigger would look something like this (assuming a group id field named GROUP_ID in both the source and buffer tables):

CREATE OR REPLACE TRIGGER (trigger name)
AFTER INSERT ON (source table)
FOR EACH ROW
DECLARE
  RowCnt INTEGER;
BEGIN
  SELECT COUNT(*) INTO RowCnt
    FROM (buffer table)
   WHERE GROUP_ID = :new.GROUP_ID;
  IF (RowCnt = 0) THEN
    INSERT INTO (buffer table)
      (GROUP_ID, whatever other identifying columns you want, ...)
    VALUES
      (:new.GROUP_ID, whatever values you want to assign, ...);
  END IF;
END;
This trigger writes an entry into the buffer table only once per group. The configured Basic Notification operation picks up this buffer entry. Configure the notification to automatically delete buffer table entries.
I’m not the greatest SQL jockey around, so if someone has an alternative trigger that writes only one record to a table after a commit of several rows (while still having access to a field in one of those rows), please post the better solution!
At this point, you have two options:
1) Have the intelligent component that is kicked off by the notification do the select to get the rows from the source table, using the GROUP_ID from the buffer table record, and publish the “big” event.
2) Publish a notification event containing only the data from the buffer table record.
Option 2 gives some additional flexibility but adds some overhead.
If using option 2, configure an integration to subscribe to the notification event. In that script, invoke a configured op to do the select to retrieve the rows from the source table using the GROUP_ID data from the notification. Once you do the select to get all rows, you can do whatever you need to complete the integration.
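Either way, the select behind the configured op is simple; a sketch, assuming hypothetical table and bind-variable names:

```sql
-- Pull the whole group of 50 rows using the GROUP_ID carried in the
-- notification (either from the buffer record or the notification event).
SELECT *
  FROM source_table
 WHERE GROUP_ID = :groupIdFromNotification;
```

The rows returned by this select are what you would assemble into the single “big” UDM.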
As mentioned above, if there is not a field that can be used to identify the group, nor can one be added, I’d have to know more about the nature of the data in the source table to be able to determine a usable strategy.
If you are talking about the Oracle adapter, we have experienced this also. Our solution was to take the ‘notification’, i.e. the polling, away from the Oracle adapter. We built a custom adapter with a single operation that fires an event every X seconds. When that fires, the Oracle adapter receives the document and performs a normal select operation. This works fine for our needs because we typically never delete rows from tables; we simply update a status flag after selecting the data.
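The select-then-flag pattern described above might be sketched like this (the STATUS column and its values are assumptions; adapt to your own schema):

```sql
-- Hypothetical polling select, fired every X seconds by the custom adapter:
SELECT *
  FROM source_table
 WHERE STATUS = 'NEW';

-- After publishing the data, mark the rows as handled instead of deleting them:
UPDATE source_table
   SET STATUS = 'PROCESSED'
 WHERE STATUS = 'NEW';
```

Running the select and update in the same transaction avoids missing rows that arrive between the two statements.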