We have a batch job that runs nightly from 11 PM and finishes around 6 AM. This batch continuously inserts records into the source staging table.
The JDBC adapter notification picks up and deletes the records from the staging table.
The JDBC notification is configured with a maximum of 5000 records per poll and a polling interval of 10 seconds.
At 2 AM, 26,231 records were pending at the source, but webMethods processed only 10,376 messages between 2 AM and 3 AM.
At 6 AM, 54,982 records were pending at the source. The source batch job finished around 6 AM, which stopped insertion of new records into the staging table; after that, the JDBC notification was able to process 55,743 messages.
It seems that simultaneous insertions by the source and select/delete operations by webMethods on the same table are causing the slow JDBC notification processing, because once the source stopped inserting, the JDBC notification was able to process records very quickly.
Could you please advise how we can improve this situation on the webMethods side?
Would reducing or increasing the maximum records setting for the JDBC notification help?
Please provide your webMethods version and your JDBC Adapter version with its fix level.
There should be at least two tables: the real data table and a buffer table used by the notification to pick up the data; the buffer table is in turn filled by a trigger when inserts occur on the real data table.
Depending on how many fields the tables have, it may be a better approach to select only the few fields required to identify the rows in the real table, and then fetch all fields required for transport with a separate adapter service.
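The trigger-plus-buffer-table setup described above can be sketched as follows. This is a minimal illustration using Python's sqlite3 (the actual database would be whatever the staging environment uses); the table, column, and trigger names are hypothetical:

```python
import sqlite3

# Hypothetical names: ORDERS is the real data table, ORDERS_BUFFER is the
# small buffer table that the JDBC notification would poll and delete from.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE ORDERS (ORDER_ID INTEGER PRIMARY KEY, PAYLOAD TEXT);
CREATE TABLE ORDERS_BUFFER (ORDER_ID INTEGER);

-- The trigger copies only the key of each new row into the buffer, so the
-- notification never contends with the source on the real data table.
CREATE TRIGGER ORDERS_AI AFTER INSERT ON ORDERS
BEGIN
    INSERT INTO ORDERS_BUFFER (ORDER_ID) VALUES (NEW.ORDER_ID);
END;
""")

conn.execute("INSERT INTO ORDERS (PAYLOAD) VALUES ('order data')")
keys = conn.execute("SELECT ORDER_ID FROM ORDERS_BUFFER").fetchall()
print(keys)  # the buffer now holds just the key of the inserted row
```

A separate adapter service would then use each key from the buffer to select the full row from the real table when needed.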
Sounds like a locking issue. I would start by having a DBA look at it first. Perhaps there’s something he can do on his side. If not, then he should be able to suggest changes to either your side, the source side, or both.
Adapter notifications work on a “per record” basis, meaning each record operation turns into a single publishable document, which explains why DELETEs are also issued on a “per record” basis.
If you’d like to do bulk operations, you can use a scheduled task instead of a notification. In other words, create a Flow service that is executed periodically by the IS scheduler and does something like this:
1. UPDATE <table> SET STATUS = 'PROCESSING' WHERE STATUS IS NULL
2. SELECT * FROM <table> WHERE STATUS = 'PROCESSING'
3. Process records (e.g. publish them)
4a. DELETE FROM <table> WHERE STATUS = 'PROCESSING'
… OR …
4b. UPDATE <table> SET STATUS = 'DONE' WHERE STATUS = 'PROCESSING'
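The steps above can be sketched end to end. This is a minimal illustration in Python with sqlite3 standing in for the staging database; the STAGING table and its columns are hypothetical names:

```python
import sqlite3

# Minimal sketch of the claim/select/process/delete cycle, assuming a
# staging table with a nullable STATUS column (names hypothetical).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE STAGING (ID INTEGER PRIMARY KEY, DATA TEXT, STATUS TEXT)")
conn.executemany("INSERT INTO STAGING (DATA) VALUES (?)", [("a",), ("b",), ("c",)])

# 1. Claim all unprocessed rows in a single statement.
conn.execute("UPDATE STAGING SET STATUS = 'PROCESSING' WHERE STATUS IS NULL")

# 2. Select the claimed rows.
rows = conn.execute("SELECT ID, DATA FROM STAGING WHERE STATUS = 'PROCESSING'").fetchall()

# 3. Process the records (here just printed; a Flow service would publish them).
for row in rows:
    print(row)

# 4a. Delete the processed rows in bulk, not one DELETE per record.
conn.execute("DELETE FROM STAGING WHERE STATUS = 'PROCESSING'")
conn.commit()

remaining = conn.execute("SELECT COUNT(*) FROM STAGING").fetchone()[0]
print(remaining)  # 0 rows remain after the bulk delete
```

The key difference from a notification is that the claim, select, and delete each touch the whole batch in one statement instead of one round trip per record.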
A couple of “words of caution”:
If you’re dealing with a lot of records, it’s wise to move the logic above inside a REPEAT step and then only process X number of records at a time. You can use the Maximum Row attribute of the Update adapter service to accomplish this.
If doing native publishes of the records to the Broker, you should remember that the Broker doesn’t support transactions. This means that if your DELETE fails and you try to rollback, the publish won’t be rolled back. This can result in duplicates. For this reason, I usually move the publish after the DELETE in scenarios like these.
If you’d like this process to be able to pull records in a multi-threaded fashion, you can change the first UPDATE so that it also updates a column that uniquely identifies the server and process issuing the update. The SELECT and DELETE should then take that column into account as well. With this design, you can have multiple Integration Servers pulling and processing data in parallel.
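The multi-server variant can be sketched like this, again with sqlite3 as a stand-in and hypothetical table/column names; the WORKER column plays the role of the column identifying the claiming server:

```python
import sqlite3

# Each claimer stamps a column that uniquely identifies it, then only
# selects and deletes its own rows (all names hypothetical).
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE STAGING (ID INTEGER PRIMARY KEY, DATA TEXT, STATUS TEXT, WORKER TEXT)"
)
conn.executemany("INSERT INTO STAGING (DATA) VALUES (?)", [("a",), ("b",)])

def claim_and_process(worker_id):
    # Claim pending rows for this worker; a second IS instance running the
    # same UPDATE afterwards finds nothing left with STATUS IS NULL.
    conn.execute(
        "UPDATE STAGING SET STATUS = 'PROCESSING', WORKER = ? WHERE STATUS IS NULL",
        (worker_id,),
    )
    rows = conn.execute(
        "SELECT ID, DATA FROM STAGING WHERE STATUS = 'PROCESSING' AND WORKER = ?",
        (worker_id,),
    ).fetchall()
    conn.execute("DELETE FROM STAGING WHERE WORKER = ?", (worker_id,))
    conn.commit()
    return rows

rows1 = claim_and_process("IS-node-1")
rows2 = claim_and_process("IS-node-2")
print(rows1)  # the first node claims both pending rows
print(rows2)  # nothing left for the second node: []
```

In a real deployment the two calls would come from different Integration Servers, and the database's row locking during the claiming UPDATE is what keeps them from grabbing the same rows.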
I will discuss your solution with the team.
We thought of the solution below, but are not sure whether it will help. Your opinion, please.
We have requested the source application to do bulk inserts into the staging table, instead of one row at a time.
This way there will be fewer transactions and less DB locking, which might help the webMethods notification work faster.
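For what it's worth, the difference between row-at-a-time and bulk insertion can be sketched as below, with sqlite3 standing in for the source database and a hypothetical STAGING table. The point is one statement and one commit per batch rather than one transaction per row:

```python
import sqlite3

# Sketch of the proposed change on the source side: insert a whole batch
# in one statement/transaction instead of one commit per row.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE STAGING (ID INTEGER PRIMARY KEY, DATA TEXT)")

records = [("row-%d" % i,) for i in range(1000)]

# One executemany plus one commit: far fewer lock acquisitions and log
# flushes than 1000 individually committed single-row inserts.
conn.executemany("INSERT INTO STAGING (DATA) VALUES (?)", records)
conn.commit()

count = conn.execute("SELECT COUNT(*) FROM STAGING").fetchone()[0]
print(count)  # all 1000 rows arrived in a single transaction
```

Whether this actually helps will depend on how the source database's locking interacts with the notification's concurrent selects and deletes, so it is worth validating with the DBA.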