Improving JDBC notification processing

Hi All,

Message flow:

Source Staging Table → JDBC Basic notification → Notification document → Subscriber service → Publish canonical document → Broker

We have a batch job that runs nightly at 11 PM and finishes around 6 AM. This batch processes and continuously inserts data into the source staging table.
The JDBC notification picks up and deletes the records from the staging table.

The JDBC notification is configured with a maximum of 5000 records and is scheduled to poll every 10 seconds.

Time  : Records pending in source : Records processed by wM
00-01 : 18556                     : 14662
01-02 : 26231                     : 12505
02-03 : 42716                     : 10376
03-04 : 45787                     : 17167
04-05 : 53819                     : 14377
05-06 : 54982                     : 24482
06-07 : 0                         : 55743

As per the above data:

At 2 AM, 26231 records were pending at source, but webMethods processed only 10376 messages between 02-03 AM.
At 6 AM, 54982 records were pending at source. The source batch job finished around 6 AM, which stopped the insertion of new records into the staging table; between 06-07 AM the JDBC notification was able to process 55743 messages.

It seems that concurrent insertions by the source and select & delete operations by webMethods on the same table are resulting in slow JDBC notification processing, because once the source stopped inserting, the JDBC notification was able to process records very fast.

Could you please advise how we can improve this situation at the webMethods end?

Would reducing or increasing the maximum records setting for the JDBC notification help?

Thanks,
Pawan Kumar

Hi Pawan,

Please provide your wM version and JDBC Adapter version with fix level.

There should be at least two tables: the real data table and the table used by the notification to pick up the data, which in turn is filled by a trigger when inserts occur on the real data table.
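For illustration, a minimal sketch of such a trigger, assuming an Oracle database and hypothetical names (SRC_DATA for the real table, SRC_STAGING for the notification table, ID as the key):

    CREATE OR REPLACE TRIGGER TRG_SRC_DATA_AI
      AFTER INSERT ON SRC_DATA          -- fires once per row inserted by the batch
      FOR EACH ROW
    BEGIN
      -- copy the key (and any columns the notification needs) into the
      -- staging table that the JDBC Basic notification polls
      INSERT INTO SRC_STAGING (ID, CREATED_AT)
      VALUES (:NEW.ID, SYSDATE);
    END;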

Depending on how many fields the tables have, it might be a better approach to select only the few fields required to identify the rows in the real table, and then do a select for all fields required for transport in a separate adapter service.
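In that design the notification document would carry only the key, and a separate JDBC adapter service would fetch the full record afterwards; a sketch, reusing the hypothetical names above:

    -- run by a follow-up adapter service; :id comes from the notification document
    SELECT * FROM SRC_DATA WHERE ID = :id;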

Regards,
Holger

Thanks Holger for response.

IS & Adapter versions are:
Version 9.6.0.0
Updates -
IS_9.6_Core_Fix6
IS_9.6_Core_Fix5
TNS_9.6_Fix5

Build Number - 294

Adapter Version 6.5
Updates JDBC_6.5_Fix48
JDBC_6.5_Fix49
JDBC_6.5_Fix50

Yes, we have two tables at source and the design is exactly as you explained.

Regards,
Pawan

Sounds like a locking issue. I would start by having a DBA look at it first. Perhaps there’s something he can do on his side. If not, then he should be able to suggest changes to either your side, the source side, or both.

Percio

Thanks Percio

JDBC notification is configured to pick 5000 records and delete after processing.

We investigated with our DBA and found that the JDBC notification seems to be deleting one record at a time from the staging table, which consumes many transactions and takes time.

I assume this is how webMethods internally processes JDBC notifications; please confirm.

The DBA's suggestion is to perform bulk deletes to improve processing.
Any suggestions on how to achieve this?

Thanks in advance,
Pawan

Adapter notifications work on a “per record” basis, meaning each record operation turns into a single publishable document, which explains why DELETEs are also issued on a “per record” basis.
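Roughly, the notification issues something like the following for each record (table and column names hypothetical):

    DELETE FROM <staging_table> WHERE ID = :id;   -- one statement per published document

whereas a bulk approach would cover an entire batch with a single statement, as in the scheduled-task design below.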

If you’d like to do bulk operations, you can use a scheduled task instead of a notification. In other words, create a Flow service that is executed periodically by the IS scheduler and does something like this:

  1. UPDATE <staging_table> SET STATUS = 'PROCESSING' WHERE STATUS IS NULL
  2. SELECT * FROM <staging_table> WHERE STATUS = 'PROCESSING'
  3. Process records (e.g. publish them)
  4a. DELETE FROM <staging_table> WHERE STATUS = 'PROCESSING'
      … OR …
  4b. UPDATE <staging_table> SET STATUS = 'DONE' WHERE STATUS = 'PROCESSING'

A couple of “words of caution”:

1. If you’re dealing with a lot of records, it’s wise to move the logic above inside a REPEAT step and then only process X number of records at a time. You can use the Maximum Row attribute of the Update adapter service to accomplish this.
2. If doing native publishes of the records to the Broker, you should remember that the Broker doesn’t support transactions. This means that if your DELETE fails and you try to rollback, the publish won’t be rolled back. This can result in duplicates. For this reason, I usually move the publish after the DELETE in scenarios like these.
3. If you’d like this process to be able to pull records in a multi-threaded fashion, you can change the first UPDATE so that it also updates a column that uniquely identifies the server and process issuing the update. The SELECT and DELETE should then take that column into account as well. With this design, you can have multiple Integration Servers pulling and processing data in parallel (see the sketch below).
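A sketch combining points 1 and 3, assuming Oracle syntax and hypothetical columns STATUS and WORKER_ID, where :workerId uniquely identifies the claiming server:

    -- claim a bounded batch for this worker
    UPDATE <staging_table>
       SET STATUS = 'PROCESSING', WORKER_ID = :workerId
     WHERE STATUS IS NULL
       AND ROWNUM <= 5000;              -- Oracle-style limit on the batch size

    -- read back only the rows this worker claimed
    SELECT * FROM <staging_table>
     WHERE STATUS = 'PROCESSING' AND WORKER_ID = :workerId;

    -- after processing, remove the claimed rows in one statement
    DELETE FROM <staging_table>
     WHERE STATUS = 'PROCESSING' AND WORKER_ID = :workerId;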

Percio

Thanks Percio.
I will discuss your solution with team.

We thought of the solution below, but are not sure whether it will help or not. Your opinion, please.

We have requested the source application to do bulk inserts into the staging table, instead of one at a time.
This way we will have fewer transactions and less DB locking, which might help the webMethods notification work faster.
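For example (hypothetical names, Oracle syntax), the source could insert each batch in one statement instead of row by row:

    -- one multi-row insert per batch instead of thousands of single-row transactions
    INSERT INTO <staging_table> (ID, PAYLOAD, CREATED_AT)
    SELECT ID, PAYLOAD, SYSDATE
      FROM batch_results;               -- hypothetical source of the nightly batch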

Sounds reasonable.

Pawan,

We also implemented a similar kind of solution and it worked most of the time.

HTH,
RMG