Problem with parallel processing

Hi Experts,

I have a situation something like this:
There are multiple publishers and a single subscriber. Each publisher publishes a document with a different transaction id. The subscription inserts into a database header table, which has the transaction id as its primary key, and a detail table, which has no key constraints. A published document carries a single header and multiple detail records, so the subscription performs a batch insert into the detail table, which forced us to use a local transaction. If two documents carry the same transaction id, the header table must be updated accordingly. Volume can reach 10k documents per hour, so we made the subscription trigger parallel. Now many documents fail to make it to the target because of the unique constraint on the header table. On the subscription side, the code selects the data by transaction id and then either inserts or updates. I suspect that because of parallel trigger processing, one thread is selecting the data before another has committed it for the same transaction id. Please advise on what needs to be done. I have a few ideas but am not sure which to implement:

Committing the header transaction as soon as it finishes, using transaction management (I believe that without transaction management, the transaction is committed only after the thread finishes executing). But given the huge volume, can I be sure the scenario won't occur?

Using a separate connection with NO_Transaction as the connection type, so that the header table data will be committed immediately.

Fiddling with the threads to make a thread pause its execution if another thread holds the same transaction id (not sure how to implement this; perhaps I could use notify and wait services in the pub folder with the transaction id as the key, or write a Java client using the Broker API to retrieve documents from the Broker by transaction id).


Please remember: when one transaction inserts a value covered by a unique constraint/index, any other transaction inserting the same value will block until the first transaction commits.

Since there is no way to silence the errors on the DB side, the best solution I see is to use a stored procedure with the following algorithm:

insert into table …
if the row already exists then
  select * from table where tid = :tid for update
  update table set … where tid = :tid
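The algorithm above can be sketched in Python with the built-in sqlite3 module (an illustrative stand-in, since the actual RDBMS is not named; the table and column names are hypothetical, and SQLite has no SELECT … FOR UPDATE, so that lock step appears only as a comment):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE header (tid TEXT PRIMARY KEY, total INTEGER)")

def upsert_header(conn, tid, total):
    # Try the insert first; a duplicate tid raises IntegrityError.
    try:
        conn.execute("INSERT INTO header (tid, total) VALUES (?, ?)", (tid, total))
    except sqlite3.IntegrityError:
        # Row already exists. In Oracle/PostgreSQL you would issue
        # SELECT ... FOR UPDATE here to lock the row before updating;
        # SQLite's single-writer lock makes that implicit.
        conn.execute("UPDATE header SET total = ? WHERE tid = ?", (total, tid))
    conn.commit()

upsert_header(conn, "T1", 10)
upsert_header(conn, "T1", 25)   # second call falls through to the UPDATE
print(conn.execute("SELECT total FROM header WHERE tid = 'T1'").fetchone()[0])  # 25
```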

Or alternatively:

insert into table …
when the row already exists
update table set a=a+:a, b=b+:b … where tid = :tid

But with the second one, the updates themselves must be atomic: relative updates such as a = a + :a, rather than absolute assignments that could overwrite another thread's work.
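The second variant, with relative (accumulating) updates, might look like this, again sketched with Python's sqlite3 and hypothetical column names a and b:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE header (tid TEXT PRIMARY KEY, a INTEGER, b INTEGER)")

def accumulate(conn, tid, a, b):
    try:
        conn.execute("INSERT INTO header (tid, a, b) VALUES (?, ?, ?)", (tid, a, b))
    except sqlite3.IntegrityError:
        # Relative update: a = a + :a is evaluated inside the statement,
        # so concurrent callers cannot lose each other's increments.
        conn.execute("UPDATE header SET a = a + ?, b = b + ? WHERE tid = ?",
                     (a, b, tid))
    conn.commit()

accumulate(conn, "T1", 1, 10)
accumulate(conn, "T1", 2, 20)
print(conn.execute("SELECT a, b FROM header WHERE tid = 'T1'").fetchone())  # (3, 30)
```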

Another solution is to do SQL MERGE from temporary tables…
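SQLite has no MERGE statement, but its INSERT … ON CONFLICT upsert (SQLite 3.24+, an assumption about the bundled library version) is the closest analogue, so a staging-table merge could be sketched like this; in Oracle or SQL Server you would use MERGE INTO … USING the temporary table instead:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE header (tid TEXT PRIMARY KEY, total INTEGER)")
conn.execute("INSERT INTO header VALUES ('T1', 1)")   # pre-existing row
conn.execute("CREATE TEMP TABLE staging (tid TEXT, total INTEGER)")

# Load the incoming batch into the staging table first...
conn.executemany("INSERT INTO staging VALUES (?, ?)", [("T1", 10), ("T2", 5)])

# ...then merge it in one statement. The "WHERE true" is a SQLite quirk
# needed to disambiguate ON CONFLICT after INSERT ... SELECT.
conn.execute("""
    INSERT INTO header (tid, total)
    SELECT tid, total FROM staging WHERE true
    ON CONFLICT(tid) DO UPDATE SET total = excluded.total
""")
conn.commit()
print(conn.execute("SELECT tid, total FROM header ORDER BY tid").fetchall())
```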

Which RDBMS do you actually use? And what do the updates do with the already-inserted rows?

Thanks for your response.
But the real problem in this scenario is that a row is inserted in one session and not yet committed while another session tries to select the same row. Let me be more clear: take two threads, A and B, running in parallel for the same transaction id. The code looks something like this:

select count(*) from Header where transaction_id = 'transaction id in the incoming doc'
if count is 0
  then insert
else
  update the row

Now suppose thread A executes first: it does not find the transaction id, so it goes ahead and inserts the row, but does not commit right there because it is a local transaction. Thread B, running in parallel, selects and gets a count of 0 because thread A has not committed. When thread B then tries to insert the row, thread A may have committed in the meantime, throwing up a unique constraint error.

My question here is also: if I use transaction management for the insert/update, what is the possibility that I run into the same situation, given the high load?

select count(*) from Header where transaction_id = 'transaction id in the incoming doc'
start transaction
if count is 0
  then insert
else
  update the row
commit transaction

And this is not thread safe. You may not "trust" the select count(*). You MUST ALWAYS attempt the INSERT, and if it fails with a unique constraint violation, update the row instead. Then the solution is thread safe: the DB serializes inserts on the same id, which allows the mechanism to work.
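A minimal sketch of this insert-always pattern, once more using Python's sqlite3 as a stand-in (the handle_document function and column names are hypothetical); the second call plays the role of the losing thread in the race described above:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE header (tid TEXT PRIMARY KEY, status TEXT)")

def handle_document(conn, tid, status):
    """Insert-first upsert: never trust a prior SELECT COUNT(*)."""
    try:
        conn.execute("INSERT INTO header VALUES (?, ?)", (tid, status))
        return "inserted"
    except sqlite3.IntegrityError:
        # Another thread won the insert race; fall back to UPDATE.
        conn.execute("UPDATE header SET status = ? WHERE tid = ?", (status, tid))
        return "updated"

# Two documents arriving with the same transaction id:
print(handle_document(conn, "T42", "new"))      # inserted
print(handle_document(conn, "T42", "final"))    # updated
conn.commit()
```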

But as I said, it is best when all of this is done in a stored procedure, as handling specific DB exceptions from the WM side is non-trivial.

This seems to be a good solution. Thanks a lot; I will try it out and let you know.