I am getting duplicate entries (6x) in Trading Networks for a single XML file. The records are picked up by the JDBC adapter services when a particular column or flag is set to, say, 'P'.
For the records that are sent to Trading Networks from IS, the status changes to 'Y'.
There is no problem with the code itself; it has been functioning properly for the last year without any issues, and this is the first time I am encountering this problem.
First, did something change or get fixed all of a sudden in your process flow? Did you try debugging the flow for that particular record? Did this duplication occur for only one record in the DB?
It happened for only one record, on a single day.
Either way, I am guessing the duplicates were transferred because the status field was not updated. But why didn't the status field get updated for that particular record?
My feeling is that it might be related to threads.
There are two servers (IS) on which these services are running, and these servers run on a load-sharing/clustered basis.
Somehow the servers might have gone out of sync, so the stat field did not show the same value on both servers, and the services might have run again as a result.
But I need a resolution for this so that it doesn't happen again in the future.
Does anyone know how code in webMethods can be synchronized, as is done in Java, so that only a single thread can access it at a given instant?
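For example, in plain Java I would guard the critical section with a shared lock, roughly like the sketch below (the class and method names are purely illustrative):

    import java.util.concurrent.locks.ReentrantLock;

    public class RecordPoller {
        // one shared lock so only a single thread runs the polling logic at a time
        private static final ReentrantLock LOCK = new ReentrantLock();

        public static void pollAndPublish() {
            if (!LOCK.tryLock()) {
                return; // another run is still in progress, so skip this one
            }
            try {
                // select records with stat='P', publish them, update stat to 'Y'
            } finally {
                LOCK.unlock();
            }
        }
    }

As far as I understand, a lock like this only lives inside a single JVM, so even if there is a webMethods equivalent, would it help across the two clustered servers?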
How does it get started? A JDBC adapter notification? A scheduled task?
Once started, can you give us a high-level process description? At what point in the process does the stat column get updated?
If a particular document fails to be processed fully for some reason, what happens to the stat column? Is it updated? Is it left alone? Does the next run pick up all records with a status of “P”?
As I wrote earlier in my post, the stat field gets updated when the records are sent to TN. The flow service (a scheduled service) includes, in a SEQUENCE, the code for routing the records to TN (WmPublic.pub.publish:publish) as well as for changing the stat field from 'W' to, say, 'P'. There are no failure cases as such; all the records are sent to TN no matter what.
The problem, then, is how the duplicates got into TN.
To make sure I’m understanding correctly, with my questions as we go:
1. A scheduled task runs periodically. How often does this run?
2. When the service is run, it will (see the sketch after this list):
2.1. Call a JDBC service to select records from the DB where stat=‘P’
2.2. For each record:
2.2.1 Call pub.publish:publish (Why is this used to get the doc to TN?)
2.2.2 Call a JDBC service to update the record to set stat=‘Y’
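In code terms, I'm picturing something roughly like this (just a sketch of how I read your description; the table and column names are invented):

    import java.sql.*;

    public class PublishPendingRecords {
        // the loop as I understand it: select pending rows, publish each one, then mark it done
        public static void run(Connection con) throws SQLException {
            try (PreparedStatement select = con.prepareStatement(
                     "SELECT id, payload FROM orders WHERE stat = 'P'");
                 PreparedStatement update = con.prepareStatement(
                     "UPDATE orders SET stat = 'Y' WHERE id = ?");
                 ResultSet rs = select.executeQuery()) {
                while (rs.next()) {
                    long id = rs.getLong("id");
                    String payload = rs.getString("payload");
                    publishToTN(payload);      // corresponds to pub.publish:publish
                    update.setLong(1, id);
                    update.executeUpdate();    // corresponds to the JDBC update service
                }
            }
        }

        private static void publishToTN(String payload) {
            // placeholder for the publish step that eventually reaches TN
        }
    }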
Are there any other steps? Are there multiple IS instances running?
I assume you have a trigger that subscribes to the published doc type and then calls TN receive. Can you describe your setup in TN? What does the service invoked by the TN rule do with the document?
Are you using try/catch blocks in your services? What do the catch blocks do?
There are a couple of ways duplicates can be introduced:
If the above service fails after the publish but before the JDBC update service successfully runs, then on the next run the previously published record will be picked up again (one way to guard against this is sketched below).
If the interval of the scheduled task is sufficiently short, and allow overlap is true, and the time it takes to process all the records exceeds the interval, then a second run could get started before the first is finished. This can result in dupes.
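One way to harden against the first case is to "claim" each record with a conditional update before publishing, and only publish when that update actually changed a row. A rough sketch (again with invented table/column names, and assuming an extra in-progress status such as 'I'):

    import java.sql.*;

    public class ClaimThenPublish {
        // claim the row first; if another run (or the other IS node) has already claimed it,
        // the update affects 0 rows and we skip the publish, so no duplicate reaches TN
        public static void process(Connection con, long id, String payload) throws SQLException {
            try (PreparedStatement claim = con.prepareStatement(
                    "UPDATE orders SET stat = 'I' WHERE id = ? AND stat = 'P'")) {
                claim.setLong(1, id);
                if (claim.executeUpdate() == 1) {
                    publishToTN(payload);
                    try (PreparedStatement done = con.prepareStatement(
                            "UPDATE orders SET stat = 'Y' WHERE id = ?")) {
                        done.setLong(1, id);
                        done.executeUpdate();
                    }
                }
            }
        }

        private static void publishToTN(String payload) {
            // placeholder for the pub.publish:publish step
        }
    }

This only helps if both IS instances go against the same database and the claim is committed before the publish happens; otherwise the other node can still see the row as 'P'.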
It would be easier for us to help you if you could provide the specifics of your process. General statements like “the stat field gets updated when the records are sent to TN” aren't all that helpful; we need to know exactly how you're doing these things.