We have integrations between wM 6.5 SP2 and SAP 4.7 using wmSAPAdapter 6.5 SP1_Fix8. Data is exchanged in IDoc format. The IDoc processing in SAP is done in batch mode.
When we invoke the pub.sap.client:sendIDoc service to send IDocs from webMethods to SAP, the SAP team sometimes says they have not received the IDoc in SAP, even though the webMethods SAP log shows that the IDoc was sent successfully.
Following are the steps in sending IDocs from webMethods to SAP:
In webMethods, we have the $tid from this transaction. Is there any way to correlate this $tid with the corresponding IDoc number in SAP? This raises the question of the reliability of the webMethods transactions.
I would appreciate it if you could share your experience or approach for tracking such transactions.
Just a remark about the last step of RMG’s code snippet:
pub.sap.idoc:documentToIDoc
pub.sap.idoc:encodeSDATA
pub.sap.transport.ALE:OutboundProcess (has $tid out) - posts the IDoc to SAP
The service signature for ‘pub.sap.transport.ALE:OutboundProcess’ shows ‘$tid’ as an optional input, not as an output as you mentioned. However, testing this service without a ‘$tid’ input gave me a ‘$tid’ value in the output.
I thought this was an undocumented webMethods feature… until I saw this remark in the documentation for the ‘pub.sap.client:createTID’ service:
Important! If no TID exists in the pipeline, then the pub.sap.transport.ALE:OutboundProcess service performs the function of both the pub.sap.client:createTID and pub.sap.client:invokeTransaction services. For IDocs, it is recommended that you use either the pub.sap.transport.ALE:OutboundProcess service or the call sequence of pub.sap.client:createTID and pub.sap.client:sendIDoc.
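For anyone else following along, here is a minimal outline of the two equivalent call sequences the documentation refers to. It is only a sketch: apart from $tid, the exact input fields (SAP connection alias, IDoc list structure) vary by adapter version, so check the service signatures in Developer before copying it.

Variant A (one service does it all, as in RMG's snippet):
  pub.sap.idoc:documentToIDoc
  pub.sap.idoc:encodeSDATA
  pub.sap.transport.ALE:OutboundProcess <-- creates a TID if none is in the pipeline, sends and commits, returns $tid

Variant B (explicit TID handling):
  pub.sap.client:createTID <-- returns $tid; persist it if you may need to retry later
  pub.sap.idoc:documentToIDoc
  pub.sap.idoc:encodeSDATA
  pub.sap.client:sendIDoc <-- pass the same $tid; a retry with that $tid will not create a duplicate in SAP

In both variants, pub.sap.client:confirmTID is a separate follow-up step, discussed further down in this thread.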
Quick question – after calling pub.sap.transport.ALE:OutboundProcess, do I need to call ‘pub.sap.client:confirmTID’?
My purpose is to confirm handover of the IDoc to SAP (i.e., whether the IDoc reached the SAP system (tRFC queue?) successfully).
My purpose is NOT to confirm whether the particular IDoc was processed successfully within SAP (e.g., whether an ORDERS05 IDoc actually created a sales order).
Must I call ‘pub.sap.client:confirmTID’, or can I just check that $tid was populated in the pipeline after the pub.sap.transport.ALE:OutboundProcess invocation?
You don’t need to call the ‘pub.sap.client:confirmTID’ service once you have called ‘pub.sap.transport.ALE:OutboundProcess’.
Once you call ‘pub.sap.transport.ALE:OutboundProcess’ to send the IDoc to SAP, you will see entries like the following in the SAP adapter log, which confirm the TID:
Adapter Service - IDoc sent to SAP system “DR1” with tid “0A32105F007149F0A67F6D60”.
SyncALETransport - Commit 0A32105F007149F0A67F6D60
Transaction state changed: Created >> Committed
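If you want that confirmation to be visible in your Flow and not only in the adapter log, a minimal sketch (using only services already mentioned in this thread; the debugLog messages are just examples) could look like this:

Try:
  <Build the IDoc as in the snippets above (documentToIDoc etc.)>
  pub.sap.transport.ALE:OutboundProcess <-- fails with an exception if the handover to SAP does not succeed
  pub.flow:debugLog <-- e.g. "IDoc handed over to SAP, tid=%$tid%"
Catch:
  pub.flow:getLastError
  pub.flow:debugLog <-- handover failed; keep the payload (and the $tid, if you created one) for a later retry

So checking that $tid is populated after a successful OutboundProcess call tells you the IDoc reached SAP; whether it was then posted correctly inside SAP is a separate question (WE02 territory).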
I have a question here. I also used the same steps to send IDocs to SAP before, but the problem is: if you look at the Transactions page on the Admin console, the message type it shows is wrong. Instead of the MessageType, it shows the IDocType. For support people it will be difficult to identify which transaction belongs to which interface, because they use MessageTypes to identify the transactions. Did anyone come across this, and how are you handling it?
We had a review recently from webMethods PS. According to them, pub.sap.client:confirmTID must always be called after pub.sap.transport.ALE:OutboundProcess “for transactional invocation to complete transactions on SAP.”
My recollection from debugging these integrations is that SAP’s IDoc monitor (transaction WE02) showed it had ‘got’ the inbound IDoc before the call to confirmTID was made in Flow, but I have modified my code to call confirmTID just in case.
Let me clear this up a bit: the tRFC protocol consists of several steps. During these steps the SAP system saves the TID into a database table (ARFCRSTATE) so that it can protect itself against duplicate processing. For example, if an IDoc post was executed successfully inside the SAP system, but the network connection broke down before the SAP system was able to return an “OK” acknowledgement to the sender, the sender might (and in fact should!) resend the same transaction, because the sender cannot know whether the network problem happened “on the way to the SAP system” or “on the way back”. So the sender sends it a second time, the SAP system checks its status table and sees: “I have already processed this TID, so let’s just ignore this second call”. SAP returns “OK” to the sender without processing the IDoc a second time, the OK makes it back successfully this time, and everything is fine.
Now imagine an application that sends thousands (or millions) of IDocs per day. The ARFCRSTATE table would fill up pretty fast and the performance of tRFC processing (and of the entire SAP system in general) would degrade. Here is where the “Confirm” step comes into play: it simply deletes the entry in ARFCRSTATE corresponding to the current TID, preventing infinite growth of that table.
So all ‘pub.sap.client:confirmTID’ does is perform this “Confirm” step.
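To make that concrete, the tRFC exchange for a single IDoc looks roughly like this (simplified outline, same terms as above):

o Sender obtains a TID (createTID, or OutboundProcess generates one).
o Sender calls SAP, passing the TID together with the IDoc data.
o SAP checks ARFCRSTATE: if the TID is already recorded, it returns “OK” without processing again; otherwise it records the TID, processes the IDoc and returns “OK”.
o If the “OK” never arrives, the sender resends with the SAME TID, and the check above turns the resend into a harmless no-op.
o Confirm (confirmTID) deletes that TID’s entry from ARFCRSTATE. After that, a resend with the same TID would be processed again, which is why the confirm must only happen once you are sure no further resend can occur.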
Now a warning: calling the “sendIDoc” step and the “confirmTID” step in the same Flow is VERY dangerous. Just imagine the following scenario:
o HTTP client sends XML to wM
o Flow Service creates an IDoc, sends it into SAP and finally calls confirmTID
o At some point after the confirmTID has finished, but before the current Flow (or the Flow engine or the HTTP handler) has finished, an OutOfMemoryError occurs in the current thread. (This can happen at any point.)
o Now the HTTP handler will send an error code (403) back to the HTTP client, so the client “thinks” the IDoc was not processed successfully and therefore resends it at a later time.
But in fact the IDoc has already been processed in SAP. And because the confirmTID service has already deleted the corresponding TID, the SAP system will not recognize the IDoc as a duplicate when it comes in the second time…
Result: you have ordered 20 refrigerators instead of 10…
The HTTP client should trigger the confirmTID in a second HTTP request AFTER it is sure that the IDoc has reached SAP successfully. (This also implies that the HTTP client should trigger the creation of the TID and keep it in its status information. Otherwise, what good is it if the client doesn’t keep track of TIDs and the wM server creates a new one each time… You will also get duplicates this way.)
We are running into the problem you mentioned should never happen. We are using pub.sap.transport.ALE:OutboundProcess to send an IDoc to SAP (an order). It appears that SAP posts the IDoc, but we never get a response back fast enough, so we send another one, and it also posts to SAP with no error. In the past SAP has stopped the actual order because it detected it as a duplicate, but lately it has been processing the duplicate IDocs at exactly the same time, so no duplicates are detected in SAP. We haven’t touched our wM flows in years, so we think it’s on the SAP side. Is there a tRFC setting that we can look at to see if that has changed? Or could this be an IDoc setting that is batching them up instead of processing them directly?
No, pub.sap.client:confirmTID does NOT commit the transaction! The transaction is already committed when pub.sap.client:sendIDoc (or ALE:OutboundProcess) returns successfully. confirmTID confirms the transaction, and that is a BIG difference from committing it. (See my explanation of the tRFC protocol in my previous post from Aug-20.)
And again: NEVER do the confirmTID inside the same Flow that sends the IDoc! It will only lead to duplicates. (It is better to forget about confirmTID entirely and instead schedule a clean-up job in R/3 that deletes old TIDs from ARFCRSTATE than to use confirmTID incorrectly…!)
Most probably the hardware of the SAP system has become faster or it has more work processes now, so now it is able to really process these two orders in parallel…
Mike: when you “send another one”, do you use the same TID as the first time, or do you let ALE:OutboundProcess create a new one? If you want to prevent duplicates, you need to use the original TID, of course!
Let me repeat: if you want to guarantee transactional security (exactly-once execution), then you need to put a bit of effort into it and comply with the tRFC protocol. Here is the basic outline:
First HTTP request:
HTTP client calls createTID and stores the returned $tid together with the IDoc data in permanent memory (file or database).
Second HTTP request:
HTTP client calls the Flow that creates and sends the IDoc, submitting the payload and including the TID in an HTTP header “X-TID”. (If you use the standard IDoc-XML format and include the HTTP header “Content-Type: application/x-sap.idoc”, then this Flow does not even need the documentToIDoc step.)
If this request ends in HTTP return code 200, proceed to the third step. Otherwise keep repeating the second step (say, once an hour), but always use the same TID! If it is only a network error, the submission will probably succeed on one of the next attempts. If it is, for example, a configuration error on the SAP side, an admin should investigate and fix it before the external program makes the next retry.
Third HTTP request:
Eventually the submission will end successfully. Now the external program deletes the payload data and the TID from permanent memory and makes the last HTTP call to confirmTID, so that the SAP system can clean up its status keeping as well.
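On the webMethods side, the Flow behind that second request only has to pick the TID out of the transport and reuse it. A rough sketch, assuming the client sends the TID in the “X-TID” header as described above (the exact header field path returned by pub.flow:getTransportInfo differs between IS releases, so verify it in your own pipeline):

Try:
  pub.flow:getTransportInfo <-- read the incoming HTTP headers
  MAP <-- copy the X-TID header value into $tid
  pub.sap.idoc:documentToIDoc <-- not needed if the client posts IDoc-XML with Content-Type: application/x-sap.idoc
  pub.sap.transport.ALE:OutboundProcess <-- reuses the supplied $tid instead of creating a new one
Catch:
  pub.flow:getLastError
  <Return an HTTP error status, so the client repeats request two with the same TID>

The confirmTID call belongs in the Flow behind the third request, never in this one.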
If there is no external HTTP client, but the wM IS (or SAP BC) is the creator of the IDocs, it’s even a bit easier. For example using SAP BC (which keeps transaction status in WmPartners in the “Message Store”), you could set up two Scheduler jobs like this:
First job:
Creates a TID and the IDoc and passes both to ALE:InboundProcess. (ALE:InboundProcess persists the TID and IDoc in the message store and then calls the Routing Rule, which will call ALE:OutboundProcess (“ALE Transport”). ALE:OutboundProcess then sets the status to “Committed” or “Rolled back”, depending on whether the IDoc reached SAP successfully or not.) Don’t perform the confirmTID in the first job!!
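In Flow terms, that first job is roughly (a sketch only; routing details depend on your configuration):

  pub.sap.client:createTID <-- produces $tid
  <Build the IDoc from the application data>
  pub.sap.idoc:documentToIDoc
  pub.sap.transport.ALE:InboundProcess <-- persists $tid and IDoc in the message store; the Routing Rule then sends it on via ALE:OutboundProcess
  <-- no confirmTID here!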
Second job:
Using the public services in the WmPartners Package (wm.PartnerMgr.xtn.admin:list & get), you loop through the message store (a sketch follows after this list):
o If you find a TID in status Committed, call the Service wm.PartnerMgr.gateway.runtime:confirmTID. It will follow the Routing Rule, perform the confirmTID in the correct SAP system and also update the status in the message store. (The Routing Rule needs to have the flag “Forward ConfirmTID Event” activated.)
o If you find a TID in status Confirmed, delete it (wm.PartnerMgr.xtn.admin:delete)
o If you find a TID in status Rolled back, and the last state change is more than an hour ago, resend it (use wm.PartnerMgr.xtn.admin:getMessage and pass its output into ALE:InboundProcess again.)
o If you find a TID in status Created or Executed, leave it untouched. It is not yet completely finished.
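A sketch of that second job (the exact input/output fields of the WmPartners admin services should be checked in the package itself, so treat this step list as an outline only):

  wm.PartnerMgr.xtn.admin:list <-- list the entries in the message store
  LOOP over the entries
    wm.PartnerMgr.xtn.admin:get <-- status and timestamps of one TID
    BRANCH on the status
      Committed: wm.PartnerMgr.gateway.runtime:confirmTID <-- follows the Routing Rule, confirms in the correct SAP system, sets the status to Confirmed
      Confirmed: wm.PartnerMgr.xtn.admin:delete
      Rolled back (last change more than an hour ago): wm.PartnerMgr.xtn.admin:getMessage, then pub.sap.transport.ALE:InboundProcess <-- resend with the SAME TID
      Created / Executed: do nothing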
In older wM IS releases the WmPartners Package does not provide the above functionality. (This was added only in SAP BC 4.7 and 4.8.) In this case: upgrade to wM 7. (Or perhaps wM 6 is sufficient?!? Not sure.) In newer wM releases webMethods replaced the WmPartners Package with something better, which should provide functionality similar to what I outlined above for the SAP BC.
We found our problem. We have jobs scheduled to run every minute to send the IDocs. One of the jobs did NOT have the check box set for waiting until a job finishes before running the next one, so when the job ran past a minute, another one kicked off and grabbed the exact same records!
…and created a new TID for those same records! See how dangerous this is? If you had a TID tightly associated with each set of records, the SAP system would have recognized the duplicate.
You fixed one problem now, but you still have another one dormant in your scenario: if a job runs into an Exception AFTER it has successfully posted one set of records into SAP, then a later job will pick up the exact same records again and send them a second time (using a new TID, so again SAP doesn’t recognize it as a duplicate)!
A fix for this could look as follows (I assume the records are located in a database?):
Add another field to the DB table, which can hold a TID. Your job first checks this TID field. If it is already filled, it just uses the existing TID for sending the IDoc into R/3. If not, it generates a new one, updates the record with that TID and then sends the IDoc into R/3. If the send returns successfully, the record can be deleted from the DB; otherwise it just stays in, and a later job can pick it up again (and use the already existing TID to protect against possible duplicates).
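A sketch of what that job could look like (the table and column names are made up for illustration; use whatever DB adapter services you already have for the SELECT/UPDATE/DELETE steps):

  <SELECT the pending records from e.g. ORDER_STAGING, including the TID column>
  LOOP over the records
    BRANCH on the TID column
      empty: pub.sap.client:createTID, then <UPDATE the record with the new TID> <-- persist the TID BEFORE sending
      filled: MAP the stored value into $tid <-- reuse it; this is what protects against duplicates
    <Build the IDoc from the record>
    pub.sap.idoc:documentToIDoc
    pub.sap.transport.ALE:OutboundProcess <-- uses the $tid from the pipeline
    <DELETE the record only if the send returned without error; otherwise leave it for the next run>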
Hi - thanks for the very informative writeup. I didn’t know the exact reason behind the confirmTID operation - now I do.
Just to confirm - is the sole reason for confirmTID just to clear up clutter from the SAP DB?
Regarding your tip - it sounds interesting.
Burdening the client with the additional responsibility of tracking success and issuing a second call to confirm the TID makes it onerous, especially if the client is an external entity.
To avoid this, our practice is to move the documents through a Broker. However, since we use guaranteed Broker delivery (the trigger retry behavior is set to ‘Suspend and retry later’, as discussed in this thread: http://www.wmusers.com/forum/showthread.php?t=17942), it is theoretically possible that a Flow step after confirmTID could throw an exception (or the service could go OOM). This would cause the trigger service to be retried, and duplicates would be generated.
What worries me is not the exceptional case that affects a few transactions (the server went OOM just after confirmTID in one transaction), but the cases that can affect a lot of transactions over time.
For instance, consider this scenario executed by a trigger whose failure setting is ‘Suspend and retry later’:
Try:
  <Build IDoc from input data>
  pub.sap.transport.ALE:OutboundProcess
  pub.sap.client:confirmTID
  pub.flow:debugLog <-- disk is full, so this step always fails
Catch:
  pub.flow:getLastError
  pub.flow:throwExceptionForRetry <-- the service is retried due to the trigger settings
To avoid this, code using an ‘inserted’ flag may help:
Try:
  <Init inserted=false >
  <Build IDoc from input data>
  pub.sap.transport.ALE:OutboundProcess
  <Set inserted=true >
  pub.sap.client:confirmTID
  pub.flow:debugLog <-- disk is full, so this step always fails
Catch:
  pub.flow:getLastError
  <Set error=true >
  If inserted = true, set error=false <-- the IDoc already reached SAP; do not retry, to avoid duplicates
  Branch on /error
    If true, pub.flow:throwExceptionForRetry <-- retry only when the IDoc was not inserted
It would also be fine, I suppose, to log all TIDs internally in webMethods and confirm them en masse at intervals using a scheduled service – but that’s a bit of a hassle to write.
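For what it’s worth, a sketch of that “hassle” (the TID store itself is whatever persistence you pick, e.g. a small DB table; the angle-bracket steps are placeholders, not adapter services):

In the sending Flow, after a successful pub.sap.transport.ALE:OutboundProcess:
  <Insert $tid into your own TID store instead of calling confirmTID>

In a scheduled clean-up service:
  <Select all stored TIDs older than, say, one day>
  LOOP over them
    pub.sap.client:confirmTID <-- pass the stored $tid (plus the SAP connection alias your other flows use)
    <Delete the TID from your store>

The delay is the whole point: by the time the clean-up runs, you can be reasonably sure no retry with that TID is still pending.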
I agree. But as a matter of fact it is absolutely the only way of achieving 100% end-to-end transactional security in a “multi-hop” scenario…! Or alternatively don’t use the confirmTID step (see below).
In fact, you could just forget about the confirmTID on the webMethods side and instead schedule a periodic job inside the SAP system that cleans up the TID database. SAP even provides a standard report for that: RSTRFCER. Of course it depends on your load, but if you define a variant of that report that deletes all TIDs older than 4 weeks and let it run every couple of days, you should be fine. (I think the risk that wM will retry a transaction after 4 weeks have passed is quite low…)