Could anybody please help with a query we have about what happens when IDocs are produced faster than they are consumed?
We have been told that while an IDoc is being processed by the SAP outbound adapter, any other IDocs behind it will fail. SAP puts each failed IDoc onto a background queue, and a background job is then started to process each of them. This leads to potential performance problems.
It appears that the SAP adapter cannot be set up to be multi-threaded. Does anyone have any suggestions for getting around this?
Thank you in advance.
We use SAP’s qRFC, which queues all of the IDocs sent to Business Connector in the correct order. It isn’t necessarily faster than tRFC, but it keeps thousands of IDocs in the correct order as they come from SAP. I’m not a Basis person, so I don’t know what was done to change from tRFC to qRFC, but it was the ticket for our installation. We have had a few instances where IDocs failed after being queued for too long – 20 minutes or more – but that only happens to us in conjunction with connection problems, i.e. if one of the servers goes down.
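For readers new to the distinction: the practical effect of qRFC can be sketched as a strict FIFO queue with a single consumer, so delivery order always matches the order SAP enqueued the IDocs. The class and method names below are purely illustrative, not any SAP or webMethods API:

```java
import java.util.ArrayDeque;
import java.util.Queue;
import java.util.StringJoiner;

public class QrfcOrder {
    // qRFC-style delivery modeled as a FIFO queue drained by one consumer:
    // IDocs leave in exactly the order they were enqueued.
    static String deliverInOrder(Queue<String> queue) {
        StringJoiner out = new StringJoiner(" ");
        String idoc;
        while ((idoc = queue.poll()) != null) {
            out.add(idoc); // single consumer => strict enqueue order preserved
        }
        return out.toString();
    }

    public static void main(String[] args) {
        Queue<String> q = new ArrayDeque<>();
        for (int i = 1; i <= 3; i++) q.add("IDoc#" + i);
        System.out.println(deliverInOrder(q)); // prints: IDoc#1 IDoc#2 IDoc#3
    }
}
```

The trade-off mentioned above follows directly from this model: with one consumer draining one queue, a stuck entry (or a down server) holds up everything behind it, which is why long-queued IDocs can time out.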
Are you referring to the Enterprise SAP adapters?
I am trying to increase capacity for handling IDocs outbound from SAP: IS 6.01 SP2, Partner Manager/SAP Adapter 4.6.
I need to process 100K+ material master IDocs on a daily basis. However, so far I have not been able to get below about 1.4 seconds per IDoc. I want to multi-thread the receipt of IDocs by Partner Manager. The service that Partner Manager calls is very small and quick; it merely publishes the IDoc out to the Broker.
I have tried increasing the number of threads available to the SAP listener. I have tried increasing the number of listeners. I have tried changing the partner profile in SAP to send out ‘packets’ of IDocs. Nothing seems to affect the bottleneck of IDocs going into Partner Manager.
If anyone has been through this or has any insight I would really appreciate it.
Could you please answer the following questions?
Are you sending 100,000 Material Master idocs to the SAP adapter?
How many segments per IDOC on average?
What is the time window within which you need such a batch processed?
Are they all going to the same RFC destination/listener?
Are they all invoking the same Routing Rule?
Did you disable the WmPartners transaction storage, by doing routing only?
What is your general business integration design? Do you post all of these messages/IDocs to the Broker?
Do you do any mapping before you send each IDOC to the Broker?
How many subscribers do you have for these IDOCs?
Do all such Triggers run on the same IS?
What is the hardware configuration where the IS/SAP adapter runs?
How much memory have you allocated to the IS?
Is the Broker running on the same host with IS/SAP adapter?
I think your questions are for acooper.21325.
We are using Enterprise Server adapters v4.2 (no SPs).
Carl, can you explain more about qRFC?
- Do Idocs come out of it once a day, or all at once?
- Do they remain serial or could multiple threads pull them?
- Does SAP have to run a job or are they released automatically?
- How do you point webMethods to qRFC?
- How many Idocs do you process daily and what’s the average size?
How do you disable the Partner Manager transaction storage? I’ve been wanting to do that but don’t see where to do it.
You need to put the parameter “$routeOnly” in the pipeline and set it to true.
You can register a routing service with your SAP adapter, so that when an IDoc arrives at the adapter, the service runs before the IDoc gets passed to WmPartners.
In this service, you can just add in the pipeline the above parameter.
WmPartners then will not log anything in the transaction store.
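As a rough sketch of what such a routing service does, the IS pipeline is modeled here as a plain Map; a real IS Java service would use the IData/IDataUtil pipeline API instead, and the method name is invented for illustration:

```java
import java.util.HashMap;
import java.util.Map;

public class RouteOnlyExample {
    // Stand-in for the routing service: set "$routeOnly" in the pipeline so
    // that WmPartners only routes the IDoc and skips its transaction store.
    static Map<String, Object> markRouteOnly(Map<String, Object> pipeline) {
        pipeline.put("$routeOnly", "true");
        return pipeline;
    }

    public static void main(String[] args) {
        Map<String, Object> pipeline = new HashMap<>();
        markRouteOnly(pipeline);
        System.out.println(pipeline.get("$routeOnly")); // prints: true
    }
}
```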
Hope this helps
Did you have any suggestions for our original problem? We use tRFC because we do not want to block all IDocs if one fails. If we generate a lot of outbound IDocs, most of them fail (and go to the background queue) because webMethods is still processing an earlier IDoc.
The problem is worse than we thought because the idocs are not resent until the background jobs are restarted.
You can create an urgent, high-priority service request with webMethods support.
We have multithreaded ESAP ALES and ALEC adapters, which also include configurable run-time logging and significant performance improvements.
The existing version of the adapters is designed to leverage multiple parallel processes. Because it performs satisfactorily the way it is, and customers don’t have any issues, we have not released the multithreaded version.
This multithreaded, multiprocessing version of the adapter will certainly fix the issue that you have and will greatly improve the overall throughput.
Thanks for this. We will raise an SR.
Could you please clarify what you mean by “The existing version of the adapters is designed to leverage multiple parallel processes”.
Is the adapter on general release (i.e. not a beta)?
From the Adapter Manager, you can select the number of parallel instances each adapter can run.
In case you have not done this before, it basically means that you can run multiple identical instances of the same adapter, each in its own JVM.
The adapters are written in Java, but they use the native RFC libraries provided by SAP. The advantage of the parallel processes is that you isolate the native RFC/JCo libraries. So, if one native RFC library seems to have a problem or even crashes, which is out of webMethods’ control, the other instances continue running.
Also, the AKD/Broker APIs are designed so that multiple instances of the same “Broker client” program can share resources at the Broker level.
In this way, multiple adapter instances increase the overall throughput of the Broker.
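A minimal sketch of why multiple instances raise throughput, assuming the instances share one queue of work the way multiple Broker clients can share a client group. The classes and names below are illustrative, not the adapter's real API:

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class ParallelAdapters {
    // Drains the shared queue with several workers; each worker stands in for
    // one adapter instance (each runs in its own JVM in the real setup).
    static int drain(BlockingQueue<String> brokerQueue, int instances)
            throws InterruptedException {
        ExecutorService pool = Executors.newFixedThreadPool(instances);
        AtomicInteger processed = new AtomicInteger();
        for (int i = 0; i < instances; i++) {
            pool.submit(() -> {
                while (brokerQueue.poll() != null) {
                    processed.incrementAndGet(); // stand-in for publishing to the Broker
                }
            });
        }
        pool.shutdown();
        pool.awaitTermination(10, TimeUnit.SECONDS);
        return processed.get();
    }

    public static void main(String[] args) throws InterruptedException {
        BlockingQueue<String> queue = new LinkedBlockingQueue<>();
        for (int i = 0; i < 100; i++) queue.add("IDOC-" + i);
        System.out.println(drain(queue, 4)); // prints: 100
    }
}
```

Note what this sketch gives up: with four workers pulling concurrently, every IDoc is processed exactly once, but in no guaranteed order, which matches the earlier tRFC-versus-qRFC trade-off in this thread.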
The adapter I referred to is not beta.
It is actually the same adapter that you have, only with a couple of extra configuration parameters.
During a stress-testing exercise we did some time ago, we looked at the load on the SAP and BAAN adapters and tested the use of multiple adapters.
The results showed (for example, on the SAP Server Adapter):
That creating multiple instances of the SAP adapter reduced the bottleneck we were experiencing through one adapter. [NB: the problem still arises that if two IDocs attempt to use the same adapter instance, one IDoc will roll back and be resubmitted from SAP two minutes later.]
The bottleneck from SAP did not shrink, as each adapter instance still processed IDocs one at a time and was not multitasking as originally thought.
Depending on the rate that your SAP system generates IDOCs it might be unavoidable that some will be processed by the retry mechanism of SM58.
For example, if your system has five SAP app servers, running on different hosts and leveraging massive resources, all generating IDocs using the same port/RFC destination, a couple of SAP adapters will not be enough for you.
However, I encourage you to upgrade to the just-released ESAP 4.2 SP6.
You can get it here on Advantage.
It provides multithreaded ALES and ALEC adapters, performance optimizations, and run time logging based on log4j.
The first time you start it, the adapter will generate default log4j.properties files. You can edit those without restarting the adapter. Please read the documentation in the Readme for SP5/SP6.
You can view some performance numbers about the adapter in the logs if you set line 18 of log4j.properties accordingly.
This should help you find possible bottlenecks in your set up.
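The actual property line referred to above is not reproduced in this thread. As a purely hypothetical illustration of what a log4j 1.x level setting looks like (the category name below is invented; check the adapter's generated log4j.properties for the real one):

```properties
# Hypothetical example only -- the real category name is in the generated file
log4j.category.AdapterPerformance=DEBUG
```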
You can still run multiple instances of the same adapter, but you can also configure them to be multithreaded. By allocating the right amount of resources (memory, threads), you will not have any SM58 backlog issues any longer.
For any issues with the SP6 you can email to me at: firstname.lastname@example.org
After you apply SP6, I would be interested to know what kind of throughput you get, in terms of number of IDocs per second, size of IDocs, etc.
I am new to webMethods and I am working with the SAP R/3 Adapter for IS. I am just wondering, can anyone give me a diagram of a clustered SAP adapter setup? How does the flow work, from a per-transaction point of view? I think I need to see it represented to fully appreciate the advantage of a clustered adapter. Thanks in advance.