Integration Server: Performance and tuning

Product/version: webMethods Integration Server, all versions

If you are interested in performance and how to tune it, please have a look at this blog post. It was written with Integration Server in mind, but is generally applicable.

4 Likes

Good info!

Sharing my personal experience: I’ve never encountered a case where Integration Server itself ended up being the bottleneck, and I have never needed to directly manage parallel threads. In the cases where “the integration is too slow,” the root cause has been in the endpoints. This is not to say IS could never be the problem, but rather: don’t be myopic and look in only one place.

Similar to the “3 rules of real estate” being “location, location, location,” the rules of tuning are “measure, measure, measure.” And don’t optimize too early. Most guesses about where the speed is needed are wrong. Or, more likely, speed doesn’t matter: completing an unattended automation in 2 seconds instead of 5, or even 30+, likely does not matter to anyone.

Here is an item that is likely to come up here: the use of transformers to introduce parallelism. There is a thread discussing this. There is a debate about whether transformers run in parallel; my advice is don’t use transformers solely on the assumption that they will be “faster.” First, it is not certain they will be. Second, it is not known whether it even matters that the step with transformers is faster. Measuring is the only solid way to determine where a bottleneck is, and measuring afterwards is the only way to know a change has made a difference.
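To make the “measure, don’t assume” point concrete, here is a minimal, hedged sketch in plain Java (not Flow), with a made-up `transform()` step standing in for a mapping/transformer: time the sequential and the parallel variant of the same work, and let the numbers decide whether the parallel version is worth the added complexity.

```java
import java.util.concurrent.CompletableFuture;

public class ParallelVsSequential {

    // Hypothetical stand-in for a transformer/mapping step.
    static String transform(String input) {
        try {
            Thread.sleep(50); // simulate work, e.g. a lookup or conversion
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return input.toUpperCase();
    }

    public static void main(String[] args) {
        // Sequential: one step after the other.
        long t0 = System.nanoTime();
        transform("orderId");
        transform("customerId");
        long sequentialMs = (System.nanoTime() - t0) / 1_000_000;

        // Parallel: both steps at the same time.
        long t1 = System.nanoTime();
        CompletableFuture<String> a = CompletableFuture.supplyAsync(() -> transform("orderId"));
        CompletableFuture<String> b = CompletableFuture.supplyAsync(() -> transform("customerId"));
        CompletableFuture.allOf(a, b).join();
        long parallelMs = (System.nanoTime() - t1) / 1_000_000;

        System.out.printf("sequential: %d ms, parallel: %d ms%n", sequentialMs, parallelMs);
        // Only the measured numbers tell you whether the difference matters in your case.
    }
}
```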

1 Like

Great details (not that I would have expected differently from you :wink: )!

One aspect about measurement is that it should happen under at least somewhat realistic circumstances. All too often I have seen people analyze the duration of the various steps during a single-threaded execution. If that is the load you can expect in production, that is of course ok.

But in reality you will need to measure while, say, 300 transactions are running in parallel.
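As a rough illustration of what that can look like (a hedged sketch, not a proper load test; the host, port, and service path are made up), you can fire the expected number of parallel calls at the service and record the latencies:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.stream.Collectors;
import java.util.stream.IntStream;

public class ParallelLoadCheck {

    public static void main(String[] args) throws Exception {
        int parallelTransactions = 300; // the load you actually expect in production
        String url = "http://is-host:5555/invoke/my.pkg/myService"; // hypothetical IS endpoint

        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder(URI.create(url)).GET().build();

        ExecutorService pool = Executors.newFixedThreadPool(parallelTransactions);
        List<Future<Long>> results = IntStream.range(0, parallelTransactions)
                .mapToObj(i -> pool.submit(() -> {
                    long start = System.nanoTime();
                    client.send(request, HttpResponse.BodyHandlers.discarding());
                    return (System.nanoTime() - start) / 1_000_000; // elapsed ms
                }))
                .collect(Collectors.toList());

        long sum = 0, max = 0;
        for (Future<Long> f : results) {
            long ms = f.get();
            sum += ms;
            max = Math.max(max, ms);
        }
        pool.shutdown();
        System.out.printf("avg: %d ms, max: %d ms over %d parallel calls%n",
                sum / parallelTransactions, max, parallelTransactions);
    }
}
```

Dedicated load-testing tools do this better, of course; the point is simply that the measurement happens under the concurrency you expect, not in a single-threaded run.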

And I absolutely want to emphasize your point about measuring instead of guessing. I have seen too many (sometimes senior) people just assume something. There is a saying that “thinking is not knowing.” Thinking is fine for building a hypothesis, but the hypothesis then needs to be verified.

Thanks for adding your points!

A good developer will always start with the assumption that the problem is in the code. After all, that is the part most specific to your use case; the rest has almost always been tested in other environments already, so it is less likely to have issues. First rule out issues in the code before you start travelling up the chain to look for product and then infrastructure issues.

Here is the relevant text from a deck on performance that we present to our customers:

We often see that people panic and start taking wild guesses at the root cause. They start changing undocumented server settings and tweaking obscure JVM options. When these changes don’t actually make things worse (which they often do), they usually yield only an insignificant improvement, because the root cause of most performance issues is in the application code itself. But how do you find that bottleneck in your code? Many, again, resort to wild guesses, while others take on the more scientific but tedious task of adding logging statements and timing steps to their application in a desperate attempt to find the proverbial needle in the haystack.

The correct way to do this is to use the proper tools to profile and observe your application’s behavior in its natural state of processing, and then take measured steps to address the issues identified.
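For context, the “logging statements and timing steps” approach mentioned above typically looks something like the minimal, hypothetical Java sketch below (the service name and the simulated work are made up). It is workable for one suspect step, but instrumenting a whole codebase this way quickly becomes unmanageable, which is exactly why profiling and observability tooling is the better route.

```java
import java.util.logging.Logger;

public class StepTiming {

    private static final Logger LOG = Logger.getLogger(StepTiming.class.getName());

    // Hypothetical suspect step wrapped in ad-hoc timing.
    static void processOrder(String orderId) throws InterruptedException {
        long start = System.nanoTime();
        try {
            Thread.sleep(120); // stand-in for the real work: a DB call, a mapping, ...
        } finally {
            long elapsedMs = (System.nanoTime() - start) / 1_000_000;
            LOG.info(() -> "processOrder(" + orderId + ") took " + elapsedMs + " ms");
        }
    }

    public static void main(String[] args) throws InterruptedException {
        processOrder("ORD-1234");
    }
}
```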

For full disclosure, I work for a company that specializes in these areas and has two products that provide exactly this: micro-level profiling details and macro-level observability metrics. The products and the experience behind them are based on years of customer engagements with the webMethods products.

Rupinder Singh

3 Likes