Reducing Memory Used in Flow

Hello All,
Firstly let me start by saying how great a resource this user group is. Thanks to all for the valuable insight that this group provides.

Now the Question.
We have developed a rather large enterprise application (100 million+ transactions a year) in webMethods 6.0.1. As a political consequence, a lot of our business code was developed in Flow rather than Java. Now we have a bit of an issue: each transaction chews close to 20 MB of memory (calculated by writing out the memory used by the JVM at the start and end of the transaction; not overly scientific, but multiple repeats give us a consistent number). We are in production and finding that our throughput is suffering because the garbage collector is running very frequently.
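Roughly the kind of before/after measurement I mean, for illustration only (doTransaction is just a placeholder for invoking the real flow):

    // Rough before/after heap measurement around one transaction.
    public class MemoryProbe {
        static long usedHeap() {
            Runtime rt = Runtime.getRuntime();
            return rt.totalMemory() - rt.freeMemory();
        }

        public static void main(String[] args) {
            long before = usedHeap();
            doTransaction();                    // placeholder for the real flow invocation
            long after = usedHeap();
            // nothing forces a collection here, so the delta includes garbage not yet reclaimed
            System.out.println("Approx. heap delta: " + (after - before) / (1024 * 1024) + " MB");
        }

        static void doTransaction() {
            byte[] payload = new byte[5 * 1024 * 1024];   // stand-in for the real work
            payload[0] = 1;
        }
    }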

We are currently stuck with optimising our Flow rather than rewriting large elements in Java. Does anyone have quick tips on the best ways to reduce the amount of memory consumed? My finding so far is that using transformers instead of invokes, to reduce the pipeline size at specific points in the transaction, makes a positive difference. Other than that I have not found much.

Any ideas greatly appreciated.

Hi Cameron,
You may already know this: dropping pipeline variables at every MAP step, as soon as they are no longer required, helps keep the pipeline small and improves performance.
Regards,
Pradeep.

Hi Cameron, you may have seen this already…but just in case…have a look at the webMethods “GEAR 6 Performance Tuning White Paper” in Advantage (Select menu option: Best Practices > Best Practices (GEAR 6) > White Papers)

The other document that may give you some useful information is: webMethods 6.0.1 Technical Report - Platform Core Performance in Advantage (Select menu option: Best Practices > Product Technical Notes)

Most of these tips can likely be found posted in the wmusers forums.

Regards,
Wayne

Cameron

To add to Pradeep's point, it is better to call pub.flow:clearPipeline as the last step in every flow and preserve only the expected outputs, since the pipeline often contains runtime values that you won't see at design time.

This also makes the code cleaner, since each service then clearly shows what its outputs are, and at runtime those are the only variables you will see.

This is especially useful in loops; at times variables seem to accumulate inside a loop instead of being overwritten, and clearPipeline takes care of that as well.
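
If you ever need to do the same thing from inside a Java service, the idea is roughly the sketch below (illustrative only; it uses the standard com.wm.data pipeline classes, and the class and method names are made up):

    // Sketch: drop everything from the pipeline except the keys we want to keep,
    // similar in spirit to pub.flow:clearPipeline with a preserve list.
    import com.wm.data.IData;
    import com.wm.data.IDataCursor;
    import java.util.ArrayList;
    import java.util.Arrays;
    import java.util.HashSet;
    import java.util.List;
    import java.util.Set;

    public final class PipelineUtil {
        public static void clearExcept(IData pipeline, String[] preserve) {
            Set keep = new HashSet(Arrays.asList(preserve));

            // First pass: note which keys should go, so we do not delete while iterating.
            List doomed = new ArrayList();
            IDataCursor c = pipeline.getCursor();
            while (c.next()) {
                if (!keep.contains(c.getKey())) {
                    doomed.add(c.getKey());
                }
            }
            c.destroy();

            // Second pass: position on each unwanted key and delete it.
            for (int i = 0; i < doomed.size(); i++) {
                IDataCursor d = pipeline.getCursor();
                if (d.first((String) doomed.get(i))) {
                    d.delete();
                }
                d.destroy();
            }
        }
    }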

My 2 cents

Thahir

Thanks Guys,

I guess I should have been clearer in the first post. I have done everything I can with regard to cleaning the pipeline; our developers have always tried to maintain clean pipelines, and that was our first step. The second was transformers, to restrict what each step sees. I have not really figured out the scope property yet.

thanks

Hello,
About scope: when you set a step's scope to a document, any new variables created in that step are embedded within the document you scoped. Like this:
legend

# = comment
[MAP] = step
-> = map link
|> = drop

[MAP]
“1” -> a/b # make document a with string child field b and set b to “1”
[MAP]{scope a}
b -> c # add c to a and set c to b
[MAP]
|> a/b # now only a/c exists

At the end you should see that only a/c exists, and it is set to “1”.

Cameron

This is what I have found working with webMethods regarding memory utilization.

  1. Clearing the pipeline should be done very strictly, and when helper services are invoked from other services, make sure clearPipeline is used there too so memory usage is not multiplied. The point is that it should happen not only in the main service but in all the services.

  2. I am not sure how you are getting the data (files, HTTP post, etc.). If you are receiving files, always read them as a stream instead of as text, and instead of reading the whole file at once, iterate to get one record at a time (see the sketch after this list). This keeps memory usage down, though it may require you to adjust your schemas and design.

  3. Between memory and performance, the third thing I have found useful is to always mark services as stateless when state is not required, and to enable caching on services that provide cross-reference data. That way you are not running the service every time just to fetch the same data.
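
For item 2, the same principle in plain Java looks something like the sketch below (the file name and record handling are placeholders; inside IS you would typically work with a stream from the file or flat file services rather than a plain FileReader):

    // Sketch: process a large file one record at a time instead of reading the
    // whole file into memory as a single string.
    import java.io.BufferedReader;
    import java.io.FileReader;
    import java.io.IOException;

    public class RecordStreamer {
        public static void main(String[] args) throws IOException {
            BufferedReader in = new BufferedReader(new FileReader("orders.txt")); // placeholder file
            try {
                String record;
                while ((record = in.readLine()) != null) {
                    process(record);          // only one record is held in memory at a time
                }
            } finally {
                in.close();
            }
        }

        static void process(String record) {
            // stand-in for mapping/validating a single record
            System.out.println("processed " + record.length() + " chars");
        }
    }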

Hope it helps. I have used this approach successfully on two projects; servers that used to be restarted once a week because of memory no longer need those restarts, and performance has improved a lot.

Is there a quantitative method to figure out the memory used before and after clearPipeline on a per-flow-service basis? I was thinking of calling Runtime.getRuntime().freeMemory() before and after the flow service, but that would be for the whole JVM.

Thanks
DG

Hello,
The services “are” running in the whole JVM. Java services are really just methods of a Java class, and Flow services are a patchwork of Java services, predefined control constructs, and other services.

There may be special techniques you can gather directly from the vendor of your JVM; statistics on how much memory is used per object under that JVM could help you verify your results. One other thing to look at is how much memory is used before and after you accumulate your variables, and then once more after the clearPipeline, to see whether you are restored close to your initial memory usage. This should be done on an isolated IS if possible.
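
A rough way to take those three readings is sketched below (illustrative only; System.gc() is just a request to the JVM, so repeat the run a few times and compare):

    // Three-point measurement: baseline, after accumulating data, after clearing it.
    public class ClearPipelineCheck {
        static long usedHeap() {
            System.gc();                              // only a hint; results vary per JVM
            Runtime rt = Runtime.getRuntime();
            return rt.totalMemory() - rt.freeMemory();
        }

        public static void main(String[] args) {
            long baseline = usedHeap();

            java.util.List data = new java.util.ArrayList();
            for (int i = 0; i < 20; i++) {
                data.add(new byte[1024 * 1024]);      // stand-in for accumulating pipeline variables
            }
            long loaded = usedHeap();

            data = null;                              // stand-in for clearPipeline / dropping variables
            long cleared = usedHeap();

            System.out.println("baseline: " + baseline / 1024 + " KB");
            System.out.println("loaded:   " + loaded / 1024 + " KB");
            System.out.println("cleared:  " + cleared / 1024 + " KB");
        }
    }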
Good day.

Yemi Bedu

Hi,

Is there a way to pinpoint which services are using up memory before engaging in fixing one? Does webMethods provide such a tool?

Something like the Optimizeit tool from Borland.

With such a tool it is easier to identify what to fix and to gauge the improvement afterwards. Optimization can introduce new bugs if not done correctly, or you may blindly optimize something that yields only a minimal improvement in memory usage. Sometimes it makes the code less readable too.

regards.

Hello,
To be clear, you will not really be optimizing code other than Java and C/C++. If you need performance out of a Flow service, I would recommend converting it to the equivalent in Java. If you have a Java service that needs extra performance, first try compiling your Java code outside of webMethods and linking it in as a library, then look at compiling those classes to native object files.
With all this you have to realise that you are moving further from easily modifiable and understandable “mapping” and flowing of data towards a hard-core performance library. That is fine when your code has shown that it is designed and implemented properly and gives stable results. Just note that thinking this way will push you to freeze parts of your code at certain stages so you can focus on optimizing the way you implement the rest.
Memory optimization in a simple and straightforward sense was clearly laid out by Khushhal. Overall performance gains will take creative thinking.
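For anyone who has not written one before, an IS Java service body is ordinary Java working against the pipeline through the com.wm.data classes; the skeleton below is a minimal illustration with invented service and field names:

    // Minimal shape of a Java service body: read an input from the pipeline,
    // do the work, put the output back. Developer typically generates the wrapper class.
    import com.wm.app.b2b.server.ServiceException;
    import com.wm.data.IData;
    import com.wm.data.IDataCursor;
    import com.wm.data.IDataUtil;

    public final class ExampleService {
        public static final void buildGreeting(IData pipeline) throws ServiceException {
            IDataCursor cursor = pipeline.getCursor();
            try {
                String name = IDataUtil.getString(cursor, "name");   // invented input
                if (name == null) {
                    throw new ServiceException("Missing required input: name");
                }
                IDataUtil.put(cursor, "greeting", "Hello, " + name); // invented output
            } finally {
                cursor.destroy();
            }
        }
    }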
Good day.

Yemi Bedu