Memory utilization of our webMethods Integration Server reaches up to 95% every 6-7 days


Memory utilization of our webMethods Integration Server reaches up to 95% every 6-7 days. Then we have to restart the server to bring it back to normal working condition. We are using IS version 8.2, installed on a 32-bit Windows 2003 server.

I checked setenv.bat; the maximum memory allocation is set to 1023 MB.

Any idea what the reason behind it is? What is the fix? What should I do?

Based on server load, increase your maximum memory to 2 GB or 4 GB.

Increase the perm size to 512 MB and restart the server.

Note: make sure you have enough disk space in the SAG installation location.
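As a sketch, these settings typically live in setenv.bat under the IS installation. Variable names and values differ between IS versions and installations, so treat this as an illustration, not exact values:

```bat
rem Illustrative setenv.bat fragment - adjust to your installation
set JAVA_MIN_MEM=512M
set JAVA_MAX_MEM=2048M
rem PermGen setting applies to Java 7 and earlier JVMs
set JAVA_MEMSET=-Xms%JAVA_MIN_MEM% -Xmx%JAVA_MAX_MEM% -XX:MaxPermSize=512M
```

Keep in mind that a 32-bit JVM on Windows can only address roughly 1.2-1.6 GB of heap, so a 2-4 GB heap requires a 64-bit OS and JVM.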


Hoping you are a wM admin who can update the file settings in your environment?

As mentioned above, you can increase the memory size based on your IS resource needs. Always make sure enough memory and disk space remain free under all circumstances, so that the IS installation and other applications running on the same OS are not affected.


You will need to upgrade to 64bit OS and JVM in order to use more memory.
It's common to use 4-8 GB or even more on a 64-bit system.

Hi, thanks to all for your replies. But sometimes I suspect there is a memory leak.

I am using a 32-bit Windows 2003 server, on which only about 1.6 GB can be allocated to the JVM.
Is there a profiler tool I can use to analyze the memory leak? Can it be done by taking a thread dump?

Any idea? Please help.

Also, as a side note, you should always consider a 64-bit machine for better performance results…



Get the IS GC log.
To do this:

  • add this parameter to your setenv (sh or bat) file:
    JAVA_MEMSET="-ms${JAVA_MIN_MEM} -mx${JAVA_MAX_MEM} -verbose:gc -XX:+PrintGCTimeStamps -XX:+PrintGCDetails -Xloggc:logs/gc.log"
  • and use a GC log analyzer such as HPjmeter, or an analyzer tool made by your JVM supplier.
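If you don't have a GC analyzer handy, a rough leak check can be scripted. Below is a minimal sketch (assuming the classic HotSpot `-XX:+PrintGCDetails` line format, which varies by JVM version and collector) that pulls out the heap size remaining after each full GC. If that post-GC footprint keeps climbing over days, a leak is more likely than the normal sawtooth pattern:

```python
import re

# Matches the overall heap transition "5120K->2048K(24576K)," on Full GC lines.
# Assumes classic HotSpot -XX:+PrintGCDetails output; adjust for your collector.
FULL_GC = re.compile(r"Full GC.*?(\d+)K->(\d+)K\((\d+)K\)\s*,")

def heap_after_full_gc(log_lines):
    """Return the heap size (KB) still in use after each full GC."""
    after = []
    for line in log_lines:
        m = FULL_GC.search(line)
        if m:
            after.append(int(m.group(2)))
    return after

def looks_like_leak(after_kb, growth_ratio=1.5):
    """Crude heuristic: post-GC footprint grew 50%+ from first to last full GC."""
    return len(after_kb) >= 2 and after_kb[-1] >= after_kb[0] * growth_ratio

sample = [
    "10.1: [Full GC [PSOldGen: 4096K->2048K(16384K)] 5120K->2048K(24576K), 0.01 secs]",
    "90.7: [Full GC [PSOldGen: 9000K->6000K(16384K)] 9500K->6000K(24576K), 0.02 secs]",
]
print(heap_after_full_gc(sample))  # -> [2048, 6000]
```

A steadily rising sequence here points at retained objects; a flat one suggests the growth you see is just unreclaimed-but-reclaimable garbage.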


Hi all,

Stop the MWS service

Please change the memory settings in the two files under the location below:



Initial Java Heap Size (in MB)

Maximum Java Heap Size (in MB)

If you need 4 GB, set the max memory accordingly.

Restart MWS


I believe the issue is with IS memory, not MWS, based on my understanding from the start of the thread. :smiley:


Hi samashti,

Did this behaviour start recently, or has it been there for a long time?
Did you see many more user requests to IS compared to earlier?
Did you see any valuable info in the logs that gives a hint on how to proceed?
As many have suggested, it's always good to use a 64-bit OS.

Why do you feel there is a memory leak? Did you migrate any new interfaces/integrations into that specific IS recently?

Kindly update the forum with the above details.


What evidence do you have of a memory leak? Does Integration Server ever crash as a result of running out of memory? If Integration Server is running out of memory, then the JVM will throw OutOfMemoryError exceptions which would be logged in the error or server log - have you seen any of those?

If there is a memory leak, then it’s likely due to some kind of non-Java operating system resource being allocated and not released properly, such as file handles for files opened but never closed, or socket connections opened but never closed. Does this sound possible for your use of Integration Server?

But perhaps your issue is just a misunderstanding of how the Java Virtual Machine (JVM) manages memory?

The JVM uses garbage collection for reclaiming heap memory from objects that are no longer in use or referenced. Garbage collection is a very expensive operation, and can result in small pauses in a Java application because garbage collection is a stop-the-world event where all threads are paused until the garbage collection finishes.

Since Java 1.2, the JVM has used a generational garbage collection algorithm to improve the efficiency of garbage collections. Generational garbage collection is based on the observation that most objects are short-lived (high infant mortality rate). The heap is divided into a number of generations, and, for the sake of argument, let’s just call it 2 generations (it’s more complicated than this in reality): the young, and old generations.

Objects are created in the young generation on the heap. When the young generation fills up, a minor garbage collection runs. Minor garbage collections are pretty efficient because they only check objects in the young generation, and most of those objects are likely dead. Any objects that survive a minor garbage collection are eventually promoted to the old generation.

Eventually the old generation will fill up as well, and then a major garbage collection event is required. Major garbage collections are much slower because they involve all objects, not just the young ones, and so major garbage collection events are minimized as much as possible.

For long-running JVM processes like Integration Server, we can observe the following pattern with the heap memory usage: the used heap memory will continue to grow until no more objects can be allocated, and at that point the JVM will run a major garbage collection and reclaim a significant amount of memory. On the Integration Server administration web page, this pattern looks like Integration Server is slowly using more and more memory all the way up to its limit, which is exactly what you described. However, once the heap is finally full, you should see the memory usage will drop right down due to a major garbage collection event. This behaviour is perfectly normal, because the JVM does not run a garbage collection until it actually needs to.
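The sawtooth pattern described above can be illustrated with a toy model. This is a deliberately simplified simulation (fixed survival probability, two generations, arbitrary capacities), not how HotSpot actually behaves:

```python
import random

def simulate_heap(steps, young_cap=100, old_cap=500, survive_prob=0.1, seed=42):
    """Toy generational-GC model: return total used heap after each allocation."""
    rng = random.Random(seed)
    young, old = 0, 0
    usage = []
    for _ in range(steps):
        young += 1                       # allocate one object in the young gen
        if young >= young_cap:           # young gen full -> minor GC
            survivors = sum(rng.random() < survive_prob for _ in range(young))
            young = 0                    # dead young objects reclaimed
            old += survivors             # survivors promoted to the old gen
            if old >= old_cap:           # old gen full too -> major GC
                old = int(old * 0.2)     # most of the old gen is reclaimed
        usage.append(young + old)
    return usage

usage = simulate_heap(10_000)
print(max(usage), usage[-1])  # usage climbs to a peak; major GCs pull it back down
```

Plotting `usage` would show exactly the shape seen on the IS admin page: a slow climb to near the limit, then a sharp drop each time a major collection runs.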

Does this sound like an explanation for what you’ve observed with your Integration Server?

Refer to a tutorial on JVM garbage collection to learn more.