Heap Size for an Integration Server


Is there a formula or rationale behind setting the heap size of the IS Java process? If so, for a machine dedicated to IS with 2GB of memory, what would the -ms and -mx values be?



It depends quite a bit on the JVM that you are using…

First find your idle server memory usage. If you are not using incremental GC, start the server, leave it idle, and wait until the server memory graph shows a sawtooth pattern. The low point of the graph is your approximate idle memory usage. The high point is irrelevant, as it is caused by unreferenced memory. If you are using incremental GC (the default in most Java 1.4 JVMs), pick a low point from overnight. It should be pretty clear.
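The sawtooth low point can also be approximated programmatically. Here's a rough sketch (my own, not from this thread) that hints at a full collection and then reports live heap; you could adapt it into a small service to sample idle usage:

```java
// Rough sketch (not from the thread): estimate live (referenced) heap,
// which approximates the idle-usage low point of the sawtooth.
public class HeapSample {
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        // Suggest a full collection so unreferenced garbage is cleared first.
        // System.gc() is only a hint to the JVM, so treat the result as approximate.
        System.gc();
        long liveBytes = rt.totalMemory() - rt.freeMemory();
        System.out.println("Approx. live heap: " + (liveBytes / (1024 * 1024)) + " MB");
    }
}
```

Sampling this a few times while the server is idle should bracket the low point you'd read off the memory graph.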

You will want the minimum heap to be at least this value plus your minimum processing requirement, so that the server is guaranteed enough memory to run reasonably.

The JVM partitions the heap into a “young generation” and an “old generation”, and these and other settings determine exactly how garbage collection happens. I assume that your real goal in setting these values is to make your server perform as well as it can, so you should consider the whole range of JVM GC tuning options in addition.
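As an illustration only (these values are my own assumptions, not a recommendation from this thread, and exact flags vary by JVM vendor and version), a starting command line for a Sun 1.4 HotSpot JVM on a 2GB box might look like:

```shell
# Hypothetical example; 'server.jar' stands in for however your IS JVM is launched.
# -Xms/-Xmx       fixed 1GB heap (min == max), leaving room for the OS and native memory
# -XX:NewRatio=3  old generation sized at three times the young generation
# -verbose:gc     log each collection so you can verify the sizing empirically
java -Xms1024m -Xmx1024m -XX:NewRatio=3 -verbose:gc -jar server.jar
```

The -verbose:gc output is what lets you check afterwards whether the generation sizes actually suit your workload.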

The definitive guide for tuning GC is at:
Oracle Java Technologies | Oracle

It also includes descriptions of the heap, -Xms, and -Xmx.

From Sun: “Setting -Xms and -Xmx to the same value increases predictability by removing the most important sizing decision from the virtual machine. On the other hand, the virtual machine can’t compensate if you make a poor choice.”

I’d agree with Peter’s technique; just multiply by a suitable percentage margin to take into account the strange things that may happen, such as a server outage somewhere that results in a build-up of things to process (this sort of situation can lead to a spiralling-down of the Integration Server).

I’ve generally found that you should set the min and max to the same value, whatever size you decide on, to save the JVM from having to grow the heap as it goes. Also be aware that garbage collection time goes up with the amount of memory the JVM has to play with, so an increase in heap size may require playing with some of the garbage collection parameters too…
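A crude way to get a feel for full-collection cost at a given heap size (my own sketch, not something from this thread) is simply to time an explicit collection:

```java
// Crude sketch: time a full collection. On larger heaps with more live data
// this pause grows, which is the trade-off being described above.
public class GcPause {
    public static void main(String[] args) {
        long start = System.currentTimeMillis();
        System.gc();  // a hint only, but it usually triggers a full collection
        long pauseMs = System.currentTimeMillis() - start;
        System.out.println("Full GC took ~" + pauseMs + " ms");
    }
}
```

Running this after the server has been under load for a while gives a rough worst-case pause figure to weigh against the larger heap.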

Another thing: be careful that the values you set aren’t going to be a problem at the OS level. For instance, on HP-UX the default kernel settings aren’t exactly tuned for the requirements of Java application servers: you’ll get out-of-memory errors that mention native resources but aren’t actually related to memory; they’re related to an inability to spawn new threads, which is limited by kernel parameters. So I would check that there aren’t any underlying OS limitations that you may come across…

Something useful I’ve found: if you have the ability to run end-to-end tests with the maximum load/data sizes, you’ll get to see what sort of memory requirements you’ll actually have.

Performance tuning is always a bit of a black art.

Nathan Lee

If the machine is dedicated to IS, I would offer that the easiest and most reasonable thing to do is to set min and max to the same value, and to the highest that your OS will allow. This would be referred to as “sledge hammer engineering”–often quite effective in terms of time and effort in the absence of other factors that would help guide a more refined approach. For Windows, the min/max value you’ll be able to set will be around 1.5GB, regardless of the amount of physical memory.

I agree with Rob, with the caveat that if you know your IS workload will be on the minimal side of the scale, you should probably pick a different arbitrary heap size other than the max allowed by your JVM vendor / OS platform.

In addition to slightly longer server startup times, the other disadvantage of too large a heap size is that when GC does occur it takes longer, potentially introducing pauses in your application’s performance.

Finally, a reminder that there are good tools available that will help even the inexperienced JVM tuners see what’s going on inside the JVM. See this post for details.


The point from Nathan and Mark about GC taking longer when the heap is larger is definitely something to consider. I’d offer that for most deployments that GC time just doesn’t matter. Most integrations are behind the scenes, quite often scheduled (it’s a shame we haven’t been able to make more progress away from batch-oriented processing), and bottlenecks are more likely to be in the developed services than in system maintenance activities such as GC. Still, knowing how GC behaves is a Good Thing, so as to know when one needs to pay closer attention to it.