IS Performance with UseParallelGC flag

Does anyone know whether the “-XX:+UseParallelGC” JVM GC option does any good for IS performance?

It will depend, in part, on the nature of the work that the packages hosted on your IS perform. The intent of this option is to “maximize throughput while minimizing pauses.” For unattended, non-user-facing components, the garbage collector’s pauses are likely to go unnoticed.

As with any tuning effort, the key is to measure, change, then measure again to see the impact. I’d suggest that unless you have integrations that are time-sensitive, an effort to tune GC is probably not necessary. Others may have different opinions, of course.
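Not IS-specific, but as a minimal sketch of what “measure” can look like from inside any JVM: the standard management beans report per-collector collection counts and cumulative pause time, which you can compare before and after a flag change. (The class name here is just for illustration.)

```java
import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;

// Prints cumulative GC activity for whichever collectors the JVM is
// actually running (e.g. the parallel collector when -XX:+UseParallelGC
// is in effect).
public class GcStats {
    public static void main(String[] args) {
        for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
            System.out.println(gc.getName()
                    + ": " + gc.getCollectionCount() + " collections, "
                    + gc.getCollectionTime() + " ms total collection time");
        }
    }
}
```

Run it under each candidate set of flags (or poll the same beans periodically from a long-running service) and compare the numbers under a representative load, rather than judging by the shape of the admin graph.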

Is there a symptom or problem you’re looking to address?

I’m exploring JVM memory and GC options to smooth IS memory utilisation displayed on the admin console.

I recall that adjusting the default thread stack size setting alone would do the trick. I then came across this page about GC options and I’m curious whether anyone has been down this path.

The IS has mostly default settings; over a 4-hour span the graph closely matches Chart 2 (from the link above), even though there’s next to no solution activity.

What would be the benefit of having a smooth graph on the admin console? (Others have noted in the past that this graph is not useful/accurate.) Is there something more concrete you’re looking to address? Are you just researching?

IMHO, tinkering with JVM GC settings should not be done casually or in the absence of a business-impacting performance or latency issue.

If you are supporting an integration in which every millisecond counts (extremely rare in my experience), then perhaps you should begin a detailed analysis project staffed with experts on Java VM and GC to find out if adjusting settings will help or hurt.

If you don’t have a clear need then follow the old adage “If it ain’t broke, don’t fix it!”


It’s mainly for the peace of mind of some hands-off employees. Apart from that, if a reasonably idle server produced a consistent flat-line graph, it would give a sharper indication of usage patterns and of peak or memory-leak situations.

Also, of course… personal interest from the perfectionist side.

Just to share my findings…

Sun JVM 1.5



I get nice, stable graphs out of these settings for an almost-default IS.

Will put up some graphs later.

"if a reasonably idle server produced a consistent flat-line graph, it would give a sharper indication of usage patterns and of peak or memory-leak situations"

IME, that gives no such indication. I think assumptions are being made about the relative “badness” of a sawtooth graph and the relative “goodness” of a smooth graph. In isolation, these are meaningless. The only useful evaluations, IME, are those done when the server is doing real work. If memory issues are not encountered during peak loads, then don’t fiddle with the settings. More often than not, what intuitively is supposed to make things better doesn’t materially improve anything and can actually make things worse.

I’m reminded of a tech support story I heard years ago from a consultant/analyst. The policy at the company he was advising was that whenever a server encountered some sort of error, the first step was to get another memory card from the closet and put it in the server. If the symptom disappeared, ticket closed. If the symptom persisted, then additional triage was performed. The lesson: it was far cheaper to put more memory into the box than for the tech to spend time investigating root cause and tweaking the box to achieve maximum performance with the least hardware.

I don’t mention this to discourage experimenting and gaining additional understanding. My note of caution is just to be careful about what is inferred from the observations made. And be sensitive to spending time on a problem that may not be a problem at all. In all the projects I’ve worked on at various companies, memory config tweaking of the IS has never been a necessary activity. The approach is usually just max it out and go.

Perhaps I should jack up my rate… My experience is mostly with clients who don’t want to spend more on hardware; it seems to be a capex vs. opex issue.

Here are the graphs:
Sun JVM 1.5

IBM JVM 1.5 - the IBM JVM seems to hold on to more memory the more you configure for it.

Good point on the higher rate! :slight_smile:

I completely understand about the insanity of different buckets of money. They will spend thousands in labor costs to avoid $500 of hardware spend.

If you want a (somewhat) more accurate view of memory and thread usage, review the stats.log file. It holds a snapshot of stats each minute. The graph on the Admin page is derived from this and is the average usage in each hour.
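For a quick spot check between stats.log snapshots, the same kind of heap figures the admin graph summarises (used vs. committed memory) are also exposed by the JVM itself via the standard `MemoryMXBean`. A minimal sketch (this is generic JVM instrumentation, not how IS collects its stats):

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;
import java.lang.management.MemoryUsage;

// Samples point-in-time heap usage: bytes currently used vs. bytes the
// JVM has committed from the OS. The admin graph's sawtooth comes from
// exactly this "used" figure rising between collections and dropping after.
public class HeapSample {
    public static void main(String[] args) {
        MemoryMXBean mem = ManagementFactory.getMemoryMXBean();
        MemoryUsage heap = mem.getHeapMemoryUsage();
        long usedMb = heap.getUsed() / (1024 * 1024);
        long committedMb = heap.getCommitted() / (1024 * 1024);
        System.out.println("heap used: " + usedMb + " MB of " + committedMb + " MB committed");
    }
}
```

Polling this on a schedule gives per-minute (or finer) resolution without relying on the hourly averaging behind the admin page graph.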

You might look at the JRockit JVM and Mission Control as another option (though I’m not sure it’s officially supported by SAG). Mission Control provides excellent scoping into various JVM aspects.

This topic was automatically closed 90 days after the last reply. New replies are no longer allowed.