IntegrationServer full GCs every hour

I have been looking at a situation where most of our webMethods Integration Servers perform a full GC every hour. While most do this, some do not. I have identical IS servers in the same IS cluster, sharing the same workload, and one may be doing the full GCs every hour while the other is not. This is happening on webMethods 9.6 with Java 1.7 and webMethods 9.9 with Java 1.8.

These GCs happen every hour and are in no way related to load on the system or available memory. They occur on idle servers as well as on production servers with high utilization.

If the hourly GCs are happening, I can prevent them with this option:
wrapper.java.additional.214=-Xdisableexplicitgc

If the hourly GCs are happening, I can control the interval with these options:
wrapper.java.additional.214=-Dsun.rmi.dgc.client.gcInterval=360000
wrapper.java.additional.215=-Dsun.rmi.dgc.server.gcInterval=360000

If the hourly GCs are NOT happening, I cannot force them to happen.
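For anyone who wants to reproduce the mechanism outside of Integration Server, here is a minimal standalone sketch of my own (not anything shipped with IS) that exports a dummy RMI object and prints collector counts. It assumes a HotSpot-based Java 7/8 JVM, where sun.rmi's distributed GC housekeeping calls System.gc() if no collection has run within the configured gcInterval. On IBM J9 (as used on AIX) the explicit-GC switch is -Xdisableexplicitgc as shown above; the HotSpot equivalent is -XX:+DisableExplicitGC.

// Run with a short interval so the effect is visible quickly, e.g.:
//   java -Dsun.rmi.dgc.server.gcInterval=30000 -Dsun.rmi.dgc.client.gcInterval=30000 RmiGcDemo
import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;
import java.rmi.Remote;
import java.rmi.server.UnicastRemoteObject;

public class RmiGcDemo {

    // Empty remote interface: its only purpose is to let us export an object,
    // which is what activates the RMI distributed GC daemon.
    public interface Ping extends Remote {}

    public static void main(String[] args) throws Exception {
        // Exporting any remote object starts the DGC housekeeping; if no GC has
        // run within sun.rmi.dgc.*.gcInterval ms, it triggers System.gc() itself.
        UnicastRemoteObject.exportObject(new Ping() {}, 0);

        // Print collection counts so the periodic full GCs show up on the console.
        while (true) {
            for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
                System.out.printf("%-25s collections=%d time=%dms%n",
                        gc.getName(), gc.getCollectionCount(), gc.getCollectionTime());
            }
            System.out.println("---");
            Thread.sleep(10000);
        }
    }
}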

My Integration Servers are running on AIX 6.1 and 7.1. Software AG has seen the hourly GCs on their test systems running on Windows.

My question here is why the full GCs, which are related to RMI, are not consistent. We can’t have production IS servers behaving differently in this regard. If anyone has any information on this, or wants to share their own research, I would appreciate it.

Here are some articles on the topic.
https://www.jclarity.com/2015/01/27/rmi-system-gc-unplugged/
https://coderanch.com/t/507068/java/Forcing-Full-GC
http://www-01.ibm.com/support/docview.wss?uid=swg21173431

Thanks,
Dave


Hi David,

JVM issues and tuning the JVM parameters correctly are always brain teasers to me. I cannot tell you exactly what is happening in your environment, but we did face a similar issue in a production environment. After some analysis, our infrastructure team concluded that the full GCs were caused by the JVM memory initialization settings. The initial memory (-Xms) and maximum memory (-Xmx) were both set to the same value, 8 GB. Ideally, even this setting should not be a problem in a stable environment, but there was a memory leak in the code that was causing the memory to spike, and the JVM was not performing a GC until almost all of the maximum memory was consumed.
Now, you can imagine what happened. When the JVM finally ran a GC, it caused a huge pause on the server because the memory to clean was almost full. To prevent this, we had to set Xms and Xmx to sensible values and also retune the JVM with a few new GC parameters, shown below:


wrapper.java.additional.302=-XX:+UseConcMarkSweepGC
wrapper.java.additional.303=-XX:+UseParNewGC
wrapper.java.additional.304=-XX:+CMSParallelRemarkEnabled
wrapper.java.additional.306=-XX:CMSInitiatingOccupancyFraction=40
wrapper.java.additional.307=-XX:+UseCMSInitiatingOccupancyOnly
wrapper.java.additional.308=-XX:+HeapDumpOnOutOfMemoryError 
wrapper.java.additional.309=-XX:HeapDumpPath=C:\SoftwareAG\IntegrationServer\HeapDumpLogs\dbdcrprd01.hprof

We also identified the piece of code that was causing the memory leak and fixed it.

So, from my experience, I can suggest the following:

  1. Check the min and max memory settings and set them to balanced values
  2. If possible, use tools like JVisualVM or JMC to monitor the JVM activity and look for memory leaks or thread leaks (see the sketch after this list)
  3. Try different combinations of the GC parameters the JVM offers and stick with the combination that best suits your environment and load
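If you cannot attach a GUI tool like JVisualVM or JMC to the box (often the case on locked-down AIX hosts), a rough alternative is to log heap usage and collector counts with the standard java.lang.management beans. This is only a generic sketch of mine, not IS-specific code, and the sampling interval is arbitrary:

import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;
import java.lang.management.MemoryUsage;

public class HeapWatcher {
    public static void main(String[] args) throws InterruptedException {
        MemoryMXBean memory = ManagementFactory.getMemoryMXBean();
        while (true) {
            // Heap usage of the JVM this code runs in.
            MemoryUsage heap = memory.getHeapMemoryUsage();
            System.out.printf("heap used=%dMB committed=%dMB max=%dMB%n",
                    heap.getUsed() >> 20, heap.getCommitted() >> 20, heap.getMax() >> 20);
            // Collection counts and accumulated GC time per collector.
            for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
                System.out.printf("  %-25s collections=%d time=%dms%n",
                        gc.getName(), gc.getCollectionCount(), gc.getCollectionTime());
            }
            Thread.sleep(60000); // sample once a minute
        }
    }
}

Note that run standalone it only watches its own JVM; to observe Integration Server itself you would either run something like this inside the IS JVM (for example from a scheduled Java service) or attach JVisualVM/JMC over JMX and/or enable verbose GC logging.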

Hope this helps


Hi,

Please note that these parameters (the -XX style) are JVM-dependent.

This means that not all of them exist in every JVM implementation.

You will have to check this with your JVM vendor.
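A quick way to see which JVM implementation you are actually running (before deciding which -X / -XX flags apply) is to run "java -version" from the JVM directory used by IS, or a tiny class like the one below; the property names are standard, only the class name is mine:

public class JvmInfo {
    public static void main(String[] args) {
        // Standard system properties identifying the JVM implementation.
        System.out.println("java.vendor     = " + System.getProperty("java.vendor"));
        System.out.println("java.vm.name    = " + System.getProperty("java.vm.name"));
        System.out.println("java.vm.vendor  = " + System.getProperty("java.vm.vendor"));
        System.out.println("java.vm.version = " + System.getProperty("java.vm.version"));
        System.out.println("java.version    = " + System.getProperty("java.version"));
    }
}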

Regards,
Holger