Max CPU utilization when JMS triggers enabled

We are experiencing a problem with webMethods 9.0 and Nirvana 9.0. Both are installed on the same Windows 2008 R2 Server.

We have created several Queues on the Nirvana Server, and on the Integration Server created corresponding JMS Triggers. What we found is that when these JMS Triggers are enabled, the CPU on the Windows Server jumps to 100% immediately. If we view Windows Task Manager and sort the Processes tab by “percent utilized”, there are two java.exe processes at the top which are using 100% of the CPU. The Nirvana java.exe process uses ~65% while the Integration Server java.exe process uses ~35%. Now, if we go into the Integration Server admin page and disable all the JMS Triggers, the CPU usage on the Windows Server immediately drops to almost 0%.

I have combed through log files on both the Integration Server and the Nirvana Server without finding anything of note. The only thing that has jumped out at me is something I see in Universal Messaging Enterprise Manager: If I click on the “nirvana” realm and then the “Monitoring” tab, and then click the “Threads” tab, I see two large numbers continuing to increase:

Buffers Reused = 2407420557
ReadPool = 802539376

The above is just a snapshot of the numbers right now. If I disable the JMS Triggers, the numbers stop increasing. I have no idea if this activity is what’s causing the CPU to peg, though.

Has anyone seen anything like this before? In past years, we used IS + Broker, and only recently have started using Nirvana in place of Broker. So we’re still very much novices when it comes to Nirvana.

Thanks very much for any thoughts, comments, ideas, etc.

On IS, go to Server > Statistics > System Threads,
check “Show threads that can be canceled or killed at the top of the list,”
and see what’s running there.
Also check Server > Service Usage to see which services are running.
Hopefully you can identify what’s running all the time.

Yes, that is the best way to check which system threads are currently running, spawning more threads, and slowing the process down. As for Nirvana replacing Broker, I think it still needs to reach Broker’s level of maturity in future releases.


Thank you for the replies.

The IS Service usage is zero right now. There are no services actively running.

I’ve looked at the IS System Threads page and attached a screenshot of the threads whose names indicate relevance to Nirvana. When I disable all the JMS Triggers, all but three of these threads go away.

I’ve noted that enabling just a single JMS Trigger results in constant 50% CPU utilization. Enabling a second JMS Trigger takes the utilization to almost 100%. What seems to be happening is a constant back and forth “thrashing” between IS and Nirvana, but I can’t imagine this is normal/expected behavior.

Please let me know if you can think of anything else to look at or try. Any help is much appreciated.

I would suggest opening a service request with SAG support, as the issue seems to lie between the IS and Nirvana layers; they can look more closely into your environment and advise accurately.


Yes, I will do that.

Separate from that, I have continued to troubleshoot this and experiment on my end. I have found that this CPU issue is specific to Queues. What I noticed is that if I delete all the Queues and recreate them as Topics, then there is no CPU problem. I can enable all the JMS Triggers in IS (pointing to Topics) and the CPU utilization barely moves.
That said, I’m not sure I can leave things this way, as I need consumed events to be removed immediately, and I believe that behavior is unique to a Queue… I’m not sure I can emulate it with a Topic.
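For anyone weighing the same trade-off, the difference in delivery semantics is roughly as follows (a minimal plain-Java sketch of the JMS model, not the webMethods or Universal Messaging client API; all class and method names here are my own illustration): a queue hands each message to exactly one consumer and removes it on consumption, while a topic broadcasts each message to every currently active subscriber, and a non-durable subscriber simply misses anything published while it wasn’t subscribed.

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.List;
import java.util.Queue;

// Illustrative only: models JMS-style delivery semantics in plain Java.
public class DeliverySemantics {

    // Queue (point-to-point) semantics: each message goes to exactly one
    // consumer and is removed from the queue once consumed.
    static class PointToPointQueue {
        private final Queue<String> messages = new ArrayDeque<>();
        void send(String msg) { messages.add(msg); }
        String receive() { return messages.poll(); } // consume and remove
        int depth() { return messages.size(); }
    }

    // Topic (publish/subscribe) semantics: each message is copied to every
    // *currently* subscribed consumer; nothing is kept for absent subscribers.
    static class PubSubTopic {
        private final List<Queue<String>> subscribers = new ArrayList<>();
        Queue<String> subscribe() {
            Queue<String> inbox = new ArrayDeque<>();
            subscribers.add(inbox);
            return inbox;
        }
        void publish(String msg) {
            for (Queue<String> inbox : subscribers) inbox.add(msg);
        }
    }

    public static void main(String[] args) {
        PointToPointQueue q = new PointToPointQueue();
        q.send("order-1");
        System.out.println(q.receive()); // order-1
        System.out.println(q.depth());   // 0 -- removed immediately on consumption

        PubSubTopic t = new PubSubTopic();
        Queue<String> subA = t.subscribe();
        t.publish("event-1");            // only subA is subscribed, so only subA sees it
        Queue<String> subB = t.subscribe();
        t.publish("event-2");            // both subscribers see this one
        System.out.println(subA.size()); // 2
        System.out.println(subB.size()); // 1 -- subB missed event-1
    }
}
```

If you do end up staying on Topics, a durable subscriber (which, as I understand it, is what IS can create behind a JMS trigger on a topic) gives per-subscriber retention that behaves somewhat queue-like, so that may be worth exploring before ruling Topics out.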

If I find a resolution or make additional discoveries, I will report back here.

OK, good observation. There could be a fix related to it; please touch base with SAG support as well, as there might be additional daemon threads running behind the scenes with the queue setup.


I wanted to circle back on this and provide an update. After working with SAG support, we determined that this problem was a known issue, and thankfully there is a resolution. We installed the following fix:


…and this resolved the problem. We actually installed “Fix4”, which includes “Fix1”.


Wonderful! Thanks for updating the thread. :smiley: