Parallel processing

Hello everyone,

I need to process JMS messages quickly. I have configured a JMS trigger to spin up 25 threads, each of which reads 10 messages at a time from a queue using batchTriggerSpec.
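Conceptually, the setup behaves like the sketch below (plain Java, not the actual trigger mechanics; a ConcurrentLinkedQueue stands in for the JMS queue, and the thread/batch counts mirror my configuration):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ConcurrentLinkedQueue;

// Illustrative model of the trigger: 25 threads, each draining the queue
// in batches of up to 10 messages per receive.
public class BatchConsumer {
    static final int THREADS = 25;
    static final int BATCH_SIZE = 10;

    public static void consumeAll(ConcurrentLinkedQueue<String> queue) {
        Thread[] workers = new Thread[THREADS];
        for (int i = 0; i < THREADS; i++) {
            workers[i] = new Thread(() -> {
                List<String> batch = new ArrayList<>(BATCH_SIZE);
                while (true) {
                    batch.clear();
                    String msg;
                    // Pull up to BATCH_SIZE messages for one service invocation.
                    while (batch.size() < BATCH_SIZE && (msg = queue.poll()) != null) {
                        batch.add(msg);
                    }
                    if (batch.isEmpty()) return; // queue drained; thread exits
                    processBatch(batch);
                }
            });
            workers[i].start();
        }
        for (Thread w : workers) {
            try { w.join(); } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        }
    }

    static void processBatch(List<String> batch) { /* service logic here */ }
}
```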

My problem is that once all messages are consumed, I need to send a notification to another service, and I have no way of knowing when the service threads have finished.

I have considered using doInvoke and spinning up 25 instances that repeat until the queue is empty, but I would still like to use batchTriggerSpec in order to read multiple messages at a time. Does anyone know how to make this work?

Thanks in advance.

How about using “pub.jms:sendAndWait” and “pub.jms:waitForReply” for parallel processing?

Sorry, I should have indicated that I don’t expect replies on the messages.

Are you sure multiple threads would be faster than just a single thread doing 20 or 30 at a time in a loop? It may be worth testing.

One approach that comes to mind is using another component that keeps track of the workers. As each worker starts, it “publishes” (a real publish or some conceptual equivalent) an “I’m worker 123 and I started” message. When each finishes, it announces “123 is finished.” The tracking component keeps a list of starts and finishes; when every start has a matching finish, it publishes the “all done” notification.

The code would need timeout safeguards and the like, but it may be a workable approach.
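A minimal sketch of that tracking component, in plain Java (the class and method names are illustrative, not webMethods APIs; the “publish” here is just a flag standing in for the real notification):

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical tracker: workers announce start and finish; once the queue
// is known to be drained and every start has a matching finish, publish
// the "all done" notification exactly once.
public class WorkerTracker {
    private final Map<String, Boolean> workers = new HashMap<>(); // id -> finished?
    private boolean drained = false;
    private boolean done = false;

    public synchronized void started(String workerId) {
        workers.put(workerId, Boolean.FALSE);
    }

    public synchronized void finished(String workerId) {
        workers.put(workerId, Boolean.TRUE);
        maybePublishAllDone();
    }

    // Call once a peek shows the queue is empty, so no new workers will start.
    public synchronized void queueDrained() {
        drained = true;
        maybePublishAllDone();
    }

    public synchronized boolean isDone() {
        return done;
    }

    private void maybePublishAllDone() {
        if (!done && drained && workers.values().stream().allMatch(Boolean::booleanValue)) {
            done = true; // here: send the "all done" notification to the other service
        }
    }
}
```

This is where the timeout safeguards would go: e.g., a watchdog that forces the “all done” (or an alert) if a worker never reports its finish.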

Disabling the trigger does not prevent the services from receiving from the queues. If I read from the queue within a repeat once the threads are running, I can continue to do so until the queue is empty, keeping the threads alive rather than spinning up new ones. When a peek (MQ) shows the queue is empty, we disable the trigger. Each thread will then issue its own disableTrigger call, but that is a negligible and acceptable consequence.
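The drain-then-disable pattern looks roughly like this (a Java sketch under stand-in assumptions: a ConcurrentLinkedQueue plays the JMS queue, poll() plays the receive inside the repeat, peek() plays the MQ peek, and an AtomicBoolean plays the trigger state; none of these are the real webMethods calls):

```java
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.atomic.AtomicBoolean;

// Sketch of one worker thread: keep receiving until the queue is empty,
// then disable the trigger. Every thread may issue the disable; the
// redundant calls are harmless, as noted above.
public class DrainWorker implements Runnable {
    private final ConcurrentLinkedQueue<String> queue;
    private final AtomicBoolean triggerEnabled;

    public DrainWorker(ConcurrentLinkedQueue<String> queue, AtomicBoolean triggerEnabled) {
        this.queue = queue;
        this.triggerEnabled = triggerEnabled;
    }

    @Override
    public void run() {
        String msg;
        // The "repeat": stay alive and keep pulling messages.
        while ((msg = queue.poll()) != null) {
            process(msg);
        }
        // The "peek": queue looks empty, so disable the trigger.
        if (queue.peek() == null) {
            triggerEnabled.set(false);
        }
    }

    private void process(String msg) { /* business logic */ }
}
```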

With this approach, throughput has increased from 9 txns/sec to 32 txns/sec. I would like to see us at 70+ txns/sec, so I am still open to any ideas.

Any comments on potential ‘gotchas’ would be appreciated as well.