Hi All,
I am trying to understand this section in the wm.io cloud messaging documentation:
Does this mean that if I have Max Concurrent Processing set to 10 and Max Prefetch Size also set to 10, the messages are processed serially anyway, so there is no benefit to concurrent processing? Or does it mean 10 threads each prefetch 10 messages to process serially, for a total of 100 prefetched?
Could someone explain the relationship between Max Concurrent Processing and Max Prefetch Size for .io messaging, as I am struggling to understand it from the documentation?
To be more specific, I am looking for the best way to set up a .io messaging subscriber to handle messages up to 750 KB with reasonable throughput, so any advice on that would also be useful.
Thank you!
Hey,
I believe that once you configure Concurrent Message Processing, that statement no longer applies.
Check the section a few paragraphs further down, titled “Configuring Concurrent Message Processing”. I believe that documentation is correct.
Joshua
Hi Joshua, thanks for replying.
The concurrent messaging processing section actually points towards the section pictured above:
It isn’t entirely clear to me how exactly it works based on these statements.
Thanks again
Well, that is confusing. I consulted the architects of this component and have asked for the documentation to be reviewed.
I can say that Prefetch Size, and Cloud Messaging as a whole, works exactly like the on-premises Integration Server JMS trigger. When using Concurrent mode, setting Max Concurrent Processing (the on-prem term is Max execution threads) to 10 will allow 10 messages to be processed simultaneously, NOT serially as the documentation mentions.
If Connection Count is set to 10, then 10 individual JMS connections will be created. When this is combined with Max Concurrent Processing set to 10, up to 10 messages can be processed simultaneously, each with its own JMS connection (which is associated with one TCP socket). A better explanation of the interaction between Connection Count and Max Concurrent Processing is in the IS documentation.
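To illustrate the concurrent mode described above, here is a rough Python sketch (not webMethods code; all names are illustrative): a worker pool bounded by a `MAX_CONCURRENT_PROCESSING` value processes messages in parallel, analogous to Max execution threads on-prem.

```python
from concurrent.futures import ThreadPoolExecutor
import threading

MAX_CONCURRENT_PROCESSING = 10  # analogous to "Max execution threads" on-prem

def process(message):
    # Stand-in for the trigger service; records which worker handled it.
    return (threading.current_thread().name, message)

messages = [f"msg-{i}" for i in range(25)]

# Up to 10 messages are in flight at once, each on its own worker thread
# (in the product each would also have its own JMS connection/socket
# when Connection Count is set accordingly).
with ThreadPoolExecutor(max_workers=MAX_CONCURRENT_PROCESSING,
                        thread_name_prefix="consumer") as pool:
    results = list(pool.map(process, messages))

print(len(results))  # 25: all messages processed, up to 10 at a time
```

The point is simply that the concurrency setting caps how many messages are processed simultaneously; it does not force serial processing.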
Now to your question about the relationship with Max Prefetch Size: this is an optimisation of network calls, so that multiple messages can be received in a single request to the messaging provider. This reduces the latency of filling up the processing buffers when messages are available. It does not change the consumption semantics in any way, but it does help utilise network calls more optimally, at the cost of holding more messages in the consumer's heap.
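As a back-of-the-envelope model of that network optimisation (this is arithmetic, not the actual product behaviour): the number of round trips to the provider shrinks roughly in proportion to the prefetch size.

```python
import math

def provider_requests(num_messages, prefetch):
    """Approximate number of network calls needed to receive
    `num_messages`, with up to `prefetch` messages per call."""
    return math.ceil(num_messages / max(prefetch, 1))

print(provider_requests(100, 1))   # 100 round trips
print(provider_requests(100, 10))  # 10 round trips
```

Fewer round trips means less latency spent on the wire, but each batch of prefetched messages sits in the consumer's heap until processed.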
I would recommend you enable Concurrent mode, set Max Concurrent Processing to 10, and test Connection Count at various values (you may want to compare 1, 5, and 10) with the type of load you are expecting. If you are still not seeing the required performance, rerun with the prefetch set to different values.
Thank you for the detailed reply Joshua, it is much appreciated!
Thank you @Joshua_Buckton for your detailed response. @gazzaknight, we will work on improving the documentation with some examples.
The documentation says that “prefetched messages are processed serially”. I think this sentence is leading to the confusion that all messages will be executed serially. That is not the case. Only the prefetched messages of a single consumer are processed serially, because the prefetch cache is per consumer. With concurrent processing, as Joshua mentioned, there are multiple consumers running in parallel; each consumer prefetches its own messages in parallel and then processes them serially.
This can, in some cases, impact processing time, especially when multiple consumers sit idle waiting for messages to arrive.
Consider this scenario: you have 5 large messages in the queue, a subscriber with a long-running service, concurrency set to 5, and prefetch size set to 5.
In this case the first consumer will fetch all 5 messages and start executing them serially, while the remaining consumers stay idle. On top of that, since the messages are large, that consumer has to wait until all of them are fetched before processing begins.
If instead the prefetch size were set to 1, consumer 1 would pull one message and start executing it. Since the service is long-running, consumer 2 would meanwhile fetch the second message and start executing it, and so on. In this scenario, the 5 messages are processed by 5 different consumers in a short span of time, with the services executing in parallel, as opposed to one consumer fetching all 5 and executing them one after the other.
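The scenario above can be sketched as a toy timing model (illustrative only; the fetch and service times are made-up numbers, not product measurements): batches of `prefetch` messages are handed out round-robin to consumers, each consumer paying one fetch per batch plus serial service time for each message in it.

```python
def completion_time(num_messages, consumers, prefetch, fetch_time, service_time):
    """Toy model of the scenario: each consumer grabs up to `prefetch`
    messages per network call, then processes its batch serially;
    consumers run in parallel, so total time is the slowest consumer."""
    batches = [min(prefetch, num_messages - i)
               for i in range(0, num_messages, prefetch)]
    per_consumer = [0.0] * consumers
    for i, batch_size in enumerate(batches):
        # One fetch call per batch, then serial processing of the batch.
        per_consumer[i % consumers] += fetch_time + batch_size * service_time
    return max(per_consumer)

# 5 large messages, 5 consumers, fetch = 1 time unit, service = 10 units.
serial = completion_time(5, consumers=5, prefetch=5, fetch_time=1, service_time=10)
parallel = completion_time(5, consumers=5, prefetch=1, fetch_time=1, service_time=10)
print(serial, parallel)  # 51.0 11.0 in this toy model
```

With prefetch 5, one consumer does all the work (1 fetch + 5 × 10 = 51 units) while the other four sit idle; with prefetch 1, the five consumers finish in roughly 11 units each, running in parallel.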
The note was meant to warn about such scenarios. Remember that prefetch is good because it reduces load on the messaging provider, and concurrency is good (if ordering does not matter) because it improves processing time. However, you must arrive at the right settings based on your message size and service execution time.
Thank you for sharing your concern with us. We will enhance the document.
Thank you Aniruddha, this is good information to have