UM Queue/Topic Size

Hi All

I wanted to know the size of a UM queue or topic so I can understand how large a message I can publish to a UM queue or topic.

While creating a UM queue or topic I can see the Read Buffer Size property. Is that the property for the queue/topic size? If the value of the property is 10240, does it mean a queue/topic can hold a message of around 10 MB?

Is there any limitation on the number of messages that can be stored on a UM queue/topic when there is no active subscriber?

Can anyone please let me know?

AFAIK there is a setting (I don’t know whether it’s per channel or per realm) which defines how large a single message can be. There should be no limit on how large a queue can get, just the disk space (since messages are held in files).

A couple of things we ran into with our current customer, who’s using UM 9.7.

The maximum size of a message that UM can handle is controlled by the Connection Config > MaxBufferSize property. I believe this is an environment property and not configurable down to the queue/topic level. Something else to be aware of: in the event a message larger than the MaxBufferSize is published, UM drops the message and does not throw an exception to the IS. In fact, the IS connection to UM is temporarily dropped as well. This is a known “feature” of any UM version prior to 9.10 and is well documented on Empower: KB Article 1772248. With version 9.10+, there’s a property called MaxBufferSizeClientSideCheck which controls the behavior when a message exceeds the MaxBufferSize.
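Because the drop is silent on those older versions, one mitigation is to check the payload size on the client before publishing. This is just a sketch (not from the KB article): it assumes you publish via JMS and already know the realm’s MaxBufferSize value, and the class and method names (SafePublisher, publishWithSizeCheck) are hypothetical.

```java
import javax.jms.BytesMessage;
import javax.jms.Destination;
import javax.jms.JMSException;
import javax.jms.MessageProducer;
import javax.jms.Session;

public class SafePublisher {

    // Hypothetical guard: reject payloads that would exceed the realm's
    // MaxBufferSize instead of letting UM drop them silently.
    // maxBufferSizeBytes must be supplied by the caller; it is not read
    // from the realm automatically in this sketch.
    public static void publishWithSizeCheck(Session session,
                                            Destination destination,
                                            byte[] payload,
                                            long maxBufferSizeBytes) throws JMSException {
        if (payload.length > maxBufferSizeBytes) {
            // Fail loudly here rather than relying on UM to report the drop.
            throw new IllegalArgumentException(
                "Payload of " + payload.length + " bytes exceeds MaxBufferSize of "
                + maxBufferSizeBytes + " bytes");
        }
        MessageProducer producer = session.createProducer(destination);
        try {
            BytesMessage message = session.createBytesMessage();
            message.writeBytes(payload);
            producer.send(message);
        } finally {
            producer.close();
        }
    }
}
```

Note that the event on the wire will typically be somewhat larger than the raw payload because of headers and properties, so leave yourself some margin below the configured limit.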

Regarding fml2’s comment: “There should be no limit on how large a queue can get, just the disk space (since messages are held in files).” This is not entirely true. With version 9.7 we saw some odd behavior and performance issues with the .mem files associated with a queue, stored here: UniversalMessaging\UniversalMessaging\server\umserver\data\namedsubqueues. In our scenario we had multiple publishers (native pub/sub, not JMS) of the same doctype and multiple triggers using “Provider Filters”. The trigger filters were mutually exclusive, so one publish would fire only one trigger.

What we noticed over time was that UM performance would degrade in relation to the volume flowing through these integrations. Upon further inspection we discovered that when a trigger filter was NOT satisfied, a copy of the message was kept inside the .mem file, similar to a dead-letter queue. Furthermore, as these .mem files grew, OOM errors became more and more frequent. After consulting with SAG support, it was determined that the .mem files are in fact loaded into memory when provider-side filtering is used, and there is no built-in mechanism to purge these files. Again, this was 9.7 native pub/sub. The only solution we were left with was to move to client-side filtering on the IS, which prevents the .mem files from growing.
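If you are stuck on provider-side filtering, it may be worth keeping an eye on the size of those .mem files. A minimal sketch, assuming the data directory path mentioned above (adjust the path to your install) and run from the UM installation root:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.stream.Stream;

public class MemFileSizeCheck {

    public static void main(String[] args) throws IOException {
        // Assumed location of the named sub-queue stores; adjust to your install.
        Path dataDir = Paths.get(
            "UniversalMessaging/UniversalMessaging/server/umserver/data/namedsubqueues");

        try (Stream<Path> files = Files.walk(dataDir)) {
            // Sum the sizes of all .mem files under the data directory.
            long totalBytes = files
                .filter(Files::isRegularFile)
                .filter(p -> p.getFileName().toString().endsWith(".mem"))
                .mapToLong(p -> {
                    try {
                        return Files.size(p);
                    } catch (IOException e) {
                        return 0L;
                    }
                })
                .sum();

            System.out.printf("Total .mem size under %s: %.1f MB%n",
                dataDir, totalBytes / (1024.0 * 1024.0));
        }
    }
}
```

Something like this could be wired into a scheduled check so the growth is noticed before it turns into OOM or disk-space problems.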


Good to know! We’re experiencing the same issue with version 10.1 and legacy pub/sub with mutually exclusive trigger filters…
Our UM shut down because of a disk space issue, which led us to discover large .mem files for those old pub/subs.
I’ll look into testing client-side filtering, and it may be time for some refactoring of all those pub/subs!

EDIT: just so you know, after deeper verification, in our case we found a second queue on every one of our adapter notification publishable documents with nothing connected to it, duplicating and holding all our documents… It seems they were created during the migration from 9.7 to 10.1 (the first .mem file has the same date as the period the Software AG consultant did the migration), so maybe a wrongly executed script…
My client deleted the bogus queue using nenterprisemgr, and voilà 🙂
We’ll keep an eye on it…