Used Space Increasing in UM Queue

#### Universal Messaging used at 10.7 & 10.11 level

We have a logging queue that subscribes to log messages and publishes them to a logging server.
In Enterprise Manager we can see that the used space keeps increasing for this one particular queue until it reaches 31/32 GB and UM shuts down.

Even when the messages are subscribed and the Events count in the queue is 0, the size keeps increasing, and we have to manually delete the queue.mem file from the backend.

Ideally it should be cleared automatically; are we missing something? The queue type is MIXED and the engine is the default. Auto maintenance for the queue is also set to true.

Please share any ideas or suggestions for avoiding this space growth and the manual clean-up.
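For reference, this is roughly how we verify that the queue is empty from a small client program. It is only a minimal sketch assuming the standard Universal Messaging Java client API; names such as findQueue, getDetails and getNoOfEvents are from memory and should be checked against the javadoc shipped with your UM version, and the RNAME and queue name are placeholders.

```java
import com.pcbsys.nirvana.client.nChannelAttributes;
import com.pcbsys.nirvana.client.nQueue;
import com.pcbsys.nirvana.client.nQueueDetails;
import com.pcbsys.nirvana.client.nSession;
import com.pcbsys.nirvana.client.nSessionAttributes;
import com.pcbsys.nirvana.client.nSessionFactory;

public class CheckLoggingQueueDepth {
    public static void main(String[] args) throws Exception {
        // RNAME of the realm; adjust host/port for your environment
        nSessionAttributes nsa = new nSessionAttributes("nsp://localhost:9000");
        nSession session = nSessionFactory.create(nsa);
        session.init();

        // "loggingQueue" is a placeholder for the problem queue's name
        nChannelAttributes attr = new nChannelAttributes();
        attr.setName("loggingQueue");
        nQueue queue = session.findQueue(attr);

        // If this prints 0 while queue.mem keeps growing, the growth is
        // on-disk file retention, not a backlog of unconsumed events
        nQueueDetails details = queue.getDetails();
        System.out.println("Events on queue: " + details.getNoOfEvents());

        nSessionFactory.close(session);
    }
}
```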

Hey,

Do the server logs explain why UM shut down?

A guess at what might be happening: this looks like a spindle-size issue. Depending on the size of your events, you may want to change the spindle size to something far smaller, like 1000, or upgrade to 10.15, where UM handles this potential use case under the item “File size limit on multi-file disk stores”.

Some information on spindles and their settings: keep in mind that updating this value is a destructive operation (the store must be deleted and recreated, even though EM shows it as Edit). The specific value to update is “Events Per Spindle”; you can see the documentation here.

A quick summary: multi-file stores do not have maintenance. The file is appended to until the spindle size is hit (default 50000), and only then is a new file created to continue appending. Once all events on a spindle are consumed/purged and the spindle size has been met, that file is removed.
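To make that lifecycle concrete, here is a small toy model of the append-and-roll behaviour described above. This is plain Java for illustration only, not UM code; the class names and the EVENTS_PER_SPINDLE constant are invented for the example.

```java
import java.util.ArrayDeque;
import java.util.Deque;

public class SpindleStoreModel {
    static final int EVENTS_PER_SPINDLE = 50_000; // UM default per this thread

    static class Spindle {
        int appended = 0;   // events written into this file
        int consumed = 0;   // events acknowledged/purged from this file
        boolean full()      { return appended >= EVENTS_PER_SPINDLE; }
        boolean removable() { return full() && consumed >= appended; }
    }

    private final Deque<Spindle> spindles = new ArrayDeque<>();

    // Publishing only ever appends; a new file starts only once the current one is full
    void publish() {
        if (spindles.isEmpty() || spindles.peekLast().full()) {
            spindles.addLast(new Spindle());
        }
        spindles.peekLast().appended++;
    }

    // Consuming marks events as done, but a file is reclaimed only when it is
    // both full and fully consumed, so the newest, still-growing file stays
    // on disk even when the queue shows 0 events
    void consumeOldest() {
        for (Spindle s : spindles) {
            if (s.consumed < s.appended) {
                s.consumed++;
                break;
            }
        }
        spindles.removeIf(Spindle::removable);
    }

    int filesOnDisk() { return spindles.size(); }
}
```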

Hi Joshua,

The server goes down because it runs out of disk space; the file size keeps increasing.
Is there any way we can limit the file size? I'm not sure why this is happening for this particular queue only.

Is there any property in UM that makes it store messages even after they are picked up and processed by the subscriber?

Hi Rohit,

Can you check what the message size is? If you want to limit the number of messages in a spindle, you can change the spindle size.

You can use Enterprise Manager (EM) to do this. Remember that when you edit this property, EM will recreate the queue, so this activity must be done during a maintenance window.

Moreover, as suggested by Joshua, you can upgrade to the latest release, 10.15, which has functionality to limit the file size (500 MB).

If this is a channel/topic, check the durable subscribers, delete the inactive ones, and make sure auto maintenance is enabled for the queue.

Adding TTLs and capacity might also help. Auto maintenance is under the storage properties in the channel properties. For a MIXED queue/topic, if all the subscribers consume the message, it won't be written to disk. If it is writing to disk, you probably have a zombie durable subscriber.
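If you do end up recreating the queue anyway, here is a hedged sketch of setting capacity and TTL at creation time through the Java client API. The exact method and constant names (setMaxEvents, setTTL, MIXED_TYPE, createQueue) are from memory and should be confirmed against your version's javadoc; the queue name and the values are placeholders. Recreating the queue destroys its contents, so do this in a maintenance window.

```java
import com.pcbsys.nirvana.client.nChannelAttributes;
import com.pcbsys.nirvana.client.nSession;
import com.pcbsys.nirvana.client.nSessionAttributes;
import com.pcbsys.nirvana.client.nSessionFactory;

public class RecreateLoggingQueue {
    public static void main(String[] args) throws Exception {
        nSession session = nSessionFactory.create(new nSessionAttributes("nsp://localhost:9000"));
        session.init();

        nChannelAttributes attr = new nChannelAttributes();
        attr.setName("loggingQueue");            // placeholder queue name
        attr.setType(nChannelAttributes.MIXED_TYPE);
        attr.setMaxEvents(100_000);              // capacity: cap the backlog (example value)
        attr.setTTL(24L * 60 * 60 * 1000);       // TTL: expire events after 24 hours (ms)

        session.createQueue(attr);
        nSessionFactory.close(session);
    }
}
```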

Hi Engin,

Thank you for the response. It's a queue, and the triggers are always enabled for it.

Auto maintenance is also set to TRUE in the properties, but the used space still keeps increasing until 100% of the box's disk space is consumed.

Hey,

Have you managed to review the content around spindle size (you can see the default of 50000 in the image above)? Lower that to 1000; in most use cases we find this really helps.

Joshua

That certainly sounds like a bug then. I recommend installing the latest fixes and then, if that doesn't help, reaching out to Empower support. The version you are using is not that new; if it is a bug, it should have been fixed by now.

@engin_arlak When the spindle size on mixed or persistent stores is greater than 0, multi-file stores are used, meaning auto maintenance was designed not to remove events. See the documentation on this restriction.

I have never used a multi-file disk store. The documentation doesn't explain how to perform maintenance on those files; it just says it won't work. If you can't use a reliable channel or a single-file disk store, I recommend creating a ticket. Support will know how to perform maintenance, and if they don't, they will ask R&D.

Sure, thanks I’ll try that.

In the image you attached earlier, where the spindle size was greater than 0, you had multi-file stores enabled. See this page on how and when events and files are removed with multi-file stores.

To summarize: maintenance routines used to rewrite entire files, which causes pauses, and with large stores, noticeable pauses, so spindles, i.e. multi-file stores, were introduced to address those concerns. They append to the end of a file until a certain number of events have been added and/or, in 10.15, until a certain file size is exceeded, at which point a new file is created. Once all events in the older file, which is no longer being written to, are marked for deletion, that file is removed, without needing a maintenance cycle.


OK, that's good to know. But why is the OP having issues with files getting larger, then, if no maintenance is required? AFAIK mixed-type queues shouldn't write anything to disk as long as the message is consumed. The OP says there are no zombie durables and no events waiting in the queue. Would reducing the spindle size help?

With the files getting large, it's down to the product defaults. Up to 10.15 a single file is appended to until the spindle size is reached and only then is a new spindle created, so reaching large file sizes is simply a factor of message content size times spindle size. That is why I suggested earlier lowering it to something like 1000, given the exceptionally large file size being reached. In 10.15+ there is an extra measure that uses file size or spindle size to decide when to stop appending to a spindle and create a new one.
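As a rough illustration of that arithmetic (the average event size is an assumption, purely for the example, and per-event storage overhead is ignored):

```java
public class SpindleSizeEstimate {
    // Worst-case spindle file size is roughly average event size times events per spindle
    static long worstCaseSpindleBytes(long averageEventBytes, int eventsPerSpindle) {
        return averageEventBytes * eventsPerSpindle;
    }

    public static void main(String[] args) {
        long avgEvent = 64 * 1024; // assume 64 KB log events, illustrative only
        System.out.printf("default (50000/spindle): ~%d MB%n",
                worstCaseSpindleBytes(avgEvent, 50_000) / (1024 * 1024)); // ~3125 MB
        System.out.printf("lowered (1000/spindle):  ~%d MB%n",
                worstCaseSpindleBytes(avgEvent, 1_000) / (1024 * 1024));  // ~62 MB
    }
}
```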

Queues, both MIXED and PERSISTENT, always write to disk in all releases, as we need durability while transacting the event and do not depend on consumers to hold events. Channels (topics) that have no consumers and use the JMS engine will drop events and not write to disk, as they do not have such semantics.

Yes, as I stated in my first response, lowering the spindle size will resolve this issue given all the information we have been provided, and this kind of problem is why we added a feature in 10.15 to stop it occurring.
