Broker runs out of memory

We are having an issue in production where the Broker process exits with an out-of-memory error during a memory allocation request. The error does not make clear which memory pool is being exhausted, although I suspect it is the memory footprint of the Broker process itself at the OS level. I have run some tests to re-create the issue and have only managed to exhaust the Data Storage limit, which throws a different error. Does anyone know what may be causing the Broker process to exit with an out-of-memory error? Large document sizes? Volume related? What is the default memory footprint for the Broker process, and how can it be adjusted upwards?

Thanks
Peter

Need more details. Broker version? Any fixes? OS vendor and version? If OS is *Nix, what are the ulimit settings?

Mark

A large number of undelivered volatile documents could cause an out-of-memory condition. There should be a core dump and hopefully an exit code. You can cross-reference the exit code (though it will likely just say Out of Memory or some such). You can send the core dump to wM for analysis.

Also, if client connections build up and are never destroyed you may run out of memory, but in that case you would typically see performance degrade before the crash (unless there was a huge burst of client connections). This was a much more common cause back when it was fashionable to connect your own clients to the Broker.

Of course, if it is a windows machine, and it has been up longer than 4 weeks - reboot…
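If the box turns out to be Solaris (or another Unix with the proc tools), you can usually take a quick look at the core yourself before or while wM analyzes it. A rough sketch, assuming the core landed in the Broker's working directory (the path is just an example):

$ cd /path/to/broker/data        # wherever the core file was written (example path)
$ file core                      # confirm which binary produced the core
$ pstack core                    # thread stacks at the moment of the crash
$ pmap core                      # address-space map recorded in the core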

Here are some more details as requested. Running on Solaris 8. Ulimit settings are:

core file size (blocks) unlimited
data seg size (kbytes) unlimited
file size (blocks) unlimited
open files 4096
pipe size (512 bytes) 10
stack size (kbytes) 8192
cpu time (seconds) unlimited
max user processes 29995
virtual memory (kbytes) unlimited

awbroker -version output is: 6.5.0.2.730 122005 SP2

The crash did produce a stack trace, which I sent to wM for analysis. Still waiting to hear back. Thanks

I believe that the stack size setting is too small and should be bumped up to “unlimited” or at least much larger than 8192K. File size is also questionable.
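Keep in mind that ulimit changes only stick for the shell that sets them, so the usual trick is to raise the limits in whatever script launches the Broker, right before it starts. A minimal sketch, assuming a wrapper start script (the launcher path is just an example, not the actual product script):

$ ulimit -s unlimited            # raise the soft stack limit (the hard limit must allow it)
$ ulimit -n 8192                 # optionally raise the descriptor limit as well
$ ulimit -a                      # verify the new limits took effect
$ /opt/webmethods/broker/bin/start_broker.sh    # example launcher; substitute your own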

Mark

We have Broker on Solaris 8… our ulimit…

$ ulimit -a
time(seconds) unlimited
file(blocks) unlimited
data(kbytes) unlimited
stack(kbytes) 8192
coredump(blocks) unlimited
nofiles(descriptors) 8192
vmemory(kbytes) unlimited

Out of memory… hmmm… I do not recall seeing an out of memory on the broker before… The broker uses swap (disk) more than actual memory in my experience.

The above makes sense… it would also hold if you have Brokers in a territory and for some reason events couldn't be exchanged between them (especially admin-related events).

Hmm… I know I’ve never seen client connection numbers cause an out of memory error on the broker.

FYI… we usually built ‘medium’ brokers, a size that allocates 512 MB of swap.
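If you want to see how much of that the Broker is actually using, the stock Solaris tools are enough. A quick sketch (the pgrep pattern is a guess at the process name, and <pid> is a placeholder; adjust to whatever ps shows on your box):

$ swap -s                        # system-wide swap reserved vs. available
$ pgrep -l awbroker              # find the Broker server's process id (name is a guess)
$ prstat -p <pid> 1 1            # one sample: SIZE and RSS columns for the process
$ pmap -x <pid> | tail -1        # last line totals the address space and resident set in KB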

One thing of note is that the open files setting is a bit deceptive. I believe that in Unix the “open files” count includes any running process; that is, running processes count against your “open files” limit.
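For what it is worth, you can also count the descriptors the Broker actually holds and compare that against the nofiles limit. A quick sketch (<pid> is a placeholder for the Broker's process id):

$ pfiles <pid> | grep -c 'S_IF'  # one line per open descriptor, so this is a rough count
$ ls /proc/<pid>/fd | wc -l      # same count taken from /proc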