I have a scenario where we have two Broker Monitors, Broker Monitor (A) and Broker Monitor (B), running on two different nodes (boxes), with multiple Broker Servers under them. We have Sun clustering (hardware) between these two nodes, and both use common directories, i.e. a global file system.
Now, if I create a Broker Server on one node, it is added to the Broker Monitor (A) of that node. If Broker Monitor (A) goes down, or that newly created Broker Server goes down, the Broker Server does not fail over (come up) on the other node under Broker Monitor (B) until we restart Broker Monitor (B). I assume that the Broker Monitor keeps the awbrokermon.cfg parameters in cache and doesn't refresh that cache until you restart the Broker Monitor.
My question is whether there is any command or script to refresh the cache without restarting the Broker Monitor.
I may not quite understand the details of the Sun clustering, but if node B is the passive node, why would Broker Monitor be running? Wouldn’t it be started on fail-over only and therefore pick up the current awbrokermon.cfg?
We keep both nodes up all the time and distribute load across both, i.e. multiple ISs and Broker Servers on each node, although the ISs and Broker Servers share the same directory. Let me put it this way: if we have 4 ISs (IS-A, IS-B, IS-C and IS-D) and 4 Broker Servers (Broker-Server-A, Broker-Server-B, Broker-Server-C, Broker-Server-D), we run 2 ISs on one node and 2 on the other, and the same goes for the Broker Servers.
So that is why we run a Broker Monitor on both nodes. What I am trying to accomplish is this: when I create a Broker Server on node1, the cache of Broker Monitor B on node2 should reload/refresh automatically without a restart, since the cache of Broker Monitor A on node1 is refreshed automatically.
I assume you’ve never run into any issues with this arrangement? When BRK-A goes down, how do the IS instances connected to it reconnect to another Broker? How are documents not lost? If all the Brokers are using the same storage files, do they not interfere with each other?
I know this doesn't help with your question, but I'm surprised at the cluster arrangement you've described. I must be missing something, because it seems to me like it shouldn't work. Two Brokers managing the same queues seems problematic. What am I missing?
We put the virtual IP (which is the same for both Broker Servers) in the Broker settings. We don't lose any documents because we use the same Broker directory on both nodes, i.e. the Broker Servers on both nodes point to the same directory. And no, we haven't faced any conflicting issues yet.
P.S. I know I am taking too long for each reply, but I am on an international project in the Middle East, and when it's day here, it's night in the US.
No worries on the time it takes for the posts. Just the nature of the medium!
I’m really surprised that an active/active Broker cluster works without conflict. Normally in a cluster arrangement, using failover from a failed active to the passive node, only one node mounts the common disk/volume. I’m not sure of the role of the Sun cluster in this arrangement. What is it actually failing over? Is there a load balancer providing the VIP? Is it always routing to one of the servers except in case of error?
Sorry I'm still not addressing your question (I've not found anything that suggests a reload of awbrokermon.cfg without a restart), but your arrangement has me intrigued. Can you share specifics? Obviously I don't expect you to reveal any confidential info, but if the info is shareable I'd really like to know more.
I am sorry if I wasn't clear earlier: it is active/passive clustering. Broker Server A runs on only one node at a time. It's just that we want it to fail over to the second node if it goes down. But if it is a newly created Broker Server, and the Broker Monitor on the other node was never restarted after that Broker Server was created, then the Broker Server doesn't restart on the second node.
That was the whole reason we wanted to refresh the config file in the cache without restarting the Broker Monitor.
Hope this clears your confusion.
I've run into this same issue. Depending on which version you are running, you have a couple of choices. With 7.1.x you shouldn't have this issue anymore, since you can run multiple Broker Monitors on the same host. In that situation you can fail over an entire active Broker Monitor plus its Broker to the other node, leaving whatever Broker Monitors and Brokers are on node B intact.
If you are still on 6.5, you might need to get more creative. I was using Veritas Cluster Server plus my own custom Broker scripts to manage that issue. In a nutshell, the scripts had to manage moving the Broker Server configs in and out of the master config file and getting the Broker Monitor to recognize the change. One of the keys there was adding a parameter to the awbrokermon.cfg file that kept the Broker Monitor from auto-starting the Broker Servers: Monitor-start-servers=no.
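For reference, a minimal sketch of what that setting looks like in awbrokermon.cfg (only the Monitor-start-servers parameter is from this thread; the comment lines are illustrative, and any Broker Server entries in your own file would appear alongside it):

```
# awbrokermon.cfg (sketch -- only Monitor-start-servers is confirmed here)
# Keep the Broker Monitor from auto-starting registered Broker Servers,
# so the cluster failover scripts decide which node runs each one.
Monitor-start-servers=no
```

With this in place, the cluster scripts (rather than the Broker Monitor) are responsible for starting the Broker Server on whichever node currently owns the shared storage.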