Getting the Broker to bind to a single network interface

It seems the default behaviour of the Broker (or perhaps the Broker Monitor) is to bind the broker process to its port on all network interfaces of the server. Below is the output from netstat -an on Solaris that shows my Broker bound to all interfaces on port 6849:

  *.6849               *.*                0      0 49152      0 LISTEN 

Ideally, I’d really like it to listen on a single IP address only. The reason is that I might want to have this IP fail over to another machine – but if that machine also has a Broker running on port 6849, this won’t work.
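For contrast, what I’d like to see from netstat on this box is the Broker bound to just its service IP, something along these lines (the address is made up purely for illustration):

  10.0.0.1.6849         *.*                0      0 49152      0 LISTEN 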

Sure, I could have every Broker use a different port, but that seems inane, is tough to keep track of, and isn’t very scalable.

Can anyone help me in configuring the Broker to only bind to a specific IP address on the host?

Thanks.

Hi Fred,
The awbrokermon process controls all of the brokers. Its port and the way it binds to an interface are out of our control: the port is hardcoded, and it binds to the first available real IP address.

What type of clustering are you planning on doing? We have implemented a couple of different solutions with the broker that work pretty well.

markg
http://darth.homelinux.net

Thanks Mark.

My question isn’t about the Monitor process, though; it’s about the actual Broker process itself. My understanding is that it’s the Broker (awbroker) that binds to the port, not the monitor.

We’re using Sun Cluster to achieve high availability with the Brokers. When the service fails over, Sun Cluster sends the IP address and the SAN mount as a group over to another node in the cluster.

Each node in the cluster is the master for a single Broker (each with its own service IP and mount from the SAN backend), but we’d like each node to be able to take over the Brokers from failed nodes as long as it has the IP address and disk for that Broker.

As it is, if the Broker can’t bind to a single specified interface, we need to run all of our Brokers on individual ports so there’s no collision in the event of a failover. If we could lock things down so that, for example, Broker1 always binds to 10.0.0.1:6849 and Broker2 always binds to 10.0.0.2:6849, that would be ideal.
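To make that concrete: if node A failed and node B picked up node A’s service IP and SAN mount, I’d want netstat -an on node B to show both Brokers sharing port 6849 without colliding, because each one is bound only to its own address (the IPs here are examples only):

  10.0.0.1.6849         *.*                0      0 49152      0 LISTEN 
  10.0.0.2.6849         *.*                0      0 49152      0 LISTEN 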

I hope all of that makes sense.

Fred,
First, my apologies if you already know all this, but others may not. To review: the Broker is made up of at least two processes, the awbrokermon, which is the master controlling process, and the individual broker servers. Both bind to ports. The awbrokermon port is fixed at 6850 and cannot be changed. The individual broker servers can be set to any available ports; 6849 is the default for the first broker server created, but you can change that. Each broker server can have multiple brokers associated with it.
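Just to summarize the port picture:

  process          port    notes
  awbrokermon      6850    fixed, cannot be changed
  broker server    6849    default for the first broker server; each
                           additional one gets its own configurable port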

There are lots of ways to cluster the broker, and it really depends on how you want your cluster to run: active/active, active/passive, etc. It also depends on whether you want to fail over an individual instance of an IS server and its associated broker, or whether it is all or nothing. I’m assuming you run the IS and broker on the same box; your implementation may be different. If not, your IS would be hitting the broker via a virtual IP address, which would switch along with the broker on failover.

As an example, assume you want an active/active cluster (not load balanced), with each node running multiple separate instances of IS and broker (we currently run four on our pair of Sun boxes). For the broker piece, you would configure the awbrokermon process to run on the local file system of each node. The actual broker server would run on a shared file system, obviously mounted by only one node at a time. The awbrokermon process is responsible for running the active broker servers for its node via its configuration file, and your cluster software is responsible for failing an individual broker server over to the other node and back. Each unique broker server has its own port. On a failover of a broker server (not the awbrokermon, which is already running on the other node), the cluster software runs shell scripts plus the webMethods-provided configuration scripts to stop the individual broker server and remove it from the awbrokermon configuration file, then mounts the shared file system on the other node, switches the virtual IP address, and adds the configuration to the running awbrokermon process, which then starts the broker server process. A rough sketch of that sequence is below.
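Very roughly, the takeover half of that sequence looks something like the sketch below. The mount point, interface name, netmask, and IP are placeholders, and the webMethods configuration-script step is left as a comment because the exact commands depend on your release; this only shows the ordering, not exact syntax.

  #!/bin/sh
  # Sketch of the "takeover" half of a broker-server failover, run on the surviving node.
  # All names below (mount point, interface, netmask, IP) are placeholders for whatever
  # your cluster and webMethods install actually use.

  BROKER_DATA=/sanvol/broker1      # shared file system holding this broker server's data
  VIP=10.0.0.1                     # virtual IP clients use to reach this broker server
  NIC=hme0                         # physical interface to plumb the virtual IP onto

  # The releasing node has already stopped the broker server, removed it from its
  # local awbrokermon configuration (via the webMethods-provided configuration
  # scripts), unplumbed the virtual IP, and unmounted the shared file system.

  # 1. Mount the shared file system (assumes an entry in /etc/vfstab).
  mount "$BROKER_DATA" || exit 1

  # 2. Plumb the virtual IP as a logical interface on this node.
  ifconfig "$NIC" addif "$VIP" netmask 255.255.255.0 up || exit 1

  # 3. Add this broker server to the already-running awbrokermon configuration,
  #    using the webMethods-provided configuration scripts for your release;
  #    awbrokermon then starts the broker server on its own unique port.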

There are more details I left out around dependencies/relationships in your clustering software, startup order, primary/secondary nodes, etc. In practice this is pretty easy to set up and is reliable. Keeping up with the ports is really not a big issue, for us at least. It is pretty flexible in that you can run active/active or active/passive, and it lets you fail over individual instances without affecting the others. It also scales pretty well. But like I said, there are many, many ways to do this; this is just a suggestion that has worked for us. Your mileage may vary.

I’ve talked to webMethods support in the past about binding to specific IP addresses and interfaces; they did not have a solution for that. It’s really more important in the IS layer, which uses hostnames in its error logs, etc. On failover you can lose visibility into those without some behind-the-scenes work.

Hope this helps

markg
http://darth.homelinux.net