Broker clustering / IS load-balancing question


webMethods version: 6.1

  1. IS in load-balanced (active-active) mode.
  2. Broker clustered in active-passive mode.

I was going through the webMethods clustering documentation and have a question:

How many physical servers do I need? Is it required to have two servers for IS load balancing (active-active) and two more physical servers for Broker clustering?
This is what webMethods recommends, but I would like to know whether it is possible to configure IS load balancing and Broker clustering with fewer servers.

Has anyone done this kind of setup with fewer physical servers?

I would really appreciate some input on this.

Thanks in advance.


One IS server and one Broker for an active/passive configuration. The IS server and Broker are installed on shared storage. Veritas or some other cluster software handles mounting the storage, the virtual IP, etc. When node A dies, Veritas fails over to node B; node B mounts the storage and starts the IS server and Broker Server.
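That failover sequence can be sketched in shell. Everything below is illustrative: the virtual IP, volume, and install path are made-up examples, and a real Veritas/HACMP service group performs these steps through its own agent scripts, so this script only echoes them:

```shell
#!/bin/sh
# Dry-run sketch of an active/passive failover (assumed names and paths).
RUN=echo   # print each step instead of executing it

failover_to() {
  node=$1
  $RUN "plumb virtual IP 10.0.0.50 on $node"
  $RUN "mount shared storage /dev/vx/dsk/wmdg/wmvol on /opt/webmethods"
  $RUN "start Broker Server on $node"
  $RUN "start Integration Server on $node"
}

# node A has died; bring everything up on node B
failover_to nodeB
```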

The very nature of an active/passive cluster requires two physical servers; otherwise it wouldn’t make any sense to do it. If you want to add load-balanced IS instances, then you can have an instance on node A and node B talking to the same Broker Server.

There are other variations on this depending on your needs.

I need to have an IS cluster (active-active) and the Broker clustered (active-passive).

The webMethods documentation says the best architecture uses four servers.

Two physical servers for the Integration Server cluster (using IS clustering, no third-party solution).

Two physical servers for Broker clustering. A Broker cluster requires:
– shared storage
– a single virtual name for both servers
– third-party clustering software (e.g. Microsoft Cluster Server)
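As an illustration, the first two of those prerequisites could be sanity-checked with a small pre-flight script; the mount point and virtual hostname below are invented examples, not real webMethods paths:

```shell
#!/bin/sh
# Hypothetical pre-flight checks for the Broker cluster prerequisites above.
check() {
  desc=$1; shift
  if "$@" >/dev/null 2>&1; then echo "OK   $desc"; else echo "FAIL $desc"; fi
}

# 1. shared storage mounted where the Broker data directory will live
check "shared storage mounted" test -d /shared/broker/data
# 2. the single virtual name resolves (both nodes must answer to it)
check "virtual hostname resolves" getent hosts broker-vip
```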

I have not found any other architecture for this solution.

I am looking for a solution with fewer servers and a different distribution of components.

You need two servers for what you are trying to do. The Broker Servers can reside on the same boxes as the Integration Servers. It would certainly benefit webMethods if you bought four boxes $, but you don’t need that many.

Thanks for the response.


The major advantage of the Broker Servers being on different machines is that if a Broker Server fails, it doesn’t bring down the Integration Server along with it.

The Broker Servers are software. If they fail, you restart the process, not the entire physical server. Running them on the same box as IS does not take down the IS server along with them. Now a note about failure of the Broker Server: Broker Servers rarely fail, if at all. The IS server, on the other hand, has its challenges from time to time. But then again, there is no need to restart the physical server, just the process.

If you have to restart your physical server to clear up a software problem, then you might want to take a look at your OS vendor. There may be some opportunities there. :wink:

One thing I would like to add is that you need a load balancer in front of the IS servers. The webMethods load-balancing option doesn’t give you high availability, because one IS server just acts as the load balancer. The internal load-balancing option is not recommended by webMethods, and they are not providing it in 6.5.
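For illustration, the kind of HTTP health probe an external load balancer would run against each IS node might look like this. The hostnames are assumptions; `/invoke/wm.server/ping` is IS’s built-in ping service, though depending on your ACL setup the probe may need credentials:

```shell
#!/bin/sh
# Sketch of an HTTP health probe an external LB would run per IS node.
probe() {
  host=$1; port=${2:-5555}   # 5555 is the default IS primary port
  if curl -s -f -m 5 "http://$host:$port/invoke/wm.server/ping" >/dev/null 2>&1
  then echo "$host: UP"
  else echo "$host: DOWN"
  fi
}

# assumed node names
probe is-node-a
probe is-node-b
```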

Here’s my two cents worth:

  1. You will need a minimum of two physically separate servers for the Broker. The Broker is clustered using hardware clustering like Veritas. On AIX, I’ve used HACMP from IBM. Works like a charm. You will have one hot node and one cold node. When the hot node fails, the IP address from box one gets assigned to box two.

  2. A minimum of three boxes should provide IS clustering capability. If one box drops out, you still have a cluster. Agreed, two will provide clustering capability; however, if one drops out due to overloading, what will the remaining server do with twice the load? IS binds each server to an IP node, and the processes are bound to the server/node combo.

  3. Because of the hardware cluster, you cannot run IS on the same box as the Broker.

  4. We have always used an external load balancer like BIG-IP. Works great. I never use the IS.

  5. Bonus comment: You can cluster the workflow server on the same box as the Broker for the same reason.

Did you mean that you never use the IS software cluster?

Also, can you elaborate on why you don’t recommend putting an IS node on one of the broker server nodes? Pretty sure I know your reasoning here, but wanted to clarify.


Ray, are you working directly for webMethods now? :slight_smile: The architecture you are recommending is certainly nice and will work well. However, you can do (and we have done) the same thing on two servers. Brokers and IS can mingle. Your point about the third node is a good one, but most implementations can scale vertically on the two servers, i.e., each has the capacity to assume the other’s load in a hardware failure situation if you size it correctly. It does put you at risk if the second box were to fail, but that is a cost/risk trade-off that has to be evaluated.

Two IS instances really are sufficient (with an external load balancer). The instances do indeed need a bit of breathing room in case one of them fails. The key is to do a proper load analysis and build enough instances to handle the load in case of failure (a four-engine plane can fly on just one engine, at least for a little while).
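A back-of-the-envelope version of that load analysis (the traffic figure is made up): with N instances sized for a cluster-wide peak of P, losing one instance leaves each survivor carrying P/(N-1):

```shell
#!/bin/sh
# N-1 sizing arithmetic: what each surviving instance must absorb.
peak_tps=300   # assumed cluster-wide peak transactions/sec

headroom() {
  n=$1
  per_survivor=$(( peak_tps / (n - 1) ))
  echo "$n instances -> each survivor must absorb $per_survivor tps"
}

headroom 2   # the lone survivor takes the full peak
headroom 3   # each of two survivors takes half
```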

They can indeed coexist on a single box. To work, they must use independent IP addresses/hostnames. Thus, HACMP (or whatever your fave OS clustering solution is) can cut the Broker IP/hostname over to another box and leave the IS IP alone. We’re doing this at my current project (it was in place before I got here) and it works just dandy.
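Concretely, the independent addresses might look something like this in DNS or /etc/hosts (the names and IPs are invented); only the Broker entry floats on failover:

```
10.0.0.11  nodeA        # physical node A - IS instance 1 binds here
10.0.0.12  nodeB        # physical node B - IS instance 2 binds here
10.0.0.50  broker-vip   # floating Broker address, moved by HACMP on failover
```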

IMO, no one should ever use the IS-supplied load balancing. It becomes a single point of failure. External LB solutions are the way to go.

I have an architecture where I have the Broker and IS cluster on the same host. :cool:

We are running on Solaris (8) and have the Brokers in the typical HA active/passive setup on a two-node hardware cluster. … AND … I have IS instances, in a software cluster, running on each hardware node as ‘normal’ apps. Meaning, if there is a hardware failure, the Brokers fail over and the IS instance on that node is unavailable (down).

So during an ‘issue’ our IS cluster is down one instance, but overall our service is still functional…