Multiple IS's on a single machine: pro's and con's

Hi all,

Here is the question I have:

I know it’s possible to install multiple instances of the Integration Server on one physical machine by
installing them in different directories with different configurations (i.e. ports). For the project I am
currently working on, the environments need to be identical (ports etc.), so I was thinking of a solution with
multiple network cards and the watt.server.inetaddress setting to bind each Integration Server to its own
network card, separating the environments while keeping the ports identical. I have read some articles on the
Advantage forum and elsewhere about outbound traffic, which can cause problems with this setup.
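
As I understand it, the reason this can work at all is that two processes can listen on the same port number as long as each one binds to a single local address instead of all interfaces, which is what watt.server.inetaddress is supposed to enforce per instance. A rough illustration in Python (this is obviously not webMethods code, and the 10.0.0.x addresses are just made-up NIC addresses):

    # Not webMethods code -- just shows that the same port can be used twice
    # when each listener binds to one specific local IP instead of 0.0.0.0,
    # which is the idea behind setting watt.server.inetaddress per instance.
    # The 10.0.0.x addresses below are made-up examples for two NICs.
    import socket

    def listen_on(ip, port=5555):
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        s.bind((ip, port))   # one specific interface, not all of them
        s.listen(5)
        return s

    # Conceptually, each IS instance does the equivalent of one of these:
    # listen_on("10.0.0.11")   # instance 1 bound to NIC 1
    # listen_on("10.0.0.12")   # instance 2 bound to NIC 2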

My question is whether there is any additional documentation on the pro’s and con’s of installing multiple
Integration Servers on one physical machine.

I can think of 3 ways to install the instances:

  • 2 IS, 1 machine, separate folders, different configuration (i.e. ports)
  • 2 IS, 1 machine, separate folders, 2 NICs, identical configuration, each IS bound to its own NIC
  • 2 IS, 1 machine, separate virtual partitions (own environments, e.g. vPar)

Any information would be appreciated.

-Jordy

Jordy,
I’m not aware of any docs on the pro’s and con’s. Running multiple instances with different configs is pretty much a no-brainer on servers that scale well vertically, such as Solaris and HP-UX. We have multiple instances on an active/passive Solaris cluster and that works very well for us. We really haven’t run into any problems with that configuration.

The virtual partitions would work as well, although I would question whether you really need them. Separating the IS instances already isolates them from one another. Unless you just want the entire system isolated, in which case it would work.

I would be a bit hesitant about exactly identical configs with just separate NICs. I don’t know that this would actually work; there would have to be some configuration differences, and that would probably defeat your intended purpose. What is that purpose, exactly?

Of course webMethods would like to see you distribute it out to multiple servers, with the adapters sitting closer to the sources and targets. There are a lot of arguments for and against the distributed architecture: more complex to manage, but more resilient to failure and with a distributed load, etc. Plus it doesn’t hurt webMethods’ share price either.

Thanks Mark.

When you say you are using a Solaris active/passive cluster, does that mean that the two instances are not running at the same time, but are used as failover for each other? What about active/active?

In my case I just need two instances running at the same time (Integration Test/User Acceptance Test), and the configuration should be the same wherever possible. Since we are restricted to one machine, I was looking at ways to separate the environments as much as possible, because the situation could occur where UAT has to act as production (the chosen failover mechanism).

-Jordy

Yes, active/passive means that only one physical server is running the multiple IS instances at a time. And yes, you can do active/active in a variety of ways.

Your failover requirements will help determine the architecture. How will you keep UAT in sync with your production environment? Keeping the packages in sync is pretty easy, but everything else is a bit more challenging, i.e. Broker logs, queues, database, transactions, etc. These are all pretty dynamic. You could use the built-in cluster capability of IS and run your production instance and your UAT instance as a cluster, but that would make your UAT instance active as a production instance. Probably not the best way to go; your testing could interfere with production.
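
For the package side of it, something as simple as replacing the package directories on the UAT install with copies from production (or publishing the packages through the IS admin tooling) keeps the code in sync. A rough sketch of the file-copy approach, with made-up install paths and a made-up package name:

    # Crude package sync: replace a UAT package directory with the production
    # copy. Paths and the package name are assumptions for illustration; in
    # practice you'd more likely publish packages through the IS admin tools.
    import shutil
    from pathlib import Path

    PROD_PACKAGES = Path("/opt/prod/IntegrationServer/packages")  # assumed path
    UAT_PACKAGES = Path("/opt/uat/IntegrationServer/packages")    # assumed path

    def sync_package(name):
        src = PROD_PACKAGES / name
        dst = UAT_PACKAGES / name
        if dst.exists():
            shutil.rmtree(dst)       # drop the old UAT copy wholesale
        shutil.copytree(src, dst)    # copy the production package over

    sync_package("MyOrdersPackage")  # hypothetical package name

The Broker queues, database state, and in-flight transactions are exactly what a copy like this does nothing for, which is why they are the harder part.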

You could use an active/passive cluster if you don’t need immediate failover capabilities. It takes about 3 to 5 minutes for the failover to occur, depending on the size of the IS instance and the horsepower of the machine. You would need a shared disk between them. Your UAT environments could then run independently of the passive, inactive failover IS instance. The UAT environments would typically be shut down on failover, and the production IS instance that resides on the shared disk would be brought up.
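
The shape of that failover is roughly the following. The cluster software (Sun Cluster, VCS, etc.) does this for real; the host name, port, paths, and start/stop commands below are just assumptions for illustration:

    # Rough sketch of the active/passive failover flow described above.
    # "prod-is-active", the install paths, and the start/stop scripts are
    # made-up; a real cluster product handles this, not a hand-rolled loop.
    import socket, subprocess, time

    def is_alive(host, port=5555, timeout=5):
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False

    while True:
        if not is_alive("prod-is-active"):
            # Free up the box: stop the UAT instances first...
            subprocess.run(["./shutdown.sh"], cwd="/opt/uat/IntegrationServer/bin")
            # ...then bring up the production IS instance from the shared disk.
            subprocess.run(["./server.sh"], cwd="/shared/IntegrationServer/bin")
            break
        time.sleep(30)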

Bottom line, there are several ways to accomplish this. It really depends on your requirements for failover.

markg