We want to define a high-availability environment with the My webMethods portal.
Reading the documentation, it states that:
"A My webMethods Server cluster is an active/active environment in which multiple server instances run at the same time, sharing the same file system and My webMethods Server database. The purpose of a My webMethods Server cluster is to achieve high scalability by distributing the workload among multiple servers. This model is different from an active/passive environment, which achieves high availability through the use of a standby server."
Reading this, and as my interest is in having an HA environment, I would like an active/passive solution. But the documentation does not explain how to set this up.
Has anyone implemented My webMethods with HA?
An active/active HA environment is superior to an active/passive HA environment, in my opinion. There are various benefits to active/active, one of which is that when an outage of one of the servers occurs, there is no delay in switching to the other server. Traffic simply starts being routed to the available server(s), bypassing the down server.
Is there a specific reason you want active/passive?
The problem is that active/active does not guarantee HA.
It depends on a master machine: if the master machine crashes, the others also crash, because the code is located on the master machine.
All servers access the files on the master machine.
webMethods says:
“The purpose of a My webMethods Server cluster is to achieve high scalability by distributing the workload among multiple servers.”
Here we are not after high scalability at this time, but HA.
If we could install several My webMethods instances that all access a central data folder (or database), that would be excellent for us. But the solution webMethods suggests is different.
As I read the My webMethods cluster architecture, I understand that if the master crashes, all My webMethods instances crash. Am I wrong?
Nothing you’ve quoted or stated here indicates that there is a master / slave relationship between MwS nodes. What leads you to that conclusion?
An active/active configuration with a hardware load balancer should provide both HA and horizontal scalability. Of course, the load balancer, network, and database server availability must also be addressed, but those are dependencies rather than wM issues.
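To make the load-balancer point concrete, here is a minimal sketch of the failover decision a balancer makes: probe each node and route to the first healthy one. The node names, port, and health probe below are assumptions for illustration, not part of any wM product; a real hardware balancer does this continuously on its own.

```shell
#!/bin/sh
# Sketch of active/active failover routing: try each candidate node in
# order and pick the first one whose health check passes. Node names
# and the probe are hypothetical placeholders.

pick_backend() {
    # $1 = health-check command (given a node name, exits 0 if healthy),
    # remaining args = candidate nodes in preference order
    check="$1"; shift
    for node in "$@"; do
        if "$check" "$node"; then
            echo "$node"
            return 0
        fi
    done
    return 1   # no node is healthy
}

# A real probe might hit the MwS HTTP port (the port is an assumption):
http_probe() {
    curl -sf "http://$1:8585/" >/dev/null
}
```

The point is only that traffic flows to whichever node is up, so one node's outage does not interrupt service; there is no switchover delay as with a cold standby.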
My apologies. I was talking active/active and active/passive in the general sense without investigating MwMS in detail.
IMO, the “active/active” moniker mentioned in the doc is inaccurate. Active/active generally implies that both machines are capable of doing exactly the same work and that traffic can be routed to either of them and each will do the necessary job. For MwMS, they cannot be structured this way since many of the roles of MwMS can only be assigned to one server.
So if the server assigned the search role goes down, search goes bye-bye–probably negatively impacting the entire MwMS environment if not outright crippling it.
So an MwMS cluster provides nothing in the way of HA. The approach to providing scalability seems suspect too–why divide things up such that only one server can provide that role? Seems odd but undoubtedly there are technical constraints that either couldn’t be overcome or they didn’t have time to overcome. Bummer.
I don't know if wM supports it, but to get HA you'll need to resort to OS-level facilities. The MwMS docs don't describe how to do that, but maybe something exists on Advantage or is available from support. Or, as is required for Broker HA, you may need to engage wM PS to set this up. It should be pretty straightforward: identify what needs to be on a device that both the active and passive servers can access, and put it there. Then use the OS facilities to switch from active to passive and mount the device on the newly active server.
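The active/passive switch described above can be sketched as a small script run on the newly active node. Everything here is an assumption for illustration: the device path, mount point, and MwS start command are placeholders, and a real deployment would drive this from clustering software rather than a hand-run script. The sketch defaults to a dry run that only records what it would do.

```shell
#!/bin/sh
# Sketch of the failover step on the newly active (formerly passive)
# node. All paths and commands are hypothetical placeholders.

SHARED_DEV="/dev/mapper/mws_shared"      # shared device both nodes can reach
MOUNT_POINT="/opt/mws/shared"            # where MwS expects its files
MWS_START="/opt/mws/server/bin/mws.sh"   # hypothetical MwS start script
DRY_RUN="${DRY_RUN:-1}"                  # default: only record the commands

ACTIONS=""
run() {
    ACTIONS="${ACTIONS}${*}; "
    if [ "$DRY_RUN" = "1" ]; then
        echo "WOULD RUN: $*"
    else
        "$@"
    fi
}

# 1. Take over the shared device holding the MwS files.
run mount "$SHARED_DEV" "$MOUNT_POINT"
# 2. Start MwS on this node against the shared files.
run "$MWS_START" start
```

The same two steps in reverse (stop MwS, unmount) would run on the failed node if it is still reachable, which is exactly the fencing/switchover job that OS clustering facilities automate.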
Other side notes:
Actually, the “master” is a conceptual item only. There is no “real” master machine. From the doc:
“The Master server role represents the default server…There is no actual Master server role in a clustered environment.”
The binaries do not need to be located on the master machine. They can be located on any network device (file server, SAN, NAS, etc.). I’ve never investigated but I would try to put the executables on a device that is physically on or dedicated to each server. I would not share the binaries amongst servers as doing so reduces the ability to apply patches/fixes incrementally.