Build a “poor-man’s” facility to make sure just one server runs a given task at a time. A DB-table logical lock (the presence/absence/status of a record), a file-based semaphore, or any other type of “I’m running, you don’t need to” approach can work.
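As one illustration (not a webMethods API), a file-based semaphore can be sketched in a few lines of Java, assuming both IS instances can see the same lock file (e.g. on a shared mount). The class and file names here are hypothetical:

```java
import java.io.IOException;
import java.nio.channels.FileChannel;
import java.nio.channels.FileLock;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

// Hypothetical file-based semaphore: the first process to grab the
// OS-level lock "wins"; the other server sees the lock held and skips
// the task. Only works when both servers share the same filesystem.
public class FileSemaphore implements AutoCloseable {
    private final FileChannel channel;
    private final FileLock lock; // null if another process holds the lock

    public FileSemaphore(Path lockFile) throws IOException {
        channel = FileChannel.open(lockFile,
                StandardOpenOption.CREATE, StandardOpenOption.WRITE);
        lock = channel.tryLock(); // non-blocking; null = "someone else is running"
    }

    public boolean acquired() {
        return lock != null;
    }

    @Override
    public void close() throws IOException {
        if (lock != null) lock.release();
        channel.close();
    }
}
```

A scheduled service would then wrap its real work in `try (FileSemaphore sem = new FileSemaphore(...)) { if (sem.acquired()) { /* run task */ } }`. The DB-table variant is the same idea with an INSERT/UPDATE of a status row in place of the file lock.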
For notifications:
JDBC adapter notifications are really just scheduled tasks (a thread polls for activity), so do your own polling. Use one of the mechanisms above to ensure just one server is processing. This is exactly what the IS cluster does, so you’ll want to decide which to use (wM-supplied or your own custom facility) if notifications are a big part of your environment.
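The do-it-yourself polling pattern can be sketched like this, with the “am I the one that should run?” check (one of the lock mechanisms mentioned above) and the buffer-table fetch injected as functions. All names here are illustrative:

```java
import java.util.List;
import java.util.function.Consumer;
import java.util.function.Supplier;

// Sketch of hand-rolled notification polling: one poll cycle checks a
// lock first, then fetches and processes pending rows. The lock check
// and fetch are injected so this stays independent of any specific
// DB/file mechanism; this is not a webMethods API.
public class NotificationPoller {

    /** Runs one poll cycle; returns how many items were processed. */
    public static int pollOnce(Supplier<Boolean> iAmPrimary,
                               Supplier<List<String>> fetchPending,
                               Consumer<String> process) {
        if (!iAmPrimary.get()) {
            return 0; // another server holds the lock; skip this cycle
        }
        List<String> rows = fetchPending.get();
        rows.forEach(process);
        return rows.size();
    }
}
```

In practice this would be registered as an IS scheduled task, with `fetchPending` reading (and marking/deleting) rows from the notification buffer table.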
Okay, now the big question: how can I get the same behavior (i.e., two IS servers with client prefix matching subscribing, but only one getting the document) using JMS triggers? JMS does not use the client prefix configuration in the Broker settings.
Both IS 1 and IS 2 are processing the JMS triggers, so each document is handled twice across the two servers, unlike the local trigger behavior.
The JMS Adapter’s MessageListener notifications are not supported in a clustered environment. A new feature was planned to bring JMS capability into IS triggers; please check with Product Development whether it is available in 7.1.1.
Mark, Rob - What has your experience been with IS 7.1.2 and its use of Tangosol Coherence? Do you still feel that the potential issues caused by using IS’s clustering far outweigh the benefits? Or have you found that wM has fixed the bugginess/quirkiness of previous IS-based clustering?
I haven’t had the opportunity to use IS clustering with the new facilities. From the docs it seems that the underlying technology has changed but the basic features provided are the same. I think it still holds true that one needs to understand what is provided by IS clustering and what is not.
We are currently looking into implementing high availability for IS 7.1.2 and wanted to know whether clustering is a better option than, say, active/passive with some sort of mirroring replication at the OS level.
The idea is to be up 24x7 with as little downtime as possible. We don’t really need more processing power, but if clustering makes high availability easy it would be an option.
A load-balanced cluster would provide what you’re looking for. A 3rd party load-balancer in front of 2 or more identically configured IS instances will allow for planned and unplanned outages of any single IS instance. You’ll need to make sure your integrations function properly in such an environment.
Package replication does not rely on IS clustering. Critical to success is that each IS instance in the load-balanced cluster should have the same configuration and the same set of packages.
If you use a load-balanced cluster, you may need one or more features provided by IS clustering. The key is to understand what IS clustering provides and whether or not any of your integrations need any of those features.
We are currently planning on exploring package replication with an external load balancer.
The plan as it stands is to have two identical IS machines running and replicate packages between the machines (hopefully automatically through IS). Only one will get traffic (act as primary). In the event of a failure (IS or Hardware) we will reroute traffic to the second server. This should keep us within our SLA (we are still negotiating timeframes).
We only use IS and Trading Networks for document processing so we should be ok.
The secondary IS would be for emergency if the primary went down. Our transaction volume is low but we need very good up time. The idea is to have a secondary ready to become primary in less than a minute.
At this point we would not use IS clustering at all, as I am looking for a really simple solution to high availability. (Sorry, this discussion probably doesn’t belong in your cluster thread…)
Did not know that yet. I will try to track down the docs on this.
I read up on the scheduled tasks and I am pretty sure they can be controlled per node.
TN doesn’t call it clustering, but it is basically both IS instances sharing the same TN DB and being configured to notify each other when TN items change (profiles, doc types, etc.). Just wanted to mention that in case the lack of “TN cluster” in the docs causes confusion.
“I read up on the scheduled tasks and I am pretty sure they can be controlled per node.”
Yes. Indeed, the tasks on one node have no idea that another node even exists. So you’ll need to decide if the tasks on the secondary node can be enabled or not. Some of your integrations may be such that only one at a time can be running.
“the tasks on one node have no idea that another node even exists. So you’ll need to decide if the tasks on the secondary node can be enabled or not. Some of your integrations may be such that only one at a time can be running”…
If you set the scheduler to “Any cluster node”, then each IS node in the cluster knows the task is running on one of the nodes, so there is no chance of a duplicate thread running. That is the behavior I have seen on 7.1.2, and no issues were noticed with the “Any” setting.
Maybe my understanding of the above comment is wrong; sorry, just in case.
I’m interested in this topic. Currently we have a very simple architecture: two clustered Integration Servers. If I don’t cluster the two IS instances, how can I be sure that the scheduled tasks will not overlap? Basically, I don’t want the second IS to process the same file if the first IS hasn’t completed it yet.
Yet another approach is to implement a “poor man’s” cluster-aware facility (I’ve done this in a couple of places): modify your scheduled services to check a config file. In that file is a list of the IS instances, in order from primary to secondary, etc. When the scheduled service runs on a given instance, it loads that file. If the first name in the list matches its own name, it continues. If the first name doesn’t match, then it pings the other IS instance. If the other instance responds, the service stops. If the other instance does not respond, the service checks the next name in the list. When it finds its own name and all instances above it in the list have failed to respond, it continues to run.
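The decision logic above can be sketched as follows; the instance names are placeholders, and the “ping” is injected so you can plug in whatever liveness check you use (an HTTP GET against the other IS’s primary port, for example). This is illustrative, not a webMethods API:

```java
import java.util.List;
import java.util.function.Predicate;

// Sketch of the "poor man's" cluster check: walk the ordered instance
// list (primary first) from the config file. Run only if every instance
// listed above this one fails to respond to a ping.
public class PrimaryCheck {

    /** Returns true if this instance should run the scheduled task. */
    public static boolean shouldRun(String myName,
                                    List<String> orderedInstances,
                                    Predicate<String> isAlive) {
        for (String name : orderedInstances) {
            if (name.equals(myName)) {
                return true;  // everyone above me in the list is down
            }
            if (isAlive.test(name)) {
                return false; // a higher-priority instance is up; it runs
            }
        }
        return false;         // my name isn't in the list at all
    }
}
```

A scheduled service on each node calls `shouldRun` at the top and exits immediately when it returns false, so under normal conditions only the primary does the work.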
When the other instance in the list responds and the instance that ran the scheduled service stops execution, how does the instance that responded find out that it should run the job?