Please have a look at the Deployer User’s Guide for how to deploy to clustered IS nodes.
Please make sure that packages are developed to be cluster-aware and cluster-safe; otherwise you might see unpredictable behaviour in production.
In other words: what works in a single-node environment does not necessarily work in a clustered environment.
Publish/subscribe triggers in particular need to be checked here.
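To illustrate the trigger point with a rough sketch: a typical cluster-safety mistake is deduplicating documents in node-local memory, which the other node can never see. In a cluster the same document may be delivered to either node, so the check has to go against shared storage. The snippet below is plain Java/JDBC, not the webMethods API; the DOC_DEDUP table and the idea of keying on a business document ID are assumptions for the example.

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

/**
 * Sketch of a cluster-safe duplicate check: instead of remembering processed
 * document IDs in node-local memory (which the other cluster node cannot see),
 * record them in a shared database table with a unique constraint.
 *
 * Assumes a table like:
 *   CREATE TABLE DOC_DEDUP (DOC_ID VARCHAR(100) PRIMARY KEY, PROCESSED_AT TIMESTAMP)
 */
public class ClusterSafeDedup {

    /**
     * Returns true if this node "won" the insert and should process the
     * document; false if another node (or an earlier delivery) already did.
     */
    public static boolean tryClaim(Connection sharedDb, String documentId) throws SQLException {
        String sql = "INSERT INTO DOC_DEDUP (DOC_ID, PROCESSED_AT) VALUES (?, CURRENT_TIMESTAMP)";
        try (PreparedStatement ps = sharedDb.prepareStatement(sql)) {
            ps.setString(1, documentId);
            ps.executeUpdate();
            return true;                 // insert succeeded -> first (and only) processor
        } catch (SQLException e) {
            // Primary-key violation -> the document was already claimed elsewhere.
            // (A production version would check the SQLState/vendor code explicitly.)
            return false;
        }
    }
}
```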
In the IS Admin UI, under Settings → Clustering, you can see whether the node is part of a cluster or not.
Depending on the wM version, you will have to install specific components for exchanging information between the nodes.
Some parts are stored in the ISInternal database schema, others in e.g. a Terracotta caching server.
All nodes of a cluster need to connect to the same Broker or UM and have to use the same client prefix.
You will find more information about this in the IS Clustering Guide.
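As a rough analogy for the shared client prefix (plain JMS 2.0, which UM supports, rather than the actual IS trigger machinery; the JNDI names and subscription name below are made up): when all nodes attach to the same shared durable subscription, the provider treats them as one logical consumer, so each published message is processed once instead of once per node.

```java
import javax.jms.ConnectionFactory;
import javax.jms.JMSConsumer;
import javax.jms.JMSContext;
import javax.jms.Topic;
import javax.naming.InitialContext;
import javax.naming.NamingException;

/**
 * JMS 2.0 analogy for "all cluster nodes use the same client prefix":
 * every node attaches to the SAME shared durable subscription, so the
 * provider load-balances each message to exactly one node instead of
 * delivering a copy to every node.
 */
public class SharedSubscriptionNode {

    public static void main(String[] args) throws NamingException {
        // JNDI names are placeholders; in webMethods they come from the
        // JNDI provider / connection alias configured on the IS.
        InitialContext jndi = new InitialContext();
        ConnectionFactory cf = (ConnectionFactory) jndi.lookup("jms/connectionFactory");
        Topic ordersTopic = (Topic) jndi.lookup("jms/ordersTopic");

        try (JMSContext ctx = cf.createContext()) {
            // Both cluster nodes use the identical subscription name
            // ("clusterPrefix_orders"), which makes them one logical consumer.
            JMSConsumer consumer =
                    ctx.createSharedDurableConsumer(ordersTopic, "clusterPrefix_orders");

            consumer.setMessageListener(msg ->
                    System.out.println("Processing message on this node: " + msg));

            // Keep the JVM alive so the listener keeps receiving (sketch only).
            try { Thread.sleep(Long.MAX_VALUE); } catch (InterruptedException ignored) { }
        }
    }
}
```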
Cedric,
There are different ways of configuring the cluster. A ‘stateless IS cluster’ only shares the same DB schema (ISInternal) and is accessed by external clients through a load balancer, which distributes the requests across the nodes. A ‘stateful IS cluster’ additionally requires a Terracotta server to be installed and set up (similar to Oracle Coherence), which shares the state information between the two nodes. So an IS cluster can be of either type, based on your needs.
When the nodes are in a cluster (of either type), the configuration, packages, etc. should be identical on both nodes. You can use Deployer (a webMethods component) to deploy packages from one environment to the other, for example from pre-prod (source) to the target environment (the cluster).
Please read the Deployer’s guide and you will have a good understanding.
And to answer your question: yes, you can simply create a Deployer target group that has both prod IS nodes added to it, choose pre-prod as the source, and deploy to the group (the prod nodes).
When it comes to Deployer, you will typically create a build from a specific source. Once that build is created, you should deploy that same build everywhere.
In your case, it sounds like you’re using your Development environment as the source (which is fine for the time being, but I would encourage you to read up on the Asset Build Environment and repository-based deployments). This means you will point a Deployer instance at your Development server and generate a build from there. Once that build is created, you should deploy that same build to Pre-Prod and then to Production.
If you have a single Deployer instance that has access to all environments (i.e. Development, Pre-Prod, and Production), then you can have a single Deployer project with a single build but a Deployment Map and Deployment Candidate for each target environment (i.e. pre-prod and prod). This is the simplest approach. If, however, your Production environment is on a different network, then you can export the build from Deployer and import it into a Production Deployer instance.
Currently we are using the “build once, deploy multiple times” scenario, runtime-based.
As our QA (Pre-Prod) and Prod environments do not have access to our internal environments (nor vice versa), we export the builds from the internal Deployer and promote them to QA & Prod for import, together with notes about preconditions and risks during deployment (e.g. import database tables, create users & ACLs).
Sometimes a deployment consists of several parts (Deployer builds); in that case the correct import order is also stated in the notes when the parts depend on each other, so that dependencies are resolved in the right way.
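Purely as an illustration of that ordering concern (this is not a Deployer feature; the build names and the dependency map below are invented), the notes essentially describe a topological order over the builds:

```java
import java.util.*;

/**
 * Illustration of the "deploy builds in the right order" note: given which
 * build depends on which, compute an order in which every build is imported
 * only after the builds it depends on (a simple topological sort).
 */
public class BuildOrder {

    public static List<String> order(Map<String, List<String>> dependsOn) {
        List<String> result = new ArrayList<>();
        Set<String> done = new HashSet<>();
        for (String build : dependsOn.keySet()) {
            visit(build, dependsOn, done, new HashSet<>(), result);
        }
        return result;
    }

    private static void visit(String build, Map<String, List<String>> dependsOn,
                              Set<String> done, Set<String> inProgress, List<String> result) {
        if (done.contains(build)) return;
        if (!inProgress.add(build)) {
            throw new IllegalStateException("Circular dependency involving " + build);
        }
        for (String dep : dependsOn.getOrDefault(build, List.of())) {
            visit(dep, dependsOn, done, inProgress, result);
        }
        inProgress.remove(build);
        done.add(build);
        result.add(build);          // all dependencies are already in the list
    }

    public static void main(String[] args) {
        // Hypothetical builds: the integration build needs the canonical-docs
        // build, which in turn needs the common-utils build.
        Map<String, List<String>> deps = Map.of(
                "OrderIntegrationBuild", List.of("CanonicalDocsBuild"),
                "CanonicalDocsBuild", List.of("CommonUtilsBuild"),
                "CommonUtilsBuild", List.of());
        System.out.println("Import/deploy in this order: " + order(deps));
    }
}
```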
Senthilkumar G, to reply to your question, our technical document says:
The 2 IS “cluster” nodes are “stateless”, and each instance works in an independent way. Currently, there is “no” Terracotta distributed cache installed in our environment.
The 2 IS share the same DB schema, called IntegrationServer.
The 2 IS share the same DB archive schema, called Archive.
Each IS uses its own ISInternal schema (ISInternal1 for IS1, and ISInternal2 for IS2).
The first three statements are fine, but the 4th one cannot be right if the first three are, and vice versa. If the ISInternal schema is different for the two Integration Servers, then this is not a stateless IS cluster; they are just two independent Integration Servers.
An expert from French support replied to me (translated from French):
“The IS instances must share the same access to the database audit service, to have a centralized monitoring vision of your 2 nodes. However, the alias ISInternal has to be shared only in a Terracotta cluster context.”
And I confirm that currently there is “no” Terracotta distributed cache installed in our environment.
So our environment just has 2 independent IS nodes, but no stateless cluster, right?
To complete my previous reply, I forgot to say that all the DB schemas used by IS1 and IS2 that I described above are “all” on the same DB server (the same Oracle server).
The above is NOT correct. Both stateless and stateful clusters should have ISInternal shared. If it is not, they are two independent Integration Servers. To give you a very simple example: if you have a scheduled task in an independent IS setup, the task is visible and runs only on the node where it was created; whereas if ISInternal is shared, the task is also visible on the other node, and the other node will not execute it when your first node executes it.
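A rough, generic sketch of that scheduler behaviour (plain JDBC, not the actual IS scheduler code; the SCHEDULED_TASKS table and its columns are invented for illustration): with a shared schema, both nodes race on the same row, so only one of them wins and runs the task.

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

/**
 * Generic illustration of why a shared schema matters for scheduling:
 * both nodes try to flip the same row from IDLE to RUNNING, and the update
 * count tells each node whether it won. With separate ISInternal schemas,
 * each node has its own row, so both nodes would "win" and the task would
 * run twice.
 *
 * Assumed table:
 *   CREATE TABLE SCHEDULED_TASKS (TASK_ID VARCHAR(50) PRIMARY KEY,
 *                                 STATUS VARCHAR(10), OWNER_NODE VARCHAR(50))
 */
public class SchedulerClaim {

    /** Returns true if this node won the race and should execute the task. */
    public static boolean claimTask(Connection sharedDb, String taskId, String nodeId)
            throws SQLException {
        String sql = "UPDATE SCHEDULED_TASKS SET STATUS = 'RUNNING', OWNER_NODE = ? "
                   + "WHERE TASK_ID = ? AND STATUS = 'IDLE'";
        try (PreparedStatement ps = sharedDb.prepareStatement(sql)) {
            ps.setString(1, nodeId);
            ps.setString(2, taskId);
            // Exactly one node sees 1 updated row; the other sees 0 because
            // the shared row is no longer IDLE.
            return ps.executeUpdate() == 1;
        }
    }
}
```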
that’s fine…
Correct… 2 independent IS; they are not called stateless, as ISInternal is not shared.
Not really. It’s about the schema, not the Oracle server. Two different schemas on a single DB server are the same as two DB servers with one schema each: they are independent of each other, and none of the DB objects are shared between the two different schemas/users.