We run critical applications that use Integration Server/Trading Networks, and the requirement is zero downtime. When IS patches are implemented or webMethods upgrades are performed (e.g. 9.6 to 9.10), is there a mechanism to perform the upgrade without downtime?
From the application perspective, we were considering a side-by-side approach: for a 9.6-to-9.10 upgrade, have 9.10 installed and ready (passive) and switch over at cut-over time.
In the case of IS patch installations that require DB changes as well, the side-by-side approach works from the application perspective, but how is the DB patching managed? Should there be a side-by-side database as well? If yes, wouldn't the replication between the side-by-side databases become complex, given that the schema on the unpatched side and the schema on the patched side are different?
Is there a better way to perform IS patch installations with no downtime?
It would be of great help if you could please share your ideas.
From 9.5 onwards, only side-by-side upgrades are supported.
Please have a look at the current Upgrade Guide available from the Documentation section of either Empower or the Community.
All components require downtime during patching, but it depends on the component whether this can be handled by Update Manager alone or whether the component needs to be shut down before starting Update Manager (MWS is one of the latter).
Thank you Holger for the information.
The issue with our installation is that the IS is connected to multiple applications and is primarily used for transforming data, for application-to-application interfaces, and for orchestrating discrete services. The applications connected to the IS (say, via web service calls) have different outage windows: Application 1 could have an outage window of Sunday 10 am to 12 noon, while Application 2 could have one of Wednesday 6 pm to 8 pm. If I choose a particular window for the IS patching, it impacts the transaction flow for at least one of the applications connected to the ESB/IS. The applications connected to the IS are mission critical, and there are SLAs around the transformation and delivery timeframes.
With your experience, please let me know if you have come across this scenario and how you were able to resolve it. We were even planning to build another stack with the upgraded product ready and flip to it at cut-over, but the issue with that option was database-level patching: take a copy of the prod DB, apply the patch, and keep it ready for the flip — but how do we keep the data between the prod copy and the upgraded DB in sync? Taking it one step further, how would the sync work with different schemas (if the schema changed in the upgrade/patch)?
Once again, thank you for your prompt help.
We have spread our application across several IS instances, each with its own installation directory.
They all connect to the same MWS, Broker, and Database (where necessary).
Broker and MWS are each installed in their own installation directory.
All components that require a database schema have their own set of schemas, except for shared business data…
Each IS has its own internal, TN, and archiving schemas; MWS uses a different schema.
This reduces the need to shut down several instances together in one maintenance window.
Usually the Broker is updated first, keeping the ISes running by using local publish/subscribe where possible, or by buffering the data until the Broker is up again.
Then it is possible to update the ISes one by one, depending on the maintenance windows agreed with the partners connecting directly to the ISes. Some of them connect to multiple instances for different purposes, so only part of the interfaces is affected per maintenance window.
MWS is patched last, as its update has the longest downtime.
Broker requires a downtime of approx. 30 minutes (usually less).
IS takes approx. 1 hour (depending on how many additional components are installed and whether they are affected by the fixes).
MWS takes more than 1.5 hours of downtime, as it is the most complex component in our landscape.
You can evaluate whether using a cluster is an option for you.
If you have a 2-node IS cluster: bring down one node, patch it, bring down the second node, start the first (patched) node, then patch the second node and bring it up again.
Check whether the cluster is reestablished afterwards, or manually recreate it.
It is necessary to stop the second node before restarting the first one, so that the DB migration can run (if a DC_DBS fix is involved) and to avoid errors due to different patch levels.
By doing so, the total downtime during which all partners are affected is reduced to the time needed to shut down the second node and start the first one, which usually should not exceed 15 minutes.
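The sequence above can be sketched as a dry-run shell script that only prints the plan. The commented-out commands (host names, install paths, the Update Manager invocation) are assumptions, not part of the procedure described here — substitute your own environment's shutdown/startup and fix-installation scripts.

```shell
#!/bin/sh
# Dry-run sketch of the 2-node rolling patch sequence described above.
# Commented commands are placeholders/assumptions for your environment.

PLAN=""
step() {
  # Record and print one step of the sequence.
  PLAN="${PLAN}$1
"
  echo "$1"
}

step "stop IS on node1 (node2 still serves all partner traffic)"
# e.g. ssh node1 /opt/softwareag/IntegrationServer/bin/shutdown.sh
step "patch node1 (apply the IS fixes with Update Manager)"
# e.g. ssh node1 <UpdateManager install dir>/bin/... (your fix script)
step "stop IS on node2 (full outage begins: required so the DB migration runs against a quiesced schema and no mixed fix levels coexist)"
step "start IS on node1 (full outage ends; usually under 15 minutes)"
step "patch node2 (apply the same fixes)"
step "start IS on node2 (verify the cluster re-forms, or recreate it manually)"
```

Running it just lists the six steps in order; the two "full outage" markers make it easy to see that the window where both nodes are down is only one node-stop plus one node-start.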