Deployment Strategy

Hi,

Version: wM8.2.2.0
I need to understand how to deploy packages while suspending the fewest possible components in LIVE.

Current Architecture:
The packages have subfolders segregating services, triggers, web services, etc. However, all adapter services are kept in a separate common package for reusability.

Note: I do a full package release instead of a patch release, because with a component-level (patch) deployment the package version is not changed, so the deployment cannot be rolled back.

Challenge: Most deployments include adapter service changes, which affect the Common package, so the Common package (where the adapter services for all other integrations also live) ends up being deployed as well. This forces us to suspend other integrations that are not relevant to that release. It has been affecting the business: even a small release brings major parts of the business down.

Please advise.

Hi Pavan,

We only have the Adapter Connections and Listeners in a separate package, created manually on each instance, as this package contains environment-specific values which cannot be replaced during deployment.

As the Adapter Services and Listener Notifications contain application-specific code that only refers to either a connection or a listener, these objects live in the same package as the services, triggers, web services, and document types.

The generated services for the process models are in a separate package, invoking services with the same name and signature in the normal package. That way, if the process package gets corrupted, you can delete it and regenerate it from Designer without losing implementation details. Only the triggers need to be adjusted if necessary.

Please explain why the adapter services need to be in a custom package for reusability.

I do not understand why rolling back the full deployment should not be possible.
As long as you have taken a checkpoint for the Deployment Candidate, there is always the possibility to roll back at least the last deployment. The package version does not matter here.

Please reconsider your packaging approach.

Regards,
Holger

A few years ago I started with the same idea: separating all adapter services into a single package. But over the years I’ve completely changed the architecture to be able to independently deploy IS packages into production. Right now, every package contains all the services it needs to do its job. Some database connections are even duplicated in different packages to make them completely self-contained.

I think of an IS package as a microservice that should contain everything it needs without any external dependencies (if possible). During deployment this architecture saves me a lot of time because I don’t need to cope with dependencies at all. However, this idea might come at the cost of duplication, which might or might not be a bigger problem than the deployment problem in your case.

So I guess it all depends on what causes you the bigger headache: duplication or deployment.


I also agree that separating adapter services into a common package is asking for trouble. Ever since Deployer started supporting variable substitution for adapter connections, I also stopped separating adapter connections into their own packages. I have a package organization convention that has proven quite useful and I can elaborate more on it, if needed. If you can restructure your packages, that would be best. However, I’m guessing that would not be a small feat. So, in the meantime, let’s talk about how you can deploy that package without breaking everything else. Do you have a clustered environment? Do you take any steps to suspend processing on the target server today?

Percio

Hi Percio,

I have only separated the connections and listeners into a dedicated connections package, which is not deployed to higher environments but is created from scratch for each environment. There are two reasons: SAP Adapter Listeners contain environment-specific values which cannot be replaced by variable substitution, and the environments reside behind different firewalls. When the build is taken with connections enabled, deployment to the target environment takes quite long, because the server first tries to start the connections with the old values (running into a timeout, of course) before stopping them, substituting the variables, and restarting them.

Only the Adapter Services and Listener Notifications are bundled with the normal implementation, and they only refer to the connections package for their connections or listeners.

Regards,
Holger

Got it. I haven’t worked with the SAP adapter. It’s disappointing to hear that it has the limitations you described.

Percio

Hi Percio,

I definitely want to know the package structure for the future but, yes, it is difficult to change the current implementation. I understand that self-contained adapter services within each package, even if repeated, is a better design, but we wanted to avoid duplicates.

Our environment is clustered, and I understand a server can be taken out of the cluster and brought back in after deployment, with the other servers in the cluster taking the load and requests in the meantime. On a busy day, though, that is a difficult risk to take.

And we do suspend triggers and schedulers during the deployment.

Thanks,
Pavan

Pavan,

I’ll do a separate post (or perhaps a blog post) about the package convention and I’ll share it with you.

Back to the issue at hand then, it sounds like you’re already taking some steps to quiesce the server by suspending triggers and scheduled tasks, so perhaps you just have to take it a little further.

Are you deploying to one server in the cluster at a time or are you deploying to the whole cluster at once? With triggers and scheduled tasks suspended, it sounds like you’re still getting some activity. Via what other methods are services being executed? Can you suspend those entry points as well (e.g. suspend HTTP and file polling ports)?

It’s a bit of a pain, but the basic idea of what you want is:

(1) Put server A to sleep
(2) Wait for all services to finish on server A
(3) Deploy to server A
(4) Wake up server A
(5) Repeat steps 1-4 on server B
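If you script these steps, the orchestration might look roughly like the sketch below. The four helper functions are placeholders for whatever mechanism you actually use (suspending triggers/ports, waiting on active service threads, invoking Deployer, etc.); only the ordering is the point here.

```shell
#!/bin/sh
# Hypothetical rolling-deployment sketch. Each helper is a stub standing in
# for your real suspend/drain/deploy/resume mechanism.

quiesce_server() {   # step 1: stop new work from arriving on this node
  echo "quiescing $1"
}

drain_server() {     # step 2: wait until in-flight services have finished
  echo "draining $1"
}

deploy_to() {        # step 3: push the release (e.g. via Deployer) to this node
  echo "deploying to $1"
}

resume_server() {    # step 4: re-enable triggers, ports, and the scheduler
  echo "resuming $1"
}

# Steps 1-4 applied to each cluster node in turn (step 5).
for node in serverA serverB; do
  quiesce_server "$node"
  drain_server   "$node"
  deploy_to      "$node"
  resume_server  "$node"
done
```

Because only one node is out of the cluster at a time, the remaining nodes keep serving traffic throughout.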

When it comes to step (1), more recent versions of Integration Server have a “quiesce” feature that you could probably leverage, although I must admit I haven’t tried it in a deployment scenario like this one. In older versions, however, you will have to suspend the triggers, ports, and scheduler yourself. You could rely on Deployer to do it for you, but I have run into issues in the past when having Deployer suspend those things.
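For the manual route, one option (a sketch only, not something I have verified on 8.2) is to call the WmPublic trigger services over Integration Server’s /invoke HTTP interface. The host, port, credentials, and parameter name below are assumptions; check the Built-In Services Reference for your release for the exact signatures of pub.trigger:suspendProcessing and pub.trigger:resumeProcessing.

```shell
# Sketch: suspend/resume document processing for one trigger via the IS
# /invoke HTTP interface. Host, port, credentials, and the input parameter
# name are assumptions -- verify against your Built-In Services Reference.
suspend_trigger() {
  curl -s -u Administrator:manage \
    "http://localhost:5555/invoke/pub.trigger/suspendProcessing?triggerName=$1"
}

resume_trigger() {
  curl -s -u Administrator:manage \
    "http://localhost:5555/invoke/pub.trigger/resumeProcessing?triggerName=$1"
}
```

You would call suspend_trigger for each trigger before deploying and resume_trigger afterwards, looping over the trigger names of the affected packages.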

Percio

Totally agree with you. Planning a similar implementation.

IMO duplicating things just to simplify deployment is not a good thing. Deployment without interrupting operations should be implemented at the “other level” of the architecture, i.e. via clustering etc. (as Percio described). All the software modules should just assume that everything works and is available. It’s a task for the environment/admins to ensure this.

We have also experienced this (with other types of connections). IMO, this is a big design flaw in the deployment process. I’m disappointed that Deployer provides variable substitution on the one hand, yet on the other hand does not substitute the values before starting the newly deployed component: instead it tries to start it with the “source” values and only then changes them. We tried to resolve this via SAG support, but to no avail. Bad and sad, IMO.

Has anyone solved this in a pleasant way? How?
