Automated code deployment in webMethods

I am using Deployer 10.7. Is there any way I can automate the code deployment without using ABE?

I have completed all the steps up to Simulate >> Checkpoint. I just want to automate the Deploy step and get the deployment report by email. Is there a way?

Or are there any other methods, like publishing and subscribing packages, or anything else?

Deployer has APIs available that you can call from a script (which is what ABE can/does do).

When you say “automate” what will be the activity that causes it to execute? Is it a button push somewhere else?

There is some confusion (I had it too) about what ABE does. ABE doesn’t deploy your code; it deploys your assets to a repository. If you have an MSR license, you can use the Automatic Package Deployer; the link is below.
https://documentation.softwareag.com/webmethods/integration_server/pie10-15/webhelp/pie-webhelp/#page/pie-webhelp%2Fto-autopackagedeployment_for_packages_2.html

There is also a new tool you can use, but it’s really new. It may or may not support your deployment strategy.

The other manual approach is using WmDeployer from bash or via its web services, as Rob mentioned.

Hi,

You might want to check out the ProjectAutomator script, which is part of the WmDeployer package.
This script should be described in the Deployer User’s Guide.

Regards,
Holger


Basically, I want to schedule the final step of my code deployment (Deploy). I can complete all the prerequisites beforehand.

I think one option is to generate the Project Automator file through Deployer and call it through a Unix script or a scheduled service at the desired time.
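
For example, a rough shell sketch along these lines could be run from cron or an IS scheduled task. The paths, the spec file name deploySpec.xml, the mail step, and the script name projectautomatorUnix.sh are assumptions for my installation; check the Deployer User’s Guide for the exact Project Automator usage on yours:

```bash
#!/bin/bash
# Run Project Automator against a previously exported specification file,
# then mail the output as a simple deployment report.
# Example crontab entry to run this at 02:00 on Sunday:
#   0 2 * * 0 /opt/deployments/run_deploy.sh

# Assumed locations -- adjust to your installation layout.
DEPLOYER_BIN=/opt/softwareag/IntegrationServer/instances/default/packages/WmDeployer/bin
SPEC_FILE=/opt/deployments/specs/deploySpec.xml   # hypothetical spec file exported for the project
LOG_FILE=/opt/deployments/logs/deploy_$(date +%Y%m%d_%H%M%S).log

cd "$DEPLOYER_BIN" || exit 1
./projectautomatorUnix.sh "$SPEC_FILE" > "$LOG_FILE" 2>&1
STATUS=$?

# Mail the captured output (requires mailx or similar on the box).
if [ "$STATUS" -eq 0 ]; then
  mail -s "Deployment succeeded" ops@example.com < "$LOG_FILE"
else
  mail -s "Deployment FAILED (exit $STATUS)" ops@example.com < "$LOG_FILE"
fi
exit "$STATUS"
```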

Another option I came across is invoking these URLs through a scheduled service (a curl sketch follows after the URLs):

http://localhost:5555/WmDeployer/deploy-list.dsp?action=deploy&projectName=AnimalPark&projectType=Runtime&deploymentName=myDeployment&preDeployString=false&force=true

http://localhost:5555/WmDeployer/progress-report.dsp?type=deployment&projectName=AnimalPark&deploymentName=myDeployment
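
As a curl-based sketch of that idea (the credentials, host, fixed wait, and mail step are assumptions; the DSP parameters are taken verbatim from the URLs above):

```bash
#!/bin/bash
# Trigger the deployment via WmDeployer's DSP interface, then fetch the progress report.
HOST="http://localhost:5555"
AUTH="Administrator:manage"   # placeholder credentials

# 1. Kick off the deploy (same URL and parameters as above).
curl -s -u "$AUTH" \
  "$HOST/WmDeployer/deploy-list.dsp?action=deploy&projectName=AnimalPark&projectType=Runtime&deploymentName=myDeployment&preDeployString=false&force=true" \
  > /dev/null

# 2. Wait for the deployment to run, then pull the progress report.
#    (A fixed wait is a simplification; in real use, poll and parse the report until it shows completion.)
sleep 300
curl -s -u "$AUTH" \
  "$HOST/WmDeployer/progress-report.dsp?type=deployment&projectName=AnimalPark&deploymentName=myDeployment" \
  -o /tmp/deploy_report.html

# 3. Mail the captured report (requires mailx or similar).
mail -s "Deployment report: AnimalPark / myDeployment" ops@example.com < /tmp/deploy_report.html
```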

If using Ant is an option, there is a project on GitHub that has custom Ant tasks for Integration Server.

I have used those since 2009 on many occasions with great results.

Deployer is usually my last option for deploying my assets. Maybe it’s me, maybe it really is overcomplicated, I don’t know. To me it looks like a fragile product that can fail for multiple different reasons. It has nice-to-have features like the dependency check, but it is not trustworthy to me. In your case, if you have an MSR license, I would use the Automatic Package Deployer.

It’s you. :slight_smile: (honestly joking)

As with most things, people have different experiences. We use Deployer for all package deployments. We use the “Runtime” project type, which has been deprecated for debatable reasons (a separate topic I suppose). Certainly there are pros and cons, but it has been solid for us. We have never had an unexpected “what the heck happened?” with it.


@Sandeep_Deepala If you’re already using Deployer, this seems like a logical approach. Instead of someone pressing the deploy button, schedule it as a task. Or have a script or Jenkins or whatever do that based upon whatever “ready to go” event makes sense.

Even though I didn’t like it, I also mostly used Deployer. My dislike probably stems from a very large package with a shared XSD schema that kept failing during deployments. It became a trauma for me.

While probably not relevant in this context, it is also possible to “deploy” IS packages just via the file system. This is quite important when building container images.

When you simply extract the ZIP archive of the package into a sub-directory of the packages directory named after the package (note that the ZIP does not contain the package as a sub-directory, so you have to create that folder yourself), it will be picked up by IS during startup.
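
A rough sketch of that, assuming a default instance layout and a placeholder package name (done while IS is stopped, e.g. in a container image build step):

```bash
#!/bin/bash
# "Deploy" an IS package by extracting its ZIP into the packages directory.
PACKAGES_DIR=/opt/softwareag/IntegrationServer/instances/default/packages
PACKAGE_NAME=MyPackage                       # placeholder
PACKAGE_ZIP=/tmp/${PACKAGE_NAME}.zip

# The ZIP does not contain the package as a sub-directory, so create the folder first.
mkdir -p "$PACKAGES_DIR/$PACKAGE_NAME"
unzip -o "$PACKAGE_ZIP" -d "$PACKAGES_DIR/$PACKAGE_NAME"
# The package is picked up the next time IS (or the container) starts.
```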

Useful documentation, I’ve been looking for it for a long time! It helps to understand some points!

Does this always work? Are there exceptions to be careful about? By the way, this and package export/import deployments are usually dangerous: they bypass the dependency check completely. They should be complemented with post-deployment sanity tests.

Be aware:

  • No dependency check. If necessary services are missing on the target, you won’t know until run-time.
  • No ACL assignment. If services in the package have ACLs assigned, they will not be carried to the target. Permissions are not in the package; they are in a server-wide config file (which can make version control and deployment using tools other than Deployer a bit more fun!). :slight_smile:

You both (@engin_arlak and @reamon ) bring up valid points. Thanks!

In practice I have always addressed the dependency part via the overall CI/CD pipeline. So, if someone had added a new dependency and forgotten about it, that would have come up in the SIT environment at the latest; although it was mostly detected in CI.

The permissions are a different story, and the design of having them in only a single central place is not ideal in my opinion. In the past I dealt with this via WxConfig, but with containers I will also look into an “offline” approach, similar to the ART Connection Updater.

I realize that my overall approach of often developing my own tooling is not everybody’s cup of tea. But it has worked extremely well for me for more than 15 years, and the productivity gains were significant in that it allows a completely different approach to application development with IS.

Are there other aspects that I have missed?

Adapter connection pools are another item of interest. Do you deploy packages that contain them?

Another item is publishable doc types. Deployer auto-syncs them (if the project is configured to do so). If you push packages containing such doc types around at the file level, that is an additional item to address.

JDBC Adapter connections are deployed if the bound package is exported. I had a support ticket about this recently; SAG support confirmed that exporting and then importing the packages will import the JDBC Adapter connections properly.

This will definitely require a manual sync. I wish there were an automatic mechanism that syncs the document type if it is not found on the UM server.

Certainly. But the caveat here is the settings within. A package copied from dev to test/prod will likely contain environment-specific settings. And if it is copied and enabled, even potentially for just a short time before changes can be applied, the results can be undesired (or disastrous).

It was covered in another thread, but we never deploy adapter connection pools. They are all in one package which is never deployed. We create them in the target environments manually.

I haven’t looked but I imagine there is now something in the admin API that would support this.


Like everything else, there are services for this but it is not automatic. You need to trigger them manually.

The common practice is for the packages that host the adapter services and the connections to be different; they need to be separate packages no matter what. The connection package you configure once, and you can keep deploying the same package during container startup, or keep it as it is if running on premises. The adapter services package should have a package dependency on the connection package set properly, and then there is no harm in exporting and importing it. So the connection package will be environment-dependent, while the adapter services package will be deployed from the lower environments to the upper ones.


I was thinking the call to it would be added to whatever script/tool is used for deployment (or CI/CD). For deploying packages containing publishable doc types, include a step that makes the call to sync them.
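
For example, if you wrap the sync logic in a service of your own, the pipeline step could be a one-liner. The service name CICDUtils.docTypes:syncAll below is purely hypothetical (you would build it yourself); the /invoke URL pattern itself is standard IS behaviour:

```bash
# After copying packages that contain publishable doc types, call a (hypothetical)
# wrapper service that syncs them with the messaging provider.
curl -s -u "Administrator:manage" \
  "http://target-host:5555/invoke/CICDUtils.docTypes/syncAll"
```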

Right. I was referring to adapter connections. Adapter services within packages are not a concern, per se, other than the aforementioned dependency check (on adapter connections) we both noted.
