Integration Server 10.15 configuration on Docker or OpenShift

The Docker image includes webMethods Integration Server 10.15 with adapters and BPM components.

A custom Docker image has been created with the required components using the SAG installer, and we are able to run the container. But how do we persist all the new configurations made on Integration Server? And how do we migrate these configurations to a new environment along with the solution Docker image?

I mean configurations on IS such as the UM connection alias, the TSA URL for clustering, any new adapter connections like JDBC or Kafka, global variables, or any other configuration done through the IS admin page.

I know MSR has a concept called Configuration Templates, but that requires an MSR license. How can this be done for an IS container without an MSR license? Please post your comments or share your thoughts.

You can basically spin up an instance of your freshly built image, perform the necessary configurations, and then create a new image from it. That new image would then be deployed. This is standard Docker functionality.
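That flow could look roughly like this (a minimal sketch; the image, container, and port names are placeholders, not taken from any actual setup):

```shell
# Start a container from the freshly built base image
docker run -d --name is-config -p 5555:5555 my-is-base:10.15

# ... perform the configuration manually via the IS admin page on port 5555 ...

# Snapshot the configured container as a new image, then deploy that image instead
docker commit is-config my-is-configured:10.15
docker rm -f is-config
```

As noted below in the thread, baking environment-specific settings into the image this way is exactly the practice you would want to move away from long-term.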

However, using Docker on the one hand while performing configuration manually on the other seems a bit of an odd combination to me. If those are the requirements within your organization, well, then this is how it is. But apart from living with it short-term, I would recommend changing this as soon as possible.

Also, you will need to check which components (e.g. BPMS) can actually be run in Docker, and how. I remember that a few years ago TSA was very particular about its networking environment, and I would be a bit surprised if this has changed (but you never know).

The important thing, from where I stand, is to understand how containerization changes things like the SDLC and your CI/CD pipeline. Can you tell us a bit more about what this looks like for you right now?

Please be warned that moving to containers (and esp. container orchestration platforms like OpenShift/K8S) like-for-like will probably not work without problems. I don’t want to scare you, just point out that this is not an easy task.

Can you give us a complete list of the components you plan to use? You have tagged your message with MWS, so that would certainly need special consideration.

Okay, we can perform all required configurations for IS and then create an image, but some configurations will be different for each environment. Creating a separate Docker image for each environment is not good practice, so there must be some way to provide the configurations to the container.

I have created a separate MWS image and installed it. Since MWS persists all the required data in the DB, I don’t see an issue here. I am even able to pass the DB details as environment variables.
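For illustration, passing the DB details as environment variables could look like this (the image name and variable names are hypothetical; use whatever your image actually reads):

```shell
# Start MWS with DB coordinates injected from the environment
# (image name, variable names, and connection string are placeholders)
docker run -d --name mws \
  -e DB_URL="jdbc:wm:oracle://dbhost:1521;serviceName=ORCL" \
  -e DB_USER="mws_app" \
  -e DB_PASSWORD="changeme" \
  my-mws-image:10.15
```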

My plan was to build one custom base image and then build the solution image with custom packages through the CI/CD pipeline. But I have not been able to identify a way to move the other required server-level configurations apart from custom code. With MSR, we have a product feature called configuration templates to move configurations between environments.

I completely agree that environment-specific containers are a complete no-no.

As to changing settings per container instance, that is also how I would do it. Unfortunately this means that you need to “make or buy” whatever functionality is currently missing.

Do you happen to have a complete list of settings that we are talking about? Because 1) you would need to compare it with what MSR offers anyway, and 2) perhaps it gives us here an idea how to approach this in a community style way. Because you are certainly not alone with that challenge …

FYI, you don’t really need to create your own container for IS. You can just download a production-ready version from the official container registry. Only MWS is not prepared this way.

For configuration variables, you have two options. You can use the webMethods helm charts and apply the configuration at container startup, or you can push the configuration files to a git repo and sync them using CI/CD pipelines. Helm is a package manager for Kubernetes, and the helm charts I linked are also provided by Software AG; by default they are set to pull the container image from the Software AG container registry. You may want to mirror that repo to your own git repo, or create your own values.yaml file and inject it during startup. The documentation can be found in the git readme file.
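As a rough sketch of the helm route (the repo URL, release name, and chart name below are placeholders; take the real coordinates from the git readme):

```shell
# Register the chart repository (URL is a placeholder, see the readme)
helm repo add webmethods https://example.com/webmethods-helm-charts
helm repo update

# Install IS, applying environment-specific overrides from a values file
# that you keep in your own git repo (one file per environment)
helm install my-is webmethods/integrationserver -f dev-values.yaml
```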

I recommend not putting any config into any containers. In the helm charts there is a values.yaml file where you can make all your changes in one place.

edit: Here is the community post about helm charts.


If I understand this thread correctly the OP does not have an MSR license available.

I am aware that Software AG provides a Docker image for Integration Server on its container registry, but that image contains a plain IS. We need additional adapters and BPM components in the Docker image, so we created a custom base Docker image with the required components.

An MSR license is not a requirement here, since we have containers for IS, and the helm charts can be applied to IS as well by updating the values.yaml file, since they are roughly the same product. I am working on this at the moment, though it looks like I will need to disable and/or reimplement some of the functionality myself.

You can modify the Dockerfile for the base image, and BPM components are already included in the helm charts. It’s basically a simple file copy.

Thanks for sharing the information. I see that the Docker image can be created using the SAG installer with product codes. Some of the product codes used in the helm charts are MSC, wmprt, PIEContainerExternalRDBMS, and Monitor. I want to include Task Engine and adapters like JDBC, MQ, HDFS, and Kafka as well, so where can I find the webMethods product codes for these components? I could not find any documentation on these product codes.

Containerized deployment of BPMS, Task Engine, Trading Networks, and ActiveTransfer isn’t supported by SAG yet; this is still work in progress (although I have already tried it in sandbox environments, and technically it works).

For the creation of your base image, given the product codes you need, the SAG installer is better. The base image from the registry does not contain the adapter packages; you are supposed to use wpm (the webMethods package manager) to add the packages you need to the base image. As of today, not all the adapters you need are available there.

One image that is deployed as-is in all environments: that is the absolute best practice, yes.
On top of the base image you generated, you need to add your custom packages and apply the environment specific configuration.
You could create an applicative image with your base image + your custom packages. It is this applicative image that would traverse all the environments.
The problem is the configuration. The MSR allows you to externalize the configuration using properties, but this isn’t available in the IS (unless you use an MSR license with the IS).
You could use volumes to inject environment-specific config files into your containers. Some of the configuration elements (JMS and JDBC pools, for instance) are in the config folder; others are in the packages themselves. But this is just a concept; it needs to be experimented with.
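One way to experiment with that volume idea: render an environment-specific config file from a template at deploy time, then mount the result into the container. This is only a sketch under stated assumptions; the file names, placeholder syntax, connection string, and target path are all made up for illustration, not actual IS paths or formats:

```shell
#!/bin/sh
# Template with placeholders (in a real setup this would live in git)
cat > jdbc.properties.template <<'EOF'
db.url=@DB_URL@
db.user=@DB_USER@
EOF

# Values would normally be injected by the pipeline or orchestrator
DB_URL="jdbc:oracle:thin:@dbhost:1521/ORCL"
DB_USER="is_app"

# Render the template for the target environment
sed -e "s|@DB_URL@|$DB_URL|" -e "s|@DB_USER@|$DB_USER|" \
    jdbc.properties.template > jdbc.properties

# Then mount the rendered file into the container (path is illustrative):
# docker run -v "$PWD/jdbc.properties:/opt/softwareag/IntegrationServer/config/jdbc.properties" my-is-image
```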
The Deployer is probably a safer option for dealing with your external configuration requirements, if you’re dealing with an IS deployment.

@jahntech.cj , wouldn’t WxConfig provide this properties-based configuration, by the way?


Thanks for looping me in, @Stephane.TAILLAND_emp .

Yes, WxConfigNG does address all of those issues. I have just released a short blog post that summarizes some of the key aspects. Please let me know if this helps and what other aspects I should write about.

What is the recommended way of overwriting the connection strings of JDBC adapter connections? I have a similar case in my current project: we have a lot of custom JDBC adapter connections. We don’t have an MSR license either, and the only way I know to configure JDBC adapter connections is through the admin UI, but that won’t work in my case (because of automatic scaling, a new pod might come up with a wrong connection string). Would creating persistent shared storage for the packages work? What I have in mind is to deploy one container, update the shared packages folder using Deployer, and then scale it up.

Just to let everybody know that I will release a tool in the coming days to update JDBC adapter connection details. It will work directly on disk, so IS doesn’t even have to be running. The primary use case is containers, but of course it will also work in regular installations.

Stay tuned!