Building an integration microservice from A to Z with webMethods and Kubernetes

There have been a number of articles posted on the Tech Community over the last couple of years comparing microservices to monoliths and explaining how to use our Microservices Runtime to deploy your own.

However, I thought it was time to revisit the subject and bring forward a practical article on building an API based on our microservices platform with a view to deploying it into a Kubernetes environment.

We will break this down into a number of sections:

  • Develop a small autonomous package to facilitate deployment.
  • Configuration variables to manage runtime properties.
  • Build a microservices image.
  • Run locally.
  • Deploy to a Kubernetes environment.

This article assumes that you have already used our local development environment and can develop and run webMethods flow services. If you haven't, please refer to these articles first:

  1. Creating and Debugging Flow Services using webMethods Service Designer
  2. A word about packages

Develop a small autonomous package to facilitate deployment.

Okay, let’s start from the beginning and build a simple, microservice-oriented package. If you haven’t already done so, I recommend that you first read my article webMethods packages from a developer's perspective. It will help you get started with the building blocks for developing services and APIs using webMethods.

Great, so you know how to build small functional packages. That means we don’t need to start from scratch and you can simply drop in my ready-made package JcDemoService from my personal GitHub. To do that, navigate to your packages directory and execute the following commands

$ cd <SAG_HOME>/IntegrationServer/instances/default/packages
$ git clone https://github.com/johnpcarter/JcDemoService.git

Alternatively, if you already have your packages directory under version control, you can add the above package as a submodule e.g.

$ git submodule add https://github.com/johnpcarter/JcDemoService.git JcDemoService

Then restart your Integration Server and check that the package is available by going to the package home page

http://localhost:5555/JcDemoService

from where you should see the package documentation page e.g.

This documentation was generated automatically using a new feature available in Designer from 10.15. Refer to my article Package improvements in 10.15 for more information.

You can try it out by clicking on the APIs section and downloading the API definition into your preferred API client. There is only one resource to try out

For instance, here is a curl command demonstrating the API and service, which simply logs a greeting to the server log.

$ curl "http://localhost:5555/rad/jc.demoservice:api/v1/greet/john?name" \
 -u 'Administrator:manage'

Configuration variables to manage runtime properties.

The webMethods Microservices Runtime allows you to customise the behaviour of the container and your solutions via configuration variables. This is important because you may want to deploy the same API service into different environments or stages that require different properties, such as the level of logging, permissions, and even endpoints when connecting to external sources. More detailed information is available in the article Configuring webMethods Microservices Runtime in a Docker container.

For our purposes, I have provided an example file in the package resources directory named demo-service.properties

# enable service auditing
jdbc.audit.driverAlias=com.wm.dd.jdbc.mysql.MySQLDriver
jdbc.audit.dbURL=jdbc:wm:mysql://mysqldb:3306;databaseName=wm
jdbc.audit.userid=wm
jdbc.audit.password=manage
jdbcfunc.ISCoreAudit.connPoolAlias=audit
auditing.ServiceQueue.destination=ServiceDBDest
jdbcfunc.ISInternal.connPoolAlias=audit
# allow Admin port and Administrator password to be set via environment variables
user.Administrator.password=$env{admin_password}
settings.watt.server.port=$env{admin_port}
# General recommendations for container i.e. make it stateless and keep log files to a minimum.
messaging.IS_UM_CONNECTION.useCSQ=false
settings.watt.server.publish.useCSQ=false
settings.watt.server.saveConfigFiles=false
settings.watt.server.audit.logFilesToKeep=1
settings.watt.server.serverlogFilesToKeep=1
settings.watt.server.stats.logFilesToKeep=1
settings.watt.debug.level=Warn
# Disable debug pipeline save/restore options
settings.watt.server.pipeline.processor=false
settings.watt.net.default.accept=application/json
settings.watt.server.threadPool=75
settings.watt.server.threadPoolMin=20
# example service variable that can be customised
globalvariable.jc..demoservice..sleep..random.value=2

The above file configures the container to enable service auditing (the jdbc.* properties) along with various other settings. For more information on configuration, refer to our documentation center → Using Configuration Variables Templates
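
The package also contains a Kubernetes variant of this template, demo-service-k8s.properties, which we will use later when deploying to Kubernetes. If you are curious, you can compare the two files now; the main difference should be the database endpoint, as we will see further on.

$ cd <SAG_HOME>/IntegrationServer/instances/default/packages/JcDemoService/resources
$ diff demo-service.properties demo-service-k8s.properties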

Building the image

You now have the package running in your local development environment. However, we want to run it as a container based on a proper immutable image. To do that we need two things:

  • a base image for the Microservices Runtime
  • a Dockerfile to merge our package with the base image

You can download a production-quality OCI image for our Microservices Runtime server from our dedicated Containers site. However, you will need valid Empower credentials; if you don’t have them, you can instead download the version available from Docker Hub. The latter does not get updated with the latest fixes, but it is feature complete, although you aren’t allowed to use it in production!

I have already provided a Dockerfile (shown below) and an example configuration variables file in the resources directory of the package that you have downloaded.

#  Make sure that you have already pulled the image from our containers.softwareag.com site
FROM sagcr.azurecr.io/webmethods-microservicesruntime:10.15.0.0
# Uncomment the following line if you don't have empower credentials
#FROM softwareag/webmethods-microservicesruntime:10.15
LABEL MAINTAINER="" \
	DESCRIPTION="" \
	COMMENT="example Micro service using webMethods" \
	CUSTOM="true" \
	SAG="true" \
	BUILD=build-msc-00002 \
	BUILD-TEMPLATE="demo service" \
	TYPE="Micro Service" 
#user root

# define exposed ports
	
EXPOSE 5555	
EXPOSE 9999	

# user to be used when running scripts
USER sagadmin

# files to be added to based image (includes configuration and package)
	
# If you want to add your own license key, then remove the comment and put your license file in the same directory as this Dockerfile
#ADD --chown=sagadmin ./resources/licenseKey.xml /opt/softwareag/IntegrationServer/config/licenseKey.xml
ADD --chown=sagadmin ./resources/demo-service.properties /opt/softwareag/IntegrationServer/application.properties
# We will use this file instead when deploying to k8s as we need to change the DB endpoint to a k8s service.
ADD --chown=sagadmin ./resources/demo-service-k8s.properties /opt/softwareag/IntegrationServer/application-k8s.properties

ADD --chown=sagadmin . /opt/softwareag/IntegrationServer/packages/JcDemoService

You can now build your own image using the following commands

$ cd <SAG_HOME>/IntegrationServer/instances/default/packages/JcDemoService/resources
$ docker build -f Dockerfile -t default/demo-service:1.0 ..
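
Assuming the build completes without errors, you can sanity-check that the image exists and that the labels from the Dockerfile were applied before running it, e.g.

$ docker images default/demo-service
$ docker image inspect --format '{{json .Config.Labels}}' default/demo-service:1.0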

Running locally

You can now spin up a container with the following command

$ docker run -it -p 9090:9090 -e admin_password=manage -e admin_port=9090 default/demo-service:1.0

Notice the use of the environment variables as defined by the configuration variables template.

Afterward, you can access the admin page via port 9090 and the password manage. You can also update your API request from before to use port 9090 instead of your local development environment.
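
For example, the earlier curl request becomes (only the port changes)

$ curl "http://localhost:9090/rad/jc.demoservice:api/v1/greet/john?name" \
 -u 'Administrator:manage'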

Using a docker compose file

The above command allows you to spin up a container quickly. However, in most situations you will need a number of containers to be able to test a real-world example. In fact, our example above is not quite right yet: if you look at the server log you will see many error messages. This is because we enabled service auditing via the configuration variables template, but we haven’t yet started a database to support it.

The best solution for local development is to use docker-compose, which allows us to start up a collection of containers and their required configuration based on a single yaml file and one command e.g.

version: '2'

services: 
  demo-service: 
    image: default/demo-service:1.0
    hostname: demo-service
    ports: 
        - "5555:5555"
        - "9999:9999"
    volumes: 
        - wmdb:/opt/softwareag/IntegrationServer/db
        - cache:/opt/softwareag/IntegrationServer/cache
    environment: 
        - JAVA_MIN_MEM=256m
        - JAVA_MAX_MEM=1024m
        - SECRET_PATH=/home/secrets
        - SAG_IS_HEALTH_ENDPOINT_ACL=Anonymous
        - SAG_IS_METRICS_ENDPOINT_ACL=Anonymous
        - SAG_IS_AUDIT_STDOUT_LOGGERS=ALL
        - admin_password=manage
        - admin_port=9090
    depends_on: 
        - "mysqldb"  
  mysqldb: 
    image: wm-mysql-enterprise-server-5.7.24:10.5
    hostname: mysqldb
    ports: 
        - "3306:3306"
    volumes: 
        - mysql_db:/var/lib/mysql
    environment: 
        - MYSQL_ROOT_PASSWORD=manage
        - MYSQL_DATABASE=default  
volumes:     
   wmdb:     
   cache:     
   mysql_db:

The above file first starts a MySQL database container named mysqldb, followed by our microservice, which connects to the database via the configuration variable ‘jdbc.audit.dbURL’. An example of this file is also provided in the JcDemoService package resources directory and you can run it with the following commands

$ cd <SAG_HOME>/IntegrationServer/instances/default/packages/JcDemoService/resources
$ docker-compose -f docker-compose.yaml up
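
If you prefer to run the stack in the background, add the -d flag and then check the status and logs of the microservice container, e.g.

$ docker-compose -f docker-compose.yaml up -d
$ docker-compose -f docker-compose.yaml ps
$ docker-compose -f docker-compose.yaml logs -f demo-service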

You will need to replace the MySQL image with a custom MySQL image of your own that has the webMethods schema already populated. Make sure that you update your configuration variables to use the correct database name, user and password. You can encrypt the password using our online encryptor provided by your webMethods runtime at

http://localhost:9090/WmAdmin/#/integration/dsp/microservices.dsp

I will be addressing how to create an image pre-populated with the required webMethods schema in a future tech article.

To shut down your running containers, use the command

$ docker-compose -f docker-compose.yaml down

This will stop and remove the containers.
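
Note that the named volumes (wmdb, cache and mysql_db) survive a plain down command. If you want a completely clean slate, remove them as well with

$ docker-compose -f docker-compose.yaml down -v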

Detailed documentation for building your own docker-compose files can be found here docker docs - docker compose

Deploy to a Kubernetes environment.

Now, you won’t be using either of the above techniques to host your microservices in production; you will probably use some derivative of Kubernetes, such as OpenShift or Azure Kubernetes Service (AKS).

In this case, you will need to deploy your containers as either a Deployment or a StatefulSet. We recommend that you use a Deployment for your microservices, as they are stateless and hence easier to configure, manage and auto-scale. The obvious downside is that you cannot expect to store state inside the container: any stored data cannot be guaranteed, since the pods instantiated by your deployment will come and go, nor can you guarantee that all requests related to the same transaction will be processed by the same pod!

In addition, it is generally not recommended to let pods from different deployments communicate with one another directly, again because individual pods get created and destroyed at will and hence their IP addresses cannot be trusted. Instead, Kubernetes provides both Services and Ingresses to manage accessibility and communication.

This is not a guide to Kubernetes and I will not go into too much detail about the following yaml files that we will use for deploying our microservice. However, you should be able to adapt the files to your own needs after referring to the above links.

Running Kubernetes

The good news is that if you have already installed either Docker Desktop or Rancher Desktop then you already have a local Kubernetes server. Alternatively, you could use AWS or Azure AKS to run your containers in the cloud, but you will still need to install the Kubernetes client tool kubectl, which allows you to interact with your Kubernetes environment.

Run the following command from the command line to check that you have kubectl available and that it is configured to connect to a Kubernetes environment.

$ kubectl config view

I won’t go into the details about configuring your environment here. There are guides available online and if you have installed either Docker or Rancher then it will be configured by default to communicate with the embedded k8s cluster. Refer to this cheatsheet for the kubectl tool for a list of the available commands.

Keep things tidy with a namespace

Kubernetes allows you to keep your projects separate by specifying a namespace. We will do that by creating a namespace called webmethods and adding all of our assets to it.

apiVersion: v1
kind: Namespace
metadata:
  name: webmethods
  labels:
    description: demo-microservice-in-k8s

Run the command(s)

$ cd <SAG_HOME>/IntegrationServer/instances/default/packages/JcDemoService/resources/k8s
$ kubectl apply -f webmethods-namespace.yaml
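
You can confirm that the namespace has been created with

$ kubectl get namespace webmethods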

Create a deployment for the microservice (Deployment)

We will ensure that our demo-service microservice is stateless and deploy it as a Deployment

apiVersion: apps/v1
kind: Deployment
metadata:
  name: wm-demo-service
  labels:
    app: wm-demo-service
    deploymentId: demo-service
    serviceId: demo-service
    serviceType: deployments
  namespace: webmethods
  annotations:
    monitoring: "false"
spec:
  replicas: 2
  selector:
    matchLabels: 
      app: wm-demo-service
  template:
    metadata:
      labels:
        app: wm-demo-service
    spec:
      hostname: 
      restartPolicy: Always
      containers: 
      - name: demo-service
        image: default/demo-service:1.0
        ports: 
        - containerPort: 5555
          name: admin-port
        - containerPort: 9999
          name: diag-port
        livenessProbe:
          httpGet:
            path: /invoke/wm.server/ping
            port: 5555
          initialDelaySeconds: 120
          timeoutSeconds: 10
        volumeMounts: 
        - mountPath: /opt/softwareag/IntegrationServer/db
          name: wmdb
        - mountPath: /opt/softwareag/IntegrationServer/cache
          name: cache
        env:
        - name: "SAG_IS_CONFIG_PROPERTIES"
          value: "application-k8s.properties"
        - name: "JAVA_MIN_MEM"
          value: "256m"
        - name: "JAVA_MAX_MEM"
          value: "1024m"
        - name: "SECRET_PATH"
          value: "/home/secrets"
        - name: "SAG_IS_HEALTH_ENDPOINT_ACL"
          value: "Anonymous"
        - name: "SAG_IS_METRICS_ENDPOINT_ACL"
          value: "Anonymous"
        - name: "SAG_IS_AUDIT_STDOUT_LOGGERS"
          value: "ALL"
        - name: "admin_password"
          value: "manage"
        - name: "admin_port"
          value: "5555"
      volumes: 
      - name: wmdb
        emptyDir: {}
      - name: cache
        emptyDir: {}

Run the following commands to deploy the microservice

$ cd <SAG_HOME>/IntegrationServer/instances/default/packages/JcDemoService/resources/k8s
$ kubectl apply -f demo-service-deployment.yaml

Afterward, you can list the running pods with the following command

$ kubectl get pods --namespace webmethods -o wide

You should see two pods as we configured the property replicas to 2 in the above deployment. The cool thing is that you can scale the number of pods up and down with a single command; the following command would increase the number of pods to 3.

$ kubectl scale --namespace webmethods --replicas=3 deployment/wm-demo-service
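
You can then watch the additional pod come up by re-running the get pods command above, or by checking the rollout status of the deployment

$ kubectl rollout status --namespace webmethods deployment/wm-demo-service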

Remember that these pods are stateless: even though we have declared volumes for the cache and embedded database, these volumes are not shared and exist only for the duration of the pod’s life.

Create a stateful deployment for MySQL (StatefulSet)

Our docker-compose file above also referenced a MySQL database. That obviously needs to manage state, so we will deploy it as a StatefulSet using the following configuration. As with docker-compose, you will need to substitute your own MySQL image containing the webMethods schema.

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: wm-storage
  labels:
    app: wm-storage
    deploymentId: demo-service
    serviceId: mysqldb
    serviceType: statefulsets
  namespace: webmethods
  annotations:
    monitoring: "false"
spec:
  replicas: 1
  serviceName: wm-storage-server
  selector:
    matchLabels: 
      app: wm-storage
  template:
    metadata:
      labels:
        app: wm-storage
    spec:
      hostname: 
      restartPolicy: Always
      containers: 
      - name: mysqldb
        image: iregistry.eur.ad.sag/pmm/mysqlwm:8.0
        ports: 
        - containerPort: 3306
          name: jdbc-conn
        volumeMounts: 
        - mountPath: /var/lib/mysql
          name: mysql-db
        env: 
        - name: "MYSQL_ROOT_PASSWORD"
          value: "manage"
        - name: "MYSQL_DATABASE"
          value: "default"
  volumeClaimTemplates: 
    - metadata:
        name: mysql-db
      spec:
        accessModes:
          - ReadWriteOnce
        resources:
          requests:
            storage: 1Gi
        storageClassName:

Run the following commands to deploy the MySQL StatefulSet

$ cd <SAG_HOME>/IntegrationServer/instances/default/packages/JcDemoService/resources/k8s
$ kubectl apply -f mysqldb-stateful-set.yaml

We have configured the number of replicas to be 1, i.e. only one pod will ever be instantiated; if it fails it will get restarted and no data will be lost, as we are using a persistent volume for the database files. We could increase the number of replicas, in which case each pod would have its own database files, which in turn requires that the application itself is aware of the other nodes and replicates the data. However, this is outside the scope of this article as it would mean delving into clustering MySQL, which I don’t want to do here.
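
You can also check that the persistent volume claim generated from the volumeClaimTemplates section above has been bound with

$ kubectl get pvc --namespace webmethods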

Creating an internal service for database access

We need to create a service so that the microservice can access the database safely. The service acts as both a proxy and a load balancer so that the pods do not have to be addressed individually. This is also why it is important that any state is shared if you have multiple pods, as you cannot guarantee which pod will process a given request.

The following creates a service of type ClusterIP, which can only be used internally to allow pods from one deployment to reference pods in another. It provides load balancing to ensure that the load is distributed equally across the target pods.

apiVersion: v1
kind: Service
metadata:
  labels:
    app: wm-storage
    deploymentId: demo-service
  name: wm-clusterip-storage
  namespace: webmethods
spec:
  selector:
    app: wm-storage
  type: ClusterIP
  ports: 
  - name: jdbc
    port: 3306
    protocol: TCP
    targetPort: 3306

Run the following commands to create the ClusterIP service

$ cd <SAG_HOME>/IntegrationServer/instances/default/packages/JcDemoService/resources/k8s
$ kubectl apply -f clusterip-storage-service.yaml
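
If you want to verify that the service name resolves from inside the cluster, you can start a throwaway pod and look it up. This is just a quick check and assumes that the busybox image is available to your cluster; the pod name dns-test is purely illustrative.

$ kubectl run dns-test --namespace webmethods --rm -it --restart=Never \
    --image=busybox -- nslookup wm-clusterip-storage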

Don’t worry, we don’t have to do anything with our deployment other than kill/restart the pods; we already included the alternative endpoint for the database connection at build time. We included two properties files in the Dockerfile: one for standard deployment, where we reference the container name ‘mysqldb’, and a second file that references the service name ‘wm-clusterip-storage’ as above. The snippet can be found at line 29 of the Dockerfile.

ADD --chown=sagadmin ./resources/demo-service.properties /opt/softwareag/IntegrationServer/application.properties
# We will use this file instead when deploying to k8s as we need to change the DB endpoint to a k8s service.				
ADD --chown=sagadmin ./resources/demo-service-k8s.properties /opt/softwareag/IntegrationServer/application-k8s.properties				

How do we specify which file to use? By default the first one is used. However, setting the environment variable ‘SAG_IS_CONFIG_PROPERTIES’ allows us to choose a different file, and that is what we did at line 45 of the above Deployment for our microservice.

- name: "SAG_IS_CONFIG_PROPERTIES"
  value: "application-k8s.properties"

Thus there is nothing to do, other than to kill the above pods and then see them respawn like magic and connect to the database.

$ kubectl get pods --namespace webmethods
NAME                   READY     STATUS    RESTARTS   AGE
demo-service-3476088249-w66jr   1/1       Running   0          16m
demo-service-3476088240-x67pq   1/1       Running   0          16m
$ kubectl delete pods --namespace webmethods demo-service-3476088249-w66jr
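
You can then watch the replacement pod get scheduled with

$ kubectl get pods --namespace webmethods -w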

Creating an external service for API access

So far, so good. However, we have no way of actually accessing these microservices from outside of Kubernetes. To do this we need to create either a Service that provides an external IP address or an Ingress.

The easiest option is to create a NodePort service, which opens a port on every node of the cluster. The port must be available on the machine, cannot be reused by any other service and must be in the range 30000-32767.

apiVersion: v1
kind: Service
metadata:
  labels:
    app: wm-demo-service
    deploymentId: demo-service
  name: wm-nodeport-demo-service
  namespace: webmethods
spec:
  selector:
    app: wm-demo-service
  type: NodePort
  ports: 
  - name: admin-port
    port: 5555
    protocol: TCP
    targetPort: 5555
    nodePort: 30055

Run the following commands to create the NodePort service

$ cd <SAG_HOME>/IntegrationServer/instances/default/packages/JcDemoService/resources/k8s
$ kubectl apply -f nodeport-demo-service-service.yaml

You will then be able to access the API via port 30055

$ curl "http://localhost:30055/rad/jc.demoservice:api/v1/greet/john?name" \
 -u 'Administrator:manage'

To list all of the services

$ kubectl get services -n webmethods
NAME                       TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
wm-clusterip-storage       ClusterIP   10.97.218.140   <none>        3306/TCP         11m
wm-nodeport-demo-service   NodePort    10.107.60.164   <none>        5555:30055/TCP   2m15s

You could instead use a LoadBalancer service, which would provide you with a dedicated IP address and more advanced routing features. However, your Kubernetes platform will have to be configured appropriately for it to work.

A word about Ingresses

You may have heard about Ingresses and wonder how they differ from a Service.

An Ingress provides another level of abstraction on top of a service, in that it allows a specific platform to impose certain rules and/or apply a particular security or routing service. Often security policies will block services from public access, and an Ingress then becomes mandatory in order to govern access. A good example of this is Microsoft Azure, which provides a public access component called an Application Gateway; your organisation may restrict internet access to going through it e.g.

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: wm-ingress-https-app-gateway
  namespace: webmethods
  annotations:
    kubernetes.io/ingress.class: azure/application-gateway
    appgw.ingress.kubernetes.io/backend-protocol: "https"
spec:
  rules:
  - http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: wm-service-package-manager
            port:
              number: 80

The annotations above are important as they dictate the implementation-specific properties that have to be configured. You cannot configure an Ingress without them as an Ingress itself is just an abstraction and not an implementation.
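
If you do want to try this out on such a platform, applying and checking the Ingress follows the same pattern as the other resources. Note that the file name ingress-app-gateway.yaml below is purely illustrative and is not part of the package.

$ kubectl apply -f ingress-app-gateway.yaml
$ kubectl get ingress --namespace webmethods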

Monitoring your microservices

Microservices, somewhat ironically, cannot be micro-managed, i.e. there are too many of them to manage individually. Instead, we need to automate many of the administration and monitoring activities and leverage common practices as much as possible to avoid bespoke tooling for different types of microservice.

That is why the webMethods Microservices Runtime provides extended administrative APIs as well as liveness and health probes that are compatible with tooling common to a containerised architecture, such as Prometheus and Grafana.

The liveness probe

## ping
$ curl "http://localhost:5555/invoke/wm.server/ping" \
     -H 'Accept: application/json'
{"date":"Mon Dec 05 10:51:52 CET 2022"}

The health API

## health Request
$ curl "http://localhost:5555/health" 
{
  "Adapters": {
    "JDBCAdapter": {
      "Connections": [
        {
          "alias": "jc.api.helloworld_.services.priv.jdbc:conn",
          "packageName": "JcHelloWorld",
          "state": "disabled"
        },
        {
          "alias": "jc.traveltours._priv.jdbc:conn",
          "packageName": "JcTravelTours",
          "state": "disabled"
        }
      ]
    },
    "status": "UP"
  },
  "Diskspace": {
    "free": "346488680448",
    "inUse": "653752283136",
    "status": "UP",
    "threshold": "100024096358",
    "total": "1000240963584"
  },
  "JDBC": {
    "ISDashboardStats": {
      "avail": "5",
      "poolAlias": "Embedded Database Pool",
      "size": "5",
      "status": "UP"
    },
    "ISInternal": {
      "avail": "1",
      "poolAlias": "Embedded Database Pool",
      "size": "1",
      "status": "UP"
    },
    "Xref": {
      "avail": "1",
      "poolAlias": "Embedded Database Pool",
      "size": "1",
      "status": "UP"
    },
    "status": "UP"
  },
  "Memory": {
    "freemem": "330440288",
    "maxmem": "1073741824",
    "status": "UP",
    "threshold": "100348723",
    "totalmem": "1003487232"
  },
  "ServiceThread": {
    "avail": "70",
    "inUse": "5",
    "max": "75",
    "status": "UP",
    "threshold": "7"
  },
  "UMAliases": {
    "IS_UM_CONNECTION": {
      "status": "DOWN"
    },
    "status": "DOWN"
  },
  "status": "DOWN"
}

Metrics API

Compatible with Prometheus

## Metrics API for use with Prometheus
$ curl "http://localhost:5555/metrics"
# HELP sag_is_uptime_seconds Uptime for Server
# TYPE sag_is_uptime_seconds counter
sag_is_uptime_seconds{} 848845 1670233586625

# HELP sag_is_server_threads The number of server threads currently in use.
# TYPE sag_is_server_threads gauge
sag_is_server_threads{} 4 1670233586625
...

Add the following environment variables to the container configuration if you want to be able to invoke either the health or metrics API anonymously without requiring admin rights.

-e SAG_IS_HEALTH_ENDPOINT_ACL=Anonymous -e SAG_IS_METRICS_ENDPOINT_ACL=Anonymous

Conclusion

In this article, we looked at how to build and deploy a simple webMethods microservice. In a future article, we will look at how we could take this further and take advantage of a webMethods adapter to simplify back-end connectivity.


This article is part of the TECHniques newsletter blog - technical tips and tricks for the Software AG community. Subscribe to receive our quarterly updates or read the latest issue.


Thanks for publishing this John. This blog covers everything and newbies can pick it up from here.


Looking forward to this future tech article @John_Carter4

thanks @John_Carter4 I noticed that in my environment which I built according to these excellent instructions the ping and health are successfully invoked on port 30055 and not the default 5555. Is that a typo?


I’m unable to find the reference you refer to, but 30055 is the service and not the individual pod so that would be wrong
regards,
John