Universal Messaging 10.15 Cluster on Docker or OpenShift

We have installed Universal Messaging on OpenShift and I am trying to create a cluster of two pods using the UM command line tool, but I am getting the error below in the logs. What could the security issue be? Is it not possible to create a cluster among UM pods?

We have initialised CreateCluster with: {rnames=nsp://{podhost1}:9000,nsp://{podhost2}:9000, convertlocal=false, clustername=universalmessaging-dev-cluster}
Security error. [Request not supported on Server]
Throwable occurred: com.pcbsys.nirvana.client.nSecurityException: Request not supported on Server
at com.pcbsys.nirvana.nAdmin.nAdminSession.createCluster(nAdminSession.java:879)
at com.pcbsys.nirvana.nAdminAPI.nClusterNode.create(nClusterNode.java:477)
at com.pcbsys.nirvana.nAdminAPI.nClusterNode.create(nClusterNode.java:212)
at com.softwareag.um.tools.cluster.CreateCluster.execute(CreateCluster.java:54)
at com.softwareag.um.tools.UMToolCommon.executeTool(UMToolCommon.java:97)
at com.softwareag.um.tools.UMToolCommon.executeTool(UMToolCommon.java:80)
at com.softwareag.um.tools.UMTools.main(UMTools.java:108)

Please share the command you issued to create the cluster.
I’m also interested in the OpenShift yaml descriptors (not the secrets, of course).

I am passing the startup command as an env variable, but I also tried running the same command separately from the command line once both pods were available. I get the same security error in both cases.

env:
- name: REALM_NAME
  value: umserver2
- name: STARTUP_COMMAND
  value: "runUMTool.sh CreateCluster -clustername=universalmessaging-dev-cluster -convertlocal=false -rnames=nsp://{podhost1}:9000,nsp://{podhost2}:9000"
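
For reference, the manual attempt looked roughly like this (a sketch: the pod name umserver2 and the realm URL placeholders are assumptions based on the values above, and runUMTool.sh is assumed to be on the path inside the pod):

# Run the clustering tool manually from inside one of the pods
# (oc on OpenShift; kubectl exec works the same way)
oc exec -it umserver2 -- runUMTool.sh CreateCluster \
    -clustername=universalmessaging-dev-cluster \
    -convertlocal=false \
    -rnames=nsp://{podhost1}:9000,nsp://{podhost2}:9000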

UM clustering in CaaS environments (such as OpenShift) is not officially supported by SAG, so you’re entering a grey zone by trying this.

I was technically able to make it work with:

  • a StatefulSet for the UM pods, with persistence achieved through a volume claim template (each pod has its own storage space)
  • a headless service for each pod, which makes each pod addressable by the clustering tool and the UM clients
  • a job that runs the clustering command

If you want to continue what you’ve started, even though it’s not supported, then I can share my deployment descriptors.

In a CaaS environment, a UM pod with a proper liveness probe configuration can be restarted quickly and automatically. Downtime would be under 30 seconds.
If your integration runtime is stateful, then you should use the client-side queue (CSQ) to cover for UM downtimes.
If your integration runtime is stateless, then you should be careful with the CSQ.

In the end, it depends on what you’re trying to achieve with this clustering, and the integrations this UM is meant to support.

I have also created the StatefulSets and two services to address them, but I still could not create the cluster.

Yes, I have to continue setting up this cluster on OpenShift, so please share your deployment descriptors with me; I will make the necessary changes and try.

Thank you for your support.

This exception is caused by a licence restriction: clustering is disabled, hence the log message “Request not supported on Server”.

Please check that the licence you are using includes the clustering functionality.
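
One quick way to verify is to grep the licence file mounted in the pod (a sketch: the pod name umserver-0 and the licence path are assumptions matching the descriptors shared in this thread):

# Look for cluster-related entries in the mounted licence file
kubectl exec umserver-0 -- grep -i cluster \
    /opt/softwareag/UniversalMessaging/server/umserver/licence/licence.xml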

Regards

Joshua


I can confirm Joshua’s verdict, as I get the same error as you when trying to create the cluster with a UM A/P license.

Here are my descriptors.
And I repeat: Clustering in K8S and OpenShift is NOT SUPPORTED by Software AG at the time of writing this post.

StatefulSet:

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: umserver
  labels:
    app: umserver
spec:
  serviceName: umserver-h
  selector:
    matchLabels:
      name: umserver-pod
      app: umserver
  replicas: 3
  template:
    metadata:
      name: umserver-pod
      labels:
        name: umserver-pod
        app: umserver
      annotations:
        prometheus.io/scrape: "true"
    spec:
      securityContext:
        fsGroup: 1724
      containers:
      - name: umserver-container
        image: sagcr.azurecr.io/universalmessaging-server:10.15
        volumeMounts:
        - mountPath: /opt/softwareag/UniversalMessaging/server/umserver/licence
          name: licenses
        - mountPath: /opt/softwareag/UniversalMessaging/server/umserver/data
          name: um-data-directory
        - mountPath: /opt/softwareag/common/conf
          name: um-conf-directory
        ports:
        - containerPort: 9000
          name: nsp
        - containerPort: 9200
          name: metrics
        resources:
          limits:
            cpu: 500m
            memory: 1000Mi
          requests:
            cpu: 250m
            memory: 250Mi
        env:
        - name: REALM_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: INIT_JAVA_MEM_SIZE
          value: '512'
        - name: MAX_JAVA_MEM_SIZE
          value: '900'
        livenessProbe:
          httpGet:
            port: 9000
            path: /health/
          failureThreshold: 2
          initialDelaySeconds: 60
          periodSeconds: 5
          successThreshold: 1
          timeoutSeconds: 1
        readinessProbe:
          httpGet:
            port: 9000
            path: /health/
          initialDelaySeconds: 10
          periodSeconds: 1
          failureThreshold: 50
      imagePullSecrets:
      - name: sagregcred
      volumes:
      - name: licenses
        secret:
          secretName: licenses
          items:
          - key: um-license
            path: licence.xml
  volumeClaimTemplates:
    - metadata:
        name: um-data-directory
      spec:
        accessModes:
          - ReadWriteOnce
        storageClassName: managed-csi-premium
        resources:
          requests:
            storage: 5Gi
    - metadata:
        name: um-conf-directory
      spec:
        accessModes:
          - ReadWriteOnce
        storageClassName: managed-csi-premium
        resources:
          requests:
            storage: 1Gi

The storage class names here are Azure ones; they need to be changed to match those supported by your platform.
I am using the RWO access mode since each pod has its own dedicated storage space.
Resource allocations may also need to be tuned, depending on your requirements.
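
To see which storage classes your platform offers before editing the volume claim templates, you can list them with plain kubectl:

# List the storage classes available on the cluster
kubectl get storageclass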

Services:

apiVersion: v1
kind: Service
metadata:
  name: umserver-0
spec:
  clusterIP: None
  ports:
    - port: 9000
      name: nsp
      targetPort: 9000
      protocol: TCP
  selector:
    statefulset.kubernetes.io/pod-name: umserver-0
---
apiVersion: v1
kind: Service
metadata:
  name: umserver-1
spec:
  clusterIP: None
  ports:
    - port: 9000
      name: nsp
      targetPort: 9000
      protocol: TCP
  selector:
    statefulset.kubernetes.io/pod-name: umserver-1
---
apiVersion: v1
kind: Service
metadata:
  name: umserver-2
spec:
  clusterIP: None
  ports:
    - port: 9000
      name: nsp
      targetPort: 9000
      protocol: TCP
  selector:
    statefulset.kubernetes.io/pod-name: umserver-2
---
apiVersion: v1
kind: Service
metadata:
  name: umserver-h
spec:
  clusterIP: None
  ports:
    - port: 9000
      name: nsp
      targetPort: 9000
      protocol: TCP
  selector:
    app: umserver

The umserver-0, umserver-1 and umserver-2 services are there so that the realms can talk to each other, with each pod individually addressable.
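
If name resolution is in doubt, a quick test from a throwaway pod confirms that each per-pod service resolves (a sketch: the busybox image and the dns-test pod name are arbitrary choices):

# Resolve one of the per-pod services from inside the cluster
kubectl run dns-test --rm -it --restart=Never --image=busybox \
    -- nslookup umserver-0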

Job:

kind: Job
apiVersion: batch/v1
metadata:
  name: universalmessaging-job-cluster
spec:
  template:
    spec:
      containers:
      - name: create-cluster
        image: sagcr.azurecr.io/universalmessaging-tools:10.15
        command: ["sh", "-c"]
        args:
          - >
            runUMTool.sh CreateCluster -clustername=umcluster -convertlocal=true
            -rnames=nsp://umserver-0,nsp://umserver-1,nsp://umserver-2
        resources:
          limits:
            cpu: 500m
            memory: 500Mi
          requests:
            cpu: 250m
            memory: 250Mi
      imagePullSecrets:
      - name: sagregcred
      restartPolicy: Never
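
Once the job has completed, its logs show whether CreateCluster succeeded:

# Inspect the output of the clustering job
kubectl logs job/universalmessaging-job-cluster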

To create the sagregcred secret:

kubectl create secret docker-registry sagregcred \
        --docker-server=sagcr.azurecr.io \
        --docker-username=${SAG_ACR_USERNAME} \
        --docker-password=${SAG_ACR_PASSWORD} \
        --docker-email=${SAG_ACR_EMAIL_ADDRESS}

To create the licenses secret:

kubectl create secret generic licenses \
        --from-file=um-license=${UM_LICENSE_FILE_PATH}
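
To confirm both secrets exist in the target namespace before deploying:

kubectl get secret sagregcred licenses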

Thank you for your help. Yes, the dev license did not have clustering enabled.


Thank you for your help. After changing the license and creating separate services for each pod, I was able to create the cluster.