Running MSR with multiple replicas in Kubernetes

Product: MSR 10.11

We are trying to run replicas of MSR 10.11 UBI using a custom image and a number of customized properties from a ConfigMap. When we start more than one MSR pod in the namespace, the server behaves strangely: the login page does not come up, and even the Administrator login no longer works.
These are the properties we are using in our deployment file:
env:
  - name: SAG_IS_CONFIG_PROPERTIES
    value: application.properties
volumeMounts:
  - name: license-config
    mountPath: /opt/softwareag/IntegrationServer/config/licenseKey.xml
    subPath: licenseKey.xml
    readOnly: false
  - name: app-prop
    mountPath: /opt/softwareag/IntegrationServer/application.properties
    subPath: application.properties
    readOnly: false
  - name: server-config
    mountPath: /opt/softwareag/IntegrationServer/config/server.cnf
    subPath: server.cnf
    readOnly: false
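
For context, the corresponding volumes section on the pod spec looks roughly like the sketch below; the ConfigMap names (msr-license, msr-app-prop, msr-server-config) are placeholders, not the exact names from our manifests:

volumes:
  - name: license-config
    configMap:
      name: msr-license          # placeholder ConfigMap name
  - name: app-prop
    configMap:
      name: msr-app-prop         # placeholder ConfigMap name
  - name: server-config
    configMap:
      name: msr-server-config    # placeholder ConfigMap name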

Any pointers to where the issue could be? With the Kubernetes config, the image, or the ConfigMap?

Error messages / full error message screenshot / log file: the attached screenshot shows the server behaviour after more than one pod is started. A popup comes up instead of the login page, and even the Administrator login doesn't work after this.

Update: we also tried to run replicas of MSR with the base 10.11.0.2-ubi image from Software AG, and we are facing the same problem there as well.

AnjniG,
With one pod instance you don't see any issue logging in to the Admin console, but with two pod instances you do, so most probably it is about your ingress configuration in the k8s environment. You may use HAProxy or a NodePort as the ingress layer. Check those configs and enable session stickiness so that your browser sends requests to the same IS instance it connected to first. Did you try logging in to the pod instance via a terminal and checking the health of the running server?
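
For example, if you happen to use the NGINX ingress controller, cookie-based stickiness can be enabled roughly like this (host name and service name below are placeholders, adjust to your setup):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: msr-ingress
  annotations:
    nginx.ingress.kubernetes.io/affinity: "cookie"               # enable sticky sessions
    nginx.ingress.kubernetes.io/session-cookie-name: "msr-route" # cookie used to pin the client
spec:
  rules:
    - host: msr.example.com                                      # placeholder host
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: msr-service                                # placeholder service name
                port:
                  number: 5555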

Regards
Senthil

It may help if you can share all your Kubernetes manifests rather than just a snippet. As replicas tend to be identical copies, it would be unusual for them to behave differently, so the issue is likely somewhere else in the k8s setup, which is why sharing all the manifests would help.

sag-msr-k8s-manifest_files-1.zip (988 Bytes)

Thanks for your messages. I am writing on behalf of Anjni, as we both work in the same team. We also tried the 10.15 image from containers.softwareag.com and used the attached k8s manifest files.

Please note that we have replaced some values, as they are specific to the client.
After changing the replicas to 2, the server, which had been behaving normally and showing the login page, starts showing a popup like the one in the screenshot attached to the post, and Administrator/manage does not work on this popup.

Thanks,
Kuldeep Gupta

If you have just one pod, all the traffic is forwarded to one single container, but if you have multiple pods, the K8s Service load-balances the traffic across them.
When you create a user session in one container, there is no replication mechanism to propagate that session to the other running containers, so you have to authenticate again and again.

To solve the issue, you can try configuring session affinity in the K8S service, following this example:

apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  selector:
    app: my-app
  ports:
    - protocol: TCP
      port: 5555
      targetPort: 5555
  sessionAffinity: ClientIP
  sessionAffinityConfig:
    clientIP:
      timeoutSeconds: 10800 # Or any other suitable duration

But you need to be very cautious when doing this, because it changes the load-balancing behaviour.

Here’s what I advise you to do:

  • create one or several custom ports to deal with your integrations (serving API calls, for instance), exposed through K8s Services that have no session affinity at all (see the sketch after this list)
  • restrict port 5555 to the admin console; here you can configure session affinity if you want
  • restrict usage of the admin console in production to monitoring use cases. Everything related to deployment and configuration should be fully automated: you should never use the console to deploy a package, configure a resource or change a server setting
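
As an illustration of the first point, a Service for a dedicated integration port could look roughly like this; the port number 5556 and the names are assumptions, not values taken from your manifests:

apiVersion: v1
kind: Service
metadata:
  name: my-service-api            # separate Service for integration traffic
spec:
  selector:
    app: my-app
  ports:
    - protocol: TCP
      port: 5556                  # assumed custom HTTP port defined in the MSR
      targetPort: 5556
  # no sessionAffinity here: requests are spread across all replicas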