AppMesh User Guide


webMethods AppMesh represents the convergence of service mesh, API management, and integration technologies. AppMesh extends the service mesh platform by providing application awareness through "APIfication" of microservices, that is, by giving each microservice an API face. With this, you can enable reuse, governance, consumption, and landscape management, and, importantly, drive API-led integration of these microservices.


The following sections explain how to use webMethods AppMesh.

AppMesh installation

Currently, there are two ways to install AppMesh.

  1. Pull the API Gateway and Microgateway Docker images.
    • The API Gateway image is available at store/softwareag/apigateway-trial: and the Microgateway image at store/softwareag/microgateway-trial:
  2. Patch your 10.5 Fix 2 installation of API Gateway and Microgateway. This is done using Software AG Update Manager.
    • The API Gateway patch requires 10.5 Fix 2. You get the required patch when you provide the patch key 5402989_YAI-15157_6.
    • The Microgateway patch requires 10.5 Fix 2. You get the required patch when you provide the patch key 00000000_YAM-000000_3.
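For the image-based option, the pull commands can be sketched as below. The image tags are omitted because this guide does not state them; substitute the version tag listed on the respective Docker store page.

```shell
# Pull the trial images named in this guide.
# <tag> is a placeholder -- the actual tag is not stated here.
docker pull store/softwareag/apigateway-trial:<tag>
docker pull store/softwareag/microgateway-trial:<tag>
```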

AppMesh configuration

In 10.5, AppMesh works with Istio running over Kubernetes environments.

To connect AppMesh to the Istio service mesh, you must:

  1. Configure AppMesh to connect to the Kubernetes environment by pointing to your Kubernetes config file.
  2. Provide the additional configurations that are required by AppMesh in API Gateway.

This configuration can be provided in API Gateway under Administration → External Accounts → Service Mesh.

Connecting to the Service Mesh

To connect API Gateway to the service mesh that is present over the Kubernetes environment:

  • Provide the Kubernetes configuration (kubeconfig) file.
  • Currently, AppMesh supports only one cluster from the kubeconfig file.
  • Once the file is loaded, API Gateway shows the cluster endpoint and the cluster name.

A sample kubeconfig looks like this:

apiVersion: v1
kind: Config
clusters:
- name: "Production_k8_cluster"
  cluster:
    server: "<ClusterURL>"
    certificate-authority-data: ""
users:
- name: "user"
  user:
    token: ""
contexts:
- name: "Production_k8_cluster"
  context:
    user: "user"
    cluster: "Production_k8_cluster"
current-context: "Production_k8_cluster"
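As a quick sanity check before loading the file, the current context and cluster endpoint can be read straight out of such a file with standard tools. This is a minimal sketch, assuming the quoted single-line style shown above; the file name kubeconfig.yaml and the server URL are placeholders, not values from this guide.

```shell
# Write a sample kubeconfig (same shape as the example above), then extract
# the fields API Gateway displays: the current context and the cluster endpoint.
cat > kubeconfig.yaml <<'EOF'
apiVersion: v1
kind: Config
clusters:
- name: "Production_k8_cluster"
  cluster:
    server: "https://example.invalid:6443"
current-context: "Production_k8_cluster"
EOF

context=$(sed -n 's/^current-context: *"\(.*\)"/\1/p' kubeconfig.yaml)
server=$(sed -n 's/.*server: *"\(.*\)"/\1/p' kubeconfig.yaml)
echo "context=${context} server=${server}"
```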

Click the delete icon to delete the configuration.

Additional AppMesh configurations

The following table gives an overview of the fields that can be provided as part of the AppMesh configuration, and their default values.

API Gateway URL
  The URL that the Microgateway sidecar uses to connect to API Gateway and fetch the API.
  The default value is taken from the load balancer URLs configured under Administration → Load balancer, in the following precedence:
    1. The first HTTPS load balancer URL, if configured, or
    2. The first HTTP load balancer URL, if configured, or
    3. The default host name with 5555 as the default port.

API Gateway Username
  The username to use while connecting to API Gateway.

API Gateway Password
  The password to use while connecting to API Gateway.

Microgateway image
  The Docker Trusted Registry location of the Microgateway image. A sample image is present at store/softwareag/microgateway-trial:
  You can also create your own Microgateway image. A Microgateway installation with the test patch installed contains a file that can be executed with the following arguments to create a custom image:
    -p  The Microgateway port
    -t  The Microgateway image tag
    -r  The Docker repository to push the Microgateway image to
  For example: -p 7070 -t myimg -r mycompany/api-management/apigateway-dev

Microgateway Port
  The port on which the Microgateway that is deployed as the sidecar to the pod listens. This is the port that is used to build the Microgateway.

Namespaces
  The namespaces to use from the Kubernetes environment while connecting to the service mesh environment.
  The default value is the 'default' namespace.

Click the save icon to save the changed configurations.

AppMesh Visualization

Clicking the tab lists all the services and deployments that are present in the namespaces configured in the AppMesh configuration.

For every service in the Kubernetes environment that exposes a NodePort, the respective deployments are listed here.

In the current (10.5) version, only services of the NodePort type are displayed.
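One way to check which services would qualify is to filter the TYPE column of a `kubectl get services` listing. The sketch below uses a canned listing so the filter can be demonstrated offline; against a live cluster you would pipe `kubectl get services` into the same awk filter. The service names in the sample are illustrative, not from this guide.

```shell
# Filter a `kubectl get services` listing down to NodePort services.
# The listing is canned sample data standing in for live cluster output.
listing='NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)        AGE
kubernetes   ClusterIP   10.0.0.1     <none>        443/TCP        9d
bookinfo     NodePort    10.0.0.42    <none>        80:30080/TCP   2d'

# Keep the header row and any row whose TYPE column is NodePort.
echo "$listing" | awk 'NR == 1 || $2 == "NodePort"'
```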

Managing a Service Mesh deployment and APIfy

The View details action gives you an overview of what is present in the deployment.

The basic information section gives you the details of the internal and external endpoints that are exposed in the deployment.

The APIfy action creates an API in API Gateway with the same name as the deployment, to which the user can then add the policies that are required for the Microgateway.

Once APIfy is done, the user can browse to the created API from the API Details section.

The created API has only one resource, '/', and its routing endpoint is the first endpoint of the native microservice. The user needs to update the API with proper resources and policies, just as for a normal API in API Gateway.

The user can also view the service mesh ingress policies that are present in the service mesh.


The Deploy action deploys the sidecar Microgateway image that is configured in the AppMesh configuration.

When the Deploy action is performed for the first time, API Gateway provisions the Microgateway image provided in the AppMesh configuration to the deployment, and the deployment is pushed to the service mesh environment again. This causes the deployment to be recreated in the service mesh.

For subsequent Deploy executions, there are no changes to the deployment in the service mesh environment unless the AppMesh configurations under Administration → External accounts have changed.

When a policy is added to the API or the API contract is changed, clicking Deploy does not propagate those changes into the Microgateway. In these cases, the pod needs to be restarted for the changed configuration to take effect. There are multiple ways to restart the pods in the environment; a simple way is to scale the deployment down and back up.

Sample commands look like this:

kubectl scale deployment <deployment-name> --replicas=0                      # Scale down

kubectl scale deployment <deployment-name> --replicas=<number-of-replicas>   # Scale up
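On Kubernetes 1.15 and later, a single rollout restart achieves the same pod recreation without touching the replica count; the deployment name below is a placeholder.

```shell
# Recreate the pods of a deployment in place (Kubernetes 1.15+).
kubectl rollout restart deployment <deployment-name>
```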