The purpose of this information is to provide a working example of clustering EntireX Broker on the System z platform with the goal of improving application availability.
The following is intended to help users understand and configure the Sysplex platform for network-based clustering, along with specific practices for configuring an EntireX Broker cluster on the underlying System z platform. This information begins with a checklist for getting started and is then organized into two sections: the first introduces the components involved in setting up and configuring the Sysplex platform; the second covers best-practice methods and example configurations for running and administering EntireX Broker clusters within a Sysplex HA environment.
Simple checklist to get started:
- Determine which VIPA networking approach is right for you (find out what existing apps use VIPA)
- Understand which components you will need and whom to involve (see How To VIPA below)
- Example z/OS configurations
  - Port sharing – easy setup on a single stack; use it to test an EntireX cluster
  - Dynamic VIPA – provides cross-stack sharing of a common virtual IP address
  - Distributed DVIPA – allows sharing across stacks and across boxes
- Example Broker configurations
- RPC Server considerations
- Management and maintenance best practices
How To VIPA:
Note: Dynamic VIPA and Sysplex Distributor do not rely on data stored in structures in the Coupling Facility. Therefore, they can be implemented using XCF communication without a Coupling Facility (aka using Basic Sysplex connectivity).
The association between a VIPA and an actual physical interface is accomplished using either the Address Resolution Protocol (ARP) or a dynamic routing protocol such as OSPF (Open Shortest Path First; RFC 1583). The ARP takeover function is an important availability tool in its own right because it provides high availability in the event of an interface failure. However, that is really just the beginning. Using the same underlying ARP takeover function in conjunction with Sysplex XCF communications, the z/OS Communications Server can also move DVIPAs from one TCP/IP stack to another, including stacks on completely different systems (aka Distributed DVIPA), providing high availability for application server failures. When used with Sysplex Distributor, DVIPAs allow a cluster of host LPARs, running the same application service, to be perceived as a single large application server node from the client perspective.
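DVIPA movement can also be exercised manually from the operator console, which is useful when rehearsing failover. As a sketch, assuming the TCPIPEXB stack name and the 10.20.74.103 DVIPA used in the examples later in this document, the following commands deactivate the DVIPA (allowing a backup stack to take it over) and later reclaim it:
V TCPIP,TCPIPEXB,SYSPLEX,DEACTIVATE,DVIPA=10.20.74.103
V TCPIP,TCPIPEXB,SYSPLEX,REACTIVATE,DVIPA=10.20.74.103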
Let’s start off with the simplest IP configuration example using port sharing on a single stack.
Port Sharing:
Port sharing can be used when you are limited to a single LPAR (logical partition), such as in a test environment. Here, two separate EntireX Broker listeners can share the same port address through a common TCP/IP stack. The limitation, of course, is that if either the LPAR or the stack is disrupted, access to both Broker instances is affected.
The following TCP/IP parameter example shows how to share a single port across two EntireX Broker Started Tasks, both running on the same LPAR using a single stack:
; Reserve ports for the following servers.
PORT
  18000 TCP EXXA1 SHAREPORT ; EntireX Broker number 1
  18000 TCP EXXA2 SHAREPORT ; EntireX Broker number 2
In this next example, any Started Task name using the EXX prefix (up to 8 instances) is allowed to share the port on this stack:
PORT
  18000 TCP EXX* SHAREPORT ; EntireX Brokers
Specifying the SHAREPORTWLM parameter on the PORT statement instead enables connections to be distributed in a weighted round-robin fashion based on the WLM server-specific recommendations (requires Workload Manager).
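For example, the wildcard reservation above needs only the keyword changed to switch from plain round-robin to WLM-weighted distribution:
PORT
  18000 TCP EXX* SHAREPORTWLM ; EntireX Brokers, WLM-weighted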
You can then follow the information below to configure the EntireX Broker instances.
Distributed and Dynamic VIPA:
Distributed DVIPAs are defined using the VIPADISTRIBUTE statement inside the VIPADYNAMIC block in the TCPIP profile.
Here is an example for first defining the VIPA services in the TCP parmlib:
; Static VIPA Definitions
DEVICE VIPA1 VIRTUAL 0
LINK VIPA1L VIRTUAL 0 VIPA1
;
; Dynamic VIPA Definitions
VIPADYNAMIC
VIPADEFINE MOVEABLE IMMEDIATE 255.255.255.0 10.20.74.103
VIPADISTRIBUTE DEFINE DISTMETHOD ROUNDROBIN
10.20.74.103 PORT 18000 DESTIP 10.20.74.104 10.20.74.114
ENDVIPADYNAMIC
;
HOME
10.20.74.102 VIPA1L
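The Netstat displays later in this document show a second stack (on system BHST) acting as backup for this distributed DVIPA. A minimal sketch of the corresponding definition in the backup stack's profile, assuming the same 10.20.74.103 address and a backup rank of 1, would be:
VIPADYNAMIC
VIPABACKUP 1 10.20.74.103
ENDVIPADYNAMIC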
And here is an example of defining the home and VIPA interfaces in the OMPROUTE configuration when the OSPF protocol is being used:
OSPF_Interface
IP_Address=10.20.74.102
Name=VIPA1L
Subnet_Mask=255.255.255.0
Attaches_To_Area=0.0.0.0
Retransmission_Interval=5
Hello_Interval=10
Dead_Router_Interval=40
;
OSPF_Interface
IP_Address=10.20.74.103
Name=VIPA2L
Subnet_Mask=255.255.255.0
Attaches_To_Area=0.0.0.0
Retransmission_Interval=5
Hello_Interval=10
Dead_Router_Interval=40
OSPF is an interior gateway protocol that routes IP packets within a single routing domain (such as a Sysplex environment). OSPF detects changes in the Sysplex IP topology, such as link failures or listener activations, very quickly and converges on a new loop-free routing structure in a matter of seconds.
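Once OMPROUTE is active, OSPF adjacency with the neighboring routers can be verified from the console. This is a sketch assuming the same TCPIPEXB stack name used throughout:
D TCPIP,TCPIPEXB,OMPROUTE,OSPF,NBRS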
Static VIPA is the IP address associated with a particular TCP/IP stack. Using either ARP takeover or OSPF routing, static VIPAs can enable application communications to continue even if an OSA adapter fails, as long as another interface is available and the stack itself does not fail.
Note: Static VIPA does not require Sysplex (XCF) because it does not require coordination between TCP/IP stacks (aka everything is orchestrated from the same stack).
Because the Sysplex (a cluster of mainframes coupled by XCF connectivity) is a fully enclosed network in itself, you are really connecting two networks together when you connect a Sysplex to a network. This is why you would normally use a dynamic routing protocol (OSPF) to give the Sysplex and the network full awareness of each other while minimizing the amount of shared routing information.
Netstat commands used to display and monitor the VIPA configuration:
Display Netstat Dynamic VIPA state:
D TCPIP,TCPIPEXB,N,VIPADYN
EZZ2500I NETSTAT CS V1R13 TCPIPEXB 187
DYNAMIC VIPA:
IP ADDRESS ADDRESSMASK STATUS ORIGINATION DISTSTAT
10.20.74.103 255.255.255.0 ACTIVE VIPADEFINE DIST/DEST
ACTTIME: 11/17/2011 09:29:13
Display Netstat Dynamic VIPA info:
D TCPIP,TCPIPEXB,N,VIPADCFG
EZZ2500I NETSTAT CS V1R13 TCPIPEXB 190
DYNAMIC VIPA INFORMATION:
VIPA DEFINE:
IP ADDRESS ADDRESSMASK MOVEABLE SRVMGR FLG
---------- ----------- -------- ------ ---
10.20.74.103 255.255.255.0 IMMEDIATE NO
VIPA DISTRIBUTE:
IP ADDRESS PORT XCF ADDRESS SYSPT TIMAFF FLG
---------- ---- ----------- ----- ------ ---
10.20.74.103 18000 10.20.74.104 NO NO
10.20.74.103 18000 10.20.74.114 NO NO
Display Netstat Dynamic VIPA Port configuration table:
D TCPIP,TCPIPEXB,N,VDPT
EZZ2500I NETSTAT CS V1R13 TCPIPEXB 168
DYNAMIC VIPA DESTINATION PORT TABLE FOR TCP/IP STACKS:
DEST IPADDR DPORT DESTXCF ADDR RDY TOTALCONN WLM TSR FLG
10.20.74.103 18000 10.20.74.104 000 0000000000 01 100
10.20.74.103 18000 10.20.74.114 000 0000000000 01 100
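A related display, sketched here with the same stack name, is the dynamic VIPA connection routing table, which shows the target stack to which each active client connection has been routed:
D TCPIP,TCPIPEXB,N,VCRT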
Display Sysplex VIPA Dynamic configuration:
D TCPIP,TCPIPEXB,SYS,VIPAD
EZZ8260I SYSPLEX CS V1R13 166
VIPA DYNAMIC DISPLAY FROM TCPIPEXB AT AHST
IPADDR: 10.20.74.103 LINKNAME: VIPL0A144A67
ORIGIN: VIPADEFINE
TCPNAME MVSNAME STATUS RANK ADDRESS MASK NETWORK PREFIX
-------- -------- ------ ---- --------------- ---------------
TCPIPEXB AHST ACTIVE 255.255.255.0 10.20.74.0
TCPIPEXB BHST BACKUP 001
EntireX Configuration:
In this next section we discuss the critical importance of separating dynamic workload from the static server and management topology. You will want to use the Broker cluster for dynamic client workload only, and set up the server-to-Broker connections and the SMH connections using the static IP definitions.
It is also important to use the latest Broker and Broker stub versions (V9 or greater) for handling VIPA redirection.
Single Broker:
The goals for HA in a single-Broker environment typically focus on maximizing communication access to the Broker, configuring redundant RPC Servers, and minimizing down-time of the Broker Started Task using automation such as ARM (Automatic Restart Manager).
ARM can restart a failed job or started task without operator intervention. ARM can also restart a job or task that was running on a system that has failed.
ARM uses element names to identify applications. Each application that is set up to use ARM generates a unique element name for itself, which it uses in all communication with ARM. ARM tracks elements and defines its restart policies in terms of element names. For details about setting up ARM, see the z/OS MVS Sysplex Services Guide.
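As a sketch only, an ARM restart policy is defined through the IXCMIAPU administrative data utility. The policy, restart group, and element names below are hypothetical; the ELEMENT value must match the element name your Broker actually registers with ARM:
//ARMPOL   EXEC PGM=IXCMIAPU
//SYSPRINT DD SYSOUT=*
//SYSIN    DD *
  DATA TYPE(ARM)
  DEFINE POLICY NAME(EXXARM) REPLACE(YES)
    RESTART_GROUP(EXXGRP)
      ELEMENT(EXXBROKER)
        RESTART_ATTEMPTS(3)
        TERMTYPE(ALLTERM)
/*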
Here is an example Broker Attribute file containing two separate listeners defined on two separate stacks:
DEFAULTS = TCP
*
STACK-NAME = TCPIPEXB
HOST = 10.20.74.103
PORT = 18000
*
STACK-NAME = TCPIPMVS
HOST = 10.20.74.101
PORT = 18011
Note that the first definition is for the VIPA and the second definition is for the unique static configuration.
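Clients then address the Broker through the distributed VIPA, while RPC Servers and the SMH use the static address, in line with the workload separation described above. A sketch of the two resulting Broker IDs, using the example addresses from this attribute file:
10.20.74.103:18000:TCP (clients – DVIPA listener)
10.20.74.101:18011:TCP (RPC Servers and SMH – static listener)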
Broker Cluster:
Tips for configuring Broker clusters:
- Share configurations – consolidate as many configuration parameters as possible in the attribute settings. Keep separate yet similar attribute files. Use Started Task names that match Broker naming conventions and have logical context. Do not share P-Stores, however.
- Isolate workload listeners from management listeners
- Monitor Brokers through SMH
z/OS can support up to 8 TCP/IP stacks per LPAR. EntireX Broker can support up to 8 separate listeners on the same or different stacks.
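For example, a second cluster member would reuse the shared DVIPA listener definition while keeping a static listener of its own. This is a sketch only; the static host address and port for the second instance are chosen purely for illustration:
DEFAULTS = TCP
*
STACK-NAME = TCPIPEXB
HOST = 10.20.74.103
PORT = 18000
*
STACK-NAME = TCPIPMVS
HOST = 10.20.74.111
PORT = 18012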
Configuring Redundant CICS RPC Servers (see the documentation for other server platforms):
CICS RPC Server instances can be configured through the ERXMAIN macro gen job, which contains default settings and naming conventions. Use the ERXM administration transaction to control and monitor these instances online. Using identical service names allows the Broker to round-robin messages to each of the connected RPC Server instances. Using a different user ID for each RPC Server provides monitoring and tracing paths to specific instances.
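To illustrate the round-robin behavior: in the Broker attribute file, all redundant RPC Server instances register under a single class/server/service triplet. A sketch, where the SRV1 server name is purely illustrative:
DEFAULTS = SERVICE
CLASS = RPC, SERVER = SRV1, SERVICE = CALLNAT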
Establish the Broker connection using the static Broker address:port definition.
Share the SVM file across Server instances. Use VSAM RLS or simple share options to keep a single image of the SVM across all RPC Servers or groups of RPC Servers (e.g. CICS, IMS, Batch).
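A sketch of setting cross-region and cross-system share options on the SVM cluster with IDCAMS, assuming a hypothetical dataset name (with VSAM RLS, the cluster would instead be defined with the appropriate LOG parameter):
//ALTSVM   EXEC PGM=IDCAMS
//SYSPRINT DD SYSOUT=*
//SYSIN    DD *
  ALTER EXX.SVM.KSDS SHAREOPTIONS(3 3)
/*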
Managing Brokers and RPC Servers:
Use the SMH to check the status of Broker and RPC Server instances. Set up each connection with logical instance names.
Additional EntireX HA information can be found in the EntireX documentation.