Natural statements that cause the Natural RPC Server to deregister


It appears that, under a specific condition, a Natural statement in a module running under the Natural RPC Server (batch) is causing the started task to terminate. The condition seems to be that a previously submitted feed that processed successfully is re-sent. This happened yesterday morning, and when it was re-tried this morning the exact same thing happened again. I am sure it has nothing to do with the actual payload content, since it processed successfully the first time; that is exactly why I suspect a conditionally executed Natural statement.

Do statements like STOP or TERMINATE n cause the Natural RPC Server to deregister and shut down cleanly? Are there other statements I should be looking for? If this happens, how can I ensure the Natural RPC Server remains available for subsequent calls?

Currently, such a shutdown generates an email and a Remedy ticket, and we have to have the STC for the Natural RPC Server started again. This doesn’t seem as robust as the XML RPC Servers running as Windows services, which are configured to remain active under all kinds of conditions and to restart automatically in case of failure.

Thanks in advance!


Hello Brian,
Here is the link to the Natural statement restrictions when running on the server side: Natural RPC
It clearly explains what you are experiencing, assuming one of the application programs actually performs such a statement conditionally.

Regarding the way your Natural RPC Servers are managed (I assume they run as batch STCs): there are ways to monitor them, and start a new one if needed, using the EntireX APIs, for example.

One of my customers uses CICS asynchronous Natural transactions for Natural RPC Servers, and monitors the load balancing and server count with an external listener program that adds or removes servers as needed.
Hope that helps.

Sagi Achituv

Thanks, Sagi!

I have located the TERMINATE 0 statement that is executed under the very condition this happens under (when we receive a feed that has previously been sent and processed). The code is now altered to just set an L-type (logical) variable that bypasses further processing and reports the feed as failed in a status parameter instead, so hopefully I won’t see this any longer (re-test is pending).
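The change described above looks roughly like this in Natural (variable and status names here are illustrative, not our actual code):

```natural
DEFINE DATA
PARAMETER
1 #STATUS           (A2)  /* returned to the RPC caller: 'OK' or 'DU'
LOCAL
1 #L-DUPLICATE-FEED (L)   /* replaces the old TERMINATE 0
END-DEFINE
*
* ... duplicate-feed detection sets the flag instead of terminating ...
*
IF #L-DUPLICATE-FEED
  #STATUS := 'DU'         /* report the feed as a duplicate/failure
  ESCAPE ROUTINE          /* return to the caller; the server stays up
END-IF
*
* ... normal processing ...
#STATUS := 'OK'
END
```

The key point is that control returns to the RPC caller with a failure status, rather than executing a statement that takes down the whole server.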

I am a bit confused about the concept of starting more Natural RPC Servers. For XML RPC Servers, the Windows services start with 2 active servers, each with one attach manager, and the number automatically increases and decreases as needed. Natural RPC Servers, however, do not do this. They start as just one server, with no attach manager. I cannot start a 2nd STC with the same name, nor can I start another one with a different name for RPC/SRV1/CALLNAT (which is the only allowable service name for Natural RPC Servers) to register to a Broker that already has that service name registered.

Otherwise, I already monitor whether each RPC Server (Natural or XML) that should be registered to my Broker nodes is at least up, by pinging it at 5-minute intervals. If it does not respond to the ping (because it is not registered or the ping times out), I send an email and create a Remedy ticket so we can be alerted and respond. But it would be great to be a bit more proactive, as it sounds like you have figured out how to do.




I am a bit confused why you think you can only start one instance of any given class/server/service combination; you can start as many as you like!

You can either start a number of individual instances, or start Natural in “server mode” using the NTASKS parameter.

Hello Brian,
Here is some more information that you may consider:

  1. There is a Natural system variable called *SERVER-TYPE which contains the value RPC for a Natural RPC server (or blanks if Natural was not started as a server), so you can test this value before executing TERMINATE or any other restricted statement.

  2. You may have multiple Natural RPC batch servers, even with the same name, if they are defined as STCs (started tasks).

  3. You may use the NTASKS parameter, as Wolfgang suggested, so there will be several TCBs in the same address space. Natural and EntireX support this feature, but you will not be able to see the subtasks as separate jobs in the spool.

  4. You should be able to “ping” the RPC server using ETBINFO, for example, and act according to the response, as you suggested; that seems like basic, practical monitoring.
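The *SERVER-TYPE check can be sketched in a few lines of Natural (the status parameter and its values are hypothetical, for illustration only):

```natural
* Only TERMINATE when not running as an RPC server; otherwise
* return an error status so the server stays registered.
IF *SERVER-TYPE = 'RPC'
  #STATUS := 'ER'     /* hypothetical status parameter for the caller
  ESCAPE ROUTINE      /* give control back; do not shut the server down
ELSE
  TERMINATE 0         /* acceptable outside server mode
END-IF
```

This lets the same module run both online/batch and under the RPC server without the restricted statement ever executing on the server side.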


Thanks for the help, Wolfgang and Sagi. Maybe I just assumed you couldn’t start more than one RPC/SRV1/CALLNAT for a given Broker node, or that it would get confused and unmanageable if you did. I also guess I was thinking that, like jobs, STCs couldn’t have more than one of the same name executing at the same time, but that is not so. I like the NTASKS idea better, though; it seems more consistent with how the XML RPC Servers naturally function.

I found this in the documentation:

Natural RPC Batch Server with NTASKS >1

The main task and all replicas run in the same z/OS region or z/VSE partition.

  1. Use the reentrant batch link routine ADALNKR instead of ADALNK.

If you want to use ADAUSER, you must not link ADAUSER with your front-end, because ADAUSER is non-reentrant (see Item 5). Instead, use the Natural profile parameter ADANAME and set ADANAME=ADAUSER. This will cause Natural to load ADAUSER dynamically at runtime.

Note for z/VSE: If you use ADAUSER, you must rename ADALNKR to ADALNK.

  2. In the Natural parameter module:

Set the keyword subparameter NTASKS=n of profile parameter RPC or parameter macro NTRPC, where n is the number of parallel servers (< 100) to be started, including the main task.

Note for z/VSE: The number of subtasks is restricted by the operating system. Ask your system administrator.

Use the Natural profile parameter ETID to specify the Adabas user identification as a blank character. This is necessary to prevent a NAT3048 error (ETID not unique in Adabas nucleus) when the subtask is started.

  3. When using dynamic Natural profile parameters:

Use the dynamic parameter dataset CMPRMIN to pass the dynamic Natural profile parameters to Natural. Do not use the PARM card or the primary command input dataset CMSYNIN.

  4. When using a local buffer pool (z/OS only):

Each subtask allocates its own local buffer pool unless you specify a shared local buffer pool. See subparameter LBPNAME of profile parameter OSP or parameter macro NTOSP (in the Parameter Reference documentation).

  5. In the Natural front-end link job (z/OS only):

Link the front-end reentrant by using the RENT option of the linkage editor.

If the front-end were not linked with the RENT option, only the main task would start the communication with the EntireX Broker. All subtasks would be set to a WAIT status by z/OS, until the main task would have been terminated. If you would terminate the RPC server later on, the address space would hang and would have to be cancelled.

  6. Make sure that any other modules that are additionally linked to the Natural nucleus are reentrant. Any dynamically loaded programs must also be reentrant.

Note for z/OS: If you cannot make a module reentrant, link the module as non-reusable; this means, you should not specify the link option RENT or REUS. This is to ensure that each subtask will get its own copy.
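Combining the parameter settings above, a CMPRMIN member might look something like this (the broker node, server name, and task count are placeholders, not our actual configuration):

```text
* Dynamic Natural profile parameters passed via CMPRMIN
RPC=(SERVER=ON,SRVNODE=MYBROKER,SRVNAME=SRV1,NTASKS=3)
ETID=' '                 blank ETID to avoid NAT3048 in the subtasks
ADANAME=ADAUSER          load ADAUSER dynamically (non-reentrant)
```

With NTASKS=3, the main task plus two replicas register to the Broker from a single address space.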

This will get me started and make the mainframe side of this more reliable and available.