Difference Between NTASKS=4 and 4 RPC Server Instances

What is the difference, with regard to performance and processing capability, between implementing an RPC server with NTASKS=4 versus simply starting 4 instances of the same RPC server? I know it is no fun to start the same RPC server 4 times…

Thanks,
Min

Min,

Well, starting a number of STCs or batch jobs, one per server instance,
usually isn’t a big issue; most shops have some sort of automation
capability anyway.

Starting a single address space with NTASKS will give you (a bit) less
resource consumption as far as main storage is concerned. On the other
hand, it isn’t as easy to detect and resolve issues with individual
instances within that “pool”; with one server per address space, when a
server runs into issues or falls over you will clearly see which one.
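For illustration, a minimal sketch of the two setups as dynamic Natural profile parameters (server name, Broker ID and the exact subparameter spelling below are assumptions; check your own NATPARM/NTRPC definitions):

   One address space, four server replicas via NTASKS (hypothetical values):
      RPC=(SERVER=ON,SRVNODE=BKR001,SRVNAME=PAYROLL,NTASKS=4)

   Four address spaces, one server each (start the same STC/job four times):
      RPC=(SERVER=ON,SRVNODE=BKR001,SRVNAME=PAYROLL,NTASKS=1)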

From a Broker perspective there is no difference.

Does this help?

Best regards,

   Wolfgang

Some differences:

  • if you are not using global buffer pools, you can use the LBPNAME parameter in the NTOS macro to ensure that the NTASKS servers share the same local buffer pool (see the sketch after this list); you can also call the buffer pool USR API as an RPC service to flush the local buffer pool, if needed, without restarting
  • if your shared nucleus is not loaded in LPA or ELPA, the NTASKS servers can still share the memory the shared nucleus uses. (OK, you >really< should be putting the shared nuc in the (E)LPA!)
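A rough sketch of the first point, assuming the local buffer pool name is assigned via the NTOS macro in the Natural parameter module (the pool name is made up and all other NTOS operands are omitted; check the NTOS documentation at your level for the exact syntax):

         NTOS  LBPNAME=RPCLBP         assumed pool name; other operands omitted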

Beyond that, I don’t think there is any difference in performance. Since the trace files are provided per NTASK, it isn’t particularly difficult to identify which task is having issues (although, typically, if one RPC server is having issues, they are likely all having the same issue!).

Even if you are using Global Buffer Pools, you might look at using a local buffer pool for the RPC servers (with NTASKS) - very often the objects used by the RPC servers are different from those used by batch jobs and/or online programs, so having a local buffer pool may improve overall performance. Just be sure to allocate a large enough buffer pool in either case to minimize program loading.

I found Broker and the RPC servers to be very solid; they remained active between weekly IPLs. Like Wolfgang and Douglas, my client determined that multiple NTASKS did the job as well as multiple RPC server jobs, so we implemented NTASKS > 1.

The Natural RPC job was submitted several times to mitigate possible job failure; if the first (i.e. active) job failed, the next one in the queue would automatically take its place. But we found that almost all RPC server failures were the result of application logic errors, and these logic errors would cause the servers and jobs to fall like dominoes, in rapid succession. I recommended that Optimize for Infrastructure be implemented to monitor the number of active tasks and send appropriate notifications. As a short-term solution, I built a Natural program that could initiate, query, or terminate a server. Running in batch mode, it could send an e-mail notification if the number of active tasks fell below a specific threshold. I used calls to BROKER, USR2071N and USR2073N to build it.
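In case it helps anyone, here is a minimal sketch of such a batch monitor in Natural. The threshold, variable names and notification mechanism are all made up; the actual ping/terminate call goes through USR2073N, whose parameter data area is delivered with SYSEXT, so I have only marked where that call would sit:

   DEFINE DATA LOCAL
   1 #ACTIVE-TASKS (I4)              /* to be filled from the USR2073N reply
   1 #THRESHOLD    (I4) INIT <4>     /* expected number of NTASKS
   END-DEFINE
   *
   * Ping the RPC server here via CALLNAT 'USR2073N', using the parameter
   * data area supplied in library SYSEXT, and derive #ACTIVE-TASKS from
   * the reply.  (Parameter list omitted - see the SYSEXT documentation.)
   *
   IF #ACTIVE-TASKS LT #THRESHOLD
     WRITE 'WARNING:' #ACTIVE-TASKS 'of' #THRESHOLD 'RPC server tasks active'
     /* trigger the site-specific e-mail notification here
   END-IF
   END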

You can dedicate a Global Buffer Pool (a different one from the one used by online and batch) to the RPC servers. It is then also easier to control this buffer pool from the outside. And if you have several RPC servers (with NTASKS), they can also share this Global Buffer Pool.
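As a rough sketch, assuming the pool is assigned via the BPNAME profile parameter (the pool names below are made up, and the dedicated pool still has to be started separately, e.g. by your buffer pool startup job):

   RPC server jobs:    BPNAME=NATGBPR      (dedicated pool, assumed name)
   Online and batch:   BPNAME=NATGBP1      (existing pool, assumed name)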

Maybe, with regard to operating system scheduling of TCBs, you will get a higher priority for RPC servers running in separate address spaces (without NTASKS) than for RPC server tasks running together in one address space (with NTASKS).