Best practices and performance on broker calls

We are currently using Broker to call from a web application to programs on the mainframe. The programs invoked on the mainframe usually have a short execution time (under 5 seconds). However, we now need to make a Broker call that executes several subprograms and can take up to 1 minute to complete.

The first subprogram takes in parms from Broker and then calls other subprograms. After all subprograms have completed, the first subprogram returns a result set of parms as output. The subprograms are written in Natural.

The concern is that the Broker call is part of a web service, so the time it takes for the web service to return seems unusually long. We are also not sure what the best practices and performance considerations are for setting up Broker calls like this one.

So here are our questions:

What are the criteria for selecting candidates for a Broker call? When is it not a good idea to use Broker?

How can we measure resource consumption for a Broker call that goes from the client to the mainframe?

Which is more expensive: executing one long Broker call, or splitting it up into multiple calls?

Sometimes when we try long Broker calls, we time out in Broker. Is there a parm that can control the length of time a thread is held?

How do we control the number and size of threads that Broker will have? How does multi-threading within Broker work?

If a thread is used for a long time, does Broker eventually cache the data in a control block and return that thread to a pool for reuse?

Can you point us to documentation on performance tuning for Broker on an MVS mainframe?

Thanks, BJ

Thanks, BJ.

First off, I would (ahem, pardon me for tooting our own horn) recommend having Software AG do some on-site performance tuning and mentoring to assist you, as you are asking many complex questions - the answer to most of them is “it depends”! :o

There are various timeout controls in Broker. You can determine the length of time a server can be inactive, the length of time a client can be inactive, or the length of time a conversation can be inactive (where “inactive” is the amount of time Broker has not seen a message from that participant). CLIENT-NONACT, SERVER-NONACT and CONV-NONACT are parameters that can be set in the Broker globally or by service. When you say “we time out in Broker”, you need to relay the specific error code you receive, as each one points to a different resource shortage or timer expiry.
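To make that concrete, here is a rough sketch of how those attributes might appear in the Broker attribute file for one service. The server name and timeout values below are made up for illustration - size them for your own workload and check the exact syntax against your Broker version:

    DEFAULTS=SERVICE
    * Hypothetical Natural RPC service; values are examples only
     CLASS=RPC, SERVER=NATSRV1, SERVICE=CALLNAT
     CLIENT-NONACT=5M
     SERVER-NONACT=10M
     CONV-NONACT=10M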

If a message has been posted to a participant, the Broker makes that worker thread available for other message processing until a response is received. That is, if a server program is doing thousands of database calls to satisfy a request, the Broker is not idly waiting for that server to respond; it will process other message requests in the meantime. The number of concurrent requests that can be processed is governed by the NUM-WORKER attribute. More may be “active”, but not necessarily within Broker - those are governed by the NUM-CLIENT and NUM-CONVERSATION resource limits.
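Those limits sit in the global section of the attribute file. Purely as an illustration (the numbers are not recommendations - they have to be sized for your expected concurrency):

    DEFAULTS=BROKER
    * Example resource limits; tune to your expected concurrency
     NUM-WORKER=8
     NUM-CLIENT=250
     NUM-CONVERSATION=1000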

You also have to consider the number of servers available (not to be confused with the Broker!). As an example, you might have a Natural RPC Server running on the mainframe, accessing Adabas data, called via RPC from web services in the XML servlet on Tomcat (or a similar web application server). If you are going to have many concurrent long running requests, you should increase the number of Natural RPC Servers available (increase NTASKS) to ensure that there are sufficient RPC Servers to handle the requests.
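As an example of what I mean - and please treat the exact subparameter spelling as an assumption to verify against the Natural RPC documentation for your release - a batch Natural RPC Server is started with the RPC profile parameter, where NTASKS controls the number of parallel server tasks:

    RPC=(SERVER=ON,SRVNODE=your-broker-id,SRVNAME=NATSRV1,NTASKS=5)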

You may want to separate the long running requests from requests of short duration by setting up a different RPC Server. The server handling the short requests can have a shorter CONV-NONACT and CLIENT-NONACT to keep the turnaround on those requests as short as possible, while the long running requests are routed to the other server with higher non-activity values. This ensures that quick requests do not queue up behind the slower transactions - particularly if your mix fits the typical pattern of 80% quick requests to 20% long, slow requests.
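In attribute-file terms that separation might look roughly like this (again, the server names and timeout values are invented for illustration):

    DEFAULTS=SERVICE
    * Service for the quick requests: short non-activity timeouts
     CLASS=RPC, SERVER=NATFAST, SERVICE=CALLNAT
     CLIENT-NONACT=2M
     CONV-NONACT=2M
    * Service for the long running requests: more generous timeouts
     CLASS=RPC, SERVER=NATSLOW, SERVICE=CALLNAT
     CLIENT-NONACT=15M
     CONV-NONACT=15M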