Does anyone have any statistics regarding performance of web services and how well the layers can perform?
We are looking at building some web services, and some of our performance tests seem less than stellar. These were run under a Batch RPC Server. One thought was that the Batch RPC Server would be less efficient than a CICS RPC Server. This is with a DB2 backend. Does anyone have any experience comparing the two? What kind of volume (at the request level) can this infrastructure support, i.e. how many requests per second can be supported? I know there are many variables, so ask away about whatever you need to know, but I hope someone has some real-world experience they can share regarding performance.
As an example, does anyone have a web service that manages an object/entity such as addresses or demographic data? How big is the input (bytes), how big is the output (bytes), and what is the peak load in requests per second?
Anybody running any high volume transactions through an RPC Server?
As an example, let’s say we want a web service that will support CRUD for demographic data associated with an individual. The layout might look something like:
First Name (A20)
Last Name (A20)
Address 1 (A20)
Address 2 (A20)
For this example, assume SSN is unique. The service can perform all CRUD activities. How many times per second could a web service like this be called? 10, 20, … 100? I know there are lots of variables, so we can either discuss them or just make some assumptions. Is there a relationship between payload size, the number of transactions being requested, and the number of RPC Servers needed? Are you running DB update functions (CUD) through their own server(s)?
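To make the payload question concrete, here is a minimal sketch (Python, purely illustrative, not EntireX code) of the fixed-width record above. The A9 length for SSN is an assumption; the post only says SSN is unique.

```python
# Hypothetical fixed-width encoding of the demographic record from the
# post. Field widths mirror the Natural A20 layout; the SSN width (9)
# is an assumption, since the post only states that SSN is unique.

FIELDS = {
    "ssn": 9,          # assumed key width, not specified in the post
    "first_name": 20,  # A20
    "last_name": 20,   # A20
    "address_1": 20,   # A20
    "address_2": 20,   # A20
}

def encode(record: dict) -> bytes:
    """Pack a record into a fixed-width, space-padded byte layout."""
    return b"".join(
        record.get(name, "").ljust(width).encode("ascii")[:width]
        for name, width in FIELDS.items()
    )

payload = encode({"ssn": "123456789",
                  "first_name": "Ada",
                  "last_name": "Lovelace"})
print(len(payload))  # 89 bytes of raw data, before any XML/SOAP wrapping
```

A record this small is well under any of the size limits discussed later in the thread; the XML envelope around it will typically be several times the size of the raw data.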
This is more anecdotal than statistical, but a former client of mine performed volume tests by simulating several hundred users concurrent with a typical Production batch load. This was satisfied by a single batch (mainframe) Natural RPC server with 5 sub-tasks. In Production, a second server was implemented, as a backup in case the primary job failed.
I don’t think that Broker or the RPC servers will be a bottleneck, but the called routine (service) could be. Consider the effect of a badly-written CRUD module, for example, performing many unnecessary database calls.
Regarding too many variables, I agree, but I would be interested in any statistics that anyone has on this. I’m not trying to model performance in this thread, but rather to find anyone with some real-world experience. I can tell that this board doesn’t really get very much traffic (or at least posts), but I figured someone out there with real-world experience could/would respond.
Is anybody really using this product?
We fully understand the impact of ensuring the executed code is efficient.
Why do you say batch servers would perform better than CICS? We heard this initially from SAG, but since then we have heard otherwise, so there seems to be some confusion. We ran some tests with our initial web service, which has a really large payload. I would not say it was ideal for testing, but we figured it would get us in the ballpark or help us understand the boundaries. The CPU utilization for the server skyrocketed when we loaded about 20-30 concurrent users. We ran this on a weekend when there was no other real load on the system. The amount of CPU seemingly required would not be sustainable, particularly if there were other activities occurring on the system. The throughput increased significantly when we submitted a second server as a standalone job, but the CPU consumption was still very high.
One thought regarding the inefficiency with DB2 in batch mode was that every call had to re-initiate a thread with DB2. Under CICS, an existing set of ‘pooled’ connections exists between CICS and DB2. Thus, the thinking is that the CICS servers will perform better. I am neither an EntireX expert nor a DB2 expert, so I may be ‘butchering’ terms, processes, concepts, etc. We have now heard this idea (that CICS will perform better with DB2) from several people with EntireX performance experience. We are in the process of getting the CICS environments and servers set up. We are also going to create a service with a smaller payload for testing.
I agree with the “too many variables” assessment. I can’t say from personal experience yet as we are just now developing with EntireX and haven’t deployed anything in production yet, but people I trust who use this product for some very heavy lifting all sing the praises of EntireX for efficiency and speed of communication.
And so many things could degrade the performance of this object you are running. If you are doing a load test, for example, and this test includes updates over and over to the same DB2 row, what effect would row-level locking have on throughput as you measure it? I can’t say this is your problem but is an example of something that could be slowing down your performance tests that has nothing to do with EntireX or your batch Natural RPC server.
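The row-lock effect above has a simple back-of-envelope form: if every update in the load test hits the same DB2 row, row-level locking serializes them, so throughput is capped at 1 / lock-hold-time no matter how many simulated clients you add. The 20 ms hold time below is a made-up illustrative figure, not a measurement.

```python
# Back-of-envelope sketch of row-lock contention in a load test.
# The lock hold time is an assumed illustrative value, not measured.

def max_tps_single_row(lock_hold_ms: int, clients: int) -> float:
    """All updates target ONE row: they serialize on its lock, so extra
    clients just queue and add nothing to throughput."""
    return 1000 / lock_hold_ms

def max_tps_spread_rows(lock_hold_ms: int, clients: int) -> float:
    """Updates target DISTINCT rows: they can proceed in parallel."""
    return clients * 1000 / lock_hold_ms

hold_ms = 20  # assume each update holds its row lock for 20 ms
print(max_tps_single_row(hold_ms, clients=50))   # 50.0 tps, flat with load
print(max_tps_spread_rows(hold_ms, clients=50))  # 2500.0 tps
```

So a test that hammers one row can look fifty times slower than the same service under a realistic key distribution, with EntireX and the RPC server entirely blameless.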
The tests we have run so far (and will run against a smaller-payload service) use only a ‘read only’ object. I fully expect other challenges to arise (potentially) when we start performing updates of any kind.
This seems really, really large. I thought there was a limit on the RPC side somewhere around 33KB? That would make for a whole lot of XML around that. I will provide some details on our ‘large’ service when I get a chance. IIRC the XML amounts to about 80KB, and the data amounts to about 30KB (when there is a lot of data to return).
The 30k limit was due to the limits imposed by the Adabas SVC communication (pre-ACBX). That limit is long gone: with TCP/IP communication you have been able to send larger messages for quite a while now, and the ACBX interface lifts the 30k SVC communication limit as well.