Natural Bufferpool performance

We are having performance problems which occur inconsistently (don’t they all). The same program will sometimes run considerably faster on our Windows Server than on our mainframe, and then on other occasions the reverse happens.

Numerous tests have been done to try to isolate the problem, but the results have been inconclusive. We are now looking at the Bufferpool. The stats from the Bufferpool utility seem to indicate some “odd” numbers which may point to some inefficiencies. After refreshing the Bufferpool stats, the “Loaded objects” and “Activated objects” counters increment at very high rates which do not appear to correlate with the number of objects that would normally be loaded, or even relate to the number of objects within our system. After about one hour the counters read values in the hundreds of millions.

My first question: what exactly are these counters supposed to represent?

In an attempt to improve the performance I tried to define a local dynamic Bufferpool, thinking that this might be local to the job. The Help text states that the Bufferpool can be defined dynamically with BPNAME=' '. It seems that this parameter is only available on mainframes.

Any comments on how to define parameters to get the maximum performance from the Bufferpool would be helpful.

We have migrated a mainframe system which comprised around 13,000 modules and used Construct. Consequently there are a lot of iterations and calls many levels deep.

Thanks.

Garry,

Have a look at the BPSFI parameter. For production environments it should be set to ON, for development environments to OFF. Compare the setting with your mainframe setting.
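
As far as I know, with BPSFI=ON Natural takes an object straight from the buffer pool if it is already there, without first checking the system file for a newer version - fine in production, but in development you would keep executing stale copies, hence OFF there. It can be set in the parameter file or passed as a dynamic parameter when the session is started, e.g. (just a sketch - MYPARM is a made-up parameter file name):

natural PARM=MYPARM BPSFI=ON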

BPSFI was already set that way in both environments.

We are narrowing the problem down to Construct. The “object oriented” structure is very convoluted - sometimes involving calls 20 or more levels deep. A mainframe handles this with some inefficiency; our Windows server, however, is very slow in these processes. We set up a test using identical programs and identical databases. We did a simple READ of 30,000 records - times between the mainframe and the server were about the same. We then performed the same read, but this time using a Construct read module. The mainframe performance changed by a factor of less than 2; the server performance, however, changed by a factor of more than 5.
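
For anyone wanting to reproduce this kind of comparison, a timing harness along these lines is enough (untested sketch; EMPLOYEES/NAME stand in for the real file and descriptor):

DEFINE DATA LOCAL
1 EMP VIEW OF EMPLOYEES
  2 NAME
1 #CNT (I4)
END-DEFINE
T1. SET TIME
READ (30000) EMP BY NAME   /* the plain inline read
  ADD 1 TO #CNT
END-READ
WRITE 'inline READ:' *TIMD(T1.) 'records:' #CNT
END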

The problem does not seem to be in the size of the bufferpool – I set up another test on the server using a separate bufferpool and got similar results.

I have run out of ideas. I guess it is just a Natural/Windows problem.

I think the first question is now: Is the Database or the Application slowing down the system’s performance?

If you use ADABAS, I would compare the full statistics of the ADABAS user queue. There you can read the number of transactions and the number of Adabas calls.
Please see the ADABAS documentation for “adaopr … di=uq_full”

I don’t think Adabas is the issue. We ran tests on identical files on the two platforms. To isolate any Adabas effect, we set up 3 modules, each doing the exact same FIND. The 3 modules and the file were copied to both a server (Win2003) and a mainframe. The 3 modules are -

1.) doing a straight inline Find inside a loop (see the sketch after this list)
2.) doing the same routine, but with the Find inside a subprogram and the subprogram called from within the loop
3.) the same routine, but with a Callnat to a Construct module within the loop; the Construct module was doing the exact same Find
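
A rough sketch of the first two variants (untested; the file, descriptor and loop count are placeholders, and FINDEMP is a made-up subprogram containing the identical Find; variant 3 simply replaces FINDEMP with the generated Construct object):

* Variant 1 - straight inline Find inside a loop
DEFINE DATA LOCAL
1 EMP VIEW OF EMPLOYEES
  2 NAME
1 #I (I4)
END-DEFINE
FOR #I = 1 TO 30000
  FIND EMP WITH NAME = 'SMITH'
    IGNORE
  END-FIND
END-FOR
END

* Variant 2 - the caller; the Find now lives in subprogram FINDEMP
DEFINE DATA LOCAL
1 #NAME (A20) INIT <'SMITH'>
1 #I (I4)
END-DEFINE
FOR #I = 1 TO 30000
  CALLNAT 'FINDEMP' #NAME
END-FOR
END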

The results again point to more overhead with the Callnats in Natural for Windows. On the mainframe test we saw the following results compared with the base module (the inline Find) -

  • the module with the Callnat suffered a 7% increase in elapsed time
  • the module with the Construct call suffered a 50% increase in elapsed time

On the server we saw the following -

  • the module with the Callnat suffered a 55% increase in elapsed time
  • the module with the Construct call suffered a 1200% increase in elapsed time

To prove this you should do some CALLNATs within a Loop without using ADABAS and then compare the times…
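
Something like this (untested sketch; NULLSUB is a made-up subprogram that does nothing, so only the CALLNAT overhead is measured):

* Caller
DEFINE DATA LOCAL
1 #P (A100)
1 #I (I4)
END-DEFINE
T1. SET TIME
FOR #I = 1 TO 100000
  CALLNAT 'NULLSUB' #P
END-FOR
WRITE '100,000 CALLNATs:' *TIMD(T1.)
END

* Subprogram NULLSUB - empty body, returns immediately
DEFINE DATA PARAMETER
1 #P (A100)
END-DEFINE
END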

We tested further without the database access. This does seem to point to a problem and/or inefficiencies arising within Construct on a server: without the database access, the relative performance between the mainframe and the server was fairly consistent. (Construct modules were still used, but an Escape was executed before any database reads were performed.)

With the database access, the server was consistently slightly faster than the mainframe until Construct was brought into the equation; then the performance of the server became worse than the mainframe’s by a factor of about 3.

We will keep looking.

Is the number of parameters to the Construct subprogram comparable to that of the subprogram being CALLNAT’d from the loop? Testing I did some time ago on the mainframe showed that the number of parameters involved in a CALLNAT had a significant impact on CPU consumption.

What else is the Construct module doing between its entry point and the database calls? Are there other subprogram calls that could be commented out to isolate the problem area? Other loops? Initialization?

Doug,
The total length of the parameters passed to the Construct module was 20,800 bytes; in the simple Callnat it was 322. We recognised that this difference might be a big factor in the performance difference, and this is why we started thinking along the lines of the bufferpool.

Your idea of systematically removing the Callnats to try to isolate the area of the most drop in performance is good and we will look at this.

As time permits we are going to conduct some testing with our server connected to a SAN disk array. (I cannot see that this will have any significant impact on performance, as in my observations the test jobs have rarely been utilizing more than 50% of the disk I/O capacity.) We then intend to test on a UNIX machine.

Thanks for your input.

The length of the parameters passed in the Construct module was 20,800 - in the simple Callnat 322.

As many people know, I am not a fan of Construct. I am one of those people who likes to chuckle when someone reports a Construct problem (especially one of inefficiency) and utter a witticism like “what did you expect?”.

All that said, this seems outrageous, even for Construct. What does this subprogram do? Is this the first subprogram in a long chain of CALLNATs?

steve

Steve,

I thought all Construct modules were “the first subprogram in a long chain of CALLNATs” - mostly redundant.

The testing I did compared not the length of the parameters, but the number of parameters: “CALLNAT 'SUB1' p1” was faster than “CALLNAT 'SUB2' p1 p2 p3” (abbreviated - the difference is hard to measure until you compare something like 1 parameter against 40…), even if the total data length being passed was the same.
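
A sketch of that comparison (untested; SUB1/SUB2 are made-up subprograms, and the total data length is identical in both loops):

DEFINE DATA LOCAL
1 #ONE (A120)   /* one parameter of 120 bytes
1 #P1 (A40)     /* versus three parameters of 40 bytes each
1 #P2 (A40)
1 #P3 (A40)
1 #I (I4)
END-DEFINE
T1. SET TIME
FOR #I = 1 TO 100000
  CALLNAT 'SUB1' #ONE
END-FOR
WRITE '1 parameter: ' *TIMD(T1.)
T2. SET TIME
FOR #I = 1 TO 100000
  CALLNAT 'SUB2' #P1 #P2 #P3
END-FOR
WRITE '3 parameters:' *TIMD(T2.)
END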

Note that a parameter to a CALLNAT is an elementary field, not a group field. If you have

01 #Group
02 p1 (A40)
02 p2 (A40)
02 p3 (A40)

and do a CALLNAT 'SUB2' #Group, it is the same call as CALLNAT 'SUB2' p1 p2 p3 - 3 parameters.

I think this is a golden rule in Natural that every programmer should keep in mind:
Natural never handles groups. Groups are always split up into their component fields at compile time. That’s the reason why you can’t MOVE GROUP1 TO GROUP2, but you can MOVE BY NAME/POSITION with groups (then the compiler knows how to split the groups into fields).
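
For example (a minimal sketch; the field names are arbitrary):

DEFINE DATA LOCAL
1 #GROUP1
  2 #NAME (A20)
  2 #CITY (A20)
1 #GROUP2
  2 #CITY (A20)
  2 #NAME (A20)
END-DEFINE
* MOVE #GROUP1 TO #GROUP2        /* rejected by the compiler
MOVE BY NAME #GROUP1 TO #GROUP2  /* the compiler pairs #NAME and #CITY by name
END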

I believe that NATURAL internally verifies that the data being passed on a CALL or a CALLNAT is valid by comparing the data type to the value - for example, that an N5 field contains 5 numeric digits. I think that is why NATURAL doesn’t pass groups; it passes elementary fields.

1 #GROUP1
2 #SUBGROUP1
3 #FIELD1(N5)
3 #FIELD2(B5)
2 #SUBGROUP2
3 #FIELD3(A10)
3 #FIELD4(N3)

CALLNAT 'SUBPGM' #GROUP1

I believe that NATURAL interprets the CALLNAT above as

CALLNAT 'SUBPGM' #FIELD1 #FIELD2 #FIELD3 #FIELD4

When passing the data by reference, NATURAL passes 4 addresses not 1.

[quote="Wilfried B

[quote=“Steve Robinson”]
[quote="Wilfried B

:shock: I didn’t know that. I never used the RECORD option. What is the advantage of it?

I made some tests with a 20 MB work file on Natural for Windows. From my point of view the RECORD option doesn’t process the data significantly faster. Maybe it’s about 10% faster …

READ WORK FILE (without RECORD) validates the format of every field. This involves a fair amount of CPU time. READ WORK FILE RECORD does not check formats.

Try the following code:

define data local
1 #group
2 #bb (n5)
2 #aa (a5)
1 #a (a5)
1 #b (n5)
end-define

move 'abcde' to #a
move 12345 to #b
write work file 1 #a #b
*
read work file 1 once #a #b

This will work.

now try

read work file 1 once #b #a

This will not work: we are trying to read an alpha field into a numeric field. You will get an error message.

Now try

read work file 1 once record #group
write #bb (em=h(5)) #aa (em=h(5))

This will run - Natural does not check formats for the fields. However, if you are doing a READ WORK FILE (RWF) into a single large alpha field which is REDEFINEd into smaller fields, RWF and RWF RECORD will be very close in performance.

RWF RECORD can be used effectively when you are using work files as “temporary storage” between job steps. If a field was N5 when you wrote it out in job step 1, it is probably still N5 when you read it back in job step 2.

However, if you are getting a workfile from a remote site and you do not trust the validity of data from this source, you want to use RWF (no RECORD) to let Natural check formats (unless you “prepare” the workfile with a separate “cleansing run”).

steve

And another one: CALL. As parameters are passed by reference, only the address of a group is passed. No length check is done; that must be implemented by the called program. This also applies to CALL FILE and CALL LOOP (has anyone ever used these statements?).

READ WORK FILE RECORD also passes the address of the variable area, but additionally a length to be copied from the I/O buffer. As no checks are done, it is indeed much faster than READ WORK FILE. If you want to check it, define an array of 1000 packed numbers, write it into a workfile and read it back again, once with RECORD and once without RECORD - a sketch follows below. You will see the difference!
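
An untested sketch of that test (it assumes work file 1 is assigned and that CLOSE WORK FILE repositions it for reading):

DEFINE DATA LOCAL
1 #PACKED (P7/1000)
1 #I (I4)
END-DEFINE
FOR #I = 1 TO 10000              /* create a test file
  WRITE WORK FILE 1 #PACKED(*)
END-FOR
CLOSE WORK FILE 1
T1. SET TIME
READ WORK FILE 1 #PACKED(*)      /* every field format is validated
END-WORK
WRITE 'without RECORD:' *TIMD(T1.)
CLOSE WORK FILE 1
T2. SET TIME
READ WORK FILE 1 RECORD #PACKED(*)   /* raw copy, no validation
END-WORK
WRITE 'with RECORD:   ' *TIMD(T2.)
END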

If you read in the records with the RECORD option, you should afterwards check each field for correct contents (MASK!) - see the example below. This also works for packed and unpacked fields. We do this for files from third parties, as we get detailed error messages. Without the RECORD option we only get an error message like “Oops, there is a wrong value in the file, but I will not tell you which record and which field the error is in!” :wink:
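
For example, something along these lines (the record layout is invented):

DEFINE DATA LOCAL
1 #REC
  2 #AMOUNT (A7)                 /* should contain digits only
  2 #TEXT (A20)
1 #RECNO (I4)
END-DEFINE
READ WORK FILE 1 RECORD #REC
  ADD 1 TO #RECNO
  IF #AMOUNT NE MASK (NNNNNNN)
    WRITE 'record' #RECNO 'has a bad numeric field:' #AMOUNT
  END-IF
END-WORK
END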

On the mainframe, the CALL statement expands the group field, just as it does for a CALLNAT. This can be seen in the examples (such as Broker ACI calls) where the parameter list is represented by the first field in the structure, not by the group name. If the group name were used, each of the individual elements would be a parameter, not the group. It is indeed call by reference, but the reference is to the elements of the group, not to the group itself.