Workfile performance

Hi all,

we have switched one of our major batch processes to use a flat (work) file which has been pre-prepared using ADASTRIP and various ICETOOL functions such as SPLICE.

For what we want to do, the performance has improved dramatically. I am, however, concerned about the amount of TCB (CPU) time which Natural seems to use when reading a work file. We wrote a simple program to read 10 million records (32 bytes each) from a work file, and it used 0.35 minutes (RECORD clause on). We then did the same thing in COBOL and it used 0.04 minutes.
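To give an idea of the kind of test meant here, a minimal Natural sketch (work file number and field name are illustrative, not the actual program) would be along these lines:

    DEFINE DATA LOCAL
    1 #REC (A32)                   /* one 32-byte input record
    END-DEFINE
    READ WORK FILE 1 RECORD #REC   /* RECORD clause: no field-level checking
      IGNORE                       /* no processing, just measure the read CPU
    END-WORK
    END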

Does anyone know of a way to improve this resource utilization, for example through a parameter setting?

Any help appreciated.

John

Try the DCB BUFNO parameter; this is a type of read-ahead / prefetch.

Maybe DCB=(BUFNO=20) for a start and see where it gets you.
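For example, the work file DD in the batch JCL might be coded like this (the dataset name is just a placeholder, and CMWKF01 assumes the standard Natural batch DD name for work file 1):

    //CMWKF01 DD DSN=MY.PREPARED.WORKFILE,DISP=SHR,DCB=(BUFNO=20)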

Thanks Wolfgang,

I did try the buffers (and OPTCD), but this did not seem to make any difference to the CPU time used.

Any other ideas also welcome.

Regards

John

I recommend that you investigate the RECORD option of the NATURAL READ WORK FILE statement. Using the RECORD option can save a significant amount of CPU time; however, you will have to check the integrity of the data fields yourself. Without the RECORD option, NATURAL checks that each field contains valid data, and that is where the CPU is used. Also, when reading and writing arrays with a work file, the RECORD option can affect the physical sequence of the fields.
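To illustrate (a rough sketch only; the field layout is made up, and the two loops are meant as alternatives, not to run back to back):

    DEFINE DATA LOCAL
    1 #NAME   (A20)                  /* example layout only
    1 #AMOUNT (P7.2)
    END-DEFINE
    READ WORK FILE 1 #NAME #AMOUNT   /* no RECORD: NATURAL validates each field,
      IGNORE                         /* e.g. that #AMOUNT holds valid packed data
    END-WORK
    READ WORK FILE 1 RECORD #NAME #AMOUNT  /* RECORD: data is moved in as-is,
      IGNORE                               /* the program must ensure validity
    END-WORK
    END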

Sorry, I did not notice that the original posting mentioned you were already using the RECORD clause. Apart from the RECORD clause, I cannot think of a reason why there would be such a big difference in CPU time. QSAM is QSAM is QSAM; NATURAL and COBOL should perform the same.

What's the blocksize? I assume you don't write/read those 32-byte records unblocked?

Furthermore, can we assume the processing in both cases (COBOL / NATURAL) is comparable, i.e. both programs just read the work file and do nothing else?

I agree completely that QSAM is QSAM, and this is what puzzles me the most. If I were in charge of the EXCPs myself, for example in an Assembler program, I would assume that I could influence the instruction path length, but in Natural or COBOL I would expect them to be just about the same.

The file being read is exactly the same in each program. The blocksize is roughly 32K, which is fine in our case as we use SMS compression on all the datasets (no inter-block gap wastage issues).

One other interesting fact: I also created a file of 160-byte records, again 10 million of them. The Natural program's CPU usage did not increase fivefold, but only by about 20%. This leads me to think that it is each individual logical I/O (the per-record overhead) which is causing the problem.

Do any of you perhaps have any of your own benchmarks?

Thanks for the help so far.

Regards

John