My client issued a READ PHYSICAL in a loop, but the Adabas logs show over 7 million reads on the file, which caused our Command Log offload process to stall. Adabas was able to recover, but we are trying to find a more efficient way to scan through all of the records in this file, read-only, in the physical order in which the records were stored. The MULTI-FETCH clause looks like the only way to improve performance. Our questions are:
Is there any other way to scan the whole file more efficiently?
Are there any recommendations for using the MULTI-FETCH clause such as optimal buffer size?
- Keep the READ PHYSICAL
- Add MULTI-FETCH 10
- Reduce the view's field list to only those fields that are necessary
- Consider using Adabas utilities to unload and decompress the file, then change the Natural program to use READ WORK
One thing disturbed me when reading this: the physical order of records in an Adabas file is not permanent. If maintenance is done to a file (updates/deletes/adds), records can be moved. For example, an updated record that has grown may no longer fit within its block, and the record will be relocated. A new record does not physically go at the end of the file; it will be placed in a gap that might exist within a block that is physically in the middle of the file. If the client is really looking for a chronological sequence, they might try reading in ISN sequence with non-reusable ISNs.
You mentioned 7 million READs on the file. How many records are actually in the file?
If there really are 7 million records on the file, Ralph’s suggestions should be followed. Shrink the view to the fields that will actually be required (this is true for any view, not just this scenario). With the suggested Multifetch of 10, you will reduce Adabas calls by 90%.
If the number of records is nowhere near 7 million, there is a problem with the clients program logic. Please post a skeleton of what is going on with the code.
My client is attempting to read all records in the file to assess performance. They want to be sure they understand the impact of running jobs like this and make sure they schedule them accordingly.
There are actually 7 million records in the file, so there is no program logic error.
He just changed the code to include the MULTI-FETCH clause, and here is a sample:
READ MULTI-FETCH OF #MULTI-FETCH OASMON BY ISN
  ADD 1 TO #COUNTER
  MOVE EDITED *DATX (EM=YYYY'-'MM'-'DD) TO #DATE
  MOVE EDITED *TIMX (EM=HH':'II':'SS) TO #TIME
  IF #COUNTER > #INTERVAL
    PERFORM END-NOW-CHECK
    WRITE 'Out of time. Ending work at'
      *TIMX (EM=YYYY'-'JJJ' 'HH':'II':'SS)
  END-IF
END-READ
He is also considering changing the BY ISN to PHYSICAL SEQUENCE, as the order of the data is irrelevant.
Immediately after a file load, READ BY ISN is about as fast as READ PHYSICAL. The more UPDATEs and DELETEs against the file, the less efficient READ BY ISN will be. READ PHYSICAL is consistently the most efficient because it reads each Adabas data block just once.
READ BY ISN also must reference the Address Converter for every record in order to obtain the data block number of the next record in ISN sequence. READ PHYSICAL does not require accessing the Address Converter at all.
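The access-path difference described above can be sketched with a toy model. This is an illustration only, not real Adabas internals; the ISN-to-block mapping is invented to show how ISN order drifts away from physical order after maintenance:

```python
# Toy model (invented data): each ISN maps to the data block that currently
# holds its record. After updates/deletes, ISN order no longer matches
# physical block order.
block_of_isn = {1: "B1", 2: "B3", 3: "B1", 4: "B2", 5: "B3"}

# READ PHYSICAL: walk the data blocks in storage order; each block is read once.
physical_block_reads = len(set(block_of_isn.values()))

# READ BY ISN: one Address Converter lookup per ISN to find its block, and
# blocks may be revisited when consecutive ISNs live in different blocks.
isn_order_blocks = [block_of_isn[i] for i in sorted(block_of_isn)]
ac_lookups = len(isn_order_blocks)

print(physical_block_reads, ac_lookups)  # prints: 3 5
```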
Thank you Ralph and Steve!
The multi-fetch of 10 made a huge difference:
Without calls to ENDNOW: TOTAL TCB CPU TIME= 3.48 TOTAL ELAPSED TIME= 79.2
Using MULTI-FETCH: TOTAL TCB CPU TIME= 1.91 TOTAL ELAPSED TIME= 14.9
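For reference, those figures work out to roughly a 1.8x CPU and 5.3x elapsed-time improvement. A quick check of the arithmetic, using the numbers reported above:

```python
# Ratios computed from the timings reported above (without vs. with MULTI-FETCH).
cpu_speedup = 3.48 / 1.91      # TCB CPU time ratio
elapsed_speedup = 79.2 / 14.9  # elapsed time ratio
print(round(cpu_speedup, 1), round(elapsed_speedup, 1))  # prints: 1.8 5.3
```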
I’ll let him know about the PHYSICAL read information as well.
Our intermediate user buffer area is set to the default of 65535, but this could easily be increased since our NAB (number of attached buffers) is set to 100. Should we look at increasing this buffer and setting the MULTI-FETCH higher?
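One rough way to reason about the ceiling: the multi-fetch buffer has to hold the chosen factor's worth of view records at once, so the maximum factor is bounded by buffer size divided by per-record size. A back-of-envelope sketch, where the 500-byte view size and the per-record overhead are assumptions for illustration, not measured values:

```python
def max_multifetch_factor(buffer_bytes, view_record_bytes, per_record_overhead=4):
    """Upper bound on the MULTI-FETCH factor a given buffer can accommodate.

    per_record_overhead is an assumed small multi-fetch header per record;
    check the Adabas/Natural documentation for the exact buffer layout.
    """
    return buffer_bytes // (view_record_bytes + per_record_overhead)

# With the default 65535-byte buffer and a hypothetical 500-byte view:
print(max_multifetch_factor(65535, 500))  # prints: 130
```

The point of the sketch: with the default buffer, a modest view leaves plenty of headroom above a factor of 10, so the buffer is unlikely to be the limiting factor here.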
A MultiFetch factor of 10 means that in one call to Adabas you are doing the equivalent of ten calls without MultiFetch.
If you were to increase the factor to 20, one call to Adabas is doing the equivalent of twenty calls without MultiFetch.
Looked at another way, with the 10 factor you save 90% of the Adabas calls to read records. With the 20 factor, you save 95% of the Adabas calls. In other words, the biggest bang for your buck (buffer area size) is in the lower range of MF factors. However, you might have the client look at the fields in the View. As Ralph mentioned, the View size directly impacts the maximum MF factor you can specify.
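The diminishing returns are easy to see by computing the saved calls for a range of factors, as a quick illustration of the arithmetic above:

```python
def calls_saved_pct(factor):
    """Percentage of Adabas calls eliminated by a given MULTI-FETCH factor."""
    return 100 * (1 - 1 / factor)

for f in (2, 5, 10, 20, 50, 100):
    print(f, round(calls_saved_pct(f), 1))
# A factor of 10 already saves 90.0% of the calls; going all the way from
# 20 to 100 only adds about 4 more percentage points.
```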