Is there any formula to determine the factor on this option?
I tried 5 in my test and it looked like the performance increased…
Thanks
There is no formula.
I start with 10. It’s not a small number - it’s an order of magnitude!
With HISTOGRAM or READ, use MULTI-FETCH ONLY if you’re processing the entire file, or if you can specify a TO clause.
If running in batch, you may want to go higher than 10, but I wouldn’t go beyond 100.
On-line, consider the unit of work. If you’ll display up to 20 records, then you may want to set MULTI-FETCH to 20. 10 is still a good number for on-line because you don’t want your memory allocation to impact upstream or downstream modules (CALLNATs). That is, what if your MULTI-FETCH is in a subprogram which is called from within a MULTI-FETCHed loop? Nested MULTI-FETCHes are cumulative in the memory they consume.
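For the on-line case, the idea looks roughly like this. This is only a sketch, and the view and field names (EMP, EMPLOYEES, NAME) are made up; the point is matching the factor to the 20-record screenful:

```
DEFINE DATA LOCAL
1 EMP VIEW OF EMPLOYEES
  2 NAME
END-DEFINE
*
/* factor matches the unit of work: one screenful of 20 records
READ MULTI-FETCH OF 20 EMP BY NAME
  WRITE EMP.NAME
  IF *COUNTER = 20   /* stop after one pageful
    ESCAPE BOTTOM
  END-IF
END-READ
END
```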
Thanks Ralph.
These are the 12 possibly identical files that I mentioned in the other topic about copycode.
Each file has about 20M records and I will be reading by a date descriptor with STARTING/TO.
We expect to have around 100K records in that range, and it will run ONLY in batch.
Funny you mentioned ON LINE, because I saw some code today that does exactly that: it reads a file and displays records, but has a multi-fetch factor of 1500! Even though it displays only 20 records at a time…
The developer claims to be very knowledgeable and thinks that multi-fetch is a solution for EVERY read… :?:
This is one of those facilities where it is very difficult to formulate hard and fast rules. As Ralph pointed out, the MF factor is an order of magnitude. If you have a file with 10 million records and must process the entire file, you “normally” would issue 10 million calls to Adabas; with a MF factor of 10, you would only issue 1 million Adabas calls.
It is important to realize that you do not save on I/O. All 10 million records get read regardless of the MF setting.
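As a sketch of that full-file pass (MYFILE, MYVIEW, and F1 are made-up names), the batch version with a factor of 10 would look something like:

```
DEFINE DATA LOCAL
1 MYVIEW VIEW OF MYFILE
  2 F1
END-DEFINE
*
/* 10 million loop iterations, but only ~1 million Adabas calls;
/* every record is still read from the database
READ MULTI-FETCH OF 10 MYVIEW IN PHYSICAL SEQUENCE
  /* process the record
END-READ
END
```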
Occasionally, large MF factors (like your factor of 1500) are appropriate. For example, suppose you have a file with 10 million records. Basically, your loop starts with the following code:
READ MYFILE IN PHYSICAL SEQUENCE
  ACCEPT IF F1 GT F2
  /* process "big records"
END-READ
Assume only 300 (just to pick a number) of the 10 million records pass the IF test. By “big records” I mean that there are 150 fields I will process, and they are long fields (say 20 bytes each just for discussion). At 3,000 bytes per record, the multi-fetch buffer must hold the factor times the record length, so I will not be able to take advantage of a large MF factor.
However, if I create two Views, one of which just has F1 and F2, and the other of which has the other 148 fields, I could code something like
READ MULTI-FETCH OF 1000 VIEW1 IN PHYSICAL SEQUENCE
  ACCEPT IF VIEW1.F1 GT VIEW1.F2
  GET VIEW2 *ISN
  /* process VIEW2
END-READ
I am now only MF’ing the VIEW1 READ, which only has two fields, perhaps only twenty bytes per record. I can use a very large MF factor without exceeding the buffer size.
What is important to understand is just what MF does; namely, save calls to Adabas. It is equally important to understand how you can hurt performance with MF. For example, something like a READ loop with a MF factor of 20, when you typically ESCAPE out of the loop after processing just 4 records, is silly. You will end up reading five times as many records, and will only save three Adabas calls per loop execution.
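That anti-pattern would look something like this (again just a sketch with made-up names): one Adabas call fetches 20 records into the buffer, but the loop escapes after 4 of them, so 16 records were moved for nothing.

```
DEFINE DATA LOCAL
1 MYVIEW VIEW OF MYFILE
  2 F1
1 #DONE (I4)
END-DEFINE
*
READ MULTI-FETCH OF 20 MYVIEW IN PHYSICAL SEQUENCE
  /* process the record
  ADD 1 TO #DONE
  IF #DONE = 4
    ESCAPE BOTTOM   /* 1 call fetched 20 records; only 4 were needed
  END-IF
END-READ
END
```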
steve