Let me start by saying that I know the MULTI-FETCH factor should be 10, or a similar number based on how many records will be processed, e.g. 15 per screen…

We have a file clean-up program, a conditional delete (2,000,000 records per run), that was coded with MULTI-FETCH with a factor of 20,000 and an ET every 500 records. This was bringing everything to a standstill, causing NAT3009 errors.

After flushing the job, the ET interval was reduced to 5, but we still got 3009s. MULTI-FETCH has now been removed and the ET interval set to 2, and I think this is working.


  1. I seem to recall that Natural ignores MULTI-FETCH for updates / deletes?
  2. This implies that the issue was the high ET interval, but we were still getting 3009s with MF and an ET of 5…
  3. Would an MF factor as large as 20,000 use only memory, or would it spill over to WORK?
  4. Would an MF factor of 10, GET logic for the DELETEs, and a normal ET of 25-50 work?

Thanks in advance…


  1. Yes, ADABAS will ignore multi-fetch if the loop has an update.

  2. I would ask, how are you doing the update and where is your ET counter in relation to your escape logic?

For a process that reads 2 million records, if it only deletes a small percentage of that number, you should read the file and use a GET to delete the records. That would allow you to use multi-fetch on the main read. You should ET based on records read, not records deleted.
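Sketched in Natural, that read-then-GET pattern might look something like this (the view, descriptor, and cutoff date are invented for illustration):

```natural
DEFINE DATA LOCAL
1 #READS (I4)
1 #ET    (I4)
1 EMP VIEW OF EMPLOYEES           /* hypothetical view
  2 TERM-DATE                     /* hypothetical descriptor
END-DEFINE
R. READ EMP BY TERM-DATE          /* plain read - multi-fetch still works
  ADD 1 TO #READS
  IF EMP.TERM-DATE LT D'2000-01-01'
     G. GET EMP *ISN (R.)         /* hold only the record to be deleted
     DELETE (G.)
     ADD 1 TO #ET
  END-IF
  IF #READS > 1000                /* ET based on records read,
   OR #ET   > 50                  /* not only on records deleted
     END TRANSACTION
     RESET #READS #ET
  END-IF
END-READ
END TRANSACTION
END
```

Note that the ET check sits outside the conditional delete, so it fires even on long stretches of records that are read but not deleted.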

A better option would be to have two programs. One that reads the records and writes the records to be deleted to a workfile (include ISN on the file) and a second program that reads that workfile, gets the records by ISN and deletes them.
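A sketch of that two-program split (the workfile number, view, and field names are placeholders):

```natural
* Program 1: select the records and write their ISNs to a workfile
DEFINE DATA LOCAL
1 EMP VIEW OF EMPLOYEES
  2 TERM-DATE
1 #ISN (P10)
END-DEFINE
R. READ EMP BY TERM-DATE
  IF EMP.TERM-DATE LT D'2000-01-01'
     #ISN := *ISN (R.)
     WRITE WORK FILE 1 #ISN       /* keep the ISN for program 2
  END-IF
END-READ
END
*
* Program 2: read the workfile, GET each record by ISN, delete it
DEFINE DATA LOCAL
1 EMP VIEW OF EMPLOYEES
  2 TERM-DATE
1 #ISN (P10)
1 #ET  (I4)
END-DEFINE
READ WORK FILE 1 #ISN
  G. GET EMP #ISN
  DELETE (G.)
  ADD 1 TO #ET
  IF #ET > 50
     END TRANSACTION
     RESET #ET
  END-IF
END-WORK
END TRANSACTION
END
```

This also gives you an audit trail: the workfile can be reviewed, or archived, before the deletes ever run.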

  1. Your multi-fetch factor is limited by ADABAS buffers and the record length of the file being read. A factor that would exceed the buffer will be adjusted.

  2. See #2 above.

Now, you start off saying your factor should be 10 if you are showing 15 per screen, but with a factor of 10, ADABAS will retrieve 20 records (two fetches) for Natural to display 15. So, if you are showing 10 records, your factor should be 10 (or 11 if your screen includes information on the next record).

Happy Coding!

Thanks Jerome! It’s not my program. This “simple” program was written by our director, who doesn’t code much anymore… I would have used GET logic with MULTI-FETCH if the delete were conditional, or better yet, split it into two programs as you suggested if unconditional.

Regarding the MF factor of 10 or 15: I didn’t express it well. I always use 10 - optimal per Jim Poole who trained me in ADABAS internals at MCI where we both worked a lifetime ago. The exception was if you knew you were reading 15 records to populate a screen, then use 15, or 7 for a week’s worth of daily records… Before and since those few years working as a DBA, I’ve worked mostly as a developer in Natural with some Assembler - I modified the SAG NATCICS interface program for some special requirements at a bank I worked at when upgrading CICS and NATURAL. So I haven’t kept current on ADABAS internals in the last 25 years.

I too thought ADABAS would ignore MULTI-FETCH if an update referenced the read. The exception would be using GET logic for the update. But this situation calls that into question.

And I was wrong about it being a conditional delete: it deletes all records earlier than a certain date, then ESCAPEs. The ET logic follows the delete, just before the END-READ.

Regarding “ET based on records read, not records deleted”, I don’t understand: If using GET logic, wouldn’t it only hold the records deleted or updated?

Again, I like the two-program solution best: it’s clean, and it allows a review of the records to be deleted…

Regarding ADABAS buffers, automatically adjusting the buffer size makes perfect sense, but I thought I remembered something about it using WORK if the calculated buffer size was above a certain threshold… I’m old and probably confused about this.

Again, thanks so much for your prompt reply!

Best Regards,


We can’t debug a 3009 without the subcode, which you will find at the end of the error message.

Natural normally issues standard ACB calls to Adabas. ACB allows a maximum of 2,048 records to be returned via Multi-Fetch. With Natural’s latest releases, you can explicitly set Natural to issue ACBX (Adabas Control Block Extended) calls which can handle much larger buffers. In either case, WORK is not involved.

When Natural compiles a READ LOGICAL, he inserts an L3 command into the ACB. If a MULTI-FETCH factor is specified, he inserts an M into a Command Option field in the ACB. If the compiler later finds an UPDATE/DELETE attached to the READ, he replaces the L3 with an L6 (read with hold) and removes the M. Adabas would multi-fetch an L6 if told to do so, but Natural is smart enough not to ask.

Yes, but Jerome is saying that the ET should also consider how many records have been read. Something like

IF  *COUNTER > 10000
      OR #ET > 499

This test must be executed even if a record is rejected (by your date logic).


Also - if you are writing out the list of ISNs to be deleted, take a look at the ADALOD utility - it can do a mass delete, saving you from having to write a program to do the deletes. If you are deleting a large portion/large number of records from a file (more than 10 - 20%), it is usually faster to use ADALOD UPDATE DDISN than a delete program.

Boy do I miss this kind of geeky detail! We migrated from our homegrown Natural/ADABAS HR/Payroll system to Workday three years ago so I’m not up to date on everything. We still use Natural for Students and Finance, but that’s not my area.


Thank you all for your input! Most illuminating!!! I’ll use your suggestions to propose a plan to management.

I’m still curious about the cause of the 3009 (i.e. the subcode). Most likely it was a transaction timer (0) or the hold queue (6).

Although I warned that my code snippet was only “something like” the necessary logic, I would like to append one very important line:

IF  *COUNTER > 10000
      OR #ET > 499
      RESET #ET

If you choose the Adabas utility solution (ADALOD UPDATE), be aware that deleted records will not be written to the Protection Log. For recovery purposes, you need to archive the ISN list or the program that generated it.

Thanks Ralph, I had replied in SAG-L. It was a sub-code 15.

For those who are too lazy to look up the subcode. :slight_smile:

Subcode 15 means that the WORK dataset cannot handle all the records you are placing on hold. (Perhaps this is where OP was confused about WORK’s involvement with Multi-Fetch.) Either the ET threshold must be decreased in the update logic, or the ADABAS LP parameter must be increased. As a developer, I avoid involving the DBA if I can resolve the issue myself. As a DBA, I don’t like to change ADABAS parameters. :slight_smile:

OP posted on SAG-L the intent to Multi-Fetch 10 and ET every 10. My recommendation agrees with the MF factor, but I would ET after 50 updates or 1000 READs, whichever happens first. That would reduce ET commands, RESET statements, and the number of checks for ET status, while avoiding 3009.

Note that ET logic must not be conditional on update logic.

DEFINE DATA LOCAL
1 #COUNT  (I4)
1 #ET     (I4)
1 #UPDATE (I4)
1 EMP VIEW OF EMPLOYEES
  2 BIRTH
END-DEFINE
R. READ EMP BY BIRTH
  ADD 1 TO #COUNT
  IF  EMP.BIRTH < D'1900-01-01'   /* conditional update
      G. GET EMP *ISN (R.)
      UPDATE (G.)
      ADD 1 TO #ET
      ADD 1 TO #UPDATE
  END-IF
  IF  #COUNT > 1000               /* unconditional ET check
   OR #ET    > 50
      END TRANSACTION
      RESET #COUNT #ET
  END-IF
END-READ
END TRANSACTION
WRITE / 'Updated:' #UPDATE     (EM=Z,ZZZ,ZZZ,ZZ9)
END