GET statement

READ.
READ view-name BY key-name
  …
  IF update-condition-met
    GET.
    GET view-name *ISN (READ.)
    Update record
    UPDATE (GET.)
    IF #ET-CNT > ?
      END-TRANSACTION
      RESET #ET-CNT
    END-IF
  END-IF
END-READ

In the above pseudo code to update a record, is the GET statement an absolute must? Can the code be written in the following way?

READ.
READ view-name BY key-name
  …
  IF update-condition-met
    Update record
    UPDATE (READ.)
    IF #ET-CNT > ?
      END-TRANSACTION
      RESET #ET-CNT
    END-IF
  END-IF
END-READ

If yes, can you please tell me when the usage of the GET statement is an absolute must?

Thanks,
Mac

The GET statement helps by locking only the record you are actually updating. With your second READ loop it is quite possible that you read many records which do not satisfy the update condition; those records are placed on hold as well, so you can end up holding a lot of records without doing any updates.

The GET is not required. You determine its need based on the data. To be updated, a record must be placed in the Hold Queue. As such, the record is locked and may not be updated concurrently by other users. In your first example, each record processed by the GET is placed on hold. In the second example, each record read is placed on hold. This is determined by the label reference on the UPDATE statement.

If 75-100% of the records on the file are to be updated, then I would consider an additional GET as unnecessary overhead. The few records which are not updated (but held anyway) are inconsequential compared to the cost of the GET commands.

If 50-75% of the records are updated, and they are spread evenly in the file, then I would still UPDATE against the READ, but I would add *COUNTER to the ET limit, as in

IF  #ET-COUNT      > 100
 OR *COUNTER (RD.) > 1000
  THEN
    END TRANSACTION
    RESET #ET-COUNT
          *COUNTER (RD.)
END-IF

The reason for *COUNTER is to estimate the time between ETs. The DBAs set several Adabas timers. If sequential ETs are not executed within the allotted time, the program ABENDs. Often, developers are uninformed of these timer values. A rule of thumb is to guesstimate how many records you can process in 15 minutes (a reasonable and typical timer setting) at peak processing times. My example, above, is for 1,000 records. Then the ETs occur after 100 records, or within 15 minutes, whichever comes first.

If less than 50% of the records are to be updated, then use a GET. This limits how many records are unavailable for update by concurrent processes. Be sure to check the #ET-COUNT and *COUNTER fields for each loop iteration, not just for each update.

Also, remember to avoid ACCEPT and REJECT within update loops, as they don’t allow your IF #ET-COUNT statement to execute in a timely fashion.
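Something like this (just a sketch; #WANTED stands for whatever your filter condition is, and the placeholders match the examples above): a REJECT jumps straight to the next record and skips the ET check, whereas wrapping the conditional work in an IF keeps the check reachable on every pass.

RD. READ view-name BY key-name
*  REJECT IF NOT #WANTED      /* would skip the rest of the loop body, including the ET check
   IF #WANTED                 /* wrap the conditional work in an IF instead
     /* GET / update logic as above
     ADD 1 TO #ET-COUNT
   END-IF
   IF #ET-COUNT > 100         /* reached on every pass of the loop, after any UPDATE
     END TRANSACTION
     RESET #ET-COUNT
   END-IF
END-READ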

The goal is to reduce unnecessary Adabas commands (GET and ET), while limiting negative impact on other processes and avoiding program ABENDs.

If you don’t use the GET, then, as Vallish says, you put all the records not being updated on hold also. If you are updating a small part of a large file with the READ, you may encounter problems with the hold queue size.

Secondly, even with the GET logic, if you are updating a small part of a large file, you may run into timeout problems if the time between ETs is greater than TT (the Adabas transaction timer), because you are only checking the ET counter when an update condition is met. You might want to add


IF BREAK OF *COUNTER /7/  /* every 1000 
   IF #ET-CNT > 0 
      END TRANSACTION
      RESET #ET-CNT
   END-IF
END-IF

Place this in the READ loop, just before the END-READ statement.

You can also use this approach if you are updating a lot of the records read (experiment, but I would think roughly 30% or more): remove the #ET-CNT and just use the BREAK on *COUNTER to issue the ET every 100 (/8/) or 1000 (/7/) records. This will avoid the overhead of the GET but keep from having too many records on hold.
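Something like this (just a sketch, using the same pseudo view-name and placeholders as the examples above; the ET is issued unconditionally every 1,000 records read):

RD. READ view-name BY key-name
  IF update-condition-met
    Update record
    UPDATE (RD.)              /* every record read is held until the next ET
  END-IF
  IF BREAK OF *COUNTER /7/    /* every 1,000 records read
    END TRANSACTION           /* releases all held records, updated or not
  END-IF
END-READ
END TRANSACTION               /* commit the final partial batch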

Thanks for the replies. It really helped.

Can you also let me know the other uses of the GET statement, apart from the usage with the Update statement ?

Thanks,
Mac

If you have a record with a widely variable number of PE (or MU) occurrences, your primary FIND or READ might fetch just the first set of occurrences (say 10; the actual number is a trade-off between the size of the record buffer and the frequency of rereads with the GET). If the PE counter for the record read is greater than the first set (i.e., 10), you can then use a GET into a different data area to obtain the remaining occurrences. Or the data from the FIND or READ might allow you to calculate a specific occurrence that you want to read with the GET (messy, but ‘legal’).

This approach works best if the majority of the file (>75%) has fewer than x occurrences (~10), while the remaining records often have large numbers of occurrences (~100). Using a smaller set improves performance by reducing the work to decompress and transmit the empty occurrences, but trades off against the time required to make an additional call to Adabas for the additional occurrences when needed.
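Here is a minimal sketch of that technique. The DDM, views and field names (CUSTOMER, CUST-ID, PHONE-NO and its count field C*PHONE-NO) are hypothetical, CUST-ID is assumed to be a descriptor, and the cut-off of 10 occurrences is just the example value from above:

DEFINE DATA LOCAL
1 CUST-V1 VIEW OF CUSTOMER     /* hypothetical DDM
  2 CUST-ID
  2 C*PHONE-NO                 /* occurrence count maintained by Adabas
  2 PHONE-NO (1:10)            /* only the first 10 MU occurrences
1 CUST-V2 VIEW OF CUSTOMER
  2 PHONE-NO (11:100)          /* the rest, fetched only when needed
END-DEFINE
*
RD. READ CUST-V1 BY CUST-ID
  IF C*PHONE-NO > 10           /* more occurrences than the first view holds
    GET CUST-V2 *ISN (RD.)     /* reread the same record to get the remainder
  END-IF
  /* process PHONE-NO from CUST-V1 (and CUST-V2 when it was filled)
END-READ
END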

GET is the quickest way to read a record, because it only accesses the Adabas address converter plus the corresponding Data Storage block.
The disadvantage is that you have to know the ISN first.

I heard about a company which uses the ISN as its customer number. There you can take advantage of the speed of GET.
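A minimal sketch of such a direct read (the CUSTOMER view and field names are hypothetical, and the customer number is assumed to be a valid ISN on the file):

DEFINE DATA LOCAL
1 #CUST-NO (N10)               /* customer number doubling as the ISN
1 CUST-VW VIEW OF CUSTOMER     /* hypothetical DDM
  2 CUST-NAME
END-DEFINE
*
INPUT 'Customer number:' #CUST-NO
GET CUST-VW #CUST-NO           /* one access via the address converter, no descriptor read
DISPLAY CUST-NAME
END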

Doug,

Using IF BREAK OF *COUNTER /7/ is an excellent idea.

However, I don’t like wrapping the END TRANSACTION in IF #ET-CNT > 0.

Suppose you read, and put on hold, 1,000 records, but none of them gets updated. You won’t do an END TRANSACTION, and you’ll continue putting records on hold, eventually exceeding the hold queue limit.

So I would do the END TRANSACTION regardless of the number of records that were updated. Also, this code MUST be placed after the UPDATE, or you’ll get a NAT3144 (record not in hold status) error if you issued an END TRANSACTION before trying to update the record.

And Mac, in your pseudo code you don’t show where #ET-CNT is being incremented. When you UPDATE against the READ, it MUST be incremented regardless of whether the record is being updated, because all the records read are being put on hold. Many programs are written where the ET counter is incremented only when the record is updated, and then the hold queue gets exceeded. The programmer’s solution is usually to reduce the limit at which the END TRANSACTION is done, so the hold queue can hold 1,000 ISNs, but an END TRANSACTION is done every 10 records.

Therefore you have to do a GET before the UPDATE, so you only put the desired records on hold.
I think it’s best to combine Douglas’s code with a GET statement, like this:

READ. 
READ view-name BY key-name 
  … 
  IF update-condition-met 
    GET. 
    GET view-name *ISN (READ.) 
    UPDATE (GET.)
    ADD 1 TO #ET-CNT
  END-IF
  IF BREAK OF *COUNTER /7/ /* every 1000 
    IF #ET-CNT > 0 
      END TRANSACTION 
      RESET #ET-CNT 
    END-IF 
  END-IF 
END-READ
IF #ET-CNT > 0
  END TRANSACTION
END-IF

The only exception is if you want to change almost all records of a file; then a simple BREAK OF *COUNTER and an UPDATE (READ.) is enough. But if you want to change, for example, 5% of 1 million records, you should use the GET approach for performance reasons.