Auto-incremented ID Generation in ADABAS

Hello,

I’m writing a Natural program which generates a unique ID for each record and writes this record into an Adabas Table. I’m using a rather simple mechanism to determine the next ID:

READ (1) TABLE DESCENDING BY ID
  ADD 1 TO ID
END-READ
Is there a better way to accomplish this? Is the table locked by such an operation, or better: how can I prevent two users from getting the same ID?

Regards,
Carlos

Carlos, need a bit more information.

Do you need this ID as a secondary key, in a Superdescriptor, or …

There is no “table lock” in Adabas, so the answer to that question is “no”, and a simple READ without an update intent will never lock even that record / row.

Best regards

Wolfgang

Hi Carlos,

Your method is good, but first do all your processing; when you are ready to insert the new record, do the find and add 1 to it. To make sure that nobody took the new ID, do one more find with the number you generated. If no record is returned with the new ID, go ahead and store the record; otherwise add 1 to the new ID and repeat the logic until you are sure that the generated ID was not taken.
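In Natural this could look roughly like the following sketch (the view name TABLE, the descriptor ID, and the format of #NEXT-ID are illustrative assumptions):

DEFINE DATA LOCAL
1 TABLE VIEW OF TABLE
  2 ID
1 #NEXT-ID (N8)                 /* format is an assumption
END-DEFINE
*
READ (1) TABLE DESCENDING BY ID /* highest ID currently on file
  #NEXT-ID := ID
END-READ
ADD 1 TO #NEXT-ID
*
REPEAT
  FIND NUMBER TABLE WITH ID = #NEXT-ID
  IF *NUMBER = 0                /* ID is still free
    ESCAPE BOTTOM
  END-IF
  ADD 1 TO #NEXT-ID             /* taken in the meantime, try the next
END-REPEAT
*
ID := #NEXT-ID
STORE TABLE
END TRANSACTION
END

Note that there is still a small window between the FIND NUMBER and the STORE; defining ID as a unique descriptor (UQ) would make Adabas itself reject a duplicate in that case.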

Thank you,

Jamal


Carlos, you wrote “writes this record into an Adabas Table”. Thus I guess, your code looks really like
READ (1) TABLE DESCENDING BY ID
  ADD 1 TO ID
  UPDATE
END-READ

From the READ to the END-READ your record is held by the program, so that no one else can update it at the same time.
But what happens if someone tries to update it while you have it in hold? This depends on the WH parameter setting.

If WH=ON, others wait until the record is free again. That is not what you want: you read 14 (and set it in hold), the other user reads 14 (and waits), you add 1, write 15, and release it. The other user continues, also adds 1 (to his 14), and writes 15 as well.

If WH=OFF (the default), others receive an error (presumably NAT3145). In this case they can simply re-read the record (with a RETRY statement in the ON ERROR block) until it is free. This way you can be sure that the record you increment is really the most recent.

If you are not sure whether WH=OFF on your side in general, you can read the setting with USR1005N, and if needed set it with SET GLOBALS WH=OFF.
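Put together, the pattern could look like this sketch (the view TABLE and descriptor ID are illustrative assumptions):

DEFINE DATA LOCAL
1 TABLE VIEW OF TABLE
  2 ID
END-DEFINE
*
READ (1) TABLE DESCENDING BY ID
  ADD 1 TO ID
  UPDATE                   /* record is in hold until END TRANSACTION
END-READ
END TRANSACTION
*
ON ERROR
  IF *ERROR-NR = 3145      /* record in hold by another user (WH=OFF)
    RETRY                  /* re-execute the failing statement
  END-IF
END-ERROR
END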


What Lucas has outlined is correct and probably the most common technique I have seen at customer sites.

However this technique can be a bottleneck in a high volume add environment since the design essentially single-threads the store transactions.

I have two alternatives for you. Note there are many others but you expressed a desire to have sequentially assigned numbers.

I read the question a bit differently, as in “read the last record to give me the next-in-sequence unique number for a NEW record to be stored”; hence I mentioned the record won’t be locked, as it won’t get updated. So we need clarification here.

Thank you, Jamal.
It’s a good tip.

Wolfgang,

I need to read a million records from an Adabas file, but I need to read them in parts of 10,000 records. Therefore I need to know which record was read last. I have read that the ISN is not recommended for this, so I need a unique and sequential identifier.

READ BY ISN is perfect for this. It has a STARTING FROM option…
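For example (view and field names are illustrative assumptions), each run processes the next chunk of 10,000 records and remembers where to resume:

DEFINE DATA LOCAL
1 TABLE VIEW OF TABLE
  2 ID
1 #START-ISN (P10) INIT <1>
END-DEFINE
*
READ (10000) TABLE BY ISN STARTING FROM #START-ISN
  /* ... process the record ...
  #START-ISN := *ISN + 1   /* the next chunk resumes here
END-READ
END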

Eugene (Gene) Miklovich

ADABAS/NATURAL Systems Support and DBA


Is the concern with ISN that the ISNs are not sequential relative to the order in which the records were added? With ISNREUSE=OFF, Adabas will not re-assign ISNs from deleted records to new records.

Does the record id actually need to be sequential, or could it be unique and incremental?

Could a field definition like

FNDEF='01,Z1,20,U,DT=E(TIMESTAMP),SY=TIME,CR,DE,UQ,NU'

With this definition, when a record is created the field gets automatically filled in by Adabas with a unique timestamp, accurate down to the ?micro?second (just going off memory on that accuracy).

Carlos,

just as Eugene says, reading by ISN is just perfect for “chunking” if you don’t need that ID for anything else.

It’s a lot faster, and you don’t waste any space on an extra field.

If records are deleted and added in parallel to your browse operation, keep in mind what Douglas said: in that case, make sure the file is defined with ISNREUSE=OFF so that newly added records will always get the next higher ISN. If there is no parallel update activity, it just doesn’t matter.