Transaction Time Limit

Is there a way to 1) use the *TIME-OUT system variable in a Natural batch program, or 2) obtain the value of the ADABAS Transaction Time Limit (TT) in a Natural batch program?

I have a Natural batch program that will read a large sequential flat file (100 million records). For approximately 10% of those records the program will do a Find/update against an ADABAS file (only a small percentage of the Finds will result in an update). If I do my End Transaction based on the Find count, the program could time out. If I do an End Transaction based on the sequential read count, the program may be performing End Transactions for very few held records. It seems the better alternative would be to use a combination of the Find count and the *TIME-OUT value (e.g., IF Find-Count > 200 OR *TIME-OUT < 60).

However, I see that the *TIME-OUT system variable is only available under Natural Security. I could create my own time-out value with the SETTIME statement, but I would have to hardcode the Transaction Time Limit, unless the batch program can somehow obtain that value.
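The homegrown version I have in mind would look roughly like this (a sketch only: the TT value is hardcoded, all names and the record layout are invented, and the tenths-of-seconds time arithmetic with *TIMX is an assumption worth verifying against the Natural docs):

```
DEFINE DATA LOCAL
1 #TT-SECONDS (N4) INIT <600>  /* hardcoded ADABAS TT - the value I'd have to assume
1 #LIMIT      (N7)             /* tenths of seconds allowed between ETs
1 #LAST-ET    (T)              /* time of the last END TRANSACTION
1 #ELAPSED    (N7)
1 #SEQ-RECORD (A100)           /* record layout invented for the sketch
END-DEFINE
*
#LIMIT   := (#TT-SECONDS - 60) * 10   /* issue an ET with 60 seconds to spare
#LAST-ET := *TIMX                     /* start the clock
*
READ WORK FILE 1 #SEQ-RECORD
* ... FIND / UPDATE processing here ...
  #ELAPSED := *TIMX - #LAST-ET        /* T arithmetic: tenths of seconds elapsed
  IF #ELAPSED > #LIMIT
    END TRANSACTION
    #LAST-ET := *TIMX                 /* restart the clock after each ET
  END-IF
END-WORK
END TRANSACTION                       /* release anything still held
END
```

One caveat I can already see: the TT clock really starts when the first record is held, not when my program starts, so this would only ever be an approximation.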

Any Feedback would be appreciated.

A few questions about the scenario:

Is this a one-time job? If not, how often would it be run?

How many records will a FIND typically produce? How many (percentage) of the records from a single FIND will typically be UPDATEd?

How much data from the sequential records will be used when a FIND is executed and the resultant records READ and possibly UPDATEd?

The last question is a result of having once “solved” a similar problem by pre-processing a transactions file (with about seventy fields) to create a new file with just a unique key and two other fields: a small (somewhere under 5% of the original) “spin-off” file which was processed by Natural.

I have seen this once before, but don’t care for the idea. DBAs set Adabas timers as low as possible to force us programmers to keep records in the Hold Queue for as little time as possible. By checking *TIME-OUT to see if more time is left, you would be attempting to keep records on hold for as long as possible, just to reduce the number of ET commands.

- In this world of asynchronous ETs, each individual ET is not really that expensive. That’s not to say I wouldn’t try to reduce them by checking a held-record count.

- Holding records for longer than absolutely necessary is counter-productive.

- If you deferred an ET knowing that 10 seconds were left on the clock, how could you guarantee that an overall system slowdown wouldn’t result in your timer being exceeded before your next ET was issued?

I would use

IF  #ET           >= 100
 OR *COUNTER (R.) >= 2000

If the updated records are distributed evenly, that would give you an ET command count of 50,000.
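Spelled out in the loop (a sketch; the FIND/UPDATE processing is elided and all names are invented), the test would look something like this. I'd keep my own counters and reset them after each ET, so the test keeps firing every 100 updates / 2000 reads instead of on every record once *COUNTER passes 2000:

```
DEFINE DATA LOCAL
1 #ET         (N5)    /* records updated (held) since the last ET
1 #READS      (N7)    /* sequential records read since the last ET
1 #SEQ-RECORD (A100)  /* record layout invented for the sketch
END-DEFINE
*
R. READ WORK FILE 1 #SEQ-RECORD
  ADD 1 TO #READS
* ... FIND / UPDATE processing; ADD 1 TO #ET after each UPDATE ...
  IF #ET >= 100 OR #READS >= 2000
    END TRANSACTION
    RESET #ET #READS
  END-IF
END-WORK
END TRANSACTION       /* release any records still held at end of job
END
```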

Steve, thanks for the reply. This will be a monthly job, but only the first run would have several million records; subsequent runs would be around 200,000 records. The Find will always return just one record, as it is using a unique descriptor. Of the Finds, only about 10% will be updated. Only one field from the sequential record is used in the Find statement, and only 5 fields on the ADABAS file will actually be updated.

Ralph, thanks for the response. Our TT=600. I was going to check my counters and *TIME-OUT at the bottom of the sequential read loop.

Hi Thomas;

With your description of the job, I would probably make it real simple. Something like:

READ sequential file
  FIND adabas file view using the unique descriptor - the view should have just the fields needed for the following IF
    IF record should be updated
      GETR. GET using *ISN from the FIND, with a view of just the 5 fields
      UPDATE (GETR.)

The 20,000 ETs for the regular monthly run will almost certainly be cheaper than tests using *TIME-OUT or your own homegrown timer. Also, as Ralph mentioned, 20,000 ETs are really rather trivial (cheap).
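Fleshed out in Natural (a sketch only - every view, field, and work-file number here is invented for illustration), that structure might come out as:

```
DEFINE DATA LOCAL
1 CHECK-VIEW VIEW OF ADABAS-FILE  /* just the fields the IF needs - invented names
  2 UNIQUE-KEY (A10)
  2 STATUS-FIELD (A1)
1 UPD-VIEW VIEW OF ADABAS-FILE    /* just the 5 fields to be updated
  2 FIELD-A (A20)
  2 FIELD-B (A20)
1 #SEQ-RECORD                     /* sequential record layout - an assumption
  2 #SEQ-KEY (A10)
  2 #SEQ-REST (A90)
END-DEFINE
*
RD. READ WORK FILE 1 #SEQ-RECORD
  FD. FIND CHECK-VIEW WITH UNIQUE-KEY = #SEQ-KEY
    IF STATUS-FIELD = 'X'         /* stand-in for the "should be updated" test
      GT. GET UPD-VIEW *ISN (FD.) /* re-read with the small update view
*     ... modify the 5 fields ...
      UPDATE (GT.)
      END TRANSACTION             /* or batch these behind a counter, as Ralph suggests
    END-IF
  END-FIND
END-WORK
END
```

The two views keep the FIND cheap (only the fields the IF needs) and the GET/UPDATE minimal (only the 5 fields actually changed).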


Thanks for the feedback. It’s good to know that extra ETs are not that costly. I will keep it simple and use the strategy you and Ralph have posted. Thanks again.