We have been asked to improve batch performance by reducing the number of ET commands.
Until now we have issued an ET command for each business transaction (each business transaction involves more than one file, so we perform more than one STORE command).
So now (for example) we will issue one ET command for every two business transactions.
Unfortunately, how do we deal with the case where we find we need to issue a BT command for the second transaction (it will back out both transactions)?
The logic you described is typical of online applications, not batch applications. For an online system, the user enters some data, then hits a PF key or the enter key. The appropriate database activity is performed, then the screen is returned to the user with the implicit understanding that the database now reflects all the changes relevant to the last transaction. In other words, one ET for every logical transaction.
In batch, this logic is typically not employed. There is no user waiting to enter the next transaction. Instead, the next transaction is typically on an Adabas file or a work file. There is no need for one ET per logical transaction (or per two or three logical transactions). Instead, one typically sees a counter of 50 or 100 (or more); when the counter is reached, an ET is issued. The selection of the counter value can be based on many things, including how many updates are involved in a single logical transaction, how many updates you can tolerate re-doing in the process of a restart, etc.
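To make the counter idea concrete, here is a minimal Natural sketch (the view, work-file number, record layout and the limit of 100 are all assumptions, not anything from your application):

```
DEFINE DATA LOCAL
1 EMPLOYEES-VIEW VIEW OF EMPLOYEES   /* hypothetical DDM/view
  2 PERSONNEL-ID
  2 NAME
1 #IN-RECORD                         /* one input transaction record (layout assumed)
  2 #IN-ID        (A8)
  2 #IN-NAME      (A20)
1 #UPDATE-COUNT   (N5)               /* updates issued since the last ET
END-DEFINE
*
READ WORK FILE 1 #IN-RECORD
  /* edit the record here; reject it (ESCAPE TOP) before any database call
  MOVE #IN-ID   TO EMPLOYEES-VIEW.PERSONNEL-ID
  MOVE #IN-NAME TO EMPLOYEES-VIEW.NAME
  STORE EMPLOYEES-VIEW
  ADD 1 TO #UPDATE-COUNT
  IF #UPDATE-COUNT GE 100            /* commit roughly every 100 updates, not every transaction
    END TRANSACTION
    RESET #UPDATE-COUNT
  END-IF
END-WORK
*
END TRANSACTION                      /* commit whatever is left after the last full block
END
```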
As expensive as the one ET per transaction is, I have rarely seen a scenario where this is high on the list of things to be “tuned”. You might want to look further at the logic involved in a transaction.
It is also unusual to issue a BT inside a batch application. The BT is a relatively expensive command (even more than an ET), because it must in effect issue a delete for every store, a store for every delete, and an update for every update, plus keep track of the uncommitted changes.
A more typical approach is to read a transaction record, edit it, accept or reject it, then do store/delete/update(s) as needed. A Backout Transaction should really only be called on in a catastrophic situation, not as a programming shortcut. (This is true for almost any database I’ve encountered, btw, even when they call it a “rollback”.)
Unlike Steve however, I have seen many cases where the ET logic for batch did need to be tuned - reducing the frequency of the ET’s did help reduce batch run times and Adabas load. Like Finn suggests, hundreds of updates can be committed in a single ET. And yes, you do need to adjust your restart logic to allow for re-positioning your input files to the right point (hint: use transaction data with the ET command to keep track of where you are).
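To illustrate the hint about ET data (the field names and the idea of carrying a simple input-record counter as the checkpoint are my assumptions; your restart data may well need more than this):

```
DEFINE DATA LOCAL
1 #RECORDS-PROCESSED (P9)            /* position in the input file, carried as ET data
1 #SKIP-COUNT        (P9)
1 #UPDATE-COUNT      (N5)
1 #IN-RECORD         (A100)
END-DEFINE
*
GET TRANSACTION DATA #RECORDS-PROCESSED   /* after an abend, read back the last committed checkpoint
MOVE #RECORDS-PROCESSED TO #SKIP-COUNT
*
READ WORK FILE 1 #IN-RECORD
  IF #SKIP-COUNT GT 0                     /* re-position: skip input already committed
    SUBTRACT 1 FROM #SKIP-COUNT
    ESCAPE TOP
  END-IF
  /* ... edit the record and issue the STORE/UPDATE/DELETEs here ...
  ADD 1 TO #RECORDS-PROCESSED
  ADD 1 TO #UPDATE-COUNT
  IF #UPDATE-COUNT GE 100
    END TRANSACTION #RECORDS-PROCESSED    /* the checkpoint travels with the commit
    RESET #UPDATE-COUNT
  END-IF
END-WORK
*
END TRANSACTION #RECORDS-PROCESSED
END
```

On a clean first run you would of course make sure the checkpoint starts at zero (or handle restart in a separate step); this is just the shape of the idea.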
The demand to reduce ET commands arose because we are using Hot DRP.
So, for every ET an I/O operation for the DRP is involved.
The system is quite old, so we don't want to make too many waves while making changes.
It is divided into a lot of subprograms, and each one deals with one file.
So we perform a STORE command without an ET.
In every subprogram we do some checks and decide whether to perform a BT.
Now, if we add an ET counter, we can't continue to use the BT command in the middle of the process.
We need to find a solution to deal with it.
So far, with no luck.
Your description suggests several approaches that might save you resources.
Suppose that in each of the subprograms you do not issue a STORE. Instead, you populate global variables (or fields in a GDA). After all the subprograms have executed, you would check a flag (maybe unnecessary; merely getting this far would be the indication you need) and then issue all the STOREs followed by an ET.
If one of the subprograms decides that the transaction should be backed out, instead of a BT, all you would have to do is RESET some global variables.
This could be extended to multiple transactions. Instead of a single global variable per STORE, you could have arrays, say dimensioned to 10. Then you would run through ten successful transactions without ever doing a STORE. As noted above, to back out a transaction you would simply null out the relevant array member(s).
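A rough sketch of how the array variation might look in the calling program (all names are invented; I am assuming one record per subprogram/file and a block size of 10):

```
DEFINE DATA LOCAL
1 #PENDING (1:10)                    /* buffer for up to 10 logical transactions
  2 #TXN-VALID   (L)                 /* FALSE = transaction was rejected, nothing to store
  2 #FILE-A-DATA (A50)               /* record image each subprogram would otherwise have stored
  2 #FILE-B-DATA (A50)
1 #T (I4)
1 FILE-A-VIEW VIEW OF FILE-A         /* hypothetical DDMs
  2 FIELD-A (A50)
1 FILE-B-VIEW VIEW OF FILE-B
  2 FIELD-B (A50)
END-DEFINE
*
/* Each subprogram fills #PENDING(#T) instead of issuing its own STORE.
/* If a check fails, it simply does RESET #TXN-VALID(#T) -- no BT is needed.
*
FOR #T = 1 TO 10                     /* flush the block: STOREs only for surviving transactions
  IF #TXN-VALID(#T)
    MOVE #FILE-A-DATA(#T) TO FILE-A-VIEW.FIELD-A
    STORE FILE-A-VIEW
    MOVE #FILE-B-DATA(#T) TO FILE-B-VIEW.FIELD-B
    STORE FILE-B-VIEW
  END-IF
END-FOR
END TRANSACTION                      /* one ET for the whole block of 10
RESET #TXN-VALID(*) #FILE-A-DATA(*) #FILE-B-DATA(*)
END
```

Whether the buffer lives in a GDA or is passed as parameters to the subprograms is up to you; the point is only that nothing hits the database until the whole block has passed its checks.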
One more thing to note. From your last posting, it is not clear how many STOREs (each with their own subprogram) constitute a transaction. CALLNATs are relatively expensive. If you have a large number of STOREs per transaction, you might try combining the STOREs into fewer (one?) subprograms.
Of course, I do not know your application, so I do not know if any of the above is feasible. But, perhaps they are, or might suggest variations to you that would be feasible.
As noted in an earlier post, we do not really “know” the application.
What percentage of logical transactions get backed out? Hopefully just a very small percentage. If the percentage is large, you will be doing a lot of work (adding records, then deleting them). If, instead, you accumulate the updates logically (global variables, stack, etc.), you would add records only when you know the entire logical transaction is valid.