I haven’t checked this in a while, but I wonder which statement would be more efficient, especially when I have an AT BREAK in the code.
Thanks
ACCEPT and IF…ESCAPE cause the same grief with AT BREAK: the rejected records will cause breaks. To avoid this, use WHERE (which may introduce another set of logic problems).
The only difference between ACCEPT and IF is that the ESCAPE is implicit in the former and explicit in the latter. An additional statement (ESCAPE) interpreted at execution time will increase CPU usage, so ACCEPT should execute minimally faster.
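To make the equivalence concrete, here is a minimal sketch of the two forms (view and field names are illustrative, not from the original post):

    * Form 1: ACCEPT, with the ESCAPE implicit
    READ MYVIEW BY KEY
      ACCEPT IF FIELD NE 0
      /* process record
    END-READ
    *
    * Form 2: IF with an explicit ESCAPE TOP
    READ MYVIEW BY KEY
      IF FIELD = 0
        ESCAPE TOP
      END-IF
      /* process record
    END-READ

Both loops process the same records; the second simply spells out the ESCAPE TOP that ACCEPT performs implicitly.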
In addition to WHERE, another way to avoid breaks from records that would fail an ACCEPT/REJECT or IF…ESCAPE TOP is to put that logic in a BEFORE BREAK PROCESSING clause.
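For comparison, a minimal sketch of the WHERE approach mentioned above (view and field names are illustrative): records rejected by WHERE are filtered out before AT BREAK evaluation and system-function processing, so they never trigger a break.

    READ MYVIEW BY KEY WHERE FIELD NE 0
      AT BREAK OF AGENCY
        WRITE 'Agency had' OLD(AGENCY) SUM(FIELD)
      END-BREAK
    END-READ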
I have not done any in-depth performance comparisons between the two. However, I discarded ACCEPT/REJECT long ago. The reason? Readability.
Notwithstanding the posting on this site by someone who wanted to know the difference between ESCAPE TOP and ESCAPE BOTTOM (yeeesh), most programmers, even those new to Natural, easily understand ESCAPE (both forms). ACCEPT/REJECT, by contrast, seems to be more confusing, even to experienced programmers. Of course, the difference between contiguous and non-contiguous ACCEPT/REJECTs may be a major contributor to such confusion.
steve
Thanks Ralph & Steve.
The goal is to print all records where field AMOUNT is not equal to zero,
with totals of AMOUNT at each break of field AGENCY.
As you pointed out, ACCEPT/REJECT conflicts with AT BREAK.
This is a draft of the code:
READ FILE BY KEY
  ACCEPT IF FIELD NE 0
*
  AT BREAK OF AGENCY
    WRITE 'Agency had' OLD(AGENCY) SUM(FIELD) (EM=Z,ZZZ,ZZ9.99)
    NEWPAGE
  END-BREAK
END-READ
I must confess that I never used the WHERE clause, assuming it would hurt performance with a non-descriptor in my search criteria (Nat 1.7, and before hyper-descriptors :oops: ). The BEFORE BREAK looks good, but I will try the WHERE option, since it isolates my data right where it needs to be.
WHERE can cause performance problems if you’re not intending to read the entire file, but these can be avoided with the use of FROM and TO clauses.
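A sketch of what that combination might look like (view, field, and key values are placeholders, not from the original post): FROM/TO bounds the logical read to a key range, so WHERE only has to test the records inside that range instead of the whole file.

    READ MYVIEW BY KEY FROM 'A000' TO 'A999'
        WHERE FIELD NE 0
      /* process record
    END-READ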
Hi Carlos;
A couple of observations. I presume AGENCY is a subset of KEY; otherwise, your code doesn’t make a lot of sense.
Now, some assumptions which may or may not be relevant in your actual system.
Suppose you have a large file (pick a number, say 10 million records) and your records are relatively small. It could then be much more expensive to READ BY KEY than to READ PHYSICAL (assume the file is not regularly sorted by KEY).
Further suppose a large percentage of the records have FIELD = 0 (and hence will be rejected).
The following code would likely be quite a bit more efficient than what you have:
READ FILE IN PHYSICAL SEQUENCE
  ACCEPT IF FIELD NE 0
END-ALL
SORT BY AGENCY USING …
  AT BREAK OF AGENCY
    :::
  END-BREAK
  process record
END-SORT
steve
Hi Steve:
Yes, AGENCY is part of the key. The file is small (300K records), and it is working fine with the WHERE clause.
Your suggestion of READ PHYSICAL / SORT would be good if the file were bigger and the program ran only in batch. In our case, the users want to see the report online.
Thanks again.
Carlos