Flushing the buffer for a work file

Hello.

I have a batch program that runs for several hours and logs important information (e.g. errors) to a work file. Natural apparently buffers the work file output in memory and only writes it to disk when the buffer is “full”. If the output contains only a few lines (e.g. when only a few errors occur), it takes hours until the work file physically contains the information, so that I can take a first look at it.

Is there any way of telling Natural to “flush” the buffer to disk manually, e.g. after a record counter has reached a certain number?

Best regards,
Stefan

Stefan,

I think the only way around this is to open the work file in APPEND mode, write the log message, and close the file again.
The next time the program needs to log something, it is opened in APPEND mode again, so nothing is lost.

This way the file should always be fully accessible from outside.
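
Roughly along these lines (a minimal sketch for Natural Open Systems; the work file number, the path and the record layout are just placeholders):

DEFINE DATA LOCAL
1 #LOG-LINE (A250)
END-DEFINE
*
* APPEND keeps what was written before, so closing and reopening loses nothing
DEFINE WORK FILE 1 '/tmp/batch.log' TYPE 'ASCII' ATTRIBUTES 'APPEND'
*
#LOG-LINE := 'some log message'
WRITE WORK FILE 1 #LOG-LINE
*
* Closing the work file forces the buffered output onto the disk
CLOSE WORK FILE 1
END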

Best regards,

Wolfgang

Dear Wolfgang,

I’ll try that one, although I’m a bit concerned about the I/O overhead. As of now, the program only opens/closes the work file once. When I change this as you suggested, the I/O effort will increase dramatically. After all, the Natural buffer is a good thing :slight_smile: It’s just that in some cases I would like to be able to control it a little bit more…

However, I’ll let you know about my results as soon as I get a chance to test it.

Best regards,
Stefan

Stefan,

I suggested this approach because I got the impression that this program only writes
a few messages every once in a while. If that isn’t the case and the number of messages
to be written is substantial, then your concerns are appropriate and the proposed
solution isn’t feasible :wink:

You may have to go for a compromise then, i.e. only “flush the buffer”
by closing the work file for important messages and keep it open for less severe /
informational messages.
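
As a rough sketch of such a compromise (an external subroutine; the severity codes and all names are made up, and I assume the work file has been defined with ATTRIBUTES 'APPEND' as described above):

DEFINE DATA
PARAMETER
1 #P-SEVERITY (A1)    /* assumed codes: 'E' = error, 'I' = informational
1 #P-MESSAGE  (A250)
END-DEFINE
*
DEFINE SUBROUTINE WRITE-LOG
*
WRITE WORK FILE 1 #P-SEVERITY #P-MESSAGE
*
* Only 'flush' (close) the work file for important messages;
* the next WRITE WORK FILE reopens it, appending if so defined.
IF #P-SEVERITY = 'E'
  CLOSE WORK FILE 1
END-IF
*
END-SUBROUTINE
END

It could then be called like PERFORM WRITE-LOG 'E' 'something went wrong'.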

There are other options, like piping the messages to another process which issues
unbuffered writes, but that’s probably overkill.

Best regards,

Wolfgang

And, the old standby… write important messages to the database, to a “logging” file. Just make sure the ETs are coordinated appropriately!
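
For illustration, a minimal sketch of the idea (the LOG-FILE view and its fields are invented here; note that the END TRANSACTION also commits whatever other updates the session has pending, which is exactly why the ETs need to be coordinated):

DEFINE DATA LOCAL
1 LOG-VIEW VIEW OF LOG-FILE     /* LOG-FILE is an assumed DDM for the logging file
  2 LOG-TIMESTAMP (T)
  2 LOG-MESSAGE   (A250)
END-DEFINE
*
LOG-VIEW.LOG-TIMESTAMP := *TIMX
LOG-VIEW.LOG-MESSAGE   := 'some important message'
STORE LOG-VIEW
*
* Careful: this commits all updates the session currently holds, not just the log record
END TRANSACTION
END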

If you have a newer Adabas, there’s the possibility to define a no-BT file…

http://techcommunity.softwareag.com/ecosystem/documentation/adabas/ada6110os/relnotes/new.htm#new

I didn’t try this out but I’m sure I’ll need it some day.

The logging framework I use already supports logging to the database (although in my case it’s Oracle and not Adabas) so I’m aware of solutions that don’t use work files at all (anyway, thanks for the suggestion :slight_smile: ).

However, my question was aimed at a situation in which you don’t have a choice but need to use work files. And as I know “flush” from other programming languages I thought maybe Natural offers something similar (although I was pretty sure it doesn’t after searching the documentation for quite some time :wink: ).

As always, it’s not that easy :slight_smile: It depends on the situation: the logging is used heavily during development but is reduced to important messages in production. So if the work file gets “flushed” (closed/reopened) for every single message, that would slow down development. But having to wait for hours for the first error message to show up in production, without flushing, isn’t what I want either.

I think I’ll try APPENDing the work file and only closing (= flushing) it after an important message. That seems like a good compromise…

Best regards,
Stefan

Hi Stefan,
I think I’d try to “unblock” the work file (I mean, in JCL: DCB=(RECFM=F)) to simulate
“the buffer is full”.
Perhaps Natural (or rather the OS) would then have no choice but to write out right away whatever comes into the buffer (?). I have to admit I did not test it myself, but… it may work.

Best of luck,
Nikolay

I’ve just realized it may NOT be a mainframe environment; if so, my suggestion would not be good, of course :slight_smile:
Sorry in such a case.
NK

You are talking about z/OS, but this question is about Natural on Open Systems (Windows and *IX, that is);
for the latter there is no way of tweaking the work file behaviour from outside.

I had a similar problem a few months ago. My solution (under Nat 6.1) was to do a CALL ‘SHCMD’ ‘echo sometext >> mylogfile.txt’.

Of course this is quite an overhead (opening a new process and running the echo command).
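
Roughly what that looked like (the path and the message text are just placeholders):

DEFINE DATA LOCAL
1 #MSG (A100)
1 #CMD (A250)
END-DEFINE
*
#MSG := 'some log text'
*
* Build the shell command; >> appends to the logfile,
* and the shell writes it out immediately, unbuffered from Natural's point of view
COMPRESS 'echo' #MSG '>> /tmp/mylogfile.txt' INTO #CMD
*
CALL 'SHCMD' #CMD
END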

I wrote a user exit that writes to a logfile using C functions. With C functions you can flush the buffer, and you can do a few more things like checking the file size.

On Open Systems you can use tools like gtail to keep an eye on these logfiles.

Best regards
D.E.

Sounds interesting.
I think the best thing you can do is to use system functions to write logfiles. On Solaris and other *IX systems this should be done using syslogd.

Would it help to count the number of incidents or errors you want to log, and/or the elapsed time, and - depending on some time limit or number of log lines - only do the close / reopen (append) of the work file when the time or counter limit has been reached?
That way you could at least limit the time you have to wait before you can access the log, instead of waiting for hours.
I haven’t tried this for logging, but I’d imagine it could work like an update counter, so as to do an ET automatically after n transactions, or - at certain points in your processing - force an ET.

Hi Eva,

the use of a “countdown” timer could be a possible solution. I wonder how I could implement this in Natural… When entering the logging framework I could store the current time in an independent variable and check against it every time the logging is called after that. Sounds interesting :slight_smile:
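
Maybe something along these lines (just a sketch; the 60-second interval, the variable names and the work file number are my assumptions, and +LAST-FLUSH / +FIRST-CALL-DONE are application-independent variables so they survive between calls):

DEFINE DATA
PARAMETER
1 #P-MESSAGE (A250)
INDEPENDENT
1 +LAST-FLUSH      (T)   /* time of the last 'flush' (close), kept across calls
1 +FIRST-CALL-DONE (L)
END-DEFINE
*
DEFINE SUBROUTINE LOG-MESSAGE
*
WRITE WORK FILE 1 #P-MESSAGE
*
IF NOT +FIRST-CALL-DONE        /* first call: just remember the time
  +LAST-FLUSH      := *TIMX
  +FIRST-CALL-DONE := TRUE
END-IF
*
* Close (and thereby flush) the work file at most once per interval
IF (*TIMX - +LAST-FLUSH) GT 600   /* format T counts in tenths of seconds, so 600 should be 60 seconds
  CLOSE WORK FILE 1
  +LAST-FLUSH := *TIMX
END-IF
*
END-SUBROUTINE
END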

Best regards,
Stefan

Hi Stefan,

In the following example, which is used in batch processing, fields in a global data area are used.
Maybe the sample is of help, even though it is about ET control rather than logging, as in your case?

The relevant fields for controlling when an ET is made are (a sketch of possible definitions follows the list):

  • a counter (here G-ET-C)
  • a logical stating whether or not ETs should be made, here G-ET-MODE (e.g. in a simulation mode you would want to back out all
    transactions, therefore you would set this logical to FALSE)
  • a logical stating whether an ET should be forced regardless of the counter or the elapsed time (here G-ET-NOW)
  • plus several time variables and counters, here:
    G-SYS-START /* starting time
    G-SYS-ET-LAST /* time of last ET
    G-SYS-ET-DIFF /* milliseconds until next ET
    G-SYS-ET-C /* update counter
    G-SYS-ET-DONE /* ETs conducted
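
Such a GDA could look roughly like this (the formats are only an example, not taken from the actual application):

* Sketch of the global data area contents (formats are only an example)
1 G-ET-C        (P7)   /* ET after at most this many updates
1 G-ET-MODE     (L)    /* FALSE = simulation mode, changes are backed out
1 G-ET-NOW      (L)    /* TRUE = force an ET regardless of counter / time
1 G-SYS-START   (T)    /* starting time
1 G-SYS-ET-LAST (T)    /* time of last ET
1 G-SYS-ET-DIFF (N7)   /* milliseconds until next ET
1 G-SYS-ET-C    (P7)   /* update counter, counts down from G-ET-C
1 G-SYS-ET-DONE (P7)   /* ETs conducted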

After a set of processing steps has been completed, a subroutine END-TRANSACTION is performed, which is contained in an included copycode.
Where necessary, the ‘PERFORM END-TRANSACTION’ can be preceded by the statement ‘G-ET-NOW := TRUE’ to force an ET at this particular point.
This subroutine does the following:

* ET will only be made after G-SYS-ET-C records
* or latest after G-SYS-ET-DIFF milliseconds
* but it can also be forced by setting G-ET-NOW to TRUE

SUBTRACT 1 FROM G-SYS-ET-C
IF ( NOT G-SYS-ET-C = 1 THRU G-ET-C )
    OR
    ( (*TIMX - G-SYS-ET-LAST) GT G-SYS-ET-DIFF )
    OR
    G-ET-NOW

  IF NOT G-ET-MODE          /* If no ET is to be made,
    BACKOUT TRANSACTION     /* all changes are discarded
  END-IF
* ...
* >>> your coding re.
*     saving restart data, etc.
* <<<
*
  END TRANSACTION
*
* >>> your coding re.
*     logging, tracing ...
* <<<
*
  ADD 1 TO G-SYS-ET-DONE    /* count ETs done
  G-SYS-ET-C    := G-ET-C   /* ET control counter is reset to an initial value
  G-SYS-ET-LAST := *TIMX    /* save time of last ET
  G-ET-NOW      := FALSE    /* ET-Now is only valid once
END-IF