Doubts

I have a work file with variable-length records whose fields are delimited. I tried using SEPARATE, but a single record can exceed 10,000 bytes (one record, not the whole file), and SEPARATE does not work on group variables.
So the alternative was to read the file into an array with roughly that many occurrences, check for the delimiters, and load the values into a fixed-length record. I have to run a daily job for this, and the records will be coming in by the millions. Can someone suggest a better alternative, or a way to improve performance?

I think your code is like the following:

define data local
1 #a (A10000)
1 #array (A50/1:5)
end-define
read work file 1 #a
separate #a into #array(*)  with delimiters ';'
display #array(*)
end-work
end

If you’ve got millions of records, it may be faster to read unformatted, so that you read more bytes at once and minimize the number of hard-disk seeks… But that’s only an idea. Maybe Natural does that kind of buffering automatically.
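If your Natural supports it, the RECORD option of READ WORK FILE points in that direction: the record is transferred as-is, with no field-by-field format checking or conversion. A minimal sketch, assuming the record has to land in an array of A250 chunks because of the 253-byte field limit mentioned below:

define data local
1 #chunk (A250/1:40)   /* 40 x 250 bytes = up to 10,000 bytes per record
end-define
read work file 1 record #chunk (*)   /* RECORD: no format check or conversion
  /* scan #chunk (*) for the delimiters here
end-work
end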

Matthias: If that were the case it would have been great, but I am using an older version of NATURAL, and the maximum length allowed for a single field is 253.
That is, #A (A253). If I were able to read into something like #A (A10000), there would have been no need for an array; I would have used SEPARATE directly,
because #A would not have been a group variable. I searched and found that the maximum length allowed for a single variable is 1 GB, but that is limited by system parameters,
and since I am working in a corporation I cannot change the system parameters on my own.
But thanks for the suggestion. More suggestions are always welcome.
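For reference, the closest one can get under that limit is something like the following, and it is exactly where SEPARATE gives up, because #A can only be defined as a group (a sketch only; the sizes are illustrative):

define data local
1 #a                         /* group: 40 x 250 = 10,000 bytes
  2 #a-chunk (A250/1:40)
1 #array (A100/1:300)
end-define
read work file 1 #a-chunk (*)
* separate #a into #array (*) with delimiters ','   /* rejected: #a is a group
end-work
end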

It’s always been a pain in Natural to process comma-delimited non-fixed width data as input. When this happens, I always ask if the sender can reformat the data into fixed width columns such that you can REDEFINE the input record.

Can you make such a suggestion?
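If the sender can deliver fixed-width records, the parsing disappears entirely; you lay a REDEFINE over the input record and the fields are just there. A sketch with made-up field names and widths:

define data local
1 #rec (A250)
1 redefine #rec
  2 #cust-name  (A30)
  2 #trans-date (A8)
  2 #amount     (N9.2)
  2 #filler     (A201)   /* pad out to the full record length
end-define
read work file 1 #rec
  display #cust-name #trans-date #amount
end-work
end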

Brian, that would be my job: to take an FTP file into a GDG that has variable-length records of a large length. That is why I am facing problems, and I have to handle everything at my end, which is why there is so much ambiguity about what to apply.

Is there a set structure for this delimited file (e.g. FIELD1 is name, FIELD2 is date, …)? Is there a maximum length for each field?

As you are taking an FTP dataset from someone, my suggestion was to ask the sender to format the data differently. Data formats should be something mutually agreed upon rather than dictated, and such agreements should become standards.

My guess is that the sender probably doesn’t care whether the data is comma-delimited or fixed width, is most familiar with databases that have easy import capabilities for comma-delimited files, and didn’t realize you would have a problem with it.

Just ask the sender if that is possible.

Hi Aseem,
Do you think somebody around you would be able to create for you a COBOL (Assembler would be even nicer!) routine that simulates the NATURAL SEPARATE statement?

Something like
Read work file 1 #A
Call 'SEPARATE' using …
et voila… :slight_smile:
End-work

I remember that in my dark, dusty past I used to write something like this on my own, but after getting my hands on NATURAL 4 (I’m talking about mainframe) I got very lazy :-), sorry :slight_smile:

Good luck.

Jerome: Yes, the output file has a fixed structure, but for the file being read, the record length would be (the number of delimiters, which equals the total count of fields in the file layout) + (the output file layout length).
Brian: Can’t do that, because that is exactly the requirement from the corporation’s side: take a variable-length record, split the file on its delimiters, put it into a fixed-length format, and store it.
Nikolay: I also resorted to COBOL because I couldn’t figure out a solution in Natural. Since COBOL allows variables of nearly 32,000 bytes in length, I used the UNSTRING verb to split on the delimiters and store the result in a file.
I would really appreciate it if the same could be done in NATURAL.
Thanks

Aseem,

I'm sorry if I misunderstood you, but I meant using NATURAL (exactly as Matthias recommended to us):

define data local
1 #a (A10000)
1 #array (A50/1:5)
end-define
read work file 1 #a
separate #a into #array(*) with delimiters ';'
display #array(*)
end-work
end

However, since your NATURAL version would not allow you to split the data in such an elegant way, you would have to write something like

define data local
1 #a (A10000)
1 #array (A50/1:5)
end-define
read work file 1 #a
call 'seproutn' using #a #array(*)
display #array(*)
end-work
end

where the routine SEPROUTN (written in a language other than NATURAL) would put your data directly into #array (1 : as many occurrences as needed). Thus, your program would stay in Natural, with just one small exception.

Anyway, I'm glad you solved the problem.
Best regards,

Nikolay

Would something like this work?


 DEFINE DATA LOCAL
 1 #WKF       (A100/250)
 1 #HOLD      (A101)
 1 #WORK      (A250)     INIT <H'FF'>
 1 #FIELDS    (A100/300)
 1 #I         (I4)
 1 #DELIMITER (A1) INIT <','>
 1 #CTR       (I4)
 END-DEFINE
 *
 READ WORK FILE 1 #WKF (*)
   RESET #CTR #FIELDS (*)                /* start each record fresh
   MOVE H'FF' TO #WORK
   FOR #I = 1 TO 250
     MOVE #WKF (#I)     TO #HOLD
     MOVE H'FF'         TO SUBSTRING (#HOLD,101,1) /* to preserve
                                                   /* embedded blanks
     COMPRESS #WORK #HOLD INTO #WORK LEAVING NO SPACE
     EXAMINE #WORK FOR H'FF' DELETE FIRST /* drop the chunk-boundary marker
     REPEAT
       IF NOT #WORK = SCAN #DELIMITER
         ESCAPE BOTTOM
       END-IF
       ADD 1 TO #CTR
       SEPARATE #WORK INTO #FIELDS (#CTR) REMAINDER #WORK
         WITH DELIMITERS #DELIMITER
     END-REPEAT
   END-FOR
   EXAMINE #WORK FOR H'FF' DELETE        /* the last field has no delimiter
   IF #WORK NE ' '
     ADD 1 TO #CTR
     MOVE #WORK TO #FIELDS (#CTR)
   END-IF
   PERFORM MOVE-FIELDS-ARRAY-TO-FILE-VIEW
 END-WORK
 END

Nikolay: I appreciate the help, but I want the code in one language.
Jerome: The initial reading process is fine, but my biggest concern is the record length, because it is nearly 10,000; had it been around 250, the problem would have been easily solved. The version I am using does not allow an elementary field to go beyond a length of 253.

Guys, I checked: NATURAL allows variables up to 1 GB, but the system limits them to a very short length (in my case 253). Does anyone have any insight on this? If I could take advantage of that limit, I could make the record reading very simple.

thanks

Hi Aseem,
Which Natural version are you using?

  • I mean: The large variables have been around for quite some time… (see the sketch below)

Finn

  • Try issuing SYSPROD from commandline to get this info…
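For comparison, on a Natural version with large and dynamic variables, the whole job collapses to a few lines. A sketch, assuming dynamic alphanumerics and a work file that accepts them (neither of which exists in 2.2.4):

define data local
1 #a (A) dynamic            /* grows to the actual record length
1 #array (A100/1:300)
end-define
read work file 1 #a
  separate #a into #array (*) with delimiters ','
end-work
end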

2.2.4, a very old version I presume, so I have to work around it :frowning:

Hi Aseem,

Natural 2.2.4 is ancient history; I hope you have some qualifications in archeology. Natural has come a very long way in the last 15 years. Good luck!

Cheers and sympathy,

Graeme Lane

Just wondering at the fact that there are still shops running under Natural 2 that haven’t yet upgraded :shock:

Guys, what exactly does it take to upgrade all the applications that run on an older NATURAL version? Are there some solutions I can present to my corporation?

Hi Aseem,

I’d guess that management isn’t all that worried about gambling the business on software that the vendor hasn’t supported for over a decade, and it would seem they have gotten away with it so far.

By not taking advantage of new versions, you aren’t getting best value for the money you pay for support/maintenance. My suspicion is that a decision was made by management a long time ago to stop paying maintenance and to keep their fingers crossed hoping nothing breaks.

So before investing too much effort investigating the upgrade process, I suggest the first question you need answered is whether or not the company has a current support contract with Software AG. If not, you’ll need to have a conversation with Software AG (and with other vendors, I suspect).

Cheers,

Graeme

Now I understand your initial question!

It sounds to me like you’ve got a working solution - maybe something like Jerome LeBlanc’s code. But I don’t think there is a trick to make it much faster… :frowning: