Hello Experts,
I need to handle a scenario where I have to parse a flat file that is about 30 MB in size (and keeps growing). For each record in the file I need to check a date field: if the date is the current date, process the record; otherwise ignore it.
I tried to implement this by loading the file as a stream and setting iterator to true. Are there other, more efficient ways of handling this?
Hi Rajesh
This is what I did.
Instead of loading the complete file into webMethods memory and then using convertToValues to parse it, I broke the BIG file into small chunks; if a chunk had relevant data, I passed it to convertToValues and appended the result to a document list.
In order to break the BIG file into small chunks I used:
- pub.file:getFile with loadAs set to stream
- pub.io:createByteArray with length set to the length of each row in the flat file
- pub.io:read, called repeatedly, to read the stream until the end of the file is reached

The rest of the logic depends on your situation; a plain-Java sketch of the loop is below.
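Note this loop assumes every record has the same byte length. Purely for illustration, here is a standalone plain-Java version of the same idea (the file name, 128-byte record length, and charset are placeholder assumptions, and Java 9+ is assumed for readNBytes; in IS you would use the pub.file/pub.io FLOW steps above):

```java
import java.io.BufferedInputStream;
import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.nio.charset.StandardCharsets;

public class FixedLengthChunkReader {
    public static void main(String[] args) throws IOException {
        final int recordLength = 128; // assumed fixed length of one row, like pub.io:createByteArray

        // plain-Java equivalent of pub.file:getFile with loadAs=stream
        try (InputStream in = new BufferedInputStream(new FileInputStream("big.dat"))) {
            byte[] buffer = new byte[recordLength];
            int n;
            // keep reading fixed-size chunks until end of stream, like looping pub.io:read
            while ((n = in.readNBytes(buffer, 0, recordLength)) > 0) {
                String record = new String(buffer, 0, n, StandardCharsets.ISO_8859_1);
                // if the chunk holds relevant data, this is where you would
                // hand it to convertToValues and append to the document list
                System.out.println(record);
            }
        }
    }
}
```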
Hello chakradar,
Thanks for the response.
But the file I have is a delimited flat file (I mean the record length is not fixed), so how can I specify the length of each record?
If your logic can also be applied to a delimited file, could you please elaborate on the implementation?
There are two ways in which you can parse a huge flat file into an IData document.
As you mentioned in your post, set iterator in convertToValues to true. This tells IS not to load and parse the flat file in one go, but to parse it record by record. When the last record is reached, the value of ffIterator in the output will be null. So when you set iterator to true, you have to call convertToValues inside a REPEAT step and exit the loop when the ffIterator variable is null (a rough sketch follows). This process works fine for almost all cases, but in my experience it may go for a toss or take a very long time if the file is very, very large (say 300 MB). (Someone correct me if I am wrong.)
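For illustration only, here is a minimal Java-service sketch of that loop, assuming a flat file schema named myns:recordSchema and a pipeline stream variable ffData (both placeholder names); in FLOW you would build the same thing with a REPEAT step that exits when ffIterator is null:

```java
import com.wm.data.IData;
import com.wm.data.IDataCursor;
import com.wm.data.IDataFactory;
import com.wm.data.IDataUtil;
import com.wm.app.b2b.server.Service;
import com.wm.app.b2b.server.ServiceException;

public final class IterateFlatFile {
    // ffData: the stream produced by pub.file:getFile with loadAs=stream.
    public static void iterate(IData pipeline) throws ServiceException {
        IDataCursor pc = pipeline.getCursor();
        Object ffData = IDataUtil.get(pc, "ffData");
        pc.destroy();
        try {
            Object ffIterator = null;
            boolean first = true;
            do {
                IData input = IDataFactory.create();
                IDataCursor ic = input.getCursor();
                IDataUtil.put(ic, "ffSchema", "myns:recordSchema"); // placeholder schema name
                IDataUtil.put(ic, "iterator", "true");
                if (first) {
                    IDataUtil.put(ic, "ffData", ffData);        // first call: hand over the stream
                    first = false;
                } else {
                    IDataUtil.put(ic, "ffIterator", ffIterator); // later calls: resume from the iterator
                }
                ic.destroy();

                IData output = Service.doInvoke("pub.flatFile", "convertToValues", input);
                IDataCursor oc = output.getCursor();
                IData ffValues = IDataUtil.getIData(oc, "ffValues"); // one parsed record
                ffIterator = IDataUtil.get(oc, "ffIterator");        // null after the last record
                oc.destroy();

                // inspect ffValues here: keep the record only when its
                // date field equals the current date
            } while (ffIterator != null);
        } catch (Exception e) {
            throw new ServiceException(e.getMessage());
        }
    }
}
```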
The second way is to split the large file into many small files and process them one by one. The Java service that splits a large file into small chunks is located in the shareware section, and there are many posts on this site that discuss how to use it.
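That shareware service isn't reproduced in this thread, but a standalone sketch of the same splitting idea (the file name and chunk size are made-up values) could look like this:

```java
import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;
import java.io.PrintWriter;

public class FlatFileSplitter {
    // Split a large line-delimited flat file into smaller files of
    // linesPerChunk lines each (e.g. big.dat -> big.dat.part0, big.dat.part1, ...).
    public static void split(String path, int linesPerChunk) throws IOException {
        try (BufferedReader reader = new BufferedReader(new FileReader(path))) {
            String line;
            int lineCount = 0, part = 0;
            PrintWriter writer = null;
            while ((line = reader.readLine()) != null) {
                if (lineCount % linesPerChunk == 0) {
                    if (writer != null) writer.close(); // finish the previous chunk
                    writer = new PrintWriter(path + ".part" + part++);
                }
                writer.println(line);
                lineCount++;
            }
            if (writer != null) writer.close();
        }
    }

    public static void main(String[] args) throws IOException {
        split("big.dat", 10000); // placeholder path and chunk size
    }
}
```

Each small file can then be fed to convertToValues on its own, which keeps the memory footprint bounded regardless of how large the original file grows.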