HOW TO READ A LARGE DOCUMENT USING DEVELOPER - URGENT

Hi All,

I am trying to read a very large flat file that is on the hard disk.
The flat file is fixed-position with no delimiters. I also have to count the number of records in the file and append it to another file on the hard disk.

Can anybody please give me an idea how to do this? The flat file might have more than 300 records.

I need a suggestion on how to read this large flat file from the disk, count the number of records, and append it to another file. (I am not using Trading Networks.)

thanks in advance

Mike

Mike,

A flat file with 300 records is not a large file. Create a flow service and load the file directly with pub.file:getFile (loadAs=stream), then use the wm.flatfile:convertToValues service to parse it. Use the sizeOfList service to get the record count and append it to another file using writeToFile, as per your requirement.
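For reference, here is a rough sketch in plain Java of what that service chain does under the hood: stream the file from disk, count the records, and append to another file. It assumes one record per line, and the file paths are made up; inside IS you would use the built-in services above rather than code like this.

```java
import java.io.*;
import java.nio.charset.StandardCharsets;

public class FlatFileCounter {
    public static void main(String[] args) throws IOException {
        // Hypothetical paths -- substitute your own.
        File input = new File("C:/data/orders.dat");
        File output = new File("C:/data/orders_out.dat");

        long recordCount = 0;
        try (BufferedReader reader = new BufferedReader(
                 new InputStreamReader(new FileInputStream(input), StandardCharsets.US_ASCII));
             BufferedWriter writer = new BufferedWriter(
                 new OutputStreamWriter(new FileOutputStream(output, true), StandardCharsets.US_ASCII))) {
            String record;
            while ((record = reader.readLine()) != null) {
                recordCount++;          // count each fixed-position record
                writer.write(record);   // append the record to the target file
                writer.newLine();
            }
            // Append the final record count, per the requirement.
            writer.write("RECORD COUNT: " + recordCount);
            writer.newLine();
        }
        System.out.println("Processed " + recordCount + " records.");
    }
}
```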

If Developer is hanging while executing the service, just schedule the service and test it that way.

I have parsed a 30 MB flat file (60-70K records) via file polling and via direct FTP to the service as well. Of course it will take a few minutes to parse, but it will surely process.

HTH,
RMG

RMG,
Thanks for your reply.

I have a file that is 50 MB and it has about 180K records, so it might be large. Will this affect getFile? Do I have to specify a value for bufferSize in the getFile service, or is it enough that I just specify loadAs=stream?

Can convertToValues hold that many records? I also have to do data mapping and a convertToString call before writing to the file.

Can you suggest how to work this out?
Right now I am getting the file as a stream and then using convertToValues with iterate=true, so it loops through each record; I map it to another format, use convertToString, and write it to another file.
But it takes about 40 minutes to do this for a 43 MB file. Is that acceptable? How can I minimize the execution time?

HELP NEEDED
thanks in advance
MIKE

Mike,

Specify some value for bufferSize in getFile along with loadAs=stream. You have already set iterate=true, which is the right approach for processing huge loops while parsing. Since your flow has mapping involved and writes the data to another file, it will take some time to process.
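One common reason a run like this is slow is appending to the output file one record at a time, which reopens the file on every iteration; keeping a single buffered writer open for the whole loop usually helps. A plain-Java sketch of the difference (the class and method names here are mine for illustration, not IS built-ins):

```java
import java.io.*;
import java.nio.charset.StandardCharsets;

public class BufferedAppendDemo {
    // Slow pattern: open and close the target file for every record,
    // roughly what a per-record append inside the loop does.
    static void appendPerRecord(File target, String record) throws IOException {
        try (FileWriter fw = new FileWriter(target, true)) {
            fw.write(record);
            fw.write(System.lineSeparator());
        }
    }

    // Faster pattern: keep one buffered writer open for the whole loop
    // and let the buffer batch the disk writes.
    static void appendAll(File target, Iterable<String> records) throws IOException {
        try (BufferedWriter bw = new BufferedWriter(
                new OutputStreamWriter(new FileOutputStream(target, true), StandardCharsets.US_ASCII),
                64 * 1024)) { // 64 KB buffer; tune like getFile's bufferSize
            for (String record : records) {
                bw.write(record);
                bw.newLine();
            }
        }
    }
}
```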

Schedule the flow service and you will see some change in the execution time.

HTH
RMG