Hi,
We have been hitting a memory issue in Integration Server with the pub.flatfile.convertToValues built-in service when loading a large file of around 30k records. To fix the issue permanently, we decided to set the iterate option to true. The file is picked up by a file polling port and therefore arrives as a stream; with iterate set to true in convertToValues, we expected to process one record at a time instead of loading the entire file into memory as a single IData object.
The structure of the file is as follows: a header record (beginning with H|), detail records, and a trailer record (beginning with T|):
H|APPNAME|COUNT|DATETIMESTAMP
FIRST_NAME|LAST_NAME|ADDRESS|POSTCODE|COUNTRY
FIRST_NAME|LAST_NAME|ADDRESS|POSTCODE|COUNTRY
FIRST_NAME|LAST_NAME|ADDRESS|POSTCODE|COUNTRY
FIRST_NAME|LAST_NAME|ADDRESS|POSTCODE|COUNTRY
T|COUNT|END
When the ffValues output is mapped to the document type, the first iteration always returns the H document (header) with all of the detail records nested inside it; the loop then continues and picks up the trailer document on the next iteration.
This has not really resolved the issue, because the first iteration still loads the header along with all of the detail records into memory.
What we need is for each iteration to output exactly one record, from the header record through to the trailer record.
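To make the desired behaviour concrete, here is a minimal plain-Java sketch of a record-at-a-time iterator over this file layout. This is only an illustration of the target semantics, not the convertToValues implementation; the class and field names are hypothetical, and the record dispatch simply keys off the H|/T| prefixes shown in the sample above.

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.StringReader;
import java.util.ArrayList;
import java.util.List;

// Hypothetical record-at-a-time parser: each call to next() returns exactly
// one record (header, detail, or trailer), so the whole file never needs to
// be held in memory at once.
public class FlatFileIterator {
    private final BufferedReader reader;

    public FlatFileIterator(BufferedReader reader) {
        this.reader = reader;
    }

    // Returns the fields of the next record, or null at end of file.
    public String[] next() throws IOException {
        String line = reader.readLine();
        if (line == null) return null;
        return line.split("\\|", -1); // -1 keeps trailing empty fields
    }

    public static void main(String[] args) throws IOException {
        String file = "H|APPNAME|1|20240101120000\n"
                    + "John|Smith|1 High St|AB1 2CD|UK\n"
                    + "T|1|END";
        FlatFileIterator it =
            new FlatFileIterator(new BufferedReader(new StringReader(file)));
        List<String> kinds = new ArrayList<>();
        String[] rec;
        while ((rec = it.next()) != null) {
            // Dispatch on the first field: H = header, T = trailer, else detail.
            if ("H".equals(rec[0]))      kinds.add("header");
            else if ("T".equals(rec[0])) kinds.add("trailer");
            else                         kinds.add("detail");
        }
        System.out.println(kinds);
    }
}
```

In this sketch the header, each detail line, and the trailer each come out as a separate iteration, which is the one-record-per-iteration behaviour we are trying to get from convertToValues.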
Please let me know if you have solved a similar problem. I can provide more information if needed.
A snapshot of the ffValues output is attached.
Thanks,
Venkat