Best approach to handling large flat files

We have about 115k+ records, and the file size is approximately 45MB. I need to parse this file into an IS document (doc2) to be used with another IS document (doc1). Based on the number of records in doc1, the service needs to loop through doc2 to retrieve additional data elements. The way I'm parsing doc2 follows the Flat File developer's guide on handling large flat files: use a REPEAT operation around the convertToValues service with the "iterator" parameter set to "true", and exit the REPEAT when ffIterator is $null.
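For anyone unfamiliar with the pattern, here is a rough Python analogue of that REPEAT loop (this is just a sketch of the streaming idea, not IS flow code; the pipe delimiter and record layout are hypothetical stand-ins for whatever the flat file schema defines):

```python
import io

def iterate_records(stream):
    """Yield one parsed record at a time, analogous to calling
    convertToValues with iterator=true inside a REPEAT: only the
    current record is held in memory, never the whole file."""
    for line in stream:
        line = line.rstrip("\n")
        if not line:
            continue
        # Hypothetical delimiter; the real layout comes from the
        # flat file schema/dictionary.
        yield line.split("|")

# The loop ends naturally when the stream is exhausted, which plays
# the role of exiting the REPEAT when ffIterator is $null.
data = io.StringIO("A|1\nB|2\nC|3\n")
records = [rec for rec in iterate_records(data)]
```

The point is that each pass of the loop touches one record, so memory use stays flat no matter how big the file is.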

Just checking with the community whether this is the right approach, or whether anyone has a better recommendation or suggestion. Thanks.

One idea that is memory friendly, and probably faster, is to load the data of "doc2" into a DB table. You can use the FF iterator together with a batch insert to load the table pretty quickly, then query the table as needed while populating "doc1".
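A minimal sketch of that idea, using SQLite purely for illustration (the table name, columns, and batch size are made up; in IS you would use the JDBC adapter's batch insert instead):

```python
import sqlite3

def load_lookup_table(conn, records, batch_size=1000):
    """Stream records into a table in batches, then index the key
    column so per-record lookups while building doc1 stay cheap."""
    conn.execute("CREATE TABLE doc2 (key TEXT, extra TEXT)")
    batch = []
    for key, extra in records:
        batch.append((key, extra))
        if len(batch) >= batch_size:
            conn.executemany("INSERT INTO doc2 VALUES (?, ?)", batch)
            batch.clear()
    if batch:  # flush the final partial batch
        conn.executemany("INSERT INTO doc2 VALUES (?, ?)", batch)
    conn.execute("CREATE INDEX idx_doc2_key ON doc2 (key)")
    conn.commit()

conn = sqlite3.connect(":memory:")
load_lookup_table(conn, [("A", "1"), ("B", "2")])
row = conn.execute("SELECT extra FROM doc2 WHERE key = ?", ("B",)).fetchone()
```

With an index on the lookup key, each retrieval while looping over doc1 is a fast point query instead of a scan over 115k records.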

A key point, in my mind, is to avoid loading "doc2" into memory all at once. The in-memory IData representation will consume far more than the 45MB the file occupies on disk.

With more info about the nature of the data in "doc2" and its relationship to "doc1", there might be other options as well.