“But I am getting an OutOfMemory exception while converting the document to a string”
It sounds like your steps are something like this:
- Use a node iterator to get one record from the file.
- Convert that record to a document and append it to a list.
- Repeat until all records read.
- Convert the resulting document (that has all items appended) to a string.
Does that match what you’re doing?
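If that matches, the pattern can be sketched roughly like this (assuming a Java/DOM setting; the class, method, and element names here are illustrative, not taken from your code):

```java
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.transform.OutputKeys;
import javax.xml.transform.Transformer;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.dom.DOMSource;
import javax.xml.transform.stream.StreamResult;
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import java.io.StringWriter;
import java.util.List;

public class AccumulateAll {
    // Append every record to one in-memory document, then serialize the
    // whole thing to a single string. Note that BOTH the DOM and the
    // resulting string hold the entire data set at once.
    public static String buildAll(List<String> records) throws Exception {
        Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder().newDocument();
        Element root = doc.createElement("records");
        doc.appendChild(root);
        for (String value : records) {
            Element rec = doc.createElement("record");
            rec.setTextContent(value);
            root.appendChild(rec); // the entire file accumulates here
        }
        StringWriter out = new StringWriter();
        Transformer t = TransformerFactory.newInstance().newTransformer();
        t.setOutputProperty(OutputKeys.OMIT_XML_DECLARATION, "yes");
        t.transform(new DOMSource(doc), new StreamResult(out)); // second full copy
        return out.toString();
    }
}
```

The two commented lines are where the memory goes: one full copy in the DOM, a second full copy in the string.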
If so, then the problem is that you’re effectively loading the entire file into memory, and then duplicating it when you convert it to a string. So if you have a 100k file, you’ll have a 100k+ document representation in memory, and then you try to create another 100k+ string on top of that.
When dealing with large files it is important not only to iterate over the source, but also to avoid building the entire target in memory.
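One way to do that on the JVM (again assuming XML; this is a minimal sketch, not your actual pipeline) is StAX, which moves one parse event at a time from the source straight to the target:

```java
import javax.xml.stream.XMLEventReader;
import javax.xml.stream.XMLEventWriter;
import javax.xml.stream.XMLInputFactory;
import javax.xml.stream.XMLOutputFactory;
import javax.xml.stream.XMLStreamException;
import java.io.Reader;
import java.io.Writer;

public class StreamCopy {
    // Copy records from source to target one event at a time; only the
    // current event is held in memory, never the whole document or the
    // whole output string.
    public static void copy(Reader source, Writer target) throws XMLStreamException {
        XMLEventReader in = XMLInputFactory.newFactory().createXMLEventReader(source);
        XMLEventWriter out = XMLOutputFactory.newFactory().createXMLEventWriter(target);
        while (in.hasNext()) {
            out.add(in.nextEvent()); // event is written out, then discarded
        }
        out.close();
        in.close();
    }
}
```

In practice the target would be a `FileWriter` or an output stream rather than an in-memory buffer, so the full result is never materialized as a single string at all.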
If the steps above are not what you are doing, please provide additional detail so we can see where memory is being exhausted.