We have a requirement to process a huge (1 GB) XML file in webMethods.
Below is the wM flow code we use to process the file with the 'nodeIterator' approach (sample XML omitted here).
1.11 pub.file:getFile (Input → loadAs = "stream")
1.12 pub.xml:xmlStringToXMLNode (Input → filestream)
1.13 pub.xml:getXMLNodeIterator (Input → criteria = "OrderItem")
1.142 BRANCH on '/next'
1.1421 $null: EXIT "SplitProcess"
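For comparison, outside of webMethods the same "iterate one OrderItem at a time" idea is what a pull parser such as Java StAX gives you. This is only an illustrative sketch (the element name `OrderItem` is taken from the criteria above; the helper `countOrderItems` is our own name, not a wM service) showing what genuinely streaming processing looks like, where only one parse event is held in memory at a time:

```java
import javax.xml.stream.XMLInputFactory;
import javax.xml.stream.XMLStreamConstants;
import javax.xml.stream.XMLStreamReader;
import java.io.ByteArrayInputStream;
import java.io.InputStream;
import java.nio.charset.StandardCharsets;

public class OrderItemStreamer {
    // Walk the document as a stream of events; no full DOM tree is
    // ever built, so memory use is independent of file size.
    static long countOrderItems(InputStream in) throws Exception {
        XMLStreamReader reader =
                XMLInputFactory.newInstance().createXMLStreamReader(in);
        long count = 0;
        while (reader.hasNext()) {
            if (reader.next() == XMLStreamConstants.START_ELEMENT
                    && "OrderItem".equals(reader.getLocalName())) {
                count++; // process a single OrderItem here, then drop it
            }
        }
        reader.close();
        return count;
    }

    public static void main(String[] args) throws Exception {
        String sample = "<Order><OrderItem/><OrderItem/></Order>";
        InputStream in = new ByteArrayInputStream(
                sample.getBytes(StandardCharsets.UTF_8));
        System.out.println(countOrderItems(in)); // prints 2
    }
}
```

If the wM node iterator were behaving this way, processing time should grow roughly linearly with file size rather than exploding from 13 seconds to 80 minutes.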
With a 25 MB test file, processing took about 13 seconds. However, when we gradually increased the size to 250 MB, processing took about 1 hour 20 minutes, degraded server performance severely, and became horrendously slow.
Any input on the questions below would be a great help:
- Does this approach load the entire document into memory?
- If so, what would be an alternative way to process this data?
- Similar to 'LargeFileHandling' in EDI, can we process the data in chunks by writing them to an alternate hard-disk location?
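To make the last question concrete, here is a hedged sketch (not wM code; `OrderItemSplitter` and the output file naming are our own assumptions) of the kind of chunking we have in mind: stream the large file once and write each `OrderItem` subtree out as its own small file on disk, so no step ever holds more than one item in memory:

```java
import javax.xml.stream.XMLInputFactory;
import javax.xml.stream.XMLStreamConstants;
import javax.xml.stream.XMLStreamReader;
import javax.xml.transform.OutputKeys;
import javax.xml.transform.Transformer;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.stax.StAXSource;
import javax.xml.transform.stream.StreamResult;
import java.io.File;
import java.io.InputStream;

public class OrderItemSplitter {
    // Copy each OrderItem subtree to its own file; the Transformer
    // consumes exactly one element subtree per call, so memory use
    // stays bounded regardless of the input size.
    static int split(InputStream in, File outDir) throws Exception {
        XMLStreamReader reader =
                XMLInputFactory.newInstance().createXMLStreamReader(in);
        Transformer copier = TransformerFactory.newInstance().newTransformer();
        copier.setOutputProperty(OutputKeys.OMIT_XML_DECLARATION, "yes");
        int chunk = 0;
        while (reader.hasNext()) {
            if (reader.next() == XMLStreamConstants.START_ELEMENT
                    && "OrderItem".equals(reader.getLocalName())) {
                File out = new File(outDir, "OrderItem-" + (++chunk) + ".xml");
                // StAXSource at START_ELEMENT: transform() writes this
                // element and its children, then advances the reader.
                copier.transform(new StAXSource(reader), new StreamResult(out));
            }
        }
        reader.close();
        return chunk;
    }
}
```

Each small chunk file could then be processed (and deleted) independently, which is essentially what EDI LargeFileHandling does with its tspace directory.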
We referred to the link below while deciding on this approach.