We have a requirement to process a huge (1GB) XML file in webMethods.
Given below are the sample XML and the wM code to process this huge file using the ‘nodeIterator’ approach.
When we tested the code with a 25 MB file, processing took about 13 seconds.
However, when we gradually increased the size to 250 MB, processing took about 1 hour 20 minutes; it severely affected server performance and the processing became horrendously slow.
Any inputs on the questions below would be of great help:
Are we loading the entire file into memory with this approach?
If yes, what would be an alternate way to process this data?
Similar to ‘LargeFileHandling’ in EDI, can we process this data in chunks by writing it to an alternate hard-disk location?
We referred to the link below while deciding on this approach.
[URL]wmusers.com
As you can see above, IS can handle large XML documents if the integration is designed and implemented correctly.
Two key concepts are needed: 1) don’t load the entire document into memory; instead, iterate over the nodes, and 2) implement a mechanism for processing individual records/documents in parallel (see the sketch below).
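To make the two concepts concrete, here is a minimal sketch in plain Java using the standard StAX API, outside of webMethods (the original flow service and sample XML are not reproduced here, so the file name huge.xml, the repeating element name record, and the processRecord method are placeholders, not part of the original code). It streams the file one event at a time, copies out a single record subtree, and hands each record to a thread pool so records are processed in parallel:

import java.io.BufferedInputStream;
import java.io.FileInputStream;
import java.io.InputStream;
import java.io.StringWriter;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import javax.xml.stream.XMLInputFactory;
import javax.xml.stream.XMLStreamConstants;
import javax.xml.stream.XMLStreamReader;
import javax.xml.transform.OutputKeys;
import javax.xml.transform.Transformer;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.stax.StAXSource;
import javax.xml.transform.stream.StreamResult;

public class LargeXmlStreamer {

    public static void main(String[] args) throws Exception {
        String file = args.length > 0 ? args[0] : "huge.xml"; // placeholder file name
        String recordElement = "record";                      // placeholder repeating element

        XMLInputFactory staxFactory = XMLInputFactory.newInstance();
        Transformer subtreeCopier = TransformerFactory.newInstance().newTransformer();
        subtreeCopier.setOutputProperty(OutputKeys.OMIT_XML_DECLARATION, "yes");

        ExecutorService pool =
                Executors.newFixedThreadPool(Runtime.getRuntime().availableProcessors());

        try (InputStream in = new BufferedInputStream(new FileInputStream(file))) {
            XMLStreamReader reader = staxFactory.createXMLStreamReader(in);

            // Walk the document event by event; only one record subtree is
            // materialised at a time, so memory use stays flat regardless of file size.
            while (reader.hasNext()) {
                if (reader.next() == XMLStreamConstants.START_ELEMENT
                        && recordElement.equals(reader.getLocalName())) {

                    // Copy just this subtree into a small buffer (this could equally
                    // be a temp file on another disk if chunks must be persisted first).
                    StringWriter chunk = new StringWriter();
                    subtreeCopier.transform(new StAXSource(reader), new StreamResult(chunk));

                    // Hand the record to the pool so records are processed in parallel.
                    String recordXml = chunk.toString();
                    pool.submit(() -> processRecord(recordXml));
                }
            }
            reader.close();
        } finally {
            pool.shutdown();
            pool.awaitTermination(1, TimeUnit.HOURS);
        }
    }

    // Placeholder for the real per-record mapping/business logic.
    private static void processRecord(String recordXml) {
        System.out.println("Processed record of " + recordXml.length() + " characters");
    }
}

One design note: the streaming part keeps only one record in memory at a time, which is what keeps the footprint flat for a 1 GB file; however, an unbounded executor queue can reintroduce memory pressure if records are read faster than they are processed, so some form of backpressure (e.g. a bounded queue) may be needed in practice.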
Generic statements such as “X couldn’t handle Y” are usually incorrect if X is in the hands of a person with the right skills and experience.