Is the file XML?
Note that an XML parse-tree Node object is of non-trivial size (it has to keep a list of children, a parent reference, the current in-scope namespace list, a list of attributes, the value itself, etc.). The webMethods parser doesn’t create a copy of the value: the parse-tree node holds a reference into the input stream data, so there aren’t two copies of the data in memory during parsing, as there are with some parsers.
Even so, it would not surprise me if a 60 MB XML file took 600 MB of RAM once parsed.
Large files usually have a logical ‘repeating chunk’ that can be processed fully before the next chunk is read. If yours does, see the WmPublic pub.web:nodeIterator service and the WmSamples sample.complexMapping example for how to use the streaming features of the IS XML parser.
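To show the general idea (this is an illustrative sketch using Python’s standard-library `iterparse`, not the IS nodeIterator API — the element name `record` is a made-up example): you walk the stream, handle each repeating element as its parse completes, then clear it so the already-processed data is not retained in memory.

```python
# Sketch of chunk-at-a-time XML streaming (NOT the IS API):
# process each repeating <record> element as it finishes parsing,
# then clear it so memory use stays bounded regardless of file size.
import io
import xml.etree.ElementTree as ET

def process_records(source):
    """Collect the text of each <record>, freeing each subtree afterwards."""
    results = []
    for event, elem in ET.iterparse(source, events=("end",)):
        if elem.tag == "record":
            results.append(elem.text)
            elem.clear()  # drop the finished subtree instead of keeping the whole tree
    return results

xml_data = io.StringIO("<root><record>a</record><record>b</record></root>")
print(process_records(xml_data))  # ['a', 'b']
```

The pub.web:nodeIterator service applies the same pattern inside a flow service: you iterate over the repeating nodes one at a time rather than materializing the full document tree.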