If you want to minimize memory in wM, and especially if the large file is already on the file system, you could try streaming the file in one line at a time, then manually building the XML string via concatenation, avoiding the whole Document pipeline structure and the documentToXMLString call. This definitely defeats much of the purpose of webMethods (the ease of mapping), but it is doable. If you hold the string in memory while you're building the XML, use a StringBuffer. Alternatively, stream the output as well and just write the XML to a file as you build it.
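The streaming approach above can be sketched in plain Java (the kind of code that would sit inside a Java service). This is a minimal illustration, not wM-specific: the input file name, the comma-delimited record layout, and the `field0`/`field1` element names are all assumptions for the example. The key point is that each record is converted and written out immediately, so neither the input nor the output document is ever held fully in memory.

```java
import java.io.BufferedReader;
import java.io.BufferedWriter;
import java.io.FileReader;
import java.io.FileWriter;
import java.io.IOException;

public class StreamToXml {

    // Convert one delimited line into an XML <record> element.
    // Uses a StringBuilder so only one record is buffered at a time.
    static String recordToXml(String line) {
        String[] fields = line.split(",", -1);
        StringBuilder sb = new StringBuilder("  <record>");
        for (int i = 0; i < fields.length; i++) {
            sb.append("<field").append(i).append(">")
              .append(escape(fields[i]))
              .append("</field").append(i).append(">");
        }
        return sb.append("</record>\n").toString();
    }

    // Escape XML special characters so field values stay well-formed.
    // The ampersand must be replaced first to avoid double-escaping.
    static String escape(String s) {
        return s.replace("&", "&amp;").replace("<", "&lt;").replace(">", "&gt;");
    }

    public static void main(String[] args) throws IOException {
        try (BufferedReader in = new BufferedReader(new FileReader("input.csv"));
             BufferedWriter out = new BufferedWriter(new FileWriter("output.xml"))) {
            out.write("<?xml version=\"1.0\"?>\n<records>\n");
            String line;
            while ((line = in.readLine()) != null) {
                out.write(recordToXml(line)); // stream each record straight out
            }
            out.write("</records>\n");
        }
    }
}
```

In a real service you would pass the streams in through the pipeline rather than hard-coding file names, but the one-record-at-a-time structure is the same.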
But as I said, you're really just using wM as a Java VM at that point, since you'd have to write the large majority of this in Java services, and most of your pipeline variables would end up as various Java objects passed from Java service to Java service (file streams, string buffers, etc.). You're absolutely right that using many of the built-in wM services for this will consume memory and be slow. Also, depending on what else is running on your IS server and how much memory you have configured, things can slow down from frequent incremental and full garbage collections.
Another option is to do this all at the file system level - for example, an external Perl program that transforms one file into another, which you can then invoke at the OS level from within wM. You're removing even more of the logic from wM, but I've come to accept that there are some things I just don't like doing in wM, and handling really large files is one of them. To me, wM isn't the only tool in the toolbox for getting integrations done; you just have to make sure you really document these quirky processes.
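Invoking an external program from a Java service boils down to `ProcessBuilder`. A minimal sketch, assuming a hypothetical `transform.pl` script that takes an input and output file name (the script name and its arguments are illustrative, not part of any wM API):

```java
import java.io.IOException;

public class ExternalTransform {

    // Run an arbitrary OS command and wait for it to finish.
    // Returns the process exit code (0 conventionally means success).
    static int run(String... command) throws IOException, InterruptedException {
        ProcessBuilder pb = new ProcessBuilder(command);
        pb.inheritIO(); // surface the command's stdout/stderr in our logs
        return pb.start().waitFor();
    }

    // Hypothetical wrapper: hand the heavy transformation off to Perl,
    // so the large file never passes through the JVM at all.
    static int runTransform(String inFile, String outFile)
            throws IOException, InterruptedException {
        return run("perl", "transform.pl", inFile, outFile);
    }

    public static void main(String[] args) throws Exception {
        int rc = runTransform("big-input.dat", "big-output.xml");
        if (rc != 0) {
            throw new IOException("transform failed with exit code " + rc);
        }
    }
}
```

Checking the exit code (and ideally capturing stderr) is important here, since a silent failure at the OS level is exactly the kind of quirk that needs documenting.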
So if you want this process to use little memory and run fast, chances are you won't end up with a nice, elegant, easy-to-modify solution.
I disagree with many parts of this post. Integration Server provides various facilities to work with large data sets. The flat file services can iterate over records. The XML services can iterate over nodes. One does not need to “write a large majority of this in Java services.”
Read the documentation on large document handling. Handling large documents is definitely structured differently from the case where one can load an entire document into memory, but it is doable without resorting to lots of Java and without sacrificing the capabilities of IS and the built-in services.
Use a control file to load the flat-file data into a temp table, then read the data into the webMethods pipeline (or a document) in smaller batches of records. You can then write each batch out to the XML file without the memory issues.
Please do not hijack this thread, as your question is unrelated to it. You can open a new thread/topic instead.
As for your question: if you are looking to count the number of occurrences of a comma (,), you can write Java code or use the WmPublic services (replace, then length). A sample code snippet is below:
String str = "abc,abc,abc,abc";
// Remove every non-comma character, then count what remains.
int num = str.replaceAll("[^,]", "").length(); // num == 3