We are doing EDI translation of some very large files and are seeing that when a large amount of EDI data hits our server at once, processing takes a long time to complete. During this time memory usage stays stable, but the CPU is heavily used.
To describe the EDI processing: we treat our EDI files as flat files, and each file that is translated contains one interchange. The files are dropped into a directory, where a polling service picks them up for processing. The polling service has 10 threads and polls every 10 seconds. Once the required recognition parameters have been read from the file, the file is passed to TN for processing. Most of the work is then done in the processing services. The mapping services, which use heavy looping, are written as Java services. We are also using the large file handling feature for TN and EDI, with the large file threshold set to 1 MB.
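For context, the hot path in our mapping services follows roughly the pattern below. This is a simplified, self-contained sketch, not the actual service code: the segment terminator character and the mapSegment method are assumptions standing in for our real mapping logic. It streams the interchange one segment at a time rather than holding the whole file in memory, which is why memory stays flat while the per-segment looping keeps the CPU busy.

```java
import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;

// Simplified sketch of the per-segment mapping loop.
// SEGMENT_TERMINATOR and mapSegment are hypothetical placeholders.
public class SegmentStreamSketch {

    // Assumption: segments are tilde-terminated, as in typical X12 data.
    private static final char SEGMENT_TERMINATOR = '~';

    public static void main(String[] args) throws IOException {
        StringBuilder segment = new StringBuilder();
        try (BufferedReader in = new BufferedReader(new FileReader(args[0]))) {
            int c;
            while ((c = in.read()) != -1) {
                if (c == SEGMENT_TERMINATOR) {
                    mapSegment(segment.toString());
                    segment.setLength(0); // reuse the buffer instead of reallocating
                } else {
                    segment.append((char) c);
                }
            }
        }
    }

    // Placeholder for the field-level mapping performed on each segment.
    private static void mapSegment(String seg) {
        // ... mapping logic runs here for every segment of the interchange ...
    }
}
```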
We can process two large files of 320 MB each in about 7.5 hours, with CPU usage at 87% during that time. The bigger issue is when four 120 MB files are in process at the same time: the translation takes almost 15 hours to complete, and CPU usage sits at around 90%. Does anyone have recommendations for improving the processing time and/or lowering the CPU usage? Any help on this issue would be highly appreciated.