slow performance due to multiple loops

I am using nested loops that iterate over around 1000 records. Performance has become very slow, especially during batch processes.

Is there any other way, instead of using loops? I cannot use a stored procedure in this case. What about 'queryXMLNode'? Will this increase the speed? Please suggest.


What are the loops doing? That will help determine if any alternatives may be speedier.

Thanks Rob

This is how the code is now:

  1. Loop over lineItems list (maximum 30 records)
    a) Loop over serviceItemsList (obtained earlier from an RFC call) (maximum 3000 records)
      i) If serviceItemsList/PackNo = lineItems/PackNo
      ii) and serviceItemsList/LineNo = lineItems/LineNo
        - Append to a new selectedItems list
        - Exit a)
  2. Loop over the selectedItems list generated above (maximum 30 records)
    a) Loop over currencyItemsList (obtained earlier from an RFC call) (maximum 3000 records)
      i) If currencyItemsList/PackNo = selectedItems/PackNo
      ii) and currencyItemsList/LineNo = selectedItems/LineNo
        - Add the currency value to the selectedItems list
        - Exit a)
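In plain Java, the matching described in step 1 above is roughly the following. This is a sketch, not the actual flow service: `Item` is a hypothetical stand-in for the IS documents, and the method only models the first loop pair (the currency loop in step 2 has the same shape).

```java
import java.util.ArrayList;
import java.util.List;

public class NestedLoopJoin {
    // Hypothetical stand-in for the IS documents in the two lists.
    record Item(String packNo, String lineNo, String value) {}

    // For each line item, scan the full service list until the first
    // PackNo/LineNo match, then stop scanning (the "Exit a)" step).
    // Worst case: 30 x 3000 = 90,000 comparisons per pass.
    static List<Item> selectMatches(List<Item> lineItems, List<Item> serviceItems) {
        List<Item> selected = new ArrayList<>();
        for (Item line : lineItems) {                 // maximum 30 records
            for (Item svc : serviceItems) {           // maximum 3000 records
                if (svc.packNo().equals(line.packNo())
                        && svc.lineNo().equals(line.lineNo())) {
                    selected.add(svc);
                    break;                            // Exit a)
                }
            }
        }
        return selected;
    }
}
```

Seeing it spelled out this way makes the cost visible: the inner list is re-scanned from the start for every outer record.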

Is there a way to combine the two loops, or to shortlist the records, or any other way to speed this up? Please suggest!


There are two areas that I see could be modified that may provide a performance gain.

The first is probably the easiest. Appending to a list can bog down as the list grows. Instead of using appendToDocumentList or appendToStringList, use a Hashtable, HashMap, LinkedList, or some other collection class. When the loop is complete, convert the collection to a document list.

The problem with appendTo*List is that it allocates a new array for each iteration and copies all the item references to the new array. This is “busy” work and can also negatively impact memory.
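The difference can be sketched in plain Java. `appendByCopy` below models what appendTo*List effectively does (a fresh array and a full copy on every call), while `collectThenConvert` shows the suggested alternative of accumulating in a collection and converting once at the end; the method names are illustrative, not IS services.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class ListGrowth {
    // Models appendTo*List: allocate a new array and copy every existing
    // reference on each call -- O(n) per append, O(n^2) over the loop.
    static String[] appendByCopy(String[] list, String item) {
        String[] grown = Arrays.copyOf(list, list.length + 1);
        grown[list.length] = item;
        return grown;
    }

    // The alternative: accumulate in a collection (amortized O(1) per add)
    // and do a single array conversion after the loop finishes.
    static String[] collectThenConvert(Iterable<String> source) {
        List<String> buffer = new ArrayList<>();
        for (String s : source) {
            buffer.add(s);
        }
        return buffer.toArray(new String[0]);
    }
}
```

Both produce the same list; only the number of allocations and copies differs.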

The other area can also be aided by using collection classes. Instead of looping to do the lookups, it might be helpful to put the serviceItemsList and currencyItemsList into collection classes. Perhaps a HashMap using a concatenation of PackNo and LineNo as the key. Loop over lineItems/selectedItems as you do now but then use HashMap.get to retrieve the matching (if any) item. The HashMap lookup should be faster than the loop.
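As a sketch of that keyed-lookup idea in plain Java (again with a hypothetical `Item` type standing in for the IS documents): index the large list once, then replace the inner loop with a HashMap.get per line item. A "|" separator is used in the key rather than bare concatenation so that, e.g., PackNo "12" + LineNo "3" cannot collide with PackNo "1" + LineNo "23".

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class KeyedLookup {
    record Item(String packNo, String lineNo, String value) {}

    // One pass over the 3000-record list to build the index.
    static Map<String, Item> index(List<Item> items) {
        Map<String, Item> byKey = new HashMap<>();
        for (Item it : items) {
            byKey.put(it.packNo() + "|" + it.lineNo(), it);
        }
        return byKey;
    }

    // The inner 3000-record loop becomes a constant-time lookup.
    static List<Item> selectMatches(List<Item> lineItems, Map<String, Item> serviceByKey) {
        List<Item> selected = new ArrayList<>();
        for (Item line : lineItems) {
            Item match = serviceByKey.get(line.packNo() + "|" + line.lineNo());
            if (match != null) {
                selected.add(match);
            }
        }
        return selected;
    }
}
```

The 30 x 3000 scan becomes one 3000-record indexing pass plus 30 lookups.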

Thank you very much!

But are there any wM built-in services for creating a collection or HashMap? I am afraid I am not at all good at writing Java code.
Also, if I use your first idea (removing appendToDocList) and use a collection, do I have to rewrite the whole loop logic in Java code?

One more thing: the same code runs 10 times faster on one IS, but is slow and often errors out with 'java.lang.OutOfMemoryError' on another IS (both run only this application). Any thoughts on why this could happen?

Thanks Again!

PSUtilities provides services to work with Hashtable (a precursor to HashMap).

Using the Hashtable services, you would not have the whole loop coded in Java. You'd do the same as you're doing now, with these changes:

  • Get a Hashtable object before the loop.
  • In the loop call “addItem” (or whatever the service name is that PSUtilities has) instead of appendTo*List to add each document to the Hashtable.
  • After the loop call the PSUtilities service to convert the Hashtable collection to a document list. I think PSUtilities has such a service but if not I can provide one (or any number of regulars on the forums can as well).
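The three steps above amount to roughly the following in Java. This is a sketch of what such services would do under the hood, not the actual PSUtilities code, and the method names here (`addItem`, `toDocumentList`) are illustrative.

```java
import java.util.Hashtable;

public class HashtableBuffer {
    // Step 1: get a Hashtable object before the loop.
    private final Hashtable<Integer, Object> table = new Hashtable<>();

    // Step 2: in the loop, add each document under an incrementing key
    // instead of rebuilding an array with appendTo*List.
    public void addItem(Object doc) {
        table.put(table.size(), doc);
    }

    // Step 3: after the loop, convert the collection to a document list,
    // preserving the order the items were added in.
    public Object[] toDocumentList() {
        Object[] list = new Object[table.size()];
        for (int i = 0; i < list.length; i++) {
            list[i] = table.get(i);
        }
        return list;
    }
}
```

In the flow service you would only ever call these three operations; the loop itself stays in flow.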

Things to check on the speed/memory issue:

  • CPU
  • Memory allocated to the JVM in which IS is running
  • JVM version

Hi Rob…Thank you very much!

I have PSUtilities package, and I shall surely try this way.

Thanks Again!!