Best practices for large document lists

Hi all,

I am new to webMethods. I have two questions regarding best practices for document lists.

  1. What is the best way to query data when the query returns thousands of records? I have tried with 50,000 records. I am currently using a JDBC adapter Select, which puts the result into a large document list. I find that querying this data takes a long time; I guess this is because of the process of creating the 'large document list'.

  2. Then, in the middle of the flow service, I want to drop this query-result document list to save memory. I find that the drop takes a long time too. Is there a better way to drop a large document list?

Thank you in advance.


Relational databases and SQL are designed to help you select a subset of rows from large tables. It is an architectural mistake to fail to leverage an RDBMS and to attempt to sort through the data inside Integration Server instead.

Refine the where clause in your JDBC adapter service to return only the data you need, then process that data in IS.
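To make the idea concrete, here is a minimal sketch of "let the database do the filtering," using Python's built-in `sqlite3` in place of the real Oracle/JDBC setup. The `orders` table, its columns, and the filter values are all hypothetical; the point is that a restrictive, parameterized where clause brings back only the rows the flow service actually needs.

```python
import sqlite3

# Hypothetical stand-in for the real Oracle table.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE orders (id INTEGER PRIMARY KEY, status TEXT, customer_id INTEGER)"
)
conn.executemany(
    "INSERT INTO orders (status, customer_id) VALUES (?, ?)",
    [("OPEN", i % 10) for i in range(1000)],
)

# Instead of SELECT * FROM orders, let the database filter.
# Bind variables keep the statement reusable and injection-safe.
rows = conn.execute(
    "SELECT id, customer_id FROM orders WHERE status = ? AND customer_id = ?",
    ("OPEN", 3),
).fetchall()
print(len(rows))  # 100 rows come back instead of 1000
```

In IS terms, the equivalent is adding input fields to the adapter service's WHERE tab rather than selecting everything and filtering in a flow loop.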


Thank you Mark!

The table I am talking about does indeed contain a large number of records; there could be millions in it. It is hosted on an Oracle DBMS.

The hundreds of thousands of records I am referring to are already the data I need: all where clauses are properly in place, using the primary key and other properly indexed fields. I am thinking of breaking the large result down into small chunks using a paged result set. But that still doesn't answer my question.
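If paging is the route taken, one pattern that scales well is keyset paging: seek past the last primary key seen rather than using an OFFSET, so every page is an indexed range scan. Below is a runnable sketch using `sqlite3`; the `big_table` name and 50,000-row size are illustrative. In IS this would map to a JDBC adapter service with an extra `id > ?` input, called in a REPEAT step.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE big_table (id INTEGER PRIMARY KEY, payload TEXT)")
conn.executemany(
    "INSERT INTO big_table (payload) VALUES (?)",
    [(f"row-{i}",) for i in range(50_000)],
)

CHUNK = 5_000
last_id = 0
total = 0
while True:
    # Keyset paging: resume after the last key seen instead of OFFSET,
    # so the database never rescans rows already processed.
    chunk = conn.execute(
        "SELECT id, payload FROM big_table WHERE id > ? ORDER BY id LIMIT ?",
        (last_id, CHUNK),
    ).fetchall()
    if not chunk:
        break
    total += len(chunk)   # process this chunk, then let it be garbage-collected
    last_id = chunk[-1][0]
print(total)  # 50000
```

Because each chunk goes out of scope before the next is fetched, memory stays bounded by the chunk size instead of the full result set, which also sidesteps the cost of dropping one enormous document list at the end.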

Thank you for your advice, I am looking into it :wink:


Assuming that you really, really need to perform some operation on all 50,000 rows and that the operation can’t be better done in SQL on the entire result set at one go, an approach would be to use a stored procedure to periodically populate a buffer table with the results of the SQL select that currently is returning 50,000 rows.

Create a JDBC adapter polling notification that processes a configurable number of the rows in the buffer table by specifying the “Maximum Row” field. This will cause the notification to only publish that number of documents in each polling cycle. Tune this number to maximize throughput without overloading your server’s memory or processing capability.
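Here is a rough simulation of that buffer-table pattern, again in plain `sqlite3` so it can run anywhere. The `buffer` table, the row counts, and the `MAX_ROWS` constant are hypothetical; `MAX_ROWS` plays the role of the notification's "Maximum Row" field, and the delete after each batch mimics the notification removing rows it has already published.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE buffer (id INTEGER PRIMARY KEY, payload TEXT)")
# A stored procedure would normally fill this table; simulate it here.
conn.executemany(
    "INSERT INTO buffer (payload) VALUES (?)",
    [(f"doc-{i}",) for i in range(12)],
)

MAX_ROWS = 5   # stand-in for the notification's "Maximum Row" setting
cycles = 0
published = 0
while True:
    batch = conn.execute(
        "SELECT id, payload FROM buffer ORDER BY id LIMIT ?", (MAX_ROWS,)
    ).fetchall()
    if not batch:
        break
    # "Publish" each row as a document, then clear the batch so the
    # next polling cycle sees only unprocessed rows.
    for row_id, payload in batch:
        published += 1
    conn.execute("DELETE FROM buffer WHERE id <= ?", (batch[-1][0],))
    cycles += 1
print(cycles, published)  # 3 cycles: 5 + 5 + 2 = 12 rows
```

Tuning `MAX_ROWS` is the knob the post describes: large enough for throughput, small enough that no single polling cycle builds an oversized document list in memory.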


You can also use the IS large document processing techniques to process large XML documents (or document lists). See the XML folder of the Built-In Services Guide for details.


My experience with IS large document processing is that it'll drag system performance right down the toilet. I agree fully with Mark that it's a mistake not to try and utilize the RDBMS for these situations.

For smaller data (say, thousands of records that fit into a memory footprint somewhere under 8MB), an XML query would be the way to do it, assuming the data can be queried that way. XML queries are almost always a lot faster than looping through the data.

On the other hand, XML queries are nowhere near as flexible as SQL. And a larger dataset (above 12MB in my experience) will cause IS to choke in the process of converting your data to an XML node. Even more unfortunately, exactly where IS chokes seems to depend on the phase of the moon, solar flare activity, cosmic rays, galactic orientation, and other factors too profound for me to comprehend… 8)