I want to select a large number of rows (over a million) from a database table and process each row in a loop. Currently it is not possible to do this with a JDBC adapter service because there is not enough memory to hold a million rows. Therefore I somehow need to batch through the result set in order to be able to process all the rows. Is this possible with the JDBC adapter, or do I need to “roll my own” batch solution?
One possible solution:
Precondition: the result set can be ordered by a column with unique values.
Define a JDBC adapter select service with Maximum Row (on the SELECT tab) set to the desired batch size (e.g., 1000). The select list should include the unique column, with a sort order on it. Define a WHERE condition requiring this column to be greater than an input value. The input value for each subsequent batch is the value from the last row of the previous batch. Iterate until the service returns no rows. See the sketch below for the general pattern.
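For illustration only, here is a minimal sketch of that loop in plain JDBC (outside the adapter, but the same keyset pattern the service would implement). The ORDERS table, its ORDER_ID and PAYLOAD columns, and the connection details are hypothetical placeholders; setMaxRows stands in for the adapter's Maximum Row setting.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class KeysetBatchReader {

    private static final int BATCH_SIZE = 1000; // mirrors the Maximum Row setting

    public static void main(String[] args) throws SQLException {
        // Placeholder connection details; substitute your own data source.
        try (Connection con = DriverManager.getConnection(
                "jdbc:yourdb://host/db", "user", "password")) {

            // Hypothetical table ORDERS with a unique, orderable ORDER_ID column.
            String sql = "SELECT ORDER_ID, PAYLOAD FROM ORDERS "
                       + "WHERE ORDER_ID > ? ORDER BY ORDER_ID";

            long lastKey = Long.MIN_VALUE; // starts below any real key
            int rowsFetched;
            do {
                rowsFetched = 0;
                try (PreparedStatement ps = con.prepareStatement(sql)) {
                    ps.setMaxRows(BATCH_SIZE);  // caps each batch
                    ps.setLong(1, lastKey);
                    try (ResultSet rs = ps.executeQuery()) {
                        while (rs.next()) {
                            // Remember the last key; it seeds the next batch's input.
                            lastKey = rs.getLong("ORDER_ID");
                            process(rs.getString("PAYLOAD"));
                            rowsFetched++;
                        }
                    }
                }
            } while (rowsFetched == BATCH_SIZE); // a short batch means we are done
        }
    }

    private static void process(String payload) {
        // per-row processing goes here
    }
}
```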
If no single unique column exists, this solution can use an expression over several columns instead, but the implementation becomes more complex (see the composite-key sketch below).
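When the unique key spans several columns, the “greater than” condition must compare the key as a tuple. A minimal sketch, assuming a hypothetical composite key (REGION, ORDER_NO): the tuple comparison is expanded into an OR/AND predicate, since row-value constructors such as (a, b) > (?, ?) are not supported by every database.

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

public class CompositeKeyBatch {

    // Hypothetical composite key (REGION, ORDER_NO). The tuple comparison
    // "(REGION, ORDER_NO) > (?, ?)" is expanded into OR/AND form for portability.
    private static final String SQL =
        "SELECT REGION, ORDER_NO, PAYLOAD FROM ORDERS "
      + "WHERE REGION > ? OR (REGION = ? AND ORDER_NO > ?) "
      + "ORDER BY REGION, ORDER_NO";

    static PreparedStatement nextBatch(Connection con, String lastRegion,
                                       long lastOrderNo, int batchSize)
            throws SQLException {
        PreparedStatement ps = con.prepareStatement(SQL);
        ps.setMaxRows(batchSize);      // caps the batch, like Maximum Row
        ps.setString(1, lastRegion);   // keys carried over from the last row
        ps.setString(2, lastRegion);   // of the previous batch
        ps.setLong(3, lastOrderNo);
        return ps;
    }
}
```

For the very first batch, seed the inputs with values that sort below any real key, or run a separate query without the WHERE condition.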
Thank you for your reply, Greg. In my case the unique key is made up of multiple columns, which makes the implementation of a generic batch mechanism more complex, as you explained. I would be sad to learn that the webMethods JDBC adapter does not include the ability to batch through large data sets. I hope the next version of the adapter does.