Large Documents

Product/components used and version/fix level you are on:

10.7

Detailed explanation of the problem:

Hello,
I would like to ask the community a technical question:
I need to load rows from the database into webMethods documents (two documents) and then perform some mapping on them by looping over all occurrences. The problem is that these documents are very large (~200 million rows).
My question: can the IS handle this load, or do I risk saturating it?
Otherwise, is there a better solution for this processing?

Thank you.

Error messages / full error message screenshot / log file:

Is your question related to the free trial, or to a production (customer) instance?

Have you installed all the latest fixes for the products and systems you are using?

If you give the IS enough memory, it should work. But 200M rows is huge. Also, depending on your mapping pattern, the data may be held in memory multiple times in different IS documents (e.g. the JDBC service output document and your target document).

I would recommend extracting it incrementally using the native SQL paging feature of your database, e.g.:
ORDER BY key_field OFFSET ? ROWS FETCH NEXT ? ROWS ONLY

Call that in a loop with a reasonable page size (e.g. 10M rows), incrementing the offset until you have retrieved all rows, as in the sketch below.
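Outside of Flow, the paging loop amounts to something like the following JDBC sketch (illustrative only; the JDBC URL, credentials, table, and column names are placeholders, and within IS you would normally drive this from an adapter service or a custom JDBC Java service):

```java
import java.sql.*;

public class PagedExtract {
    public static void main(String[] args) throws SQLException {
        // Placeholder connection details and table/column names.
        String url = "jdbc:oracle:thin:@//dbhost:1521/ORCL";
        int pageSize = 10_000_000;          // e.g. 10M rows per page, tune to available memory

        String sql = "SELECT key_field, payload FROM source_table "
                   + "ORDER BY key_field "
                   + "OFFSET ? ROWS FETCH NEXT ? ROWS ONLY";

        try (Connection con = DriverManager.getConnection(url, "user", "password");
             PreparedStatement ps = con.prepareStatement(sql)) {

            long offset = 0;
            while (true) {
                ps.setLong(1, offset);
                ps.setInt(2, pageSize);

                int rowsInPage = 0;
                try (ResultSet rs = ps.executeQuery()) {
                    while (rs.next()) {
                        rowsInPage++;
                        // map / transform the row here, then write it out incrementally
                    }
                }
                if (rowsInPage == 0) {
                    break;                  // no more rows, extraction complete
                }
                offset += rowsInPage;       // advance to the next page
            }
        }
    }
}
```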

You didn’t specify where you are delivering the data. That is relevant, as you will want to avoid accumulating the target in memory as well. If you are writing to files, the pub.file services have an append parameter that can be used to write blocks of data to the file incrementally.
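For illustration, the incremental append idea looks like this in plain Java (a sketch of the general pattern, not the pub.file service itself; the file path and row contents are hypothetical):

```java
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.*;

public class AppendBlocks {
    // Appends one block of already-mapped lines to the output file,
    // creating the file on the first call (path is a placeholder).
    static void writeBlock(String block) throws IOException {
        Files.writeString(Paths.get("/data/out/result.csv"), block,
                StandardCharsets.UTF_8,
                StandardOpenOption.CREATE, StandardOpenOption.APPEND);
    }

    public static void main(String[] args) throws IOException {
        writeBlock("row1;valueA\n");   // first page of results
        writeBlock("row2;valueB\n");   // next page, appended to the same file
    }
}
```

Writing each page as soon as it is mapped keeps only one page of data in memory at a time.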


@Dave_Laycock1 First, thank you for your excellent answer.
I will try this solution.
The idea is to build a Cartesian product of the two documents and then send the resulting rows to a topic so they can be inserted into another table in the database.
