It seems that the longer I work on webMethods, the less I seem to know. Here's my dilemma: I have finally configured the webMethods 6.01 environment to handle large documents, but once I hit 40 MB, TN no longer recognizes the document. I am streaming the documents into the system with an HTTP POST (pub.client:http).
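For context, this is roughly what I mean by "streaming" the post. It's a minimal sketch in plain Java rather than the actual pub.client:http call I use from flow, and the URL, content type, and buffer size below are placeholders, not my real configuration:

```java
import java.io.FileInputStream;
import java.io.InputStream;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;

public class StreamPostToTN {
    public static void main(String[] args) throws Exception {
        // Placeholder receive URL -- substitute your own IS host, port, and service.
        URL url = new URL("http://localhost:5555/invoke/wm.tn/receive");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("POST");
        conn.setDoOutput(true);
        // Chunked streaming: the client never buffers the whole file in memory.
        conn.setChunkedStreamingMode(32 * 1024);
        conn.setRequestProperty("Content-Type", "application/xml");

        try (InputStream in = new FileInputStream(args[0]);
             OutputStream out = conn.getOutputStream()) {
            byte[] buf = new byte[32 * 1024];
            int n;
            while ((n = in.read(buf)) != -1) {
                out.write(buf, 0, n);
            }
        }
        System.out.println("HTTP response code: " + conn.getResponseCode());
    }
}
```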
I expected the system to write the file to TSpace and store only a TSpace reference in the BizDocContent table, but apparently that isn't quite how it works. It not only writes the file out to the file system (TSpace) but also persists the full contents into the database.
Is this the way it is supposed to work? Any experiences to share? I can process files up to 40 MB; has anyone managed files above that? I have replicated this problem on TWO separate systems.
We are running IS 6.01 on Windows with SQL Server 2000. The boxes are multi-processor and have plenty of RAM (1.5 GB assigned to the JVM and SQL Server processes).
If I cannot resolve this issue, I will need to remove TN from my architecture for large-file handling.
I have a support request logged and will update anyone who is interested.
Ray, I agree with you: "It seems that the longer I work on webMethods, the less I seem to know."
Large-document handling in webMethods is very confusing in terms of how it works. I currently have an open case where it treats documents that are half the EDIBigDocThreshold size as large documents. That is, if EDIBigDocThreshold=40000000, then a document of 20000000 bytes is treated as a large document.
As to your question, my understanding is that the document should be written to TSpace and a pointer to the document written to the database.
We had a fair number of issues with large-document handling and TN. We are using 4.6, so I don't know whether 6.x behaves the same. We use large-document handling with RosettaNet, and for RosettaNet TN not only persists the document but actually persists three times the size of the document to the database (the different parts of the MIME document plus the full document). In 4.6 you couldn't turn off the database persist, even though that option appears in the document type definition. It actually uses the persisted database copy further on in receive processing (probably in recognition, but I don't remember exactly where).
I agree with you regarding "where" the files should be written. But I created an adapter service and a flow service to extract the BizDocContent for a given document ID, and the file contents are indeed written into the database as a byte array and stored as such. I came across this by accident while debugging the issue: when we rebooted the server, all of the TSpace documents vanished, but when I queried the documents in TN Console, one by one, they reappeared in TSpace as they were queried. This led me to believe that the system persists the contents into the database, and my flow service proved it.
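For anyone who wants to reproduce this, here is roughly what my extraction does, sketched as plain JDBC instead of the adapter/flow services. The JDBC URL, credentials, and the BizDocContent column names (DocID, PartName, Content) are assumptions based on my schema; verify them against your own TN database before trying this:

```java
import java.io.FileOutputStream;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class DumpBizDocContent {
    public static void main(String[] args) throws Exception {
        String docId = args[0];
        // Connection string and credentials are placeholders -- use your own driver and login.
        try (Connection con = DriverManager.getConnection(
                     "jdbc:sqlserver://localhost:1433;databaseName=TN", "tnuser", "secret");
             PreparedStatement ps = con.prepareStatement(
                     "SELECT PartName, Content FROM BizDocContent WHERE DocID = ?")) {
            ps.setString(1, docId);
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    String partName = rs.getString("PartName");
                    // The content comes back as a byte array -- the full document,
                    // not just a TSpace pointer.
                    byte[] content = rs.getBytes("Content");
                    try (FileOutputStream out = new FileOutputStream(docId + "_" + partName)) {
                        out.write(content);
                    }
                    System.out.println(partName + ": " + content.length + " bytes");
                }
            }
        }
    }
}
```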
So this now leads me to believe that TSpace is just a performance workaround. My capacity-planning formula will need to be reworked to account for the increased database requirements.
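For example, assuming the full content really is persisted in both places, a 40 MB document would need roughly 40 MB of TSpace plus at least another 40 MB of database storage (and potentially closer to three times that for multi-part RosettaNet documents, as noted above), rather than the TSpace-only figure my original sizing assumed.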
Also, I received some feedback from support regarding OS-level settings in boot.ini on Windows to support large-document persistence in SQL Server 2000.
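My guess is that this refers to the /3GB switch in boot.ini, which lets large-address-aware processes such as SQL Server 2000 address up to 3 GB of user space on Windows, but I will confirm and post the details once support follows up.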