We are using webMethods 4.0.2, and Trading Networks apparently creates a separate repository in /WmRepository2. It contains a .dat file that holds a pointer to each file in the directory. The files are named VH*, where the "*" is a long string of numbers. I know that the numbers correspond to a key/value pair.
My main problem is that even after the processes have completed (for many hours), the files remain. Once we reach 2000 files, the server locks up because the file-handle limit has been reached. Shouldn't the process release the files when it is complete? The only way we can address this is to shut down the server and delete the folder and its files, which then produces a "WmQueue cannot find context" error (because we deleted the *.dat file that holds the pointers).
webMethods gave us a workaround that doesn't seem to work. It uses wm.tn.enumerate:deleteQuery (yes, another hidden and undocumented feature; just check the guide). Unfortunately, it doesn't work very well because the file handles are not released, so the files cannot be deleted. We are able to delete the files that do happen to get released, but there is no rhyme or reason to which ones those are. Thanks for any and all help.
We ran into the same problem … It seems that EDI will open four
files for every transaction. We had to disable the conversation manager
on startup of the WmEDIforTN package.
Ric Cross
EDI Module 4.5: Trading Networks Component Fix 2
November 2001
Copyright (C) 1996-2001, webMethods, Inc. All Rights Reserved.
This file provides important information for upgrading
the webMethods EDI Module 4.5: Trading Networks Component.
For release information about EDI Module 4.5: Trading
Networks Component Fix 2, see the Release Notes document
in the directory \packages\WmEDIforTN in which you installed
webMethods Integration Server.
Contents
1.0 Fix Name
2.0 Product(s) Affected
3.0 Fixes Superseded
4.0 Fix Contents
5.0 Platform Support
6.0 Fix Installation
7.0 Cautions and Warnings
8.0 Contacting Us
1.0 Fix Name
EDI Module 4.5: Trading Networks Component Fix 2
2.0 Product(s) Affected
This fix affects the EDI Module 4.5: Trading Networks Component.
3.0 Fixes Superseded
This fix contains Trax 1-4P6CT from Fix 1, but not 1-4P6D6. Please
see EDIforTN_4_6_FIX1 for more details about Trax entry 1-4P6D6.
4.0 Fix Contents
This fix addresses the following issues:
Trax: 1-4P6CT Null pointer exception when recognizing EDI documents
(from fix 1)
Provides services to disable extraction of conversation
IDs from EDI documents.
5.0 Platform Support
N/A
6.0 Fix Installation
1. Copy the file WmEDIforTN_4_5_FIX_2.zip into the directory
   %webMethods%/server/replicate/inbound.
2. On the webMethods Server Administrator home page (default URL:
   http://server:port/), select "Management" from the "Packages" menu.
3. Click the link "Install Inbound Releases".
4. Select the file name WmEDIforTN_4_5_FIX_2.zip from the drop-down
   list and click the button "Install Release".
5. Shut down the webMethods server.
6. Copy the new editn.zip file from %webMethods%/server/packages/WmEDIforTN/config
   to %webMethods%/server/lib/jars, overwriting the existing file. The new
   editn.zip should contain a file called Readme.txt that states the
   editn.zip file is distributed with Fix 2.
7. Restart the webMethods server.
To disable extraction of Conversation IDs from EDI Envelopes, Groups, and
Transactions, run the service wm.b2b.editn.cm.disableCIDextract. Conversation
IDs will not be extracted until the server is restarted or the service
wm.b2b.editn.cm.enableCIDextract is run. To ensure that Conversation ID
extraction remains off, register the service wm.b2b.editn.cm.disableCIDextract
as a startup service for the WmEDIforTN package. See the webMethods Server
Administrator's Guide for details.
7.0 Cautions and Warnings
None
8.0 Contacting Us
You can call webMethods Support at 888-222-8215 or send
e-mail to support@webmethods.com to report problems or ask
technical questions. See www.webmethods.com for the latest
information about product updates.
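As a side note, if you would rather not make the fix's service itself the startup service, one option is a small Java wrapper registered as a WmEDIforTN startup service that invokes it. This is only a rough sketch, assuming the standard IS Java service API and that disableCIDextract takes no inputs; the wrapper name is made up:

    // Sketch of a Java service body to register as a WmEDIforTN startup service.
    // Assumes wm.b2b.editn.cm.disableCIDextract needs no inputs.
    import com.wm.data.*;
    import com.wm.app.b2b.server.Service;
    import com.wm.app.b2b.server.ServiceException;

    public static final void disableCIDextractOnStartup(IData pipeline)
            throws ServiceException {
        try {
            // Re-disable Conversation ID extraction after every server restart
            Service.doInvoke("wm.b2b.editn.cm", "disableCIDextract",
                             IDataFactory.create());
        } catch (Exception e) {
            throw new ServiceException(e.toString());
        }
    }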
We ran into the same problem as well after running webM B2B Server v4.0.2 in production for about 5 months. Here's what we were told by webMethods:
The repository stores configuration information, such as all the adapter configurations and settings, and also acts as a cache for TN queries. Every time you query the TN Transaction Analysis log with a custom query (e.g. Sender = X, Date Received = Today), a cached version of that query is added to the repository if the value of ANY of the parameters in the query differs from a previous one. webMethods was not able to tell us whether the query is simply appended to an existing file or a new file is created in the repository. Basically, your repository is bound to grow as the size of your integration increases and you add more distinct queries, adapter configurations, etc.
The funny part is this: every time your B2B server starts up, it opens ALL of the repository files, whether they are needed or not. This consumes as many file descriptors as there are files in the repository, which can exhaust the file-descriptor limit imposed by your OS, especially when other applications on the same host also need descriptors.
The workaround webMethods gave us is to schedule wm.tn.enumerate:deleteSavedQueries on a periodic basis. This deletes only the cached queries; your adapter configuration information will NOT be deleted, so you don't have to re-create it. We have this service running every month on our production server, and since then we've been working OK.
> > My main problem is that even after the processes have
> > completed (for many hours) the files remain
…
> The workaround webMethods gave us is to schedule
> wm.tn.enumerate:deleteSavedQueries on a periodic basis.
Hello Ema and Ray -
This is from your post a long time ago. I'm facing the same problem you were: excessive files in the /WmRepository2 directory.
I could not find this “deleteSavedQueries” service. Did you mean the “wm.tn.enumerate:deleteQueryResults” service instead?
I tried running deleteQueryResults in a 4.6 environment, and yes, I believe that is what they meant: when I run it, it does delete files in the WmRepository2 directory that were created as a direct result of my running queries in Trading Networks.
Thanks Hoon. That's what I found too. Just a small note: it's worthwhile running "wm.tn.enumerate:deleteQueryResults" as a scheduled service, but you have to use a wrapper service to invoke it so that the confirm input can be set to 'Yes'.
Just a note: it probably is best practice to add a wrapper as you have done, but for whatever reason it is possible to run it in this case without one.
From testing I notice that as long as a service's inputs have fixed possible values (in this case yes/no for the confirm input), it will simply take the default value (in this case yes) and run if no explicit value is specified.
That's strange, Hoon. I get the following error (XML tags may be stripped out by the WM Forums software)…
I just checked. The ‘confirm’ input for this service is set to be a text field and not a pick list. Hence the error message below…
Thanks for the tip though.
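For anyone who finds this later, the wrapper can be as simple as a Java service that hard-codes the confirm input before invoking deleteQueryResults. This is just a sketch, assuming the standard IS Java service API; the wrapper name is made up, and you should check which confirm value your TN version expects:

    // Sketch of a scheduler-friendly wrapper that supplies the confirm input
    // the scheduler cannot pass on its own.
    import com.wm.data.*;
    import com.wm.app.b2b.server.Service;
    import com.wm.app.b2b.server.ServiceException;

    public static final void purgeSavedQueryResults(IData pipeline)
            throws ServiceException {
        IData input = IDataFactory.create();
        IDataCursor ic = input.getCursor();
        IDataUtil.put(ic, "confirm", "Yes");  // verify expected value for your TN version
        ic.destroy();
        try {
            Service.doInvoke("wm.tn.enumerate", "deleteQueryResults", input);
        } catch (Exception e) {
            throw new ServiceException(e.toString());
        }
    }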
Just a further note on this that could help avoid another “too many open files” issue.
We encountered the problem again even after scheduling the deleteQueryResults.
The cause was too many obsolete files in the WmRepository2 directory. It turns out that if you are running RosettaNet conversations and a conversation ends up in ERROR, SUSPENDED, or CANCELLED status, its files remain in the WmRepository2 directory and never get deleted (whereas conversations with a status of DONE are deleted by the scheduled cleanup service that comes with the service pack).
The solution is to schedule the wm.ip.cm:deleteByStatus service in the WmIPRoot package. When running it, though, it is critical that you set the fromRepoOnly input to true so you don't lose the conversation history from Trading Networks. The wrapper should therefore have three invocations of deleteByStatus: one for ERROR, one for SUSPENDED, and one for CANCELLED.
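For example, a wrapper along those lines might look like the sketch below (assuming the standard IS Java service API). The wrapper name is made up, and the "status" input name is an assumption, so check it against the actual signature of wm.ip.cm:deleteByStatus in your WmIPRoot package before scheduling:

    // Sketch of a wrapper that purges repository files for conversations that
    // ended in ERROR, SUSPENDED, or CANCELLED status.
    import com.wm.data.*;
    import com.wm.app.b2b.server.Service;
    import com.wm.app.b2b.server.ServiceException;

    public static final void purgeDeadConversations(IData pipeline)
            throws ServiceException {
        String[] statuses = { "ERROR", "SUSPENDED", "CANCELLED" };
        for (int i = 0; i < statuses.length; i++) {
            IData input = IDataFactory.create();
            IDataCursor ic = input.getCursor();
            IDataUtil.put(ic, "status", statuses[i]);      // input name assumed
            IDataUtil.put(ic, "fromRepoOnly", "true");     // keep TN conversation history
            ic.destroy();
            try {
                Service.doInvoke("wm.ip.cm", "deleteByStatus", input);
            } catch (Exception e) {
                throw new ServiceException(e.toString());
            }
        }
    }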
Hope this helps.
Also, I'm wondering what the community thinks about migrating the repository to be RDBMS-based instead of flat-file/directory-based (assuming the customer already has an RDBMS available, so cost is not an issue). Off-hand I would think that would be the preferred setup in terms of stability and performance. What do others think?