Has anyone else encountered an issue in which the Archive Services and Archive Server Data services never complete? We have approximately 700K rows in our WMERROR table and I’m unable to archive any data.
I’m currently attempting to reproduce the error in another environment with only 50K rows of data. The Archive Server Data service has been running for over an hour.
The webMethods-provided archiving services are far from efficient. If your archiving seems to last forever, or takes a long time and accomplishes nothing in the end, check your temp and undo tablespaces (in our case, on Oracle). They may be running out of space because of the large query results, and these errors are not reported back to your server log.
If you are running 6.1, ask TS for fix TNS_6-1_Fix6. This fix adds attributes that make the archive service more efficient. The following text was taken from the readme.
HTH,
Michelle
The wm.tn.archive:archive service performs the entire archive or
delete operation as one database transaction, which does not
use database resources (transaction rollback, locks, etc.)
efficiently. Additionally, when using a SQL Server
database, a long-running archive operation can cause a deadlock
when the archive/delete operation runs at the same time
that Trading Networks is attempting to save an incoming document
to the database.
To improve the archive and delete operations, this fix introduces
two new system properties to allow Trading Networks to perform
the archive/delete operations in multiple smaller database
transactions rather than in one large database transaction. The
two new system properties are tn.archive.batchSize and
tn.archive.batchBackoffTime.
Trading Networks uses the tn.archive.batchSize property to
determine the number of documents to archive/delete in a single
smaller database transaction. After archiving/deleting the number
of documents specified by the tn.archive.batchSize property, the
archive/delete thread sleeps for the number of seconds specified
by the tn.archive.batchBackoffTime property. When the number of
specified seconds elapses, the archive/delete thread continues
operating on the next batch of documents. Trading Networks
continues the archive/delete operation in this manner until
it processes all documents to be archived/deleted.
Performing the archive/delete operation in multiple smaller
batches of documents uses the database resources more
efficiently. The backoff time between batches allows other
database operations to be performed (e.g., saving document
content to the database).
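The readme calls these Java system properties; a common way to supply such properties to the Integration Server JVM is via -D options in the server startup script. The exact placement and the values below are illustrative assumptions, not taken from the readme, so check the fix documentation for your environment:

```
# Illustrative only -- appended to the Java options in server.sh/server.bat.
# Archive/delete 1000 documents per transaction, then sleep 5 seconds
# between batches so other database work (e.g., saving incoming
# documents) can proceed.
-Dtn.archive.batchSize=1000
-Dtn.archive.batchBackoffTime=5
```

Smaller batch sizes hold locks for shorter periods; a longer backoff gives concurrent TN inserts more breathing room at the cost of a longer overall archive run.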
High row counts can appear to hang the archive.
Please bump up IS logging level to 9 and select only 0119 Monitor and 0120 Monitor (Database layer) to view SQL execution and Archive/Purge steps.
Also, have your ORACLE DBA view the active sessions and check for locks held.
I have inadvertently locked up my archive execution. You must patiently wait until the previous archive completes, or kill the session (in ORACLE).
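To see what the DBA would be looking for, something along these lines can show sessions holding or waiting on locks in Oracle; this is a sketch using the standard v$session/v$lock views and requires DBA privileges:

```sql
-- Sessions that are blocking others (BLOCK = 1) or waiting on a lock
-- (REQUEST > 0); CTIME is how long the lock has been held, in seconds.
SELECT s.sid, s.serial#, s.username, s.program,
       l.type, l.lmode, l.request, l.ctime
FROM   v$session s
JOIN   v$lock    l ON l.sid = s.sid
WHERE  l.block = 1 OR l.request > 0;

-- To kill a stuck archive session, substitute the SID and SERIAL#
-- found above (DBA privilege required):
-- ALTER SYSTEM KILL SESSION 'sid,serial#';
```

Killing the session rolls back its open transaction, which itself can take a long time for a large archive, so waiting is sometimes the faster option.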
I have spent many days learning the tricks of Monitor Archive/purges in v6.1.
We are using wm6.1 with SQL Server and cannot run any of the archive/delete processes on wmError due to the locking/time issues described above. At present the most convenient way for us to clear out the table in non-production databases is to take periodic outages and truncate the table. As discussed above, the standard procedures provided by WM are clunky at best - we don’t have the patch mentioned above, but I wonder whether it applies just to TN documents or to all the data.
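For reference, the outage-and-truncate approach amounts to a single DDL statement; note that TRUNCATE cannot be rolled back and removes all rows, so this belongs in non-production only, with the Integration Server stopped:

```sql
-- Non-production only: irreversibly empties the error table.
-- Stop the Integration Server first so nothing is writing to it.
TRUNCATE TABLE WMERROR;
```

Unlike DELETE, TRUNCATE does not generate per-row undo, which is exactly why it sidesteps the tablespace and locking problems described earlier in the thread.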