We want to re-submit failed transactions using Monitor.
If there are two Integration Servers in a cluster connected to one Broker, where do we install the WmMonitor package?
Do we install it on each of the ISs, or do we dedicate a separate stand-alone IS to run the WmMonitor package?
An Integration Server (cluster) generally logs to its own IS Core Audit Log and Process Audit Log, right? If so, do we need to install the WmMonitor package on each IS locally to access the IS Core Audit Log and Process Audit Log in order to re-submit services, documents, etc.?
If WmMonitor is installed on its own Integration Server, how does the re-submit functionality work? Is this why we have to define a remote alias?
What is the recommended architecture for installing the Monitor in Production when there are two Integration Servers talking to a Broker server?
Srinivas,
Can you provide more information about your cluster? Is this a hardware cluster, or are you using webMethods to cluster your IS servers?
Generally speaking, WmMonitor should be installed on both IS servers. You can choose which IS server to resubmit to via the WmMonitor tool. Yes, the remote aliases are needed to be able to resubmit. If you are using IS clustering, the audit database is generally shared between the two IS instances, so Monitor can resubmit regardless of which server originally sent the document.
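For context, a remote server alias is essentially a stored host/port plus credentials that lets one IS invoke services on another. Monitor’s actual resubmit logic is internal to WmMonitor, but as a rough, hypothetical illustration of what “resubmit to a chosen cluster node” amounts to, here is a sketch using the IS Java client API; the host, credentials, service name, and inputs are placeholders, not anything Monitor literally calls:

```java
// Hypothetical, standalone sketch using the webMethods IS Java client API
// (wm-isclient.jar must be on the classpath). Host, credentials, service
// name, and inputs are placeholders; WmMonitor's real resubmit reloads the
// saved pipeline from the audit database rather than building one by hand.
import com.wm.app.b2b.client.Context;
import com.wm.data.IData;
import com.wm.data.IDataCursor;
import com.wm.data.IDataFactory;
import com.wm.data.IDataUtil;

public class ResubmitSketch {
    public static void main(String[] args) throws Exception {
        // The information a remote server alias captures: which cluster
        // node to talk to (host:port) and the credentials to use.
        String targetNode = "is-node-1.example.com:5555";    // placeholder

        Context ctx = new Context();
        ctx.connect(targetNode, "Administrator", "manage");  // placeholder credentials

        // Rebuild the input pipeline of the failed service and invoke it
        // again on the chosen node.
        IData pipeline = IDataFactory.create();
        IDataCursor cur = pipeline.getCursor();
        IDataUtil.put(cur, "orderId", "12345");              // placeholder input
        cur.destroy();

        ctx.invoke("my.folder", "processOrder", pipeline);   // placeholder service
        ctx.disconnect();
    }
}
```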
Thank you for the response. We are using webMethods clustering. In production, I heard the best practice is to install the WmMonitor package on a dedicated Integration Server because WmMonitor can use system resources intensively. Could you elaborate on how the re-submit (documents, services, process models, etc.) works if WmMonitor is installed on a separate IS and the transactions are being processed by the clustered IS servers?
Others may chime in, but I haven’t seen any performance issues with WmMonitor other than what I’m mentioning below, and I don’t see a reason to move it to a separate instance. Your IS servers in the cluster still have to log their audit data to the database, and that’s where the overhead is. Having Monitor on a separate instance is not going to make that go away.
Document logging and service auditing have a lot of overhead if you audit every service and save the pipeline on success. I’d recommend auditing on error only for your services if performance is a concern or you have a large volume. If you must save every message and you have a lot of them, you might want to consider using TN as your repository for documents. The IS servers and the Broker really slow down when full auditing is turned on, and you can’t separate out WmMonitor to make that better.
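To make the error-only suggestion concrete: the audit policy itself is configured on the service’s audit properties in Developer/Designer, not in code. The hedged sketch below, with placeholder names (OrderServices, processOrder, orderId), only shows the coding habit that goes with that policy: let failures surface as a ServiceException rather than swallowing them, so the error and its input pipeline reach the audit log and Monitor can resubmit that one service.

```java
// Hedged sketch of an IS Java service body written to work well with an
// error-only audit policy. OrderServices, processOrder, and orderId are
// placeholders; the audit policy itself is configured on the service's
// audit properties in Developer/Designer, not in code.
import com.wm.app.b2b.server.ServiceException;
import com.wm.data.IData;
import com.wm.data.IDataCursor;
import com.wm.data.IDataUtil;

public final class OrderServices {
    public static final void processOrder(IData pipeline) throws ServiceException {
        IDataCursor cur = pipeline.getCursor();
        String orderId = IDataUtil.getString(cur, "orderId");
        cur.destroy();
        try {
            // ... business logic for orderId goes here (placeholder) ...
        } catch (Exception e) {
            // Rethrow instead of swallowing the error: with auditing set to
            // log errors (and include the pipeline on error), the failure and
            // its input pipeline land in the audit log, which is what Monitor
            // uses to resubmit just this service.
            throw new ServiceException("processOrder failed for " + orderId
                    + ": " + e.getMessage());
        }
    }
}
```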
Srinivas,
If you install WmMonitor on a separate stand-alone Integration Server:
To be able to view the data in Monitor, you need to configure its Process Audit and IS Core Audit logging databases to be the ones used by your Integration Server cluster.
Monitor usually resubmits data via the Broker, so you need to configure the Monitor IS to talk to the same Broker connected to your cluster.
This way, Monitor gets the data from the cluster’s database and re-submits requests in the form of ChangeCommand documents to the Broker.
Please, anyone, correct me if I am wrong or if there is a better architecture for using Monitor on a separate Integration Server. We are planning to use Monitor on a stand-alone server too.
Thanks Mark and sekay for your input. I agree that auditing needs to be turned on only in case of error for services. Also, the architecture suggested by sekay makes good sense if WmMonitor is to be installed on a separate IS.
The remaining question I have: if we have multiple subscribers to the same Broker document type and only one subscriber errors out while processing, and we re-submit using the WmMonitor functionality, will all the other subscribers also get the document from the Broker, resulting in duplicate document processing?
We only want the failed subscriber to receive the re-submitted document and process it, since the others have already processed it successfully. What is the best way to re-submit the failed document?
Srinivas,
There is a difference between document logging and service auditing. The resubmit function differs depending on which you are doing and why you are doing it. There are a number of different reasons why you might need to resubmit a document or reprocess a service (infrastructure failure, bad data, etc.), both of which are done via Monitor.
We decided against document logging and just use service auditing (error only). Much better performance and a more fine-grained resubmit function, i.e. you are resubmitting to the service that failed and nothing else. If you do use document logging, it is possible for all triggers that subscribe to the document to get it again.
My two cents, but are you planning on putting the third IS server and Broker on a separate host just to perform the monitoring function? I’m sure webMethods loves ya; that’s pretty expensive. Unless there is some specific functionality you need from document logging that you can’t get from service auditing, I would personally not add the extra complexity and cost to my infrastructure. Invest your time and energy in establishing good peer reviews for your flow services and in coding flow services to handle infrastructure failures, and you will be better off than relying on WmMonitor. But that’s just my two cents… your mileage may vary.
If you resubmit the failed subscriber, only that particular subscriber gets the document, as each transaction is identified by a UUID.
If you resubmit the publisher, resulting in the document being published to the Broker, obviously all the subscribers will get the document, which might be what you want in practice.
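If you do end up resubmitting at the publisher level (or if document logging re-delivers to every trigger, as Mark noted), one common safeguard, not specific to WmMonitor, is an idempotency check in each subscribing trigger service keyed on the envelope UUID. The sketch below is hypothetical: the document type name (my.docs:OrderDoc), the envelope field (_env/uuid), and the in-memory set are assumptions to verify against your IS version; a real guard would use a shared, persistent store.

```java
// Hypothetical idempotency guard for a subscribing trigger service: skip a
// document whose envelope UUID has already been processed, so a publisher-
// level resubmit that fans out to every subscriber is not reprocessed by
// subscribers that already succeeded. The document type name
// ("my.docs:OrderDoc") and the envelope field (_env/uuid) are assumptions;
// the static in-memory set is illustrative only -- a real guard would use a
// shared, persistent store such as a database table.
import java.util.Collections;
import java.util.HashSet;
import java.util.Set;

import com.wm.app.b2b.server.ServiceException;
import com.wm.data.IData;
import com.wm.data.IDataCursor;
import com.wm.data.IDataUtil;

public final class SubscriberServices {
    private static final Set<String> processedIds =
            Collections.synchronizedSet(new HashSet<String>());

    public static final void handleOrderDoc(IData pipeline) throws ServiceException {
        IDataCursor cur = pipeline.getCursor();
        // The trigger service receives the published document under its
        // fully qualified type name (placeholder here).
        IData doc = IDataUtil.getIData(cur, "my.docs:OrderDoc");
        cur.destroy();

        String uuid = null;
        if (doc != null) {
            IDataCursor docCur = doc.getCursor();
            IData env = IDataUtil.getIData(docCur, "_env");
            docCur.destroy();
            if (env != null) {
                IDataCursor envCur = env.getCursor();
                uuid = IDataUtil.getString(envCur, "uuid");
                envCur.destroy();
            }
        }

        // Set.add returns false if the UUID was already recorded.
        if (uuid != null && !processedIds.add(uuid)) {
            return; // duplicate delivery of an already-processed document
        }

        // ... normal subscriber processing of doc goes here ...
    }
}
```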
Mark: “We decided against document logging and just use service auditing (error only). Much better performance and a more fine-grained resubmit function, i.e. you are resubmitting to the service that failed and nothing else. If you do use document logging, it is possible for all triggers that subscribe to the document to get it again.”
I agree with the above comments. The third IS might be justifiable if you do heavy process monitoring, since the overhead occurs because WmMonitor runs a reaper thread that establishes connections to the repository and polls existing process instances to determine whether they are complete.
I guess having a third IS makes sense, though it might look like overhead right now. Moving forward, you can give the various support teams access to this IS as required for their support activities.
Keeping that in mind, it is better to have a dedicated IS than to have monitoring on each individual production IS. This makes more sense if your infrastructure is large.
Regarding the other question about resubmitting only to the IS that needs it: I guess webMethods came up with additional patches that provide that functionality, where you can resubmit only to the IS on which you want to re-process a failed service.
Why not talk with webMethods and upgrade your WmMonitor?
We have been using this quite successfully at our end, with great effect and without any problems.