Capturing documents from failed triggers

I want to be able to recover manually from the following cases:

  • Triggered service fails with a ServiceException
  • Triggered service fails with an ISRuntimeException, and retry limit is reached

From what I can tell, if either of these happens, a pub.publish.notification:error document is delivered to a client which I can specify. However, it doesn’t look as if that will contain a copy of the failed message.

What I’d like is to store the failed message on disk, probably as XML, so that it can be manually edited if required, and resubmitted to the queue.

Does anyone have a good pattern for doing this? I’ve thought of a few ways, but none of them are as simple or elegant as I feel it should be.
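
To make it concrete, the roughest of the ideas I've had so far is a Java service on the trigger's error path that dumps the document to disk as IData XML. This is just a sketch I haven't run against a real trigger: the input names ("failedDocument", "dumpDir"), the service name and the file naming are placeholders I made up, not anything standard.

```java
// Sketch: write whatever document this service is handed to disk as IData XML,
// so it can be hand-edited and resubmitted later. In Developer the imports
// would go on the Shared tab and the body in the service source.
import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;
import com.wm.data.IData;
import com.wm.data.IDataCursor;
import com.wm.data.IDataUtil;
import com.wm.util.coder.IDataXMLCoder;
import com.wm.app.b2b.server.ServiceException;

public final class SaveFailedDocument {

    public static void saveFailedDocument(IData pipeline) throws ServiceException {
        IDataCursor cursor = pipeline.getCursor();
        IData failedDoc = IDataUtil.getIData(cursor, "failedDocument"); // the published document (placeholder name)
        String dumpDir  = IDataUtil.getString(cursor, "dumpDir");       // e.g. "./failed-messages" (placeholder name)
        cursor.destroy();

        if (failedDoc == null) {
            throw new ServiceException("No document supplied to save");
        }

        try {
            File dir = new File(dumpDir);
            dir.mkdirs();
            File out = new File(dir, "failed-" + System.currentTimeMillis() + ".xml");
            FileOutputStream fos = new FileOutputStream(out);
            try {
                // IData XML format, so the file can be edited and reloaded later
                new IDataXMLCoder().encode(fos, failedDoc);
            } finally {
                fos.close();
            }
        } catch (IOException ioe) {
            throw new ServiceException("Failed to write document: " + ioe.getMessage());
        }
    }
}
```

A small companion service could then restore the edited file back into an IData and call pub.publish:publish to put it back on the queue. It works, but it doesn't feel like it should be necessary to hand-roll this.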

Any reason you are not using the built-in auditing feature for services?

Probably because I’ve never heard of it. If it stores the input pipeline for failed services somewhere, I’m very interested. Quite aside from the pub/sub stuff, if we could do this for normal synchronous service calls it would be useful.

Having looked in the Pub/Sub book, the IS Administration book, the Developer Guide and the Logging Guide, I still don’t know what you’re referring to. Where should I be looking?

Thanks!

Look in the standard Developer user’s guide (page 140 in the 7.1 version) for service auditing.

Auditing is the way to go, but it will require some configuration beyond just the service (you will want to audit to a database in order to store the pipeline).

If for some reason you can’t use auditing, you could call BrokerClient.getEvents in an error notification processing service and look for the event ID the error notification will contain.
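
Very roughly, that second approach would look something like the sketch below. It is untested and written from memory of the Broker Java API, so treat the class/method names, the envelope field path for the event ID, and the connection parameters as approximations to check against the API reference.

```java
// Untested sketch of the non-auditing fallback: connect with a Broker client,
// pull waiting events and match on the event id carried in the error
// notification. Host, broker, client id and group below are placeholders.
import COM.activesw.api.client.BrokerClient;
import COM.activesw.api.client.BrokerEvent;
import COM.activesw.api.client.BrokerException;

public class FailedEventPeeker {

    public static void dumpMatchingEvent(String wantedEventId) throws BrokerException {
        BrokerClient client = BrokerClient.newOrReconnect(
                "localhost:6849",        // broker host:port
                "Broker #1",             // broker name
                "failedEventRecovery",   // client id
                "myClientGroup",         // client group
                "Failed event recovery", // application name
                null);                   // default connection descriptor
        try {
            // Pull up to 100 events, waiting at most 1 second.
            BrokerEvent[] events = client.getEvents(100, 1000);
            for (int i = 0; i < events.length; i++) {
                // Compare against the event id from pub.publish.notification:error.
                String eventId = events[i].getStringField("_env.eventId");
                if (wantedEventId.equals(eventId)) {
                    // Found the failed message; log it, or write it to disk from here.
                    System.out.println(events[i].toString());
                }
            }
        } finally {
            client.disconnect();
        }
    }
}
```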

Wow, don’t know how I missed that stuff. I guess I felt I already knew how to build a flow service, so I never expanded that section.

It’s going to be tremendously helpful. Thanks.

Is there a quick way to configure auditing across many services?

There isn’t a good way to do it quickly.

You have to configure it in Developer, one service at a time.

I’ll lay out three ways to audit across “many services”, but please consider these within the context of “all things are possible, not all things are wise”.

  1. Set the audit level for the server to brief or verbose. This is a server.cnf setting (watt.server.auditLog) that is set to perSvc by default. "perSvc" means you have to configure auditing per service, as described in the Developer guide; "brief" means every service will be audited, but without pipelines; "verbose" means every service will be audited, with pipelines. (A one-line example follows this list.)

  2. Create an audit event manager that filters on all services under a certain folder. For example, create FlowA with service inputs and outputs matching those of the specification pub.event:audit. In the service, put your logic for grabbing the pipeline and saving it to a DB. Then, in Developer, go to Tools → Event Mgr, choose Audit event, and add an event subscriber: service=FlowA; filter=the folder or service you want to monitor; enabled=true. In theory, when the service(s) referenced in the filter are executed, FlowA should execute. I’ve not tested this beyond my last 6.5 implementation.

  3. Audit settings are all stored in the node.ndf file in the packages/ns/[fullyqualifiedservicepath] folder for the service. The relevant element is the <record name="auditsettings" … one. In theory, you could configure one service with the appropriate settings, and then copy/paste that element into the node.ndf of every other service you want to affect.
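
For option 1, the whole change is the one key mentioned above, either edited directly in server.cnf (in the Integration Server config directory; changes there normally need a restart) or set under Settings → Extended in the Administrator:

```
# config/server.cnf — default is perSvc; brief/verbose turn on server-wide auditing
watt.server.auditLog=verbose
```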

Thanks - I’ll consider options 0 and 3 :)