Adding to Rob’s post with my answer to the question “what are others using to achieve this?”
In my current project, we took the approach of implementing a logging package, built on top of Log4j 2, that lets us emit log events from an executing service to a simple log file. Then, on each Integration Server host, we have a Splunk agent installed that ships the logs to Splunk.
The package does a number of important things, such as supporting thread contexts and enriching the events with host, environment, thread, and calling-service information. One of the key things we did, though, was to move away from emitting plain-text log events. Instead, we now emit events based on a standard JSON structure. This allows the events to be parsed easily in Splunk, which lets us run powerful queries and build some amazing dashboards. We also moved away from issuing email notifications from the Integration Server itself; notifications are now issued directly out of Splunk based on these JSON log events.
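As an aside, if you haven't used Log4j 2's thread contexts before, here's a minimal sketch of the enrichment idea (the field names and values are purely illustrative; our actual package does quite a bit more):

import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;
import org.apache.logging.log4j.ThreadContext;

public class ContextExample {
    private static final Logger LOG = LogManager.getLogger(ContextExample.class);

    public static void handleRequest() {
        // Fields placed on the ThreadContext ride along with every event
        // logged on this thread; a JSON layout can emit them automatically.
        ThreadContext.put("host", "is-node-01");
        ThreadContext.put("environment", "prod");
        ThreadContext.put("callingService", "orders:processOrder");
        try {
            LOG.info("Job started");
        } finally {
            ThreadContext.clearAll(); // don't leak context on pooled threads
        }
    }
}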
I can elaborate more on the structure we're using if needed, but another key thing we did was to separate the static message text from the variable data within it. For example, where we used to emit events like "Job ABC processed 150 rows in 6000 milliseconds," we now emit events like:
{
    ...
    "message": "Job completed",
    "attributes": {
        "job": "ABC",
        "rows": 150,
        "elapsedMillis": 6000
    }
    ...
}
This made a world of difference.
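If you're curious what the emitting side can look like, our package wraps this in its own API, but stock Log4j 2 can produce a similar separation out of the box with StringMapMessage (the class and field names below are an illustrative sketch, not our actual package):

import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;
import org.apache.logging.log4j.message.StringMapMessage;

public class JobExample {
    private static final Logger LOG = LogManager.getLogger(JobExample.class);

    public static void logCompletion(String jobId, int rows, long elapsedMillis) {
        // The message text stays constant across events; the variable data
        // goes into named fields that a JSON layout serializes alongside it.
        LOG.info(new StringMapMessage()
                .with("message", "Job completed")
                .with("job", jobId)
                .with("rows", String.valueOf(rows))
                .with("elapsedMillis", String.valueOf(elapsedMillis)));
    }
}

Keeping the message text constant is what makes the Splunk side easy: you can search on message="Job completed" and then aggregate on the attributes, e.g. stats avg(elapsedMillis) by job.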
Now, if you’re interested in a vendor-neutral approach to shipping not only logs but also metrics and traces out of your application and into an observability platform of your choosing, I highly recommend you take a look at OpenTelemetry (https://opentelemetry.io/). It even has a Logs Data Model which you can use if you’d like.
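To give a flavor, here's a minimal sketch of emitting a trace span with the OpenTelemetry Java API (the tracer name, span name, and attributes are made up for illustration, and you'd still need to configure an SDK and exporter behind it):

import io.opentelemetry.api.GlobalOpenTelemetry;
import io.opentelemetry.api.trace.Span;
import io.opentelemetry.api.trace.Tracer;
import io.opentelemetry.context.Scope;

public class JobTracer {
    private static final Tracer TRACER =
            GlobalOpenTelemetry.getTracer("com.example.jobs");

    public static void processJob(String jobId) {
        Span span = TRACER.spanBuilder("processJob").startSpan();
        try (Scope ignored = span.makeCurrent()) {
            // ... do the actual work here ...
            span.setAttribute("job", jobId);
            span.setAttribute("rows", 150L);
        } finally {
            span.end(); // records the end timestamp; the configured exporter ships it
        }
    }
}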
There's a cool product that supports OpenTelemetry for webMethods, but I too don't want to run afoul of forum etiquette, so hit me up if you need more info.
HTH,
Percio