Level: BEGINNER / INTERMEDIATE
Why is logging important for Apama?
Well, the thing to remember is that normally your solution is going to be running as a remote server process somewhere - once you’ve got past the development stage it is unlikely to be a local application. So logging, and the interpretation of logs, become important. In a later post we’ll also talk about the separate but related topic of metrics and observability… but one step at a time.
The correlator process itself logs a lot of information during startup, as well as periodically logging important status metrics. In addition, any EPL code can log to the same output stream. By default that stream is stdout (which aligns nicely with the use of Apama in containers), but it is of course configurable. If you do log to a file, then some of the important startup information is repeated whenever the log files are rotated, to ease later analysis.
Each log message from Apama is at a given severity level - CRIT, FATAL, ERROR, WARN, INFO, DEBUG, TRACE (from most to least severe).
The typical, and strongly recommended, configuration for production is INFO, and that is the level at which the periodic status metrics are logged. If you are developing an application, pay close attention to what you log and how frequently you log it (a production workload could drive much higher logging rates than you see in development). You should also carefully consider what type of information you log, particularly in light of regulations such as GDPR (see links below).
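As a minimal sketch of how EPL code logs at these levels (the monitor name and messages here are purely illustrative):

```
// Illustrative monitor only. Each log statement specifies a severity;
// messages below the correlator's configured log level are suppressed.
monitor LoggingExample {
	action onload() {
		log "Application started" at INFO;
		// Suppressed at the recommended production level of INFO:
		log "Per-event diagnostic detail" at DEBUG;
		log "Something went wrong" at ERROR;
	}
}
```

A `log` statement with no `at` clause is also possible; see the EPL documentation for the default level that applies in that case.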
We have an overview of some of that in this blog post: Apama Community Edition Logging in Apama
We mentioned above that periodic status lines are always provided in the logs, and over the years we have learned that these can be extremely valuable to anyone diagnosing what has been happening within a correlator. We’ve blogged about these in the past: Apama Community Edition Correlator status log lines
In fact, analysis of those periodic metrics is so valuable that there is even a fantastic community project to help with this, including generating interactive charts. It’s a tool we use very regularly:
- ApamaCommunity/apama-log-analyzer: Python 3 script for analyzing Apama correlator log files and extracting useful diagnostic information (github.com)
Logging is, of course, a separate topic from exposing live metrics, and as mentioned above that is a topic for another time.
Some deep links into the current (v10.11) documentation:
- Log levels, log filenames, and guidance
- Using logging to diagnose errors (softwareag.com)
- Logging and printing (softwareag.com) (and sub pages)
- Specifying log filenames (softwareag.com) and Examples for specifying log filenames (softwareag.com)
- Setting correlator and plug-in log files and log levels in a YAML configuration file (softwareag.com)
- Setting EPL log files and log levels dynamically (softwareag.com)
- Handling personal data “at rest” in log files (softwareag.com) (e.g. due to GDPR)
- Log status lines
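As a rough sketch of the YAML-based configuration mentioned above, log levels can be set per logger in a correlator initialization file. The key names and structure below are from memory and are an assumption - check the exact schema against the documentation linked above for your version before using it:

```yaml
# Hypothetical sketch only - verify the exact keys in the official
# documentation. Sets the correlator's root log level, with a more
# verbose level for one illustrative application package.
correlatorLogging:
  .root: INFO
eplLogging:
  com.example.myapp: DEBUG
```

The per-package approach is useful in development: you can turn up verbosity for just your own application code without drowning in output from the rest of the correlator.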
Note: When Apama is used inside cloud deployments of Cumulocity IoT, the logs are available via the regular microservice log viewing capabilities in the Administration application. The mechanisms for extracting log files when used inside Cumulocity IoT Edge are a little different, but are fully described in the Edge documentation.
With this information and tooling, you should be in good shape for both development and production deployments.
This is Day #7 of a short series of brief tips, tricks, hints, and reminders of information relating to the Apama Streaming Analytics platform, both from Software AG as well as from the community.
This series of articles will be published Monday-Friday only, with the occasional weekend bonus.