Side note: this entire thread reinforces that we made the right decision to avoid direct use of log4j like the plague. As with so many libraries/facilities that started off lean and useful, this one has gone bonkers and tries to do far too much, IMO (and we get a mind-boggling vulnerability because of a freaking logging utility!?). Of course YMMV.
For custom logging we created a couple of helper services. First is a wrapper service that uses the same class IS uses internally to write and daily rotate stats.log.
// Wrapper service for com.wm.app.b2b.server.ServerAPI.getLogStream.
// This is what IS uses internally to write and rotate stats.log.
//
// Opens a log file in the server's default logs directory.
// This should only be used to obtain handles to new log files for
// solution-specific logging purposes. By convention, the name should
// be something like "myapp.log".
//
// The returned stream is intended to be kept open for the life of the
// JVM (don't open a whole bunch of these).
//
// This service maintains a hash table of log streams that have
// been opened keyed by the logfile parameter. If the log has
// been previously opened, the existing stream is returned and
// the "newLog" is set to "false". If the log is opened for the
// first time a new stream is returned and the output parameter
// "newLog" is set to "true".
public static final void getLogStream(IData pipeline) throws ServiceException {
    IDataCursor idc = pipeline.getCursor();
    String logfile = IDataUtil.getString(idc, "logfile");
    boolean newLog = false;
    com.wm.app.b2b.server.LogOutputStream os;
    // Synchronize the check-and-create so two concurrent callers
    // can't both open a stream for the same log file.
    synchronized (openLogFiles) {
        os = (com.wm.app.b2b.server.LogOutputStream) openLogFiles.get(logfile);
        if (os == null) {
            os = com.wm.app.b2b.server.ServerAPI.getLogStream(logfile);
            openLogFiles.put(logfile, os);
            newLog = true;
        }
    }
    IDataUtil.put(idc, "logStream", os);
    IDataUtil.put(idc, "newLog", String.valueOf(newLog));
    idc.destroy();
}
// --- <<IS-BEGIN-SHARED-SOURCE-AREA>> ---
private static final java.util.Hashtable<String, com.wm.app.b2b.server.LogOutputStream> openLogFiles = new java.util.Hashtable<>();
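Outside of IS, the same open-once caching idea can be sketched in plain Java with `ConcurrentHashMap.computeIfAbsent`, which makes the "create on first request, reuse thereafter" step atomic. This is purely illustrative (no webMethods classes; `PrintWriter` stands in for `LogOutputStream`, and the names are hypothetical):

```java
import java.io.PrintWriter;
import java.io.StringWriter;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class LogStreamCache {
    // One writer per log name, created on first request and reused thereafter.
    private static final Map<String, PrintWriter> openLogs = new ConcurrentHashMap<>();

    // Returns the existing writer for logName, or atomically creates one.
    public static PrintWriter getLogStream(String logName) {
        return openLogs.computeIfAbsent(logName, name -> {
            // Stand-in for ServerAPI.getLogStream(name); a real version
            // would open a file in the server's logs directory.
            return new PrintWriter(new StringWriter());
        });
    }

    public static void main(String[] args) {
        PrintWriter a = getLogStream("myapp.log");
        PrintWriter b = getLogStream("myapp.log");
        // Both calls return the same cached stream instance.
        System.out.println(a == b);
    }
}
```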
Next is a helper to write to the log stream. This can be adjusted to support different input types such as a byte array, but typically supporting string is sufficient.
public static final void writeLogStream(IData pipeline) throws ServiceException {
    IDataCursor idc = pipeline.getCursor();
    com.wm.app.b2b.server.LogOutputStream os =
        (com.wm.app.b2b.server.LogOutputStream) IDataUtil.get(idc, "logStream");
    String message = IDataUtil.getString(idc, "message");
    idc.destroy();
    // Fail fast with a clear error rather than an NPE if the caller
    // forgot to obtain a stream via getLogStream first.
    if (os == null) {
        throw new ServiceException("writeLogStream: logStream input is required");
    }
    os.write(message);
}
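As noted above, the writer could be adjusted to accept other input types such as a byte array. A plain-Java sketch of that dispatch (the helper name is hypothetical, and an IS version would pull `message` from the pipeline with `IDataUtil.get` instead):

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.OutputStream;
import java.nio.charset.StandardCharsets;

public class LogWriter {
    // Writes a message that may arrive as a String or a byte[];
    // anything else is rejected up front rather than silently dropped.
    public static void writeMessage(OutputStream os, Object message) throws IOException {
        if (message instanceof String) {
            os.write(((String) message).getBytes(StandardCharsets.UTF_8));
        } else if (message instanceof byte[]) {
            os.write((byte[]) message);
        } else {
            throw new IllegalArgumentException("Unsupported message type: "
                + (message == null ? "null" : message.getClass().getName()));
        }
    }

    public static void main(String[] args) throws IOException {
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        writeMessage(buf, "hello ");
        writeMessage(buf, "world".getBytes(StandardCharsets.UTF_8));
        System.out.println(buf.toString("UTF-8"));
    }
}
```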
A FLOW service would call getLogStream to create/get a stream and then write to it using writeLogStream.
For a given log, one can create another service that encapsulates the filename and the writing in a single call:
writeMyLog
..Input: message (string)
..Output: (none)
..Tree:
....MyUtils:getLogStream
......logfile = "MyLog.log"
....MyUtils:writeLogStream
......logStream = (stream returned by getLogStream)
......message = (message passed by caller to this service)
One can create multiple “writeMyLog” services in various packages as desired. This doesn’t have all the flexibility of log4j (logging levels and all that) but doesn’t have the complexity either.
Edit: Forgot to emphasize that IS will automatically rotate such logs daily, same as is done with the stats.log file.
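For intuition, daily rotation amounts to rolling to a fresh date-stamped file whenever the calendar date changes between writes. A minimal plain-Java sketch of that idea (illustrative only; IS does this for you, and its actual naming/rollover scheme may differ):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;
import java.time.LocalDate;

public class DailyRollingLog {
    private final Path dir;
    private final String baseName;
    private LocalDate currentDate;

    public DailyRollingLog(Path dir, String baseName) {
        this.dir = dir;
        this.baseName = baseName;
    }

    // Appends one line, rolling to a new date-stamped file when the day changes.
    // The date is passed in to make the rollover easy to demonstrate.
    public synchronized void write(String message, LocalDate today) throws IOException {
        if (!today.equals(currentDate)) {
            currentDate = today;  // new day: subsequent writes go to a new file
        }
        Path file = dir.resolve(baseName + "." + currentDate);
        Files.write(file, (message + System.lineSeparator()).getBytes(),
                StandardOpenOption.CREATE, StandardOpenOption.APPEND);
    }

    public static void main(String[] args) throws IOException {
        Path tmp = Files.createTempDirectory("logs");
        DailyRollingLog log = new DailyRollingLog(tmp, "myapp.log");
        log.write("first entry", LocalDate.of(2024, 1, 1));
        log.write("next day", LocalDate.of(2024, 1, 2));
        // Each day's writes landed in its own file.
        System.out.println(Files.exists(tmp.resolve("myapp.log.2024-01-01")));
        System.out.println(Files.exists(tmp.resolve("myapp.log.2024-01-02")));
    }
}
```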