redirecting logs

I need to redirect the server.log, session.log, stats.log, audit.log, txin.log, and txout.log files to a different drive than the one the app is installed on. Any suggestions?

Yes, reinstall the app on the other drive.

Most of these logs are lightly used. What problem are you attempting to solve? You can use the built-in JDBC pools to redirect some of these to tables on a supported database server.

Mark

The problem I am having is that the drive the app is installed on is a SAN drive, and my server engineering team does not want to copy the app onto a new cluster, either from lack of knowledge or because they just don't want to. The drive the app is running on is low on space; it also has WebSphere installed on it. The drive is 13 GB, of which only 5 GB are free. The server engineering team gave me another drive with 33 GB of space. I have redirected the connection pool logs and the RTE logs. I am just having issues with the logs listed in the previous post.

1. In Integration Server Administrator, go to the Settings > Extended page and click Show and Hide Keys. Integration Server Administrator displays a list of the Integration Server configuration properties you can change using Integration Server Administrator.
2. Select the check box next to the watt.debug.logfile property.
3. Click Save Changes. Integration Server Administrator displays the selected property in the Extended Settings box.
4. Click Edit Extended Settings. Integration Server Administrator displays the selected property in an editable text box.
5. In the Extended Settings box, set the property as follows: watt.debug.logfile=<fully qualified path to the server, session, service, and error logging file directory>
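For example, on the new drive the setting might look like the line below (the path is only a hypothetical placeholder; use whatever directory you created on the 33 GB drive):

```
watt.debug.logfile=E:\wmlogs
```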
For Java OutOfMemoryError:

  1. Collect and analyze the verbose GC output.
    a. Add the '-verbosegc' flag to the java command line. This will print GC activity info to stdout/stderr. Redirect stdout/stderr to a file and run the application until the problem is reproduced.
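As a sketch, the flag goes on the server's java launch command before the main class (the class name and file names below are placeholders; also note the exact spelling of the flag varies by JVM, e.g. '-verbosegc' on JRockit but '-verbose:gc' on HotSpot JVMs):

```
java -verbosegc -Xmx256m com.example.ServerMain > gc.log 2>&1
```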
    b. Make sure that the JVM does the following before throwing a java OOM:
    c. Full GC run:
    The JVM does a full GC, in which all unreachable, phantomly, weakly, and softly reachable objects are removed and their space is reclaimed. More details on the different levels of object reachability can be found at: http://java.sun.com/developer/technicalArticles/ALT/RefObj
    You can check whether a full GC was done before the OOM message. A message like the following is printed when a full GC is done (the format varies depending on the JVM; check the JVM help message to understand the format):
    [memory ] 7.160: GC 131072K->130052K (131072K) in 1057.359 ms

The format of the above output is as follows (the same format will be used throughout this pattern):
[memory ] <start>: GC <before>K-><after>K (<heap>K) in <total> ms
  <start> - start time of collection (seconds since JVM start)
  <before> - memory used by objects before collection (KB)
  <after> - memory used by objects after collection (KB)
  <heap> - size of heap after collection (KB)
  <total> - total time of collection (milliseconds)
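As an illustration of this format, here is a small sketch that extracts the numbers from such a line (GcLineParser is a hypothetical helper written for this post, not part of any JVM tooling):

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class GcLineParser {
    // Matches lines like: [memory ] 7.160: GC 131072K->130052K (131072K) in 1057.359 ms
    private static final Pattern GC_LINE = Pattern.compile(
        "\\[memory \\] ([0-9.]+): GC (\\d+)K->(\\d+)K \\((\\d+)K\\) in ([0-9.]+) ms");

    /** Returns {before, after, heap} in KB, or null if the line is not a full-GC line. */
    public static long[] parse(String line) {
        Matcher m = GC_LINE.matcher(line.trim());
        if (!m.matches()) {
            return null;
        }
        return new long[] {
            Long.parseLong(m.group(2)), // memory used before collection (KB)
            Long.parseLong(m.group(3)), // memory used after collection (KB)
            Long.parseLong(m.group(4))  // heap size after collection (KB)
        };
    }

    public static void main(String[] args) {
        long[] r = parse("[memory ] 7.160: GC 131072K->130052K (131072K) in 1057.359 ms");
        // prints: freed 1020K, heap 131072K
        System.out.println("freed " + (r[0] - r[1]) + "K, heap " + r[2] + "K");
    }
}
```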
However, there is no way to conclude whether the soft/weak/phantomly reachable objects were removed using the verbose messages. If you suspect that these objects are still around when the OOM was thrown, contact the JVM vendor.
If the garbage collection algorithm is a generational algorithm (gencopy or gencon in the case of JRockit, and the default algorithm in other JDKs), you will also see verbose output like this:
[memory ] 2.414: Nursery GC 31000K->20760K (75776K), 0.469 ms
The above is the nursery GC (or young GC) cycle which will promote live objects from nursery (or young space) to old space. This cycle is not important for our analysis. More details on generational algorithms can be found in JVM documentation.
If the GC cycle doesn’t happen before java OOM, then it is a JVM bug.
Full compaction: Make sure that the JVM does proper compaction work and the memory is not fragmented, which could prevent large objects from being allocated and trigger a java OOM error.
Java objects need the memory to be contiguous. If the available free memory is fragmented, then the JVM will not be able to allocate a large object, as it may not fit in any of the available free chunks. In this case, the JVM should do a full compaction so that more contiguous free memory can be formed to accommodate large objects.
Compaction work involves moving objects (data) from one place to another in the java heap and updating the references to those objects to point to the new location. JVMs may not compact all the objects unless there is a need, in order to reduce the pause time of the GC cycle.
We can check whether the java OOM is due to fragmentation by analyzing the verbose GC messages. If you see output similar to the following, where the OOM is thrown even though there is free java heap available, then it is due to fragmentation.
[memory ] 8.162: GC 73043K->72989K (131072K) in 12.938 ms
[memory ] 8.172: GC 72989K->72905K (131072K) in 12.000 ms
[memory ] 8.182: GC 72905K->72580K (131072K) in 13.509 ms
java.lang.OutOfMemoryError
In the above case you can see that the max heap specified was 128MB, and the JVM threw an OOM when the actual memory usage was only 72580K, a heap usage of only 55%. The effect of fragmentation in this case is to throw an OOM even when 45% of the heap is free. This is a JVM bug or limitation; you should contact the JVM vendor.
  2. If the JVM does its work properly (all the things mentioned in the above step), then the java OOM could be an application issue. The application might be leaking java memory constantly, which may cause this problem. Or the application uses more live objects and needs more java heap memory. The following things can be checked in the application:
o Caching in the application - If the application caches java objects in memory, we should make sure that this cache is not growing constantly. There should be a limit on the number of objects in the cache. We can try reducing this limit to see if it reduces the java heap usage.
Java soft references can also be used for data caching, as softly reachable objects are guaranteed to be cleared before the JVM runs out of java heap.
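On the cache-limit point: one simple way to enforce such a limit in Java is LinkedHashMap's removeEldestEntry hook. A minimal sketch (the limit of 3 is only for illustration; a real cache would size this from measurement):

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class BoundedCache<K, V> extends LinkedHashMap<K, V> {
    private final int maxEntries;

    public BoundedCache(int maxEntries) {
        super(16, 0.75f, true); // access-order: true gives LRU eviction
        this.maxEntries = maxEntries;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        // Evict the least recently used entry once the limit is exceeded,
        // so the cache cannot grow without bound and exhaust the heap.
        return size() > maxEntries;
    }

    public static void main(String[] args) {
        BoundedCache<String, String> cache = new BoundedCache<>(3);
        cache.put("a", "1");
        cache.put("b", "2");
        cache.put("c", "3");
        cache.put("d", "4"); // evicts "a", the least recently used entry
        System.out.println(cache.keySet()); // prints: [b, c, d]
    }
}
```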
o Long-living objects - If there are long-living objects in the application, we can try reducing the lifetime of those objects where possible. For example, tuning the HTTP session timeout will help reclaim idle session objects faster.
o Memory leaks - One example of a memory leak is when using database connection pools in an application server. When using connection pools, the JDBC statement and resultset objects must be explicitly closed in a finally block. This is because calling close() on a connection object obtained from a pool simply returns the connection to the pool for re-use; it does not actually close the connection or the associated statement/resultset objects.
It is recommended to follow the coding practices suggested in the following documents to avoid memory leaks in your application.
JDBC - http://e-docs.bea.com/wls/docs81/jdbc/troubleshooting.html#1026696
JNDI - http://e-docs.bea.com/wls/docs81/jndi/jndi.html#472853
JMS - http://e-docs.bea.com/wls/docs81/jms/implement.html#1194127
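The pool behaviour behind this leak can be sketched with stand-in classes (PooledConnection and TrackedStatement below are hypothetical illustrations written for this post, not a real pool or JDBC API): close() on the connection only returns it to the pool, so its statements stay open unless they are closed explicitly in a finally block.

```java
import java.util.ArrayList;
import java.util.List;

public class PoolLeakDemo {
    // Hypothetical stand-in for a pooled JDBC statement.
    static class TrackedStatement {
        boolean closed = false;
        void close() { closed = true; }
    }

    // Hypothetical stand-in for a pooled connection: close() only returns
    // the connection to the pool and does NOT close its statements.
    static class PooledConnection {
        final List<TrackedStatement> statements = new ArrayList<>();
        boolean inPool = false;

        TrackedStatement createStatement() {
            TrackedStatement s = new TrackedStatement();
            statements.add(s);
            return s;
        }

        void close() { inPool = true; } // back in the pool; statements untouched
    }

    public static void main(String[] args) {
        // Leaky usage: only the connection is closed, the statement leaks.
        PooledConnection leaky = new PooledConnection();
        leaky.createStatement();
        leaky.close();
        System.out.println("leaked statement still open: " + !leaky.statements.get(0).closed);

        // Correct usage: close the statement in a finally block.
        PooledConnection safe = new PooledConnection();
        TrackedStatement st = null;
        try {
            st = safe.createStatement();
            // ... use the statement ...
        } finally {
            if (st != null) st.close();
            safe.close();
        }
        System.out.println("statement closed: " + safe.statements.get(0).closed);
    }
}
```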
o Increase the java heap - We can also try increasing the java heap, if possible, to see whether that solves the problem.
o Workaround - As a temporary workaround, the application may be gracefully restarted when the java heap usage goes above 90%. When following this workaround, the java max heap can be set as high as possible so that the application takes longer to fill the java heap. The java heap usage can be monitored by adding the '-verbosegc' flag to the java command line, which sends GC/heap usage info to stdout or stderr.
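The heap-usage check behind this workaround can also be done from inside the JVM with the standard MemoryMXBean; a minimal sketch (the 90% threshold is the workaround's suggested trigger, and what "schedule graceful restart" means is left to the application):

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryUsage;

public class HeapMonitor {
    /** Returns current heap usage as a percentage of the max heap. */
    public static double heapUsagePercent() {
        MemoryUsage heap = ManagementFactory.getMemoryMXBean().getHeapMemoryUsage();
        long max = heap.getMax();      // -1 if no max is defined for this pool
        if (max <= 0) {
            max = heap.getCommitted(); // fall back to the committed size
        }
        return 100.0 * heap.getUsed() / max;
    }

    public static void main(String[] args) {
        double pct = heapUsagePercent();
        System.out.printf("heap usage: %.1f%%%n", pct);
        if (pct > 90.0) {
            // This is where the graceful restart described above would be triggered.
            System.out.println("heap usage above 90% - schedule graceful restart");
        }
    }
}
```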
  3. If none of the above suggestions is applicable to the application, then we need to use a JVMPI (JVM Profiler Interface) based profiler like JProbe or OptimizeIt to find out which objects are occupying the java heap. Profilers also give details on the places in the java code where these objects are created. This document does not cover the details of each profiler; refer to the profiler documentation to understand how to set up and start the application with these profilers. In general, JVMPI-based profilers have high overhead and drastically reduce the performance of the application, so it is not advisable to use them in production environments.