I assume you’re using an Oracle database. This link has information about the Oracle JDBC driver behavior.
A few background points (based on reasonable assumptions, not explicit knowledge of the adapter's inner workings):
- The adapter interacts with the JDBC driver. It relies on the driver for many things.
- The driver maps column data to/from java.sql data types, such as java.sql.Date, java.sql.Timestamp, etc.
- The adapter can map the underlying type to other compatible types. For example, if you select java.util.Date as the type in the adapter service, the adapter converts java.sql.Date or java.sql.Timestamp to java.util.Date when reading, and does the reverse when writing.
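The conversion in the last bullet is cheap because java.sql.Date and java.sql.Timestamp both extend java.util.Date. A minimal sketch (class and variable names are mine, for illustration only):

```java
import java.sql.Timestamp;
import java.util.Date;

public class TypeMapping {
    public static void main(String[] args) {
        // java.sql.Timestamp extends java.util.Date, so an adapter can
        // expose the driver's value as a java.util.Date without copying
        // or losing any of the underlying milliseconds.
        Timestamp ts = Timestamp.valueOf("2024-03-15 14:30:45");
        Date asUtilDate = ts; // widening reference conversion, no data loss

        System.out.println(asUtilDate instanceof Timestamp); // true
        System.out.println(ts.getTime() == asUtilDate.getTime()); // true
    }
}
```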
Starting with version 9.2, Oracle introduced a new TIMESTAMP column data type. With that change, the JDBC driver no longer maps DATE columns to java.sql.Timestamp; it maps them to java.sql.Date, which has no time component.
The JDBC adapter relies on the output of the driver: since the driver now returns a java.sql.Date, the adapter has no access to the time.
The reverse is also true. Even if the adapter passes a java.util.Date to the driver, the driver will map it to java.sql.Date, losing the time portion.
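The loss can be seen without a database at all, just from the semantics of the two Java types. A small demo (the example instant is arbitrary):

```java
import java.sql.Date;
import java.sql.Timestamp;

public class TimeLossDemo {
    public static void main(String[] args) {
        Timestamp ts = Timestamp.valueOf("2024-03-15 14:30:45");

        // Mapping the same instant to java.sql.Date: the type's SQL
        // semantics are date-only, so its toString() (and a driver's
        // DATE binding) present only the date, not the time of day.
        Date d = new Date(ts.getTime());

        System.out.println(ts); // 2024-03-15 14:30:45.0
        System.out.println(d);  // 2024-03-15
    }
}
```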
According to the Oracle site, driver version 11.1 changes the behavior (again) back to the pre-9.2 mapping.
One way to address this for driver versions 9.2 through 11.0 is to use custom SQL adapter services, which let you control the JDBC data types used when interacting with the database. Set the JDBC data type for a field to TIMESTAMP and its field type to java.util.Date (or java.sql.Timestamp), and the time will be preserved.
Using custom SQL services may not be ideal (more manual work to define things), but it works around the issue nicely.
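In plain JDBC terms, what the custom SQL service achieves is binding the value as a java.sql.Timestamp instead of letting the driver default to a DATE binding. A hedged sketch (the helper name is mine; the adapter does this internally when the field's JDBC type is set to TIMESTAMP):

```java
import java.sql.Timestamp;
import java.util.Date;

public class WorkaroundSketch {
    // Hypothetical helper mirroring the TIMESTAMP binding: wrap the
    // java.util.Date in a java.sql.Timestamp so the full time of day
    // survives the round trip to the database.
    static Timestamp toTimestampParam(Date value) {
        return new Timestamp(value.getTime());
    }

    public static void main(String[] args) {
        Date input = new Date(1_700_000_000_000L);
        Timestamp bound = toTimestampParam(input);

        // In a real service this would feed a statement parameter, e.g.:
        // ps.setTimestamp(1, toTimestampParam(input));
        System.out.println(bound.getTime() == input.getTime()); // true
    }
}
```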