Using NVARCHAR2 with the Oracle JDBC thin driver

Hi everyone,
I have recently taken over support of a production system running IS 6.5 SP3 and JDBC Adapter 6.5. The existing flow reads plain text files from a specified location and inserts them into a staging table on Oracle 9i.

  • The source files are in ASCII or GBK encoding.
  • The staging table has a VARCHAR2 column to hold file data.
  • The DB NLS settings are NLS_LANGUAGE=AMERICAN, NLS_CHARACTERSET=ZHS16GBK, NLS_NCHAR_CHARACTERSET=UTF8.

Recently there was an enhancement request to allow files in other encodings (e.g. MS874, Thai) to be read into the staging table. Without touching the database NLS settings, I altered the staging table column type to NVARCHAR2.

Next I recreated the InsertSQL adapter service, setting 'Param JDBC Type' = NVARCHAR and 'Input Type' = 'java.lang.String'. However, the Thai characters come out corrupted in the staging table (they show up as ?, hex ef,bc,9f). I dumped the pipeline data to the server log just before invoking the insert service, and the characters are fine up to that point.
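
For what it's worth, the bytes ef,bc,9f are the UTF-8 encoding of U+FF1F (a fullwidth question mark), which suggests the value is being converted through the database character set (ZHS16GBK, which has no Thai characters) rather than the national character set. The pipeline check I mean is roughly the following standalone sketch (fileData is just a placeholder for my pipeline field, not the real name):

    // Standalone sketch of the check: dump the code points of the string
    // right before the insert to prove the data is still intact at that point.
    public class DumpChars {
        public static void main(String[] args) {
            String fileData = "\u0E2A\u0E27\u0E31\u0E2A\u0E14\u0E35"; // sample Thai text
            StringBuilder sb = new StringBuilder();
            for (int i = 0; i < fileData.length(); i++) {
                sb.append(String.format("\\u%04x ", (int) fileData.charAt(i)));
            }
            System.out.println(sb); // intact Thai prints \u0e2a \u0e27 ... not \uff1f
        }
    }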

Several JDBC usage notes on the web mention that I need to call setFormOfUse() so the bind is treated as NVARCHAR2. The JDBC Adapter User's Guide does not cover this area. Can anybody show me how to do this in webMethods? Or is it not required with the JDBC adapter? Any pointers to documentation on how to properly configure the adapter would be appreciated.
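
To make the question concrete, this is my understanding of what those usage notes describe in plain JDBC, outside webMethods (a sketch only; the connection URL, credentials, table and column names are placeholders, not the real ones from my system):

    // Plain-JDBC sketch of binding an NVARCHAR2 column with the Oracle thin driver.
    import java.sql.Connection;
    import java.sql.DriverManager;
    import oracle.jdbc.OraclePreparedStatement;

    public class NVarchar2InsertSketch {
        public static void main(String[] args) throws Exception {
            Connection con = DriverManager.getConnection(
                    "jdbc:oracle:thin:@dbhost:1521:ORCL", "scott", "tiger");
            OraclePreparedStatement ps = (OraclePreparedStatement) con.prepareStatement(
                    "INSERT INTO STAGING_TABLE (FILE_DATA) VALUES (?)");
            // Tell the driver this bind targets an N-type column, so the value is sent
            // via the national character set (UTF8) instead of the DB charset (ZHS16GBK).
            ps.setFormOfUse(1, OraclePreparedStatement.FORM_NCHAR);
            ps.setString(1, "\u0E44\u0E17\u0E22"); // sample Thai text
            ps.executeUpdate();
            ps.close();
            con.close();
        }
    }

If setFormOfUse() is not reachable from an adapter service, would setting the driver property oracle.jdbc.defaultNChar=true (which makes the driver treat all character binds as N-types) be a workable alternative, and can a driver property like that be passed through the JDBC adapter connection configuration?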

Thanks in advance!