Transform CSV columns in Document Type

Hi all!

I have a requirement where I need to read a CSV file and write its contents to a database. I know I have to use a Flat File Schema/Dictionary, but I can’t figure out how to do this using these features.

The CSV file has a structure similar to this:

Column A, Column B, Column C,
var 1, var 1, var 1,
var 2, var 2, var 2,
var 3, var 3, var 3,

Can someone give me a hand with this?



  1. Create a flat file dictionary
  2. Create 3 fields in your dictionary, e.g. columnA, columnB, columnC
  3. Create a record definition in your dictionary, e.g. myRecord
  4. Add columnA to your record definition as a new “field reference” (change the extractor type to Nth Field, set the position to 0, and mark the field as mandatory before clicking Finish)
  5. Repeat step #4 for columns B and C, setting the positions to 1 and 2, respectively.
  6. Save your dictionary
  7. Create a flat file schema
  8. Set the record parser to “Delimiter”
  9. Set the record delimiter accordingly, e.g. Unix = newline, Windows = CRLF
  10. Set the field delimiter to a comma (,)
  11. In the Properties pane, set the Default Record to the record you just created in your dictionary
  12. Save your schema
  13. Click the button to generate the document type from the schema
  14. In your Flow service:
    14a. Read the file contents into a byte array or an input stream (e.g. via getFile, FTP, file polling, etc.)
    14b. Call pub.flatFile:convertToValues, map the byte array or stream to ffData, and set ffSchema to the name of your flat file schema
    14c. Call your JDBC adapter service, mapping the flat file document to its inputs
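To make the end-to-end flow of step 14 concrete, here is a rough Python sketch of what convertToValues plus the JDBC adapter call accomplish, using the sample CSV from the question. This is not Flow code, just an illustration of the logic; the table and field names (my_table, columnA, etc.) are invented for the example.

```python
import csv
import io
import sqlite3

# Sample data matching the structure from the question (header row + records).
csv_text = """Column A, Column B, Column C,
var 1, var 1, var 1,
var 2, var 2, var 2,
var 3, var 3, var 3,
"""

def convert_to_values(data):
    """Rough stand-in for pub.flatFile:convertToValues: parse delimited
    records into a list of field dictionaries, skipping the header row."""
    reader = csv.reader(io.StringIO(data))
    rows = []
    for i, row in enumerate(reader):
        # Strip whitespace and drop the empty field left by the trailing comma.
        fields = [f.strip() for f in row if f.strip()]
        if i == 0 or not fields:
            continue  # skip the header line and any blank lines
        rows.append({"columnA": fields[0],
                     "columnB": fields[1],
                     "columnC": fields[2]})
    return rows

# Stand-in for the JDBC adapter service: insert each parsed record.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE my_table (columnA TEXT, columnB TEXT, columnC TEXT)")
records = convert_to_values(csv_text)
conn.executemany(
    "INSERT INTO my_table VALUES (:columnA, :columnB, :columnC)", records
)
conn.commit()
```

In the real Flow service the parsing is driven by your dictionary/schema rather than hand-written code, and the insert is whatever your JDBC adapter service maps.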

I tried to give you enough info without getting into minute, boring details, so you may have to fill in the “blanks.”

Good luck,
