Best practices on receiving csv / comma delimited files

I will be tasked with receiving a large .csv file, converting it to an IS document, parsing it, etc.

I’m curious whether there are any best practices you can suggest, because I have never worked with a .csv file in webMethods before.

I’m concerned about things such as:

  • The first line contains header name information that I will need to skip
  • Whether reading a large CSV file is quicker with Java than with a flat file dictionary and getFile
  • Given the large size, whether the CSV should be brought in as a stream
  • How to handle bad special characters in the data. What if someone places a comma in a value itself? It seems that would throw everything off.

Any assistance with any of the above would be appreciated. I wasn’t sure if there was a single best-practices doc that speaks to this.

Thanks.

Hi Joe,

Please have a look at the FlatFile Users Guide.
It also has a section about large file handling.

Documentation can be found in Empower or here:
http://techcommunity.softwareag.com/ecosystem/communities/public/_communities/documentation
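For the streaming question specifically: outside of IS, the idea is simply to read the file line by line so the whole CSV never sits in memory. A minimal plain-Java sketch (the sample file here is a hypothetical stand-in for your large inbound CSV):

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class StreamCsv {
    // Count rows by streaming line by line; only one line is ever in memory,
    // which is the same principle as large file handling in the FF services.
    static long countRows(Path csv) throws IOException {
        long rows = 0;
        try (BufferedReader reader = Files.newBufferedReader(csv)) {
            while (reader.readLine() != null) {
                rows++; // a real service would convert/map each record here
            }
        }
        return rows;
    }

    public static void main(String[] args) throws IOException {
        // Hypothetical sample data standing in for the large inbound file.
        Path csv = Files.createTempFile("sample", ".csv");
        Files.write(csv, java.util.List.of("id,name", "1,Alice", "2,Bob"));
        System.out.println(countRows(csv)); // prints 3 (header + 2 data rows)
        Files.delete(csv);
    }
}
```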

Regards,
Holger

Hi JoeG,
One option for skipping the header line: as you loop over the results of pub.flatFile:convertToValues, branch on /$iteration and skip the mapping when /$iteration = 1.
I’m pretty sure that a delimiter character within the data itself will cause issues and should be avoided.
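In plain Java, the same skip-the-first-row logic looks like this (just a sketch, not IS flow code — the loop counter plays the role of /$iteration):

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.StringReader;
import java.util.ArrayList;
import java.util.List;

public class SkipHeader {
    // Return only the data rows, skipping the first (header) line —
    // the counter stands in for /$iteration in the flow loop.
    static List<String> dataRows(BufferedReader reader) throws IOException {
        List<String> rows = new ArrayList<>();
        String line;
        int iteration = 0;
        while ((line = reader.readLine()) != null) {
            iteration++;
            if (iteration == 1) {
                continue; // like branching on /$iteration = 1 and not mapping
            }
            rows.add(line);
        }
        return rows;
    }

    public static void main(String[] args) throws IOException {
        String csv = "id,name\n1,Alice\n2,Bob\n";
        System.out.println(dataRows(new BufferedReader(new StringReader(csv))));
        // prints [1,Alice, 2,Bob]
    }
}
```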
-Mary

Hi,

When using a flat file dictionary and schema, there is an option to specify that the first row is a header metadata row and should be skipped.

If a field value can contain the field delimiter character, the value should be enclosed in double quotes ("). The delimiter inside a quoted value is then not treated as a delimiter.
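To illustrate that rule, here is a minimal hand-rolled splitter (a sketch only — it also decodes a doubled quote ("") inside a quoted field as a literal quote, which is the common CSV convention; a real integration would rely on the flat file schema instead):

```java
import java.util.ArrayList;
import java.util.List;

public class QuotedSplit {
    // Split one CSV record on commas, but not on commas inside
    // double-quoted fields; a doubled quote ("") inside a quoted
    // field decodes to a literal quote character.
    static List<String> split(String record) {
        List<String> fields = new ArrayList<>();
        StringBuilder field = new StringBuilder();
        boolean inQuotes = false;
        for (int i = 0; i < record.length(); i++) {
            char c = record.charAt(i);
            if (c == '"') {
                if (inQuotes && i + 1 < record.length() && record.charAt(i + 1) == '"') {
                    field.append('"'); // escaped quote inside a quoted field
                    i++;
                } else {
                    inQuotes = !inQuotes; // opening or closing quote
                }
            } else if (c == ',' && !inQuotes) {
                fields.add(field.toString()); // delimiter outside quotes ends the field
                field.setLength(0);
            } else {
                field.append(c);
            }
        }
        fields.add(field.toString());
        return fields;
    }

    public static void main(String[] args) {
        System.out.println(split("1,\"Smith, John\",\"say \"\"hi\"\"\""));
        // prints [1, Smith, John, say "hi"]
    }
}
```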

Regards,
Holger


Thanks for the tips everyone.