Flatfile Schema when using Carriage return as delimiter

Hi all,

I created a sample schema for a flat file with the structure below:

xxxxxx004502044 H1001231 00000S2009021320090213 V21170215147348
xxxxxx004502044 L1001231 000012009021320090213 V2000A0011126

When parsing, I use the 28th character as the record identifier, which is either H or L (H is the header and L is a line record). There is no problem with that schema. The record length is 300, and I am using newline as the record delimiter.

My questions are:

1) When can I use carriage return line feed as the record delimiter, and how can I tell whether my sample file uses carriage return line feed or not?
2) I tried using carriage return line feed (\r\n) as the delimiter. When I test the schema it parses correctly, but when I parse via a service that calls convertToValues, it does not parse all the records: it only returns the first header record (H) and nothing else, even though the file contains many records.

Why is this happening? Are there any settings I have to set if I use carriage return line feed in the schema?

Please help me .

Thanks in advance,
David.

Carriage return and line feed is nothing but a new line character… When you have records separated by new line characters, use \n as Record Delimiter…

Use \n instead of using \c\f characters… No separate settings are required… There might be some minor mistake in the schema.

Maybe I’m misreading, but CR/LF is not the same thing as new line. Line feed and new line are synonymous. And \f (form feed) is not a typical end-of-line (EOL) marker.

Carriage return = CR = 0x0d = 13 = \r
new line = NL = Line feed = LF = 0x0a = 10 = \n
Form feed = FF = 0x0c = 12 = \f

Conventional end of line terminators for various platforms are:

CR/LF for Windows/DOS and others
LF for Unix
CR for Mac

Typically, for successful and consistent text file handling, one will specify that all incoming files use just one type of EOL. Trying to support all three is problematic and normally unnecessary.

Which terminator a particular file uses depends on a number of factors. If someone uses a text editor to create the file, the file will typically have the conventional EOL for the platform where the file is created or edited. For example, if someone uses Notepad on Windows, the file will have CR/LF for EOL.

Some editors do automatic translation of EOL: they read and convert any EOL marker to their own conventional EOL. This can be confusing because the file looks one way in the editor and another way on disk.

Some editors allow the user to specify what EOL convention to use when saving the file. Textpad is one example. There are others.

If the file is created programmatically, the programmer almost certainly has control over which EOL marker is used.
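For instance, in Java you can pin the EOL explicitly instead of relying on the platform default line separator. This is only a sketch; the class name, method name, and record list are placeholders, not anything from the original post:

```java
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;

public class FixedWidthWriter {

    // Explicit EOL marker; change to "\n" if the schema expects LF only.
    private static final String EOL = "\r\n";

    public static void writeRecords(Path target, List<String> records) throws IOException {
        StringBuilder sb = new StringBuilder();
        for (String record : records) {
            // Never append System.lineSeparator() here; the output would then
            // differ depending on the platform where the file is generated.
            sb.append(record).append(EOL);
        }
        Files.write(target, sb.toString().getBytes(StandardCharsets.US_ASCII));
    }
}
```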

Use an editor that has a binary viewing mode, or use a binary file viewer (one that shows the bytes in the file as hex), to view the file and determine which EOL marker is in use.
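If you don't have a hex viewer handy, a few lines of Java can tell you the same thing. A minimal sketch that counts the EOL markers found in a file (class and method names are mine, for illustration only):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public class EolDetector {

    // Counts CRLF, bare LF, and bare CR occurrences so you can see which
    // convention the file actually uses on disk.
    public static void report(Path file) throws IOException {
        byte[] bytes = Files.readAllBytes(file);
        int crlf = 0, lf = 0, cr = 0;
        for (int i = 0; i < bytes.length; i++) {
            if (bytes[i] == 0x0d) {                           // CR
                if (i + 1 < bytes.length && bytes[i + 1] == 0x0a) {
                    crlf++;
                    i++;                                      // skip the LF belonging to this CRLF
                } else {
                    cr++;
                }
            } else if (bytes[i] == 0x0a) {                    // bare LF
                lf++;
            }
        }
        System.out.printf("CRLF: %d, LF only: %d, CR only: %d%n", crlf, lf, cr);
    }

    public static void main(String[] args) throws IOException {
        report(Paths.get(args[0]));
    }
}
```

If the output shows only CRLF counts, your sample file uses carriage return line feed and \r\n is the delimiter to declare in the schema.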

If you’re defining an integration that accepts flat files and you control the flat file definition, explicitly specify what the EOL marker must be when files are submitted to IS. If you do not control the flat file definition, ask the team that does what the EOL markers will be and indicate that the EOL marker should always be the same. In either case, use the specified EOL marker in your FF schema definition.

When transferring files via FTP at any point along the integration path, be aware of which mode is being used. In ASCII transfer mode, the FTP server will translate EOL and end of file markers to the convention used by the receiving system. In binary mode this is not done. I normally indicate that all transfers be done using binary mode to avoid EOL and EOF marker confusion.
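For example, with a client library such as Apache Commons Net you would force binary mode explicitly before the transfer so no EOL translation occurs. This is a sketch under that assumption; the host, credentials, and file names are placeholders:

```java
import java.io.FileInputStream;
import java.io.InputStream;

import org.apache.commons.net.ftp.FTP;
import org.apache.commons.net.ftp.FTPClient;

public class BinaryFtpUpload {

    // Uploads a local file in binary mode so CR/LF bytes arrive unchanged.
    public static void upload(String host, String user, String pass,
                              String localFile, String remoteFile) throws Exception {
        FTPClient ftp = new FTPClient();
        try {
            ftp.connect(host);
            ftp.login(user, pass);
            ftp.setFileType(FTP.BINARY_FILE_TYPE); // binary mode: no EOL/EOF translation
            try (InputStream in = new FileInputStream(localFile)) {
                ftp.storeFile(remoteFile, in);
            }
        } finally {
            if (ftp.isConnected()) {
                ftp.logout();
                ftp.disconnect();
            }
        }
    }
}
```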

Hope this helps.