FTP from mainframe issues extra CR/LFs


I’m trying to build an integration that receives FTPed input from an IBM z/OS mainframe. I have successfully built the beginning of the integration as
pub.io:streamToBytes, pub.string:bytesToString, and pub.flatFile:convertToValues. It converts the first record type quite admirably, and then falls on its face.

Examining the string variable in the pipeline (after bytesToString is invoked) shows me that the FTP from z/OS is inserting two CR/LF pairs between the record types. I believe that convertToValues is failing because when it attempts to parse the second rectype it doesn’t see ‘H02’ (the key, which starts in the first byte, for a key length of 3); it sees ‘0x0d 0x0a 0x0d’ (the first CR/LF pair and the CR of the second pair at the end of the previous record). At least that’s what I’m assuming it sees, because it is failing with error code 11 (undetermined record type).
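To make the failure concrete, here’s a small Python sketch (with made-up record data and keys; the real records are 1200 bytes with a 3-byte key) showing what a fixed-length parser sees at the start of the second record once the CR/LF pairs are in the stream:

```python
# Hypothetical data illustrating the problem: two fixed-length records
# with two CR/LF pairs inserted between them by the FTP transfer.
REC_LEN = 1200
rec1 = b"H01" + b"A" * (REC_LEN - 3)
rec2 = b"H02" + b"B" * (REC_LEN - 3)

ftp_data = rec1 + b"\r\n\r\n" + rec2  # separators inserted between records

# A fixed-length parser expects each record's 3-byte key at a fixed offset:
first_key = ftp_data[0:3]                   # b'H01' -- parses fine
second_key = ftp_data[REC_LEN:REC_LEN + 3]  # b'\r\n\r' -- not a known key
```

The second slice lands on the separator bytes instead of ‘H02’, which is exactly the “undetermined record type” situation.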

I’ll bet that one way around this is to add 2 extra bytes to the record length to account for the CR/LF pairs. But this is inaccurate, because the record length is 1200 bytes, not 1202 bytes. If at all possible I’d like the schemas to match the COBOL copybooks that they are mimicking.

Wondering if anyone else has mainframe FTP experience - specifically with IBM z/OS - and can tell me what magic words I need to put in the SYSIN to keep it from sending these four extra bytes per line.


Here’s something you could try:

Even though you’re dealing with fixed-length records, it appears that your records are delimited by CR/LF, correct? Therefore, instead of setting the Record Parser of your flat file schema to Fixed Length, set it to Delimiter and choose carriage return line feed as your record delimiter. All your fields will still be extracted correctly based on how you defined the record structures.
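A Python sketch of the same idea (hypothetical data; the flat file parser does this for you when the Record Parser is set to Delimiter): split on CR/LF instead of counting fixed offsets, and the doubled CR/LF pairs just produce empty pieces you can ignore:

```python
# Delimiter-based splitting: records separated by CR/LF (doubled, as in
# the original post). Record contents here are made up for illustration.
REC_LEN = 1200
rec1 = b"H01" + b"A" * (REC_LEN - 3)
rec2 = b"H02" + b"B" * (REC_LEN - 3)
data = rec1 + b"\r\n\r\n" + rec2 + b"\r\n"

# Split on the delimiter and drop the empty pieces between doubled pairs:
records = [r for r in data.split(b"\r\n") if r]
keys = [r[0:3] for r in records]  # now b'H01' and b'H02' as expected
```

Each surviving piece is still a clean 1200-byte record, so field extraction by position works unchanged within it.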

By the way, convertToValues accepts a string, a stream, or a byte array as input for ffData. So, if you already have a stream object, there’s no need to convert it. Just map the object straight into convertToValues.
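For readers outside webMethods, here’s a toy Python stand-in (the function name and splitting logic are purely illustrative, not the real service) showing the pattern of one entry point accepting a string, bytes, or stream uniformly:

```python
import io

def convert_to_values(ff_data):
    """Toy illustration of a service accepting str, bytes, or a stream.
    Not the real pub.flatFile:convertToValues -- just the input-handling
    pattern: normalize everything to a string, then parse."""
    if hasattr(ff_data, "read"):       # stream-like object
        ff_data = ff_data.read()
    if isinstance(ff_data, bytes):     # byte array
        ff_data = ff_data.decode("ascii")
    return ff_data.split("\r\n")       # stand-in for the real parsing

# All three input forms yield the same result:
convert_to_values("H01\r\nH02")
convert_to_values(b"H01\r\nH02")
convert_to_values(io.BytesIO(b"H01\r\nH02"))
```

That’s why the intermediate streamToBytes/bytesToString steps can be dropped: the service normalizes the input itself.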

  • Percio

Yep, I tried that this morning and it works well. We’ll probably go with that solution.

I also have some mainframe JCL which will:

1. Convert the EBCDIC to ASCII as a binary file on the mainframe, which suppresses the CR/LF pairs; and
2. FTP the resulting binary file (containing valid ASCII, but without CR/LF) to the IS.

This way, the 1200 byte per record fixed length constraint is satisfied.
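Once the file arrives with no CR/LF separators, plain fixed-length slicing recovers the records. A Python sketch (hypothetical data) of what the fixed-length schema then does:

```python
# With no separators in the stream, every record starts at a multiple
# of the record length, so the 3-byte key is always where expected.
REC_LEN = 1200
data = b"H01" + b"A" * (REC_LEN - 3) + b"H02" + b"B" * (REC_LEN - 3)

records = [data[i:i + REC_LEN] for i in range(0, len(data), REC_LEN)]
keys = [r[0:3] for r in records]  # b'H01', b'H02'
```

This is the case the 1200-byte fixed-length schema (matching the COBOL copybook) handles directly.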

I’m going to try it both ways, just to see how well the other way works.
If it does, at least another mainframer can get ahold of me to get the answer. :slight_smile: