I’m trying to build an integration that receives FTPed input from an IBM z/OS mainframe. I have successfully built the beginning of the integration as pub.io:streamToBytes, pub.string:bytesToString, and pub.flatFile:convertToValues. It converts the first record type quite admirably, and then falls on its face.
Examining the string variable in the pipeline (after bytesToString is invoked) shows me that the FTP from z/OS is inserting two CR/LF pairs between the records. I believe convertToValues is failing because when it attempts to parse the second record type, it doesn’t see ‘H02’ (the key, which starts in the first byte, with a key length of 3); it sees ‘0x0d 0x0a 0x0d’ (the first CR/LF pair and the CR of the second pair at the end of the previous record). At least that’s what I’m assuming it sees, because it fails with error code 11 (undetermined record type).
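To check that theory before changing anything, it may help to hex-dump the bytes straddling the first record boundary. Here is a minimal sketch (Python rather than a webMethods service, purely for illustration); the 1200-byte record length is taken from the description above, and the function name is my own:

```python
def hex_at_boundary(data: bytes, record_len: int = 1200) -> str:
    """Return a hex dump of the bytes around the end of the first record.

    Shows the last 4 bytes of record 1 and the first 8 bytes that
    follow it. If the CR/LF theory is right, '0d 0a 0d 0a' should
    appear immediately before the next record's key (e.g. 'H02').
    """
    boundary = data[record_len - 4:record_len + 8]
    return " ".join(f"{b:02x}" for b in boundary)
```

Running this against the raw bytes (before bytesToString) would confirm whether the separator really is two CR/LF pairs or something else, such as a single pair plus trailing padding.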
I’ll bet that one way around this is to add 4 extra bytes to the record length to account for the two CR/LF pairs. But this is inaccurate, because the record length is 1200 bytes, not 1204 bytes. If at all possible I’d like the schemas to match the COBOL copybooks they are mimicking.
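Another workaround that keeps the schema at the copybook’s 1200 bytes would be to strip the separators from the byte stream before convertToValues ever sees it. A language-neutral sketch of that idea (in webMethods this would be a small Java service); the record length and the two-CR/LF-pair separator are assumptions based on the symptoms described above:

```python
RECORD_LEN = 1200          # logical record length from the copybook
SEPARATOR = b"\r\n\r\n"    # two CR/LF pairs observed between records

def strip_crlf_padding(data: bytes) -> bytes:
    """Rebuild the stream as back-to-back fixed-length records.

    Walks the input in RECORD_LEN steps and skips the separator the
    FTP transfer inserted after each record, if it is present.
    """
    records = []
    pos = 0
    while pos < len(data):
        records.append(data[pos:pos + RECORD_LEN])
        pos += RECORD_LEN
        if data[pos:pos + len(SEPARATOR)] == SEPARATOR:
            pos += len(SEPARATOR)
    return b"".join(records)
```

This treats the symptom rather than the cause, but it would let the flat-file schema stay true to the copybook while the FTP side gets sorted out.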
Wondering if anyone else has mainframe FTP experience - specifically with IBM z/OS - and can tell me what magic words I need to put in the SYSIN to keep it from sending these four extra bytes per record.