Transferring database from mainframe to Windows

We are using an FTP process to transfer decompressed mainframe files from the mainframe to a Windows server. When these files are loaded into the database on Windows, a conversion routine provided by SAG converts various field formats from EBCDIC to ASCII-compatible formats.

This step can apparently be avoided by using the RDW parameter within the FTP process. We have tried this, and for some reason it only works on files decompressed to disk; for files decompressed to tape, the RDW conversion does not work. Whether to disk or to tape, the RECFM is defined the same way on the mainframe side (as Variable).

Has anyone had any experience transferring files to Windows using FTP and the RDW parameter?

IBM has documented this error when processing tape files.

Change the SITE parameter to

SITE RDW READTAPEFORMAT=V

The additional parameter has no ill effect on disk files.
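For example, from a command-line FTP client on Windows, the server options can be sent with quote (the host name and dataset name below are placeholders):

ftp mfhost
binary
quote site rdw readtapeformat=V
get 'PROD.UNLOAD.DATA' unload.dat
quit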

Hi,

My situation is similar, but on Linux.

Does anyone have any experience with transferring an Adabas file on the mainframe to an Adabas file on Linux?

I have been using the sequence below:

Option 1 - using RDW

  1.  UNLOAD DECOMPRESS on the MF;

  2.  FTP binary, using the RDW option, to the Windows machine;

  3.  WINZIP;

  4.  FTP binary to the Linux machine;

  5.  UNZIP;

  6.  COMPRESS+LOAD using the RDW option and EBCDIC (see the sketch after this list).
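As a rough sketch of step 6 on the Linux side: the Open Systems utilities take their sequential files from environment variables, and the CMPIN/CMPDTA/CMPERR names below match the datasets in the log that follows. The adacmp parameters are the ones quoted later in this thread; database 241 is taken from the log, file 71 from the file name in option 2, and the dbid/fnr keyword spellings are assumptions to be checked against the ADACMP documentation.

export CMPIN=unload.dat    # decompressed input from the mainframe
export CMPDTA=unload.cmp   # compressed output
export CMPERR=unload.err   # rejected records
adacmp dbid=241 fnr=71 fdt record_structure=rdw source=(ebcdic,high)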
    

After some records were added, an abend occurred. See below:
%ADACMP-I-STARTED, 16-JUL-2009 09:56:25, Version 5.1.4.01(04) (Linux 32Bit)
%ADACMP-I-DBON, database 241 accessed online

%ADACMP-W-ERROR, Field = RB, ISN = 208928, Offset = 6782
%ADACMP-E-IOSUBERR, IO subsystem error (20/32): Buffer too small for read
%ADACMP-I-IOCNT, 1094 IOs on dataset CMPDTA
%ADACMP-I-IOCNT, 1 IOs on dataset CMPERR
%ADACMP-I-IOCNT, 208929 IOs on dataset CMPIN
%ADACMP-I-IOCNT, 1 IOs on dataset ASSO
%ADACMP-I-ABORTED, 16-JUL-2009 09:57:03, elapsed time: 00:00:3

%ADAERR-I-STARTED, 16-JUL-2009 09:57:06, Version 5.1.4.01(04) (Linux 32Bit)

%ADAERR-F-ERROR, Field = RB, ISN = 208928, Offset = 6782
%ADAERR-F-ERR3, input record too short
%ADAERR-I-IOCNT, 1 IOs on dataset ERRIN
%ADAERR-I-TERMINATED, 16-JUL-2009 09:57:06, elapsed time: 00:00:00

Option 2 - using cvt_fmt

  1.  UNLOAD DECOMPRESS on the MF;

  2.  FTP binary to the Windows machine;

  3.  WINZIP;

  4.  FTP binary to the Linux machine;

  5.  UNZIP;

  6.  CVT_FMT -h -f xxx.dat yyyy.dat dbid fnr
    

After some records were added, an abend occurred. See below:

Start converting File071.txt
already 278681 records converted
Error in cvt_fmt: mismatch between DATA and FDT detected
reason: input file incomplete

Questions:

  1. Is there any limit on the use of cvt_fmt? What I noticed: if the file is bigger than 2 GB, both options abend.
  2. Is there something wrong in the procedure (described above) for transferring the files from one machine to the other? It seems that FTP is changing the content (see the checksum sketch after this list).
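As a sketch for question 2: hash the decompressed file on the Windows machine before zipping, and again on the Linux machine after unzipping; if the digests differ, the FTP (or zip) step altered the content. certutil ships with Windows and md5sum with Linux; the file name is a placeholder.

On Windows:  certutil -hashfile unload.dat MD5
On Linux:    md5sum unload.dat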

Additional information:

  1. The FDT is correct on both sides;
  2. When the file is not so big (fewer than 1 million records), everything works fine; however, for larger files an abend occurs. It could be a coincidence, but…
  3. It is necessary to transfer to Windows first, because the MF is not at the same site as the Linux machine;
  4. When the utility abends after processing some records, there is no consistent record count at which it aborts;
  5. In every situation, the record length doesn’t seem to matter;

So it seems the solution would be to split the huge files into several smaller ones, i.e.: UNLOAD in slices, using a starting ISN and NUMREC; on Linux, ADAMUP should then be used to add the records (a sketch follows).
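A rough sketch of the mainframe side of that idea, with 250,000-record slices (the STARTISN/NUMREC parameter names follow the description above and assume dense ISNs; verify them against the ADAULD documentation):

ADAULD UNLOAD FILE=71,STARTISN=1,NUMREC=250000
ADAULD UNLOAD FILE=71,STARTISN=250001,NUMREC=250000

Each slice would then be transferred and appended on Linux with ADAMUP's mass-add function.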
Are these procedures correct? Does anyone have any other idea about how to solve this?

Thanks

One thing to check: in your MF unload/decompress, ensure that the dataset LRECL is large enough; remember to account for all occurrences of PEs and MUs.
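A purely illustrative calculation: a PE group with 99 maximum occurrences containing a 20-byte field plus an MU field of 10 occurrences of 8 bytes can decompress to as much as 99 × (20 + 10 × 8) = 9,900 bytes for that group alone, so the LRECL must be sized for the worst case across all fields.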

Check ulimit on the target platform. It might be defaulting to 2 GB.
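For example, in a shell on the Linux machine (the -f value is the per-process file size limit, counted in 512-byte blocks):

ulimit -f              # show the current limit
ulimit -f unlimited    # lift it for the current shell, where permitted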

Hi,

See below for the information about ulimit:

core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
file size (blocks, -f) unlimited
pending signals (-i) 8179
max locked memory (kbytes, -l) 32
max memory size (kbytes, -m) unlimited
open files (-n) 1024
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
stack size (kbytes, -s) 8192
cpu time (seconds, -t) unlimited
max user processes (-u) 8179
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited

Does anyone have another idea?

Thanks

Are you using “source=(ebcdic,high)”? For example,

adacmp fdt record_structure=rdw source=(ebcdic,high)

It appears that adacmp is getting to the end of the file, but expecting more data. Either the last few bytes have been truncated, or the format of the file is not what adacmp was expecting.

Hi James,

Thanks for the information, but the adacmp call is correct; we used it exactly as you mentioned. Maybe the file is corrupted, but it is weird: when the file is huge, I have the problem; when the file is not so big, it works fine.

Another hint…

Under Windows, I’ve seen this procedure work with decompressed file sizes of over 120 gigabytes (150 million records), but I haven’t tried anything that large with Linux.

How about trying it without the Zip/Unzip steps?

Hi Ralph,

We tested Compress+Load without using Zip/Unzip, and it didn’t work.

It is really weird: you said that you have loaded 150 million records on Windows, but we tried to load on Windows as well, and it didn’t work.

I think that the problem is in the FTP process, because the same file works when I reload it on the MF.

Does anyone have another hint?

Thanks.

Did you use the SHORT_RECORDS option? On the mainframe you can decompress records in such a way that fields at the end of the record are omitted. In order to compress such decompressed records on Open Systems, you must specify the SHORT_RECORDS parameter.
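For example, extending the adacmp call quoted earlier in this thread (a sketch; check the exact parameter placement against the ADACMP documentation):

adacmp fdt record_structure=rdw source=(ebcdic,high) short_records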

Regards,
Wolfgang