InsertSQL from file blank records

I have followed the flat-file tutorial and the JDBC tutorial. Now I’m trying to make a package read from a flat file and put that data into the database. It reads the data from the file; I can see the data in the Results tab. The JDBC adapter inserts a record for each data line in the file. But every column of each inserted row is NULL, even though I have the fields in the Document Type mapped to the table columns. I don’t get an error message. What could be missing? I can’t see any clue when I run it in debug, and I don’t know why my data isn’t getting into the inserted records. I’m new at this, so I might be missing something obvious.

Hi Amy,
This response is late, so maybe you have already resolved this. Anyway, the issue appears to be that contractorSourceFileDT/recordWithNoID is a document list (containing multiple documents) and contractorsFromFileInput is a document (only one document). You need to loop over contractorSourceFileDT/recordWithNoID and invoke the contractorsFromFile adapter service within that loop.

No, actually, this problem still isn’t solved. Thank you, Mary, for responding.

The recordWithNoID was actually supposed to be the default record type, but there was also a header record I was trying to recognize. That didn’t work.

Maybe I’ll define just the one record type and, if the CONTENT of a record looks like the header, skip to the next record.

Thanks for recognizing this, Mary. I’ll see if I can use this clue to fix it. That would be great!

Unfortunately, I don’t THINK the solution is with the default record. I had already tried recreating the layout to try to solve something else I had observed. The header-record layout no longer exists. But the recordWithNoID is still under the Document Type object the way the mapping in my original picture shows.

I just don’t really know what I’m doing for sure. The documentation does not cover it all, even when supplemented by the forum.

Thanks for trying, Mary.

Hi Amy,

As Mary pointed out already, the flat file is returned as a list of records (one line → one record).

You need to loop over that list and map the current instance to your insert service.
Remember to drop the input of the insert service afterwards so you don’t carry over outdated data when certain fields are empty in the next line.
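In ordinary code terms, the pattern looks roughly like this Python sketch. Flow itself has no code view, so the names here (db_insert, insert_input, the sample records) are invented for illustration only:

```python
# Rough Python analogy of LOOP + insert + drop (names are illustrative,
# not webMethods APIs).
rows_inserted = []

def db_insert(row):
    rows_inserted.append(dict(row))   # stand-in for the JDBC adapter insert

insert_input = {}
records = [{"name": "Acme", "phone": "555-0100"},
           {"name": "Beta"}]          # second line has no phone field

for record in records:                # LOOP over the parsed document list
    insert_input.update(record)       # map current document to insert input
    db_insert(insert_input)
    insert_input.clear()              # "drop" the input; without this, Beta
                                      # would inherit Acme's phone number
```

The clear() at the end plays the role of the Drop: without it, the second row would silently reuse the first row’s phone value.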


Hi Amy,
I agree that the default record is not the issue. Unless there is something I don’t see, looping over contractorSourceFileDT/recordWithNoID and invoking the contractorsFromFile adapter service within that loop should insert records into the adapter service target. Of course, as you note, you need to ignore the header record if it is not being identified as such by the flat file schema. Can you confirm via running in debug mode that you actually have documents in the contractorSourceFileDT/recordWithNoID documents list?

Thanks, Mary and Holger.
Yes, it is reading in the values and I can see them when I run it in debug. And I can run the JDBC adapter alone and it will insert some values I give it. And it had been inserting a record into the database for each record in my source file, so it must have been progressing through the series of records.

Currently, it is NOT inserting records because it’s getting stuck, telling me that something would be truncated. I think that means I’m closer: it’s trying to insert more than just NULL values now! So far, for the life of me, I can’t find what’s too big. 9_9 There are 59 columns and 800 rows! I’ve been working on it most of the day. It doesn’t help that the fields change in the adapter if I accidentally roll the mouse wheel. Argh! Hey, I think I’ll try a smaller file! :slight_smile:
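One way to hunt down a truncation culprit outside the tooling is a throwaway script that compares each parsed value’s length against the column width. Everything here is hypothetical (the col_widths and records are made-up examples, not anything from the adapter):

```python
# Hypothetical debugging aid: find which field in which row exceeds its
# column width. col_widths would come from the table definition.
col_widths = {"name": 10, "phone": 12}
records = [{"name": "Acme", "phone": "555-0100"},
           {"name": "A Very Long Contractor Name", "phone": "555-0101"}]

too_big = [(i, field, len(value))
           for i, rec in enumerate(records)
           for field, value in rec.items()
           if len(value) > col_widths[field]]
```

This narrows 59 columns × 800 rows down to the exact offending cells instead of eyeballing them.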

Thanks very much for your suggestions. I’m trying to get to where I can see whether I’m doing what you’re saying.

I did a version using LOOP and it successfully inserts records with data. (Yay!!) But I need to do this using REPEAT rather than LOOP because this is going to entail large files. The version using REPEAT is still inserting all NULLs.

Can you see in the image of these two services why the LOOP delivers data and the REPEAT only inserts NULLs?

The LOOP and REPEAT steps are not intended to accomplish the same thing. LOOP processes an array (your document list in this case), while REPEAT executes a sequence of steps until a certain condition occurs. REPEAT will be of no use for inserting the rows via your adapter service.

Why is the file size of consequence? Are you trying to break it up into smaller pieces with fewer documents each?

I do think a smaller file would be best during development. And the loop will work.

I’ve read the articles about how to process large files, and I expect the ones I’m dealing with to be large. The articles describe using REPEAT with an exit condition of the iterator being NULL. Using REPEAT means the whole file isn’t read into memory at once. My process does successfully loop through the file and stop inserting into the database when it’s done reading the file. It just doesn’t put values into the fields it inserts into the database.

So you think maybe I’m missing something that LOOP does and that REPEAT isn’t doing for me. I was surprised when I found out that I can’t write just one record; it has to be enclosed within LOOP or the writing to the database doesn’t work. That hadn’t occurred to me. Do you think my problem with REPEAT is something like that?

People do use REPEAT for this. It’s written about. They just don’t give an example close enough to mine showing reading from a file.

Hi Amy,

Did you check the “Handling Large Files” section in the Flat File Developers Guide for further information?


Hi, Amy. In your image from 2/22, processContractorFileUSINGREPEAT is not looping over the parsed flat file document type. If contractorSourceFileDT/recordWithNoID is a document list, you must loop over it and invoke the insert adapter service from within that loop, regardless of whether there are multiple documents or only one document actually present. You may be able to map without using the loop if you assign index values, but I don’t see the point in that. I think that if you insert the LOOP before the BRANCH (skip header record), it will work. I’m not familiar with using the REPEAT step to handle large flat files, but what you have seems to be okay as far as I can tell.

Holger -

Yes, I have read the Flat File Schema Developers Guide. I guess I’ll read it again and try to apply it better.

Mary -

I understand what you’re saying, that the document list is a list of one. But I don’t know what to do about that in webMethods.

If I use LOOP, it will read in the whole file at one time, I believe. That wasn’t what I was after.

Hi, Amy. I don’t think it will read the whole file at once if you use REPEAT and, within that, set the input parameter iterate = ‘true’ when you invoke convertToValues. If I’m understanding the documentation correctly, it will parse one record at a time. The loop doesn’t control how many records are read in at a time; setting iterate = ‘true’ on convertToValues does. And REPEAT allows you to exit when there are no more records to parse.
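If it helps to picture it, iterate = ‘true’ behaves roughly like a generator in this hypothetical Python analogy (the function and field names are invented; only the one-record-at-a-time behavior is the point):

```python
# Rough analogy: with iterate='true', convertToValues hands back one
# record per call instead of materializing the whole file in memory.
def parse_flat_file(lines):
    for line in lines:
        yield {"record": line}     # one document at a time, constant memory

parser = parse_flat_file(["a", "b", "c"])
first = next(parser)               # only one record is materialized here
```

Each next() is like one pass through the REPEAT body; the rest of the file stays unparsed until you ask for it.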

It worked!! Thanks, Mary!!!

Why does it need to be before BRANCH and not after it (which I also tried)?

You’ve been so good to help me that I hate to ask for more, but would you know why it’s not processing the last record? The last record in the file doesn’t get written to the database. It was one short before I added LOOP, too.

Hi, Amy. The LOOP actually makes one document from the document list available in the pipeline for processing, which is what is required in order to branch on ApplicantType. Since your BRANCH is to check the values in that one document, it must be within the LOOP. If you had the LOOP within the BRANCH, the available values in the pipeline on the BRANCH would be a document list which cannot be compared to an individual value.
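In rough pseudocode terms (Python here, with invented field names), the required nesting is: BRANCH inside LOOP, never the other way around, because the comparison needs a single document in scope:

```python
# Rough analogy: the BRANCH on ApplicantType must sit INSIDE the LOOP,
# where exactly one document is in the pipeline (names illustrative).
records = [{"ApplicantType": "HEADER"},
           {"ApplicantType": "contractor", "name": "Acme"}]
inserted = []

for record in records:                        # LOOP: one document in scope
    if record["ApplicantType"] == "HEADER":   # BRANCH (skip header record)
        continue
    inserted.append(record)                   # invoke the insert service
```

Swapping the nesting would mean comparing the whole list to "HEADER", which is the type mismatch Mary describes.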

Regarding the last record not being inserted, can you send a current screen snippet?

I don’t have a screenshot. I was comparing my source file to what was in the database. The last contractor wasn’t in there and the count was low by one.

Hi, Amy. I suggest executing in debug mode using a small subset of data to determine why the last record is not being inserted.

After trying things yesterday, it stopped working again. I messed with it some more, experimenting with the LOOP placement. Then I deleted the LOOP and it started working just fine again! All I can think is that moving it around put something in the pipeline, but for the life of me I can’t find anything different between the version that works and the one that doesn’t.

Got it to do the last record by not testing the iterator for null until AFTER putting the record in the database.
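In other words, the working order is roughly this (a hypothetical Python analogy, not webMethods code; the iterator going None stands in for the ffIterator going null):

```python
# Rough sketch of the REPEAT exit ordering. The iterator goes "null"
# when the LAST record is parsed, so testing it BEFORE the insert drops
# that record; test it AFTER instead (names illustrative).
def convert_to_values(lines, pos):
    record = {"line": lines[pos]}
    iterator = pos + 1 if pos + 1 < len(lines) else None  # null at EOF
    return record, iterator

lines = ["a", "b", "c"]
inserted, iterator = [], 0
while True:                          # REPEAT
    record, iterator = convert_to_values(lines, iterator)
    inserted.append(record)          # insert first...
    if iterator is None:             # ...then exit when the iterator is null
        break
```

With the null test before the append, "c" would be parsed but never inserted, which matches the off-by-one count you saw.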

Great, as long as it works and things are resolved, that’s a good thing. Just use caution with loop steps and iterations, and keep track of the pipeline variables; sometimes things break unknowingly, and testing with a smaller subset of iterations helps surface clues.