Flat File problem

We have a requirement wherein I need to create a string from a flat file. I created the schema for the flat file. We have a group of 3 records that repeats a maximum of 200 times. Each record is the same length, and all records have the same record identifier. I used the structure below for the schema:

recordWithNoID (Repeats 200 times max)

When I used this structure to create a string with the convertToString service, it generated one extra record at the beginning, before record1. This is because of the parent record, recordWithNoID. If I remove recordWithNoID, I can't group those 3 child records. How can I remove that extra record at the beginning? Can somebody please advise?
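For illustration only (plain Python, not webMethods flow code), here is a sketch of the symptom described above: if a serializer emits a line for every record node, including the group wrapper, then a wrapper record with no fields of its own shows up as an extra blank record before record1. The record names and the 20-byte record length are assumptions for the example.

```python
RECORD_LEN = 20  # assumed fixed record length

def serialize_with_wrapper(groups):
    """Naive serializer: every node, including the empty group
    wrapper recordWithNoID, contributes its own line."""
    lines = []
    for g in groups:
        # The wrapper has no fields of its own, so it becomes a blank record
        lines.append("".ljust(RECORD_LEN))
        for child in ("record1", "record2", "record3"):
            lines.append(g[child].ljust(RECORD_LEN))
    return "\n".join(lines)

flat = serialize_with_wrapper([
    {"record1": "REC1-DATA", "record2": "REC2-DATA", "record3": "REC3-DATA"},
])
print(flat)  # the first line is the unwanted blank record
```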


In the schema you must have specified a Position or Area. Set both to unused for recordWithNoID.


Where have you mapped the output of convertToString?



I didn't use Position or Area in the schema; they are both unused. How can I group the child records so that the group repeats? The structure I am looking for is like this.


I mapped the output of the convertToString service to a string variable in the pipeline out. I need to write it to a file. That part is OK, but I am confused about this repeating group of records. They all have the same record identifier and are fixed length.

Can somebody shed some light on this?

I think you need to create a document array for each record in the document type, i.e.:
record1 (doc array)
record2 (doc array)
record3 (doc array)
Then try mapping these to convertToString and see.

What does your input data look like? It appears that you have assumed the problem is with recordWithNoID, but that may not be the case.
The problem may be a mismatch between how you expect the data to look and what you're actually getting.

I created a structure like this, and this approach worked for me.

recordWithNoID (repeats 200 times max)
----record2 (repeats 1 max)
----record3 (repeats 1 max)

I was able to eliminate the extra record at the beginning by giving recordWithNoID the structure of record1. Thank you all for your input.
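A minimal sketch of the fix described above (plain Python for illustration, not webMethods code; the record names and the 20-byte record length are assumptions): the group record itself carries record1's fields, so each group contributes exactly three lines and no extra record appears at the start.

```python
RECORD_LEN = 20  # assumed fixed record length

def serialize_groups(groups):
    """Each group's parent record IS record1 (it carries record1's
    fields), with record2 and record3 as its children. The parent
    therefore contributes exactly one line instead of an extra blank."""
    lines = []
    for g in groups:
        lines.append(g["record1"].ljust(RECORD_LEN))  # parent = record1
        lines.append(g["record2"].ljust(RECORD_LEN))
        lines.append(g["record3"].ljust(RECORD_LEN))
    return "\n".join(lines)

flat = serialize_groups([
    {"record1": "R HEADER-A", "record2": "R DETAIL-A", "record3": "R TRAILER-A"},
    {"record1": "R HEADER-B", "record2": "R DETAIL-B", "record3": "R TRAILER-B"},
])
print(flat)  # six fixed-length lines, no leading blank record
```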