Fixed-Length Flat File Processing: Need Validation

Hi, all,

I want to run this by all of you regarding my attempt to parse this flat file into an IS document via a flat file schema. Below is the flat file layout (each record has a record ID as the first character):

MasterHeaderRecord: 80 bytes, 1 only
-BatchHeaderRecord: 80 bytes, 1 or multiple
--DetailRecord1: 80 bytes, 1 or multiple
--DetailRecord2: 80 bytes, 1 or multiple
-BatchTrailerRecord: 80 bytes, 1 or multiple
MasterTrailerRecord: 80 bytes

I built the ffs based on the above layout, but when validating against the ffs, it gives me “found no record”. I was thinking about breaking this whole string into an array of strings and then performing the rest of the processing myself (roughly along the lines of the sketch after the sample data). I would rather take advantage of the flat file package, though, so if any of you have come across creative ways of handling such a nested flat file structure, please advise. Thanks a lot…

Sample data:
A00COMPANY111000000088801072414031111111111111111111111111TELESCOT00000008881111
B111111000112001072320010724SEBP90017743004TD1BANK111111111111110099999991811111
R0000010001000000020000000000000000000000605020411802111111111111111111111111111
2200107230004X9L5U1R1111111111111111111MS1BXXXXXXXXXXOANG11111111111111111111111
R00000200010000000146780000000000000000NW274200003201111111111111111111111111111
2200107230004J7K4I0R1111111111111111111MS1LXXXXXXXXXXU11111111111111111111111111
R0000030001000000013492000000000000000LMS409100015902111111111111111111111111111
2200107230004X7W2J1R1111111111111111111DONXXXXXXXXXXXLSON11111111111111111111111
Y1111110001000300000000000481701111111111111111111111111111111111111111111111111
B111111000212001072320010724SEBP90017743003ROYAL1DIRECT1111111110099999991811111
R00000100020000000181980000000000000000000LMS25050011111111111111111111111111111
220010723000306286111111111111111111111SUZXXXXXXXXXXXBELL11111111111111111111111
R00000200020000000150000000000000000000000LMS27680215111111111111111111111111111
220010723000300336111111111111111111111MOXXXXXXXXXXXELL1111111111111111111111111
Y1111110002000200000000000331981111111111111111111111111111111111111111111111111
B111111000312001072320010724SEBP90017743809BCCCU11111111111111110099999991811111
R00000100030000000093520000000000000000000LMS26120019111111111111111111111111111
2200107230809447450111111111111111111111BERXXXXXXXXXXXXXXXXXXXXAR111111111111111
Y1111110003000100000000000093521111111111111111111111111111111111111111111111111
X00COMPANY1110000000888000006000000907200000020111111111111111111111111111111111
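
For what it's worth, the manual fallback I had in mind looks roughly like the sketch below: plain Java outside the flat file package, using the record IDs from the sample above ('A' master header, 'B' batch header, 'Y' batch trailer, 'X' master trailer, everything else treated as a detail record); the class and field names are made up just for illustration.

import java.util.ArrayList;
import java.util.List;

// Rough sketch of the manual fallback: cut the payload into fixed 80-byte
// records and group them into the Master/Batch/Detail nesting using the
// record ID in the first character.
public class FlatFileSplitter {

    static final int RECORD_LENGTH = 80;

    public static class Batch {
        public String header;
        public final List<String> details = new ArrayList<>();
        public String trailer;
    }

    public static class Document {
        public String masterHeader;
        public final List<Batch> batches = new ArrayList<>();
        public String masterTrailer;
    }

    // Break the whole string into an array of 80-byte records
    // (or split on line breaks if the file uses them).
    public static List<String> split(String payload) {
        List<String> records = new ArrayList<>();
        if (payload.contains("\n")) {
            for (String line : payload.split("\\r?\\n")) {
                if (!line.isEmpty()) records.add(line);
            }
        } else {
            for (int pos = 0; pos + RECORD_LENGTH <= payload.length(); pos += RECORD_LENGTH) {
                records.add(payload.substring(pos, pos + RECORD_LENGTH));
            }
        }
        return records;
    }

    public static Document group(List<String> records) {
        Document doc = new Document();
        Batch current = null;
        for (String record : records) {
            switch (record.charAt(0)) {
                case 'A':                      // master header, once
                    doc.masterHeader = record;
                    break;
                case 'B':                      // batch header starts a new batch
                    current = new Batch();
                    current.header = record;
                    doc.batches.add(current);
                    break;
                case 'Y':                      // batch trailer closes the batch
                    if (current != null) current.trailer = record;
                    current = null;
                    break;
                case 'X':                      // master trailer, once
                    doc.masterTrailer = record;
                    break;
                default:                       // 'R', '2', ... = detail records
                    if (current != null) current.details.add(record);
            }
        }
        return doc;
    }
}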

Jake,

Can you please elaborate on the flat file schema creation: what you have done so far and how you are attempting to parse the flat file, so that we can help you accordingly?

What are the record definitions for the sample shown above?

HTH,
RMG

I created the following record definitions in an ffd:
MasterHeaderRecord
-BatchHeaderRecord
--DetailRecord1
--DetailRecord2
-BatchTrailerRecord
MasterTrailerRecord

Then I created an ffs using record references to the record definitions in this ffd. Meanwhile, for each record, the first character is the record identifier. Maybe I am missing something here… where do I specify this record ID, given that each record has a different definition?
Thanks…

J-

Jake,

So this means you have a record definition for each record type that needs to be identified in the flat file for parsing. Can you describe that setup in more detail?

The reason you are getting “found no record” is that the schema is not able to match the record definitions you created against the data, so it cannot parse the loops, lengths, identifiers, etc.

HTH,
RMG

The record definitions are defined in an ffd, and then an ffs is created via record references to these record definitions… I want to find a way to identify each record, and the loops. So far, I have specified that the first character of each record is a record identifier, but I could not define the value of that record identifier…

Actually, in order for the parsing to work, you should provide the right record definitions and make sure the corresponding identifiers always exist in the flat file.

If the records are still not being matched, then use recordWithNoID.
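
As a plain-Java analogy (not the flat file parser itself), the default record simply catches whatever the identifiers do not match. The identifier-to-record mapping below is only guessed from your sample; the names are for illustration.

import java.util.List;
import java.util.Map;

// Analogy of the identifier matching: the first character of each record is
// looked up against the record definitions, and anything that does not match
// falls through to the default record ("recordWithNoID").
public class RecordRouter {

    // Identifier value -> record definition name, guessed from the sample data.
    static final Map<Character, String> DEFINITIONS = Map.of(
            'A', "MasterHeaderRecord",
            'B', "BatchHeaderRecord",
            'R', "DetailRecord1",
            '2', "DetailRecord2",
            'Y', "BatchTrailerRecord",
            'X', "MasterTrailerRecord");

    public static void route(List<String> records) {
        for (String record : records) {
            String name = DEFINITIONS.getOrDefault(record.charAt(0), "recordWithNoID");
            System.out.println(name + ": " + record);
        }
    }
}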

HTH,
RMG

Hi,

Need urgent help on the following flat file issue.
We have a flat file with the following structure coming into webMethods.

Header record
update record
delete record
trailer record

Header is always in the first position and is not repeatable
Update and delete records are repeatable and can come in any order
Trailer is always in the last position and is not repeatable

The flat file schema that we created is working fine, but in the results I see all the similar records grouped together. The result is:
Header record
All the update records
All the delete records
Trailer

But we want the result to be in the order it comes in. For example, if I receive H, U1, D1, U2, D2, T, then my result should be in the same order, and not H, U1, U2, D1, D2, T.

Hope I am clear. Please let me know if you have any suggestions to resolve this issue. Thank you.

Kalpana,
I think you’ve set the “ordered” property of the schema to false.
That explains why all the similar records are bundled instead of being sequenced as they occur in the input file.

Try setting the “ordered” property of the schema to “True” and making the “delete” record a child of “update”.
Shubhro

If the update and delete records are always a pair with update followed by delete, then Shubhro’s plan will work. But if they are independent and the delete could come first (“repeatable and in any order” according to Kalpana’s post) then the plan would break down. If the second case is true then it may be possible to treat both update and delete as undefined data using a default record format. How successful this would be probably depends on the structure and similarity of the two records.
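
To make the order-preserving idea concrete, here is a rough plain-Java illustration of the manual equivalent (not the schema mechanism itself): run every record through a single loop and tag it with its type as you go, so the output keeps the input sequence. Reading the type from the first character is only an assumption for illustration.

import java.util.ArrayList;
import java.util.List;

// Illustration only: one pass over the records, tagging each with its type,
// so the result keeps the input order (H, U1, D1, U2, D2, T) instead of
// grouping updates and deletes separately.
public class OrderPreservingParse {

    public static class Entry {
        public final int sequence;   // position in the input file
        public final char type;      // 'H', 'U', 'D' or 'T' in the example
        public final String raw;     // the unparsed record

        public Entry(int sequence, char type, String raw) {
            this.sequence = sequence;
            this.type = type;
            this.raw = raw;
        }
    }

    public static List<Entry> parse(List<String> lines) {
        List<Entry> entries = new ArrayList<>();
        int seq = 0;
        for (String line : lines) {
            if (line.isEmpty()) continue;
            entries.add(new Entry(seq++, line.charAt(0), line));
        }
        return entries;
    }
}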

Tim