I am using the ConvertToString service before writing to a file. All of the values in ffValues are correctly populated, but in the string output one of the field values is missing and is not written to the file. Why is this happening? I rechecked the flat file schema and everything seems to be OK. Is there a bug in convertToString?
(Oddly, if I set a default value for that field in the flat file schema, that value does get populated in the string output; but if I map anything to the field, it goes missing again.)
Try sticking a pub.flow:tracePipeline call in your Flow before the invoke of convertToString. You will get a dump to the IS console of every key in the pipeline along with its Java type.
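If you prefer to check from a Java service, a rough equivalent of that dump (just a sketch using the public IData API; "pipeline" is the IData argument every IS Java service receives) would be:

    IDataCursor cursor = pipeline.getCursor();
    while (cursor.next()) {
        // print each pipeline key with the Java type of its value,
        // much like tracePipeline's console dump
        Object value = cursor.getValue();
        System.out.println(cursor.getKey() + " : "
                + (value == null ? "null" : value.getClass().getName()));
    }
    cursor.destroy();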
Thanks for your immediate response. I did check with the tracePipeline service, but no luck: it too shows all the values correctly populated in the pipeline, yet some fields are still missing from the output of the ConvertToString service.
Should I create the flat file schema again? Will that solve the problem?
This thread is two years old, but I thought I'd reopen it :). I have an issue with the convertToString service. I have a positional schema defined, and converting with it is not giving me the first field, although I can see it in the pipeline. Can you advise on what may have gone wrong? It happens with all the positional schemas I define, in any environment.
Hey guys,
Let me elaborate on the situation:
I have defined a flat file dictionary with records A and B, and created a delimited schema with B under A in the hierarchical structure:
A
  Field1
  Field2
  Field3
  B
    Field1
    Field2
The fields have an Nth position extractor: the first field is at Nth position 1, the second at position 2, and so on. A and B are my record identifiers. In this scenario, when I do convertToString, the first field is not getting populated.
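In Java terms, the document I am handing to convertToString is shaped roughly like this (a sketch only; the values are made up, and it assumes com.wm.data.* is imported in the Java service):

    IData b = IDataFactory.create();
    IDataCursor bc = b.getCursor();
    IDataUtil.put(bc, "Field1", "b1");
    IDataUtil.put(bc, "Field2", "b2");
    bc.destroy();

    IData a = IDataFactory.create();
    IDataCursor ac = a.getCursor();
    IDataUtil.put(ac, "Field1", "a1");            // fields in schema order
    IDataUtil.put(ac, "Field2", "a2");
    IDataUtil.put(ac, "Field3", "a3");
    IDataUtil.put(ac, "B", new IData[] { b });    // B nests under A as a record list
    ac.destroy();

    IData ffValues = IDataFactory.create();
    IDataCursor fc = ffValues.getCursor();
    IDataUtil.put(fc, "A", new IData[] { a });
    fc.destroy();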
Hey,
I tried that, but with no luck. As a workaround I changed the whole schema: I made it recordWithNoSchema and hard-coded the record identifier in the Flow. Now my record identifier becomes field 0 and the fields follow it. I'm still wondering why the Nth position extractor with a record identifier didn't work for me.
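To illustrate (using '*' as the field delimiter), a record now comes out as A*value1*value2*value3, with the hard-coded 'A' occupying field 0 like any ordinary field.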
If you specify your record identifier as 'Nth' field 0 (the first field), then you should identify the data as beginning with field 1. This will cause convertToString to produce output like this: A*Field1*Field2. If you specify the record identifier as beginning at position 0 and the data as beginning with field 1, then the output would look like this: AField1*Field2.

Missing fields are more common, in my experience, with fixed- and variable-length records than with delimited ones. I have seen it happen when, for some reason, the mapping to the IS document type results in the fields being out of order. That is, your pipeline document looks like:
A
-Field 2
-Field 1
You can trace the service in Developer and look at the pipeline to see if this is happening. I once got around it by "initializing" the document, creating a blank instance of it with the Set Value modifier before mapping to it. Someone else may have a better workaround.
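For what it's worth, in Java terms the blank-instance trick amounts to inserting the fields in schema order before the real values are mapped, since an IData preserves insertion order. A sketch, assuming a single record A with two fields and a placeholder schema name "MyFolder:MySchema":

    // Inside an IS Java service body; imports assumed: com.wm.data.*,
    // com.wm.app.b2b.server.Service. doInvoke throws Exception, so wrap
    // it in try/catch in real code.
    IData rec = IDataFactory.create();
    IDataCursor rc = rec.getCursor();
    IDataUtil.put(rc, "Field1", "value1");   // inserting in schema order mirrors
    IDataUtil.put(rc, "Field2", "value2");   // initializing a blank instance first
    rc.destroy();

    IData ffValues = IDataFactory.create();
    IDataCursor fc = ffValues.getCursor();
    IDataUtil.put(fc, "A", new IData[] { rec });
    fc.destroy();

    IData input = IDataFactory.create();
    IDataCursor ic = input.getCursor();
    IDataUtil.put(ic, "ffValues", ffValues);
    IDataUtil.put(ic, "ffSchema", "MyFolder:MySchema");  // your schema's fully qualified name
    ic.destroy();

    IData output = Service.doInvoke("pub.flatFile", "convertToString", input);
    IDataCursor oc = output.getCursor();
    String flat = IDataUtil.getString(oc, "string");
    oc.destroy();

HTH,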
Hi
We too faced some issues like these; we got some very crucial patches for Flat File, Developer, and IS.
Please make sure you have the fixes for Flat File, plus the service packs.
Murali makes an excellent point. I don't think you mentioned which versions of IS and Developer you are using, but for 6.1 many fixes relevant to flat file processing are contained in ID_6-1_SP1 and IS_6-1_SP1. If you haven't already applied these fixes, perhaps they will help with your problem. HTH,