I’m encountering a strange problem. Please take a look at the attached image, which shows the service and the mapping details. I have published the contents of a flat file and written a trigger service to insert the data obtained into the database. Out of the entire content of the file, only three columns (City, Zip and State) are getting inserted; the rest of the data is not being written to the database at all.
If you look at the figure, you can see that I loop over the published document’s array ‘empDet’ and then call insertSQL1, which inserts the data into the database. The rest of the services are there only for debugging purposes. In the debugLog step just under the loop, I try to log the value of the Termination Date field; however, it appears null. In the very next step I run convertToString on the published document’s contents and write that string to a file, and the file written this way contains all the information, including the very Termination Date that appeared null in the debugLog step just above. As a result, all the fields other than the three mentioned above go into the database as “null”. We have been stuck on this for the past couple of days, not knowing what or where we are goofing up. If you have encountered such a problem before, please let me know the solution. The documents and the adapter services have been validated, as I tested them with some sample values. And if it were an error during insertion, Oracle would have thrown errors, which is also not happening.
This problem can only be solved by debugging your entire flow (mapping, looping, published document, insert adapter service step, etc.).
The problem could be with the flow mapping itself, which may be corrupting the document (flatfile empDet) in the subscribing trigger service.
Please make sure once again that the published document actually has data in it at the point when the triggering service receives it. Also check the loop step (In-Array: empDet): step through each iteration of the documentList (you should see each element as a document) and make sure the flat file document fields are correctly mapped to the adapter insert service fields. (Check the map Properties panel to see whether the empDet document array is still showing under Indexing; if so, unmap the elements and remap them.)
If possible, provide us a screenshot of your mapping flow, or upload a sample package with the full flow and sample data that we can run directly.
As I mentioned, the debugLog immediately after the loop step does not print any values other than the three fields that seem to have no problem at all. However, immediately after the debugLog, I do a convertToString and then write the string to a file, and the file written in this manner contains the entire contents of my published document.
I’m uploading the package for your reference. Please check.
I will look into the package and let you know if I notice anything.
Just a note: don’t start any package name with “Wm”, since that prefix is reserved for the default packages provided by webMethods.
Please take some time to register yourself on this site, so that everyone will be glad to respond.
Is the work you are doing a real-time assignment, or something like training?
As I told you, it is caused by a mapping mistake, specifically in the documentReference structure.
So install this package: I made some changes to the publishable document type, and now the insert service will work fine and you can see all the data in the DB.
Actually, I noticed early on that the FlatFile schema document fields do not match those in documents:empDetailSchemaDT, e.g. “Address Line 1” vs. “Address_Line_1”, “Address Line 2” vs. “Address_Line_2”, and similarly for the other elements like firstname, lastname, etc. in the documentReference of the subService1 service. (You can verify this before loading the attached package, since it will override the existing one, and then you will see what I am talking about.)
Remember to always debug the flow and check the pipeline: document structures, field names, etc.
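The nulls follow directly from how name-based mapping behaves: fields are copied by exact name, so a mismatch like “Address Line 1” vs. “Address_Line_1” silently leaves the target field null, while names that happen to match on both sides (City, Zip, State) come through. A minimal sketch of the idea in plain Python (not webMethods code; the field names are illustrative):

```python
# Name-based mapping: only fields whose names match exactly on both
# sides are copied; unmatched target fields silently become None.
def map_by_name(source: dict, target_fields: list) -> dict:
    return {name: source.get(name) for name in target_fields}

record = {"City": "Austin", "Zip": "73301", "Address Line 1": "1 Main St"}
target = ["City", "Zip", "Address_Line_1"]  # underscore vs. space mismatch

result = map_by_name(record, target)
print(result)
# City and Zip come through; Address_Line_1 is None, with no error raised
```

Note that nothing fails loudly: the mismatch produces nulls, not exceptions, which is exactly why the inserts succeeded with mostly null columns.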
Anyway, the problem is solved now, so move forward with your process.
Note: after installing this package, sync the document (documents:empDetailSchemaDT) to the Broker again.
That worked. I modified the names in my flat file dictionary to match the ones appearing in the document type. I had a hunch that the field names were the factor, since “City”, “Zip”, etc. were names that matched everywhere and those fields were getting inserted, but I did not know where to make the modifications.
So does this mean that when validating the document type against the schema, only the structure is validated, and finer details like the field names are not?
Glad you understood the problem based on my comments.
Actually, convertToValues validates against the flat file schema, not against the document type. So the mistake was in the document type having different field names, though I am still not sure why the generated document type had different field names. Did you create it manually?
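This distinction is easy to see in a small sketch: parsing a delimited record against a schema is positional, so the parse succeeds no matter what names the schema assigns to each position, while a downstream consumer that looks fields up by name quietly gets nothing as soon as the names drift. (Plain Python illustration, not webMethods code; names are made up.)

```python
# Parsing a delimited record is positional: the schema merely assigns
# a name to each position, so parsing succeeds whatever the names are.
def parse_record(line: str, schema_fields: list) -> dict:
    return dict(zip(schema_fields, line.split(",")))

row = parse_record("John,Austin,73301", ["First Name", "City", "Zip"])
print(row)  # all three values parse without error

# A consumer expecting different names simply finds nothing for them:
print(row.get("First_Name"))  # None: a naming mismatch, not a parse error
print(row.get("City"))        # 'Austin': matching names still come through
```

So validation passing at parse time proves the structure is right, not that the names agree with every document type that consumes the data downstream.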
Anyway, move forward with your process, and make sure the document type structure is always created correctly. Always check the pipeline thoroughly; it helps in situations like this, and that is exactly how I found the problem. Sometimes simple mistakes take a whole day of our time to resolve.