MAJOR MAPPING ISSUE IN 6.1: PLEASE LET ME KNOW IF YOU'VE EXPERIENCED THIS

We are having major issues with some mappings we upgraded from 4.6 to 6.1. Services that worked perfectly before now fail because of missing maps that ARE NOT MISSING… they are drawn in Developer, but in the server log you will see the standard:

Copy failed: No source data available: to=/E1EDKA1_Temp/PARGE, from=/E1EDKA1_Source/PARGE

Print the pipeline and the values are there, but the map simply will not take. In some cases we have redrawn the maps and they then work, but what can cause this issue??

Has anyone else experienced these types of mapping problems in the new version?? None of our maps are complex, just simple IDoc-to-IDoc conversions from one SAP system to another… These exact maps have worked in 4.6 production for the longest time, and now they do not in 6.1… Any ideas??

Hi Jim,

We are having a similar issue, although in our case it is not an upgrade: we are working on version 6.0.1, and it is a direct mapping from an IDoc to a flat-file structure. Everything works fine, but these messages show up in the server log. We couldn't figure out why they appear, or whether they are warnings that we can ignore.

The messages we are getting look like this:
[609]2004-03-15 09:33:12 CST [ISC.0050.0019V2] Copy failed: No source data available: to=/ZMATERIALMASEXT1, from=/DB_SAP_Catalyst.MaterialMaster.Docs:ZMATEXT1
[608]2004-03-15 09:33:12 CST [ISC.0050.0019V2] Copy failed: No source data available: to=/DB_SAP_Catalyst.MaterialMaster.Docs:ZMATEXT1, from=/ZMATEXT1

They don't point out the fields. Please let me know if you find out anything on your end, or whether there is a mistake on our side.

Thanks in advance,
kamal

Kamal,

This is actually a warning, not an error. You have your debug log level set high. If you lower it to 4 or 1 (as in production), you will not see these.

It is just telling you that a mapped field is null. If for some reason you want to get rid of this message, then for each warning you right-click the map line (wire) and choose Properties. In the first tab, you enter a copy condition on the field name, so in your case you would put the following:

%DB_SAP_Catalyst.MaterialMaster.Docs:ZMATEXT1%<>null

If the value of that document field is not null, it will be mapped; otherwise the copy is skipped and the warning is not logged.

Otherwise, you can just turn the log level down.
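In plain terms, the copy condition behaves like the following Python analogue (this is an illustration of the logic, not actual Integration Server code; the field name is taken from Kamal's log):

```python
# Python analogue of a map copy guarded by a not-null copy condition,
# i.e. %DB_SAP_Catalyst.MaterialMaster.Docs:ZMATEXT1%<>null

def copy_with_condition(source: dict, target: dict, field: str) -> None:
    """Copy `field` from source to target only when the source value exists.

    When the condition is false, the copy is silently skipped -- which is
    why the "Copy failed" warning no longer appears in the server log.
    """
    value = source.get(field)
    if value is not None:
        target[field] = value

source = {"ZMATEXT1": None}   # empty source field
target = {}
copy_with_condition(source, target, "ZMATEXT1")
print(target)  # {} -- nothing copied, and no warning logged

copy_with_condition({"ZMATEXT1": "some text"}, target, "ZMATEXT1")
print(target)  # {'ZMATEXT1': 'some text'}
```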

HTH,

Ray

We’ve seen this problem manifest itself with 4.6-to-6.0.1 upgraded flows - it was the result of some strange loop/indexing definition issues.

As an example, save your flow.xml when you find a broken “map that isn’t there” - then redraw the map, and compare the new flow.xml to the old. In our case there was an extra index being thrown in there somewhere.

At least that’s my memory of it - it was a while back that we figured out what was wrong and fixed it. We haven’t had a recurrence since (obviously, we’re not developing any 4.6 flows and migrating them anymore).

Hi,

I am trying to map the ROW_ID field of the “Notification Document Published” from the JDBC Adapter to the Siebel Query Service in a flow service using the pipeline. The Siebel Query Service needs ROW_ID as input.

When I check by running the flow service in Developer, it works fine, but when I run it by generating the actual notification, the server shows ROW_ID = “”. It is not able to map the published document.

Please suggest.
Thanks,
Dilip

Hi all,

I am getting the exact same problem as Dilip (mapping two fields, with a non-empty source field, works fine in debug mode with Developer but fails with “Copy failed from… to…” in non-debug mode).

Did someone ever get the solution to this issue?

Thanks in advance

Philippe

Hi again,

I forgot to mention that I also get an error about a failed transaction (“commit failed: more than 1 local trans enlisted”), although my flow service performs only read-only access to the database, for routing and cross-reference purposes.

I was wondering whether this transaction issue might be the cause of my source document type fields being empty, meaning that the Copy failed statements would actually be correct! Unfortunately there is not much I can guess from the log as to what the transaction error relates to…

It seems that the debug and runtime modes do not share the same underlying engine (as they lead to different results), which is quite surprising.

Below is the complete error log.

Sorry for the incomplete post!

Philippe

2004-11-16 17:32:57 CET [ISP.0090.0004D] PMO: HubProcess – PMO: Start of Service

2004-11-16 17:32:57 CET [ISC.0050.0019V2] Copy failed: No source data available: to=/getCanonicalXRefDynInput/application_field_value, from=/employeeUpsertJDBC/GENDER

2004-11-16 17:32:57 CET [ADA.0001.0101D] Connected to database on “localhost” with “LOCAL_TRANSACTION”.

2004-11-16 17:32:57 CET [ADA.0001.0103V1] Begin local transaction.

2004-11-16 17:32:57 CET [ISC.0050.0019V2] Copy failed: No source data available: to=/employeeUpsert/EmployeeUpsert/ID, from=/employeeUpsertJDBC/ID

2004-11-16 17:32:57 CET [ISC.0050.0019V2] Copy failed: No source data available: to=/employeeUpsert/EmployeeUpsert/lastName, from=/employeeUpsertJDBC/LASTNAME

2004-11-16 17:32:57 CET [ISC.0050.0019V2] Copy failed: No source data available: to=/employeeUpsert/EmployeeUpsert/firstName, from=/employeeUpsertJDBC/FIRSTNAME

2004-11-16 17:32:57 CET [ISC.0050.0019V2] Copy failed: No source data available: to=/employeeUpsert/EmployeeUpsert/gender, from=/getCanonicalXRefDynOutput/results[0]/canonical_field_value

2004-11-16 17:32:57 CET [ISC.0050.0019V2] Copy failed: No source data available: to=/employeeUpsert/EmployeeUpsert/department, from=/employeeUpsertJDBC/DEPARTMENT

2004-11-16 17:32:57 CET [ISC.0050.0019V2] Copy failed: No source data available: to=/employeeUpsert/EmployeeUpsert/address, from=/employeeUpsertJDBC/ADDRESS

2004-11-16 17:32:57 CET [ISC.0050.0019V2] Copy failed: No source data available: to=/getRouteDynInput/criteria_value, from=/employeeUpsertJDBC/DEPARTMENT

2004-11-16 17:32:57 CET [ADA.0001.0101D] Connected to database on “localhost” with “LOCAL_TRANSACTION”.

2004-11-16 17:32:57 CET [ADA.0001.0103V1] Begin local transaction.

2004-11-16 17:32:57 CET [ISP.0090.0004D] PMO: HubProcess – PMO: End of Service

2004-11-16 17:32:57 CET [ISP.0090.0004D] PMO: HubProcess – PMO: End of Service

2004-11-16 17:32:57 CET [SCC.0121.0034E] commit failed: more than 1 local trans enlisted. xid = [FormatId=45744, GlobalId=octo-pmo/1100621965592, BranchQual=2] rxid = {2}

2004-11-16 17:32:57 CET [ADA.0001.0105V1] Rollback local transaction.

2004-11-16 17:32:57 CET [ADA.0001.0105V1] Rollback local transaction.

2004-11-16 17:32:57 CET [SCC.0121.0050I] rollback-only flag is set. rolling back transaction, xid = octo-pmo/1100621965592

2004-11-16 17:32:57 CET [ART.0114.1007E] Adapter Runtime: Error Logged. See Error log for details. Error: [ART.117.4036] Adapter Runtime (Adapter Service): Unable to commit transaction. Transaction state:Transaction is rolled back .

2004-11-16 17:32:57 CET [ISS.0015.9998E] Exception → com.wm.pkg.art.error.DetailedServiceException: [ART.117.4036] Adapter Runtime (Adapter Service): Unable to commit transaction. Transaction state:Transaction is rolled back .

2004-11-16 17:32:58 CET [ISS.0098.0049C] Exception:com.wm.pkg.art.error.DetailedServiceExcep

You will not get any of these errors if you lower the debug level on your server to 1 (you can also try level 4). You are in verbose mode, so you are really just getting warnings.

It just means that there was nothing in the field that you are mapping from.

Ray

Hi Ray,

I had well understood from your previous posts on this thread that:

  1. the “Copy failed” message comes from empty fields,
  2. it is only a warning,
  3. it can be suppressed by adding a not-null condition on the mapping wire.

However, my point is that the document received by this flow service is not empty, as I proved to myself with a savePipelineToFile at the beginning of the flow service. So why does it think my fields are empty? I thought this might have something to do with the transaction issue, but I am lost as to where there might be a transaction in my service, since I perform SELECT-only queries on the DB and did not activate audit logging for any step of my flow. I guess this must be related to an internal wM transaction with the trigger store or something like that?

I will continue digging into this transaction issue. If someone has already encountered this transaction error message and could shed some light on it, I would be most grateful:

2004-11-16 17:32:57 CET [SCC.0121.0034E] commit failed: more than 1 local trans enlisted. xid = [FormatId=45744, GlobalId=octo-pmo/1100621965592, BranchQual=2] rxid = {2}

Thanks for the help,

Philippe

Philippe,
A couple of questions:

  1. Your logs show that two connections are enlisted.
  2. Are you using explicit transactions with try/catch sequences?
  3. Are you using the process runtime?

-brian

Brian,

  1. I have no clue what these two connections relate to.

  2. I am using the standard try catch handling:
    A. Sequence on Success
    A.1 Sequence on Failure (try)
    A.1.1 Do some stuff
    A.2 Sequence on Done (catch)
    A.2.1 getLastError
    A.2.2 savePipelineToFile
    B. Debug.Log(“flow completed”)

  3. When I execute in debug mode from Developer, no errors occur (no jump into the catch block, nothing special in the log file).

When I execute the complete flow service at once (no debug) from Developer, it does not jump into the catch block either. However, a popup informs us of the commit failure upon flow completion (and the log contains the error I posted earlier). The error comes after all my functional steps have completed (I can see my “flow completed” debug log in the log file) but before the flow service technically finished (or so I guess, as I use local publication and the subscribing process does not get triggered).

Thanks

Philippe
PS: we no longer get the Copy failed errors, and I do not really know why… I have very strong doubts about the reliability of the mapper, as I see simple mappings (e.g. c = concat(a, b)) not working while the next one works…

Some progress on our issue:

The flow service that fails with this commit failure actually contains two Dynamic SQL flow steps, each performing a SELECT query. Each of the two SQL steps uses its own JDBC connection. When we remove either one of the two flow steps, the commit error disappears and the flow completes successfully.
The problem is not linked to the SQL query itself, as we ran this test (the flow service with a single Dynamic SQL step instead of the two sequential steps) for each of the two Dynamic SQL steps. We are running in LOCAL_TRANSACTION mode.

We will keep digging into this. If someone has a hint for us, we’re interested!

Thanks

Philippe
PS: We’re using Dynamic SQL due to what looks like a mapping bug in Developer that prevents us from using the standard SELECTSQL Service

Problem solved.

The issue came from the fact that we were using two separate JDBC connections in LOCAL_TRANSACTION mode within the same flow service.

We successfully tested both solutions:

  1. Set both SQL steps to use the same JDBC connection in LOCAL_TRANSACTION mode
  2. Give each SQL step its own connection in NO_TRANSACTION mode

It is very surprising that no errors were raised in debug mode. It looks like transaction management is not active in that mode.
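The failure mode can be sketched with a rough analogue in Python's sqlite3 (this is an illustration of why two separate local transactions cannot be committed as one, not webMethods code; the table and values are made up):

```python
import sqlite3

# Two separate connections, each holding its own local transaction --
# a rough analogue of two JDBC connections in LOCAL_TRANSACTION mode
# inside one flow service.
conn_a = sqlite3.connect(":memory:")
conn_b = sqlite3.connect(":memory:")

conn_a.execute("CREATE TABLE t (v INTEGER)")
conn_b.execute("CREATE TABLE t (v INTEGER)")

conn_a.execute("INSERT INTO t VALUES (1)")   # transaction open on conn_a
conn_b.execute("INSERT INTO t VALUES (2)")   # transaction open on conn_b

# There is no single commit() spanning both connections: each must be
# committed on its own, so the pair of writes is not atomic. A transaction
# coordinator that expects exactly one local transaction per unit of work
# has to refuse this situation -- which is what the
# "more than 1 local trans enlisted" error expresses.
conn_a.commit()
conn_b.commit()
```

The two fixes above map directly onto this sketch: either route both steps through one connection (one local transaction), or take the read-only steps out of transaction management entirely.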

Thanks to all for your help,

Philippe

Hi,

my transaction problem is fixed. However, I still experience this “Copy failed” mapping issue. I have two map steps that do a very similar job (fairly simple mappings, nothing special); one works, the other does not…

When I run the flow in debug mode, I see my input data in the Results panel (all fields populated with values), and after executing all of the mapping steps one by one (no error raised), my output structure in the Results panel is empty…

Is there something I might have missed, or is this simply a mapper bug?

Thanks

Philippe

Solved.

Well, it suffices to delete the map step and redraw it for the mappings to work. I suspect that we modified the structure of the document being manipulated at some point, and that the mapper somehow got lost.

This means that each time you modify a document type, you have to carefully retest all of the affected map steps, even if the modification should not have any impact on the mapping, because webMethods does not tell you that the mapper got lost. That’s what unit tests are for, I guess :-)
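The kind of regression test alluded to above could look like this minimal Python sketch; the map_employee function is a hypothetical stand-in for the flow service's map step (the field names are borrowed from the log earlier in the thread, the function itself is made up):

```python
def map_employee(source: dict) -> dict:
    """Hypothetical stand-in for a flow service's map step: copies JDBC
    result fields into a canonical employee document."""
    return {
        "ID": source.get("ID"),
        "lastName": source.get("LASTNAME"),
        "firstName": source.get("FIRSTNAME"),
    }

def test_map_employee() -> None:
    # If the map step silently "got lost" after a document-type change,
    # these checks catch it at build time instead of in production.
    out = map_employee({"ID": "42", "LASTNAME": "Doe", "FIRSTNAME": "Jane"})
    assert out["ID"] == "42"
    assert out["lastName"] == "Doe"
    assert out["firstName"] == "Jane"

test_map_employee()
print("mapper regression test passed")
```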

Philippe