Testing in Developer - Data from previous executions gets stuck in the pipeline

Hi,

We’re encountering strange behaviour while testing our process. We have a model (built in Modeler) with some underlying flow services.

The problem we are encountering is as follows:

We are entering new data for each of our tests. For each test, this data is visible in the pipeline for half of the steps in the Model. But suddenly, after a join in the Model, all the remaining steps contain the pipeline data of the PREVIOUS execution.

The only way we can resolve this at present is to restart the IS and then run the test again. Then we see the data from our latest test in the whole pipeline - as we would expect. However, if we run another test, we once again see the data from the last execution pop up in the middle of the pipeline, at our join step.

Has anybody encountered this before or have any idea what may be causing this phenomenon?

Interesting…
Pipeline management is one of those things that you need to do when using FLOW and Modeler.

Modeler has an option that lets you pass a STEP in the process model either all of the pipeline data you are working with, or only a subset (specified by the input/output signature of the STEP).
Remember that any temporary elements you add to the pipeline are retained in the pipeline unless you clean them up by clearing or dropping them.
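As a rough illustration outside of FLOW, the same housekeeping in an IS Java service would be deleting the temporary key from the pipeline before the service returns (in FLOW this is just a drop on the variable in a MAP step). This is only a sketch; the service and variable names are invented for the example:

```java
import com.wm.app.b2b.server.ServiceException;
import com.wm.data.IData;
import com.wm.data.IDataCursor;
import com.wm.data.IDataUtil;

// Body of a hypothetical IS Java service: use a temporary value, then drop it
// so it is not carried forward to later steps in the model.
public static final void buildResponse(IData pipeline) throws ServiceException {
    IDataCursor cursor = pipeline.getCursor();
    try {
        String temp = IDataUtil.getString(cursor, "tempLookupResult"); // invented name
        IDataUtil.put(cursor, "response", "processed:" + temp);

        // Equivalent of dropping the variable in FLOW: remove it from the pipeline.
        if (cursor.first("tempLookupResult")) {
            cursor.delete();
        }
    } finally {
        cursor.destroy();
    }
}
```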

Refer to Chapter 4, "Creating a Process Model", page 55 of the webMethods Modeler User's Guide Version 6.1; the parameter is "Express Pipeline".

Also, depending on the type of join: if it is an asynchronous join waiting on a Broker document, you may need to check the correlation service to ensure that it is not placing temporary variables in the pipeline.

Nino…

Hi Nino,

We understand that data remains in the pipeline during an execution, but with our problem, it’s actually the data from a PREVIOUS execution of the process that is being retained in the pipeline somehow.

(By the way, we don’t have any joins waiting on Broker documents, only a join waiting on the results from two Flow services on the same IS.)

Thanks,
Josh

Josh, you may have looked at this already…but I’ll mention it just in case…

Make sure you do not have runtime caching enabled for your flow/java services. If you look at the Settings tab for your service in wM Developer you will see a checkbox for Caching. If this is checked, then wM caches the service results keyed on the input pipeline, so multiple calls to the same service can return the same results.

I’m not sure if this is your issue, but it caused us the same problem when we didn’t realize one of our services had it enabled.

Regards,

Wayne

Thanks for your answer Wayne. One of my colleagues has also just discovered that this is exactly what the problem was!

For anyone else who reads this thread, I’ve also posted the solution at wmusers together with a small screenshot:

http://www.wmusers.com/wmusers/messages/117/49094.shtml?1109093915

Thanks to anyone who’s looked into this,
Josh

Hi Josh,

Glad you found a solution.

The trick with caching a Flow service is to use “pub.flow:clearPipeline” as the last step in the cached service, setting the “preserve” list to include only the values your service will return.

As you will notice when stepping through Flow code, the entire pipeline from previous steps is passed into your Flow service - and any variables you create and don’t explicitly drop are returned from it. Good pipeline management is an important part of good Flow coding, and will help with performance too.

Why you need to do a “clearPipeline” is tied up with how caching and pipeline management work.

Under the covers, a pipeline is effectively a Java Map of name/value pairs. Each Flow service “invoke” creates a new Map, copying the entire contents of the current pipeline - then invokes the service. The invoked service adds/removes variables from the copied pipeline - returning the updated pipeline. The updated pipeline is then merged with the original pipeline (so new variables are added and existing variables are over-written). Variables that you drop in the invoked service have no effect on the merged pipeline - they simply aren’t there to merge.
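A rough sketch of that copy-and-merge behaviour, using a plain java.util.Map as a stand-in (the real pipeline is an IData object, not a HashMap; this only illustrates why undropped variables survive the merge):

```java
import java.util.HashMap;
import java.util.Map;

// Simplified model of the invoke/merge behaviour described above.
public class PipelineMergeSketch {
    public static void main(String[] args) {
        Map<String, Object> pipeline = new HashMap<>();
        pipeline.put("orderId", "A-100");

        // "invoke": the service works on a copy of the current pipeline
        Map<String, Object> copy = new HashMap<>(pipeline);
        copy.put("tempTotal", 42);     // temporary variable, never dropped
        copy.put("status", "PRICED");  // intended output

        // merge: everything left in the returned copy lands back in the caller's pipeline
        pipeline.putAll(copy);

        System.out.println(pipeline);  // tempTotal leaks into later steps
    }
}
```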

When a service is cached, the cache is just a lookup table with the name of the service and the input values it was called with, pointing to a saved pipeline of the output from the service the first time it was invoked. When a service which has been cached is invoked, the cache manager simply hands back a copy of the saved pipeline which is then merged with the current pipeline.
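A crude model of that lookup-table behaviour (not the real cache manager, just the idea): the key is the service name plus its input values, and the value is the whole saved output pipeline from the first call, extras and all:

```java
import java.util.HashMap;
import java.util.Map;

// Toy model of service-result caching: a repeated call with the same inputs
// gets back a copy of the FIRST call's entire output pipeline.
public class ServiceCacheSketch {
    static final Map<String, Map<String, Object>> cache = new HashMap<>();

    static Map<String, Object> invoke(String service, Map<String, Object> inputs) {
        String key = service + ":" + inputs;        // crude cache key, for illustration only
        Map<String, Object> saved = cache.get(key);
        if (saved != null) {
            return new HashMap<>(saved);            // saved pipeline, merged back by the caller
        }
        Map<String, Object> out = new HashMap<>(inputs);
        out.put("processedAt", System.nanoTime()); // stands in for real work
        cache.put(key, new HashMap<>(out));
        return out;
    }

    public static void main(String[] args) {
        Map<String, Object> in = new HashMap<>();
        in.put("orderId", "A-100");
        System.out.println(invoke("priceOrder", in)); // first call does the work
        System.out.println(invoke("priceOrder", in)); // second call returns the saved pipeline
    }
}
```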

As you discovered, this sometimes has unintended results! I found this out the hard way when I tried to cache an adapter service call and started processing the same sales order over and over again…

Anyway, if you do a clearPipeline as the last step in your cached service then the pipeline to be merged will ONLY contain the variables you want to be merged. (And if you want to cache an adapter service call, write a Flow wrapper that calls the adapter service and does the clearPipeline!)
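If the cached step happens to be a Java service rather than FLOW, you can get the same effect by hand. A sketch under that assumption (the preserved output names are invented for the example): delete everything that is not on the preserve list before the service returns, so only the declared outputs are saved by the cache and merged back later.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

import com.wm.app.b2b.server.ServiceException;
import com.wm.data.IData;
import com.wm.data.IDataCursor;

// Hypothetical last step of a cached Java service: keep only the declared outputs.
public static final void finishCachedService(IData pipeline) throws ServiceException {
    Set<String> preserve = new HashSet<>(Arrays.asList("response", "statusCode")); // invented names
    IDataCursor cursor = pipeline.getCursor();
    try {
        // Collect the keys to drop first, then delete them one by one.
        List<String> toDrop = new ArrayList<>();
        for (boolean more = cursor.first(); more; more = cursor.next()) {
            if (!preserve.contains(cursor.getKey())) {
                toDrop.add(cursor.getKey());
            }
        }
        for (String key : toDrop) {
            if (cursor.first(key)) {
                cursor.delete();
            }
        }
    } finally {
        cursor.destroy();
    }
}
```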

Hope this helps.

Steve Ovens
webMethods Professional Services
Melbourne, Australia