Help: confused about the pipeline in conversation scripts

When I use BI to design process models that will be implemented on TN, I find it confusing how BI and the conversation script manage the tasks' inputs, outputs, and the pipeline. Even if a step does not declare any output, its results can still be accessed by the following steps. And if I use a savePipeline service to store the pipeline in a task, it CONTAINS all the pipeline fields generated by the previously executed tasks, yet we cannot see them when building a task service generated by BI. I cannot find any detailed description of how BI and conversation scripts manage tasks and pipelines in the wM documents, which leaves us feeling unconfident about using BI.
So, can any expert help clarify the concepts, mechanisms, and relationships among BI, the conversation script, tasks, a task's inputs and outputs, and the pipeline?


This isn’t really a BI/conversation script issue. It’s the nature of the pipeline.

The pipeline is just a variable pool (read: global variables). You can throw anything you want into it and pull out anything that is in it. The part that can be confusing is that at development time, BI and Developer can only show you the variables that they “know” about from the services/tasks you build. This includes the declared inputs/outputs of services that are called and variables you create within your service. The development environments can’t determine the stray variables, if any, from services that have been called.
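The "variable pool" behavior described above can be pictured with a plain Java map. This is only an analogy (the real pipeline is an IData object managed by Integration Server, and the variable names below are made up), but it shows why an undeclared variable is still reachable downstream:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class PipelineSketch {
    public static void main(String[] args) {
        // The pipeline: one shared pool that every step reads from and writes to.
        Map<String, Object> pipeline = new LinkedHashMap<>();

        // Step 1: a service declares "orderId" as its only output...
        pipeline.put("orderId", "PO-1001");
        // ...but also leaves an undeclared "stray" variable behind.
        pipeline.put("success", Boolean.TRUE);

        // Step 2: a later step can read the stray variable even though
        // no editor ever showed it in any service signature.
        System.out.println(pipeline.get("success")); // prints "true"
    }
}
```

Nothing in the pool carries a record of which step declared it, which is why Developer and BI can only show you what they can infer from signatures.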

If you are certain that a variable of a particular name will exist in the pipeline, you can just use it within your service. Many of the wM-supplied services place variables in the pipeline that are not declared as outputs. For example, one of these services will place a variable named success in the pipeline and set its value to false if the sender ID of a document isn’t related to the current user. This isn’t declared anywhere and isn’t described in the docs. But a service can check for the existence of a var named success and it will work.
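A defensive way to rely on such an undeclared variable is to check that it exists (and has the expected type) before trusting it. A minimal sketch, again using a plain Map as a stand-in for the pipeline and the success variable from the example above:

```java
import java.util.HashMap;
import java.util.Map;

public class DefensiveRead {
    // Read an undeclared pipeline variable defensively: verify it exists
    // and has the expected type before acting on it.
    static boolean senderVerified(Map<String, Object> pipeline) {
        Object flag = pipeline.get("success"); // undeclared output
        if (flag instanceof Boolean) {
            return (Boolean) flag;
        }
        return true; // absent means no complaint was raised
    }

    public static void main(String[] args) {
        Map<String, Object> pipeline = new HashMap<>();
        System.out.println(senderVerified(pipeline)); // true (var absent)
        pipeline.put("success", Boolean.FALSE);       // stray var appears
        System.out.println(senderVerified(pipeline)); // false
    }
}
```

The type check matters because nothing stops another service from reusing the same name for a different kind of value.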

It’s sloppy but manageable.


Please report instances of ‘extra’ pipeline variables coming out of built-in services to IS support as bugs. This is sloppy and easily fixable - either by defining and documenting the variables or making sure they are dropped.


Fred, good idea. I wonder how aggressive wM would be in doing some of this. Probably not very, but I’m a pessimist. :wink: I think the issue is simply the sheer number of services that need to be checked. Checking coded services to see if they are littering the pipeline adds excitement to the mix.

TN seems to be the biggest pipeline litterbug. And the pipeline isn’t the only area of concern. A little digging into the bizdoc envelope shows that it is treated rather loosely too; e.g., the Errors list can have quite a range of record types attached to it, some of which are documented, others of which are not. This grab-bag approach is incredibly flexible but can be quite maddening when you’re trying to unambiguously handle processing and errors.

A fundamental change in how variables are managed is needed. The global variable nature of the pipeline is the Achilles' heel of IS. Unfortunately I don’t know how it could be gracefully moved to another approach without breaking all existing FLOWs. Perhaps the wizards at wM can figure something out.

My 2 bits.

P.S. Pipeline litter and pipeline litterbug–those sound kinda cool! Maybe I’ll trademark those terms. :wink:

Pipeline litter has been something that’s come and bit me multiple times. I’ve complained to WM support and PD more than a year ago - but they weren’t exactly tripping over themselves to fix it then.

A particularly prolific pipeline polluter (that’s another catch-phrase), if I recall correctly, was

I guess this is the reason WM offers pub.flow:clearPipeline.

The danger with using pub.flow:clearPipeline, though, is that you clear the good with the bad. I only use it as the very last step in a Flow invoked from a browser.

Because variables are global by default, many webMethods developers experience difficulty managing them. This is most apparent with skilled Java developers who are used to variables that are local by default.

In time, managing pipeline variables becomes second nature, but it is still extremely difficult when multiple developers are working on one series of Flows. If one developer forgets, another developer can feel the consequences and not know where to begin debugging because, after all, his code looks proper.

Proper planning and careful coding can eliminate all but the most sneaky Pipeline Polluters. Be diligent in planning your Flows when sharing development responsibilities, drop objects as soon as it is possible, and do not use the webMethods variable names for your own variables. These include:

- value
- node
- record
- string

There are many more, of course, but you can all recognize the pattern.

These are just a few tips. I am sure that others can lend more.

> Because variables are global by default, many
> …
> do not use the webMethods variable names for your own variables. These include:
> value
> node
> record
> string

Hah! If the fine people who created Flow are reading this thread, they must be squirming in their seats (they’re not old enough to roll in their graves just yet). I’m sure these gotchas and workarounds were exactly what they were trying to avoid in Flow. If only they had enforced the rule that “services may only inject variables into the pipeline iff they have declared these variables in the service signature” – then we would have been spared this pain!

It’s unnerving to find that undocumented variables have crept into your pipeline, and even worse to find them binding unannounced to service inputs. I once had a case where two similarly named records were in the pipeline - one junk, and one correct. And the junk record wound up binding to my service input, overriding my explicit mapping (I had mapped the correct record to the input). In such cases, you either have to rename the correct variable, or use clearPipeline soon after the junk variable creeps in.

> The problem with pub.flow:clearPipeline, though, is that
> you clear the good with the bad.

Hmmm. You can use the ‘preserve’ input to “preserve the good”. Is that what you meant? You do have to be careful to preserve all the variables you may need in the Flow steps below, though.
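The preserve semantics amount to "drop everything except the named variables". A minimal sketch of that behavior, with a plain Java map as a stand-in for the IData pipeline (variable names hypothetical):

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.Set;

public class ClearPipelineSketch {
    // Approximate pub.flow:clearPipeline with its "preserve" input:
    // remove every pipeline variable whose name is not preserved.
    static void clearPipeline(Map<String, Object> pipeline, Set<String> preserve) {
        pipeline.keySet().removeIf(name -> !preserve.contains(name));
    }

    public static void main(String[] args) {
        Map<String, Object> pipeline = new LinkedHashMap<>();
        pipeline.put("orderId", "PO-1001");   // the variable we still need
        pipeline.put("node", "<junk/>");      // litter from an earlier step
        pipeline.put("success", Boolean.TRUE);

        clearPipeline(pipeline, new HashSet<>(Arrays.asList("orderId")));
        System.out.println(pipeline.keySet()); // prints "[orderId]"
    }
}
```

This is exactly why the caution above matters: anything you forget to name in preserve is gone for every step below.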

A possibly related question: can anyone tell me what “validate-in” and “validate-out” actually do (see the Properties tab on each Flow INVOKE step)?

Then, how can BI help us? Without BI there is no conversation script, but unfortunately I think it makes things even worse. First, it arbitrarily generates four inputs for your task services. Then, when you select the implementing service for BI, even if you use the very service that BI itself generated, you will find that the display of the inputs in the process model is ruined and the outputs disappear. It also adds the user parameters as process-level “Additional settings”, but once you select one you cannot remove it.

The questions are: how can we keep things under control if we use both the top-down and the bottom-up approach? And what is the relationship between the BI-generated inputs and outputs, the pipeline, and what we draw on the process model?