How to tell if a flow service was "run" or "debugged"

Hello! I’d like to know: within a flow service, is there a way to tell whether the service was invoked normally (“run”) or is being stepped through in the debugger?

Reason: I want to create a utility that goes at the top of every main flow service and does the following:

if the service is called via “run” or “invoke”, then save the pipeline and continue
if the service is called via the “debugger”, then load the saved pipeline and continue

Thanks for any help in understanding this. Cheers

Hi Daniel,

One option might be to always call savePipelineToFile and restorePipelineFromFile (remember to use the same static file name in both services), but to disable the save step before starting a debugging session.
This can be achieved with a variable defined in the first MAP step of the service and a BRANCH on this variable around the save step.
The variable can then be altered during debugging in Designer to skip the save step.

Note that this only works with savePipelineToFile and restorePipelineFromFile, not with savePipeline and restorePipeline, as the in-memory pipeline data would be lost if the Integration Server restarts.
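For illustration, here is a minimal Java-service sketch of that pattern. The service name conditionalPipelineSave, the flag name “doSave”, and the file name are invented for the example; in a flow service you would express the same thing with a MAP step and a BRANCH, as described above.

```java
import com.wm.app.b2b.server.Service;
import com.wm.app.b2b.server.ServiceException;
import com.wm.data.IData;
import com.wm.data.IDataCursor;
import com.wm.data.IDataUtil;

public static final void conditionalPipelineSave(IData pipeline) throws ServiceException {
    IDataCursor pc = pipeline.getCursor();
    // Flip this flag to "false" in the debugger to restore instead of save.
    String doSave = IDataUtil.getString(pc, "doSave");
    // Use the same static file name for both save and restore.
    IDataUtil.put(pc, "fileName", "debugPipeline.xml");
    pc.destroy();

    String svc = "false".equals(doSave) ? "restorePipelineFromFile" : "savePipelineToFile";
    try {
        // Pass the whole pipeline through, just as a flow INVOKE would.
        Service.doInvoke("pub.flow", svc, pipeline);
    } catch (Exception e) {
        throw new ServiceException(e);
    }
}
```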

Regards,
Holger


I always flag the use of these services as “proceed with caution.” They can certainly be helpful, but they can be very disruptive when mistakenly left enabled, especially if they get enabled in prod. That can be difficult to detect and track down, and the effect can be catastrophic.

An approach that may help: create “*_Test” packages that correspond to each “real” package. For example:

MyPackage
MyPackage_Test

The _Test package is where you place all the helpers needed to test/debug: save/restore pipeline, static test values, whatever. Then you step through the _Test services to debug the “real” services. This has a number of advantages, including not cluttering the primary code with test code. The _Test packages never get deployed outside of dev.

Elaborating a bit more on naming down the tree, say you have this package with folders and services.

MyPackage (top-level folder, within the MyPackage package)
..Folder1
....serviceA
....serviceB
..Folder2
....serviceX
....serviceY

You’d create a test package for this: MyPackage_Test

MyPackage_Test (top-level folder, within the MyPackage_Test package)
..Folder1
....serviceA (folder, not a service)
......serviceA_normal ("happy path" test, calls MyPackage.Folder1:serviceA with whatever inputs you want)
......serviceA_invalidDate (check behavior when input is invalid)
......serviceA_nullValues (check behavior when no inputs)
....serviceB
......serviceB_normal
..Folder2
....serviceX
......serviceX_normal
....serviceY
......serviceY_normal

We use this approach with a custom-made unit test framework. It works reasonably well, though, as with most approaches, managing the test data can be a pain.
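As a concrete but hypothetical illustration, one of those wrappers might look like the Java-service sketch below. The input “orderDate” and output “status” fields are invented, and in practice the wrapper would often be a flow service itself; the point is simply that it calls the “real” service with canned values and checks an expected result.

```java
import com.wm.app.b2b.server.Service;
import com.wm.app.b2b.server.ServiceException;
import com.wm.data.IData;
import com.wm.data.IDataCursor;
import com.wm.data.IDataFactory;
import com.wm.data.IDataUtil;

public static final void serviceA_normal(IData pipeline) throws ServiceException {
    // Build the "happy path" input set (field name is invented).
    IData in = IDataFactory.create();
    IDataCursor ic = in.getCursor();
    IDataUtil.put(ic, "orderDate", "2024-01-31");
    ic.destroy();

    // Call the "real" service under test.
    IData out;
    try {
        out = Service.doInvoke("MyPackage.Folder1", "serviceA", in);
    } catch (Exception e) {
        throw new ServiceException(e);
    }

    // Check one expected output (field name is invented).
    IDataCursor oc = out.getCursor();
    String status = IDataUtil.getString(oc, "status");
    oc.destroy();
    if (!"OK".equals(status)) {
        throw new ServiceException("serviceA_normal failed: status=" + status);
    }
}
```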

HTH.


This does not answer the original question, but remember: it is recommended to disable this feature in production environments. To do this, set watt.server.pipeline.processor to false on your production Integration Servers.
Here’s what the IS Admin guide says:

watt.server.pipeline.processor
Specifies whether to globally enable or disable the Pipeline Debug feature. When set to true, Integration Server checks the Pipeline Debug values for the service at run time. You can view and set the Pipeline Debug values in the Properties view in Designer. When set to false, Integration Server ignores the Pipeline Debug options set for the service in Designer. The default is true.
Enable this property in development environments where testing and debugging is performed. In a production environment, however, disabling this property is recommended.
Important: You must restart Integration Server for the new value to take effect.
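For reference, the property can be changed on the Settings > Extended page of the Integration Server Administrator, or directly in server.cnf (typical path shown below; adjust for your installation). Either way, a restart is required:

```
# <IS_install>/instances/<instance_name>/config/server.cnf
watt.server.pipeline.processor=false
```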


Hey guys. I’ve been experimenting with your suggestions and came up with a handy little util that seems to check all of my boxes. It’s called debugRestorePipeline and it works like this:

  1. You place this utility service at the top of any flow service that you want to be able to debug later.
  2. You add an input variable called “debug” to the flow.
  3. If you populate the debug variable, the util loads the saved pipeline.
  4. If you leave it null, the util saves the pipeline using a file name based on pub.flow:getCallingService.
  5. All of this runs only if the extended property “watt.server.environment” is DEV or QA, which is checked at the beginning of the util service.

So now, under normal circumstances while developing or testing, the pipeline is saved to one file per flow service. Any time you want to debug, just run the debugger with “debug” set to true and the last pipeline will be loaded. When you deploy to PRD, all of that logic is skipped. This way you never need to change the “Pipeline debug” property.
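For anyone curious, a rough Java sketch of that logic might look like the following. It is only a sketch under stated assumptions: reading the extended setting via System.getProperty and walking InvokeState’s call stack to identify the caller are assumptions (the post itself uses pub.flow:getCallingService), and the file-name scheme is invented.

```java
import com.wm.app.b2b.server.InvokeState;
import com.wm.app.b2b.server.Service;
import com.wm.app.b2b.server.ServiceException;
import com.wm.data.IData;
import com.wm.data.IDataCursor;
import com.wm.data.IDataUtil;

public static final void debugRestorePipeline(IData pipeline) throws ServiceException {
    // Only act in DEV/QA; anywhere else this service is a no-op.
    String env = System.getProperty("watt.server.environment", "");
    if (!"DEV".equals(env) && !"QA".equals(env)) return;

    // Identify the calling flow service to build a per-service file name
    // (internal, unsupported API; commonly used for this purpose).
    java.util.Stack stack = InvokeState.getCurrentState().getCallStack();
    if (stack.size() < 2) return;
    String caller = stack.get(stack.size() - 2).toString(); // e.g. MyPackage.Folder1:serviceA

    IDataCursor pc = pipeline.getCursor();
    String debug = IDataUtil.getString(pc, "debug");
    IDataUtil.put(pc, "fileName", caller.replace(':', '.') + ".pipeline.xml");
    pc.destroy();

    try {
        if (debug == null) {
            Service.doInvoke("pub.flow", "savePipelineToFile", pipeline);      // normal run
        } else {
            Service.doInvoke("pub.flow", "restorePipelineFromFile", pipeline); // debug run
        }
    } catch (Exception e) {
        throw new ServiceException(e);
    }
}
```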

Let me know if anyone has any thoughts or improvements they can think of. Cheers

Hi Daniel,

Have you looked at watt.server.pipeline.processor and the Pipeline Debug capability for a service?

watt.server.pipeline.processor
Specifies whether to globally enable or disable the Pipeline Debug feature. When set to true, Integration Server checks the Pipeline Debug values for the service at run time. You can view and set the Pipeline Debug values in the Properties view in Designer. When set to false, Integration Server ignores the Pipeline Debug options set for the service in Designer. The default is true.

Enable this property in development environments where testing and debugging is performed. In a production environment, however, disabling this property is recommended.

Please search for Pipeline Debug in Service Development Help - https://documentation.softwareag.com/webmethods/designer/sdf10-11/10-11_Service_Development_Help.pdf

Regards,
-Kalpesh

Sorry, I did not see this message earlier. I repeated the same information that you describe below in the thread.

To which services would this be added? One needs to be a bit careful: add this to a common service that is called a lot by different packages and you can get chaos. Is the intent to limit it to top-level services?

Everyone has differing experiences, but I rarely use save/restorePipeline*. I think the last time I used it was last year, and years passed before that as well. I have not found it useful in very many scenarios. YMMV. Just be careful. :-)

[Edit] I forgot one of the common troubleshooting tools we use. All top-level services are configured with “Enable auditing” set to “When top-level service only” (or sometimes “Always”) and with “Include pipeline” set to “Always”. This captures data for each run, not just the “last” run. Export the pipeline using MWS and we have a starting pipeline for the service. This is our most common approach when an issue looks to be data-driven. It can be useful in production too, since some errors only occur there.

Caveats for having save/restorePipeline* as a “global” approach:

  • Captures only the most recent run. If something runs after an error condition, the error pipeline is gone.
  • If the input is a *node, it will not work with savePipelineToFile: the underlying object is not serializable, so the restore will not load any document.
  • If the input is a stream (which it usually should be for anything that might be “large”), it will not work with savePipelineToFile, for the same reason as above.
  • If one wants to debug with something other than the saved pipeline, one has to take steps to get around the restore.
  • The non-file versions can be used, but depending on the number of services and pipeline sizes this can become a memory issue; the saved pipelines remain in memory for the life of the JVM.
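A partial mitigation for the node/stream caveats, sketched below as a hypothetical helper (the name saveSerializablePipeline and the file name are invented): copy only the serializable top-level values into a fresh IData and save that, so at least the serializable remainder survives while the live pipeline keeps its nodes and streams. Note the check is shallow; a stream buried inside a nested document can still break the save.

```java
import com.wm.app.b2b.server.Service;
import com.wm.app.b2b.server.ServiceException;
import com.wm.data.IData;
import com.wm.data.IDataCursor;
import com.wm.data.IDataFactory;
import com.wm.data.IDataUtil;

public static final void saveSerializablePipeline(IData pipeline) throws ServiceException {
    // Copy only top-level values that can survive Java serialization.
    IData copy = IDataFactory.create();
    IDataCursor src = pipeline.getCursor();
    IDataCursor dst = copy.getCursor();
    while (src.next()) {
        Object v = src.getValue();
        if (v instanceof java.io.Serializable) { // drops nodes, streams, etc.
            IDataUtil.put(dst, src.getKey(), v);
        }
    }
    src.destroy();
    IDataUtil.put(dst, "fileName", "cleaned.pipeline.xml"); // invented name
    dst.destroy();

    try {
        // Save the cleaned copy; the caller's pipeline is left untouched.
        Service.doInvoke("pub.flow", "savePipelineToFile", copy);
    } catch (Exception e) {
        throw new ServiceException(e);
    }
}
```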

HTH.

