How to create a loop within a loop within a loop to read a document and grab a key's value using webMethods

I am trying to grab the Authorization key, which lives inside a document field, and then its value.

Document field looks something like this:


"document": {

    "protocol": "http",

    "subprotocol": "HTTP",

    "http": {

        "requestUrl": "/rest/test",

        "method": "GET",

        "requestHdrs": {

            "Accept": "application/json",

            "**Authorization**": "**Basic whatever**"


        "ipInfo": {
            "localPort": "5555",   

            "remotePort": "64880"





Basically, loop through the entire document, then within the document loop create another loop for http, then within the http loop create another loop for requestHdrs, to eventually grab the Authorization key and its value.
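For anyone more at home in Java than FLOW, the nested lookup described above amounts to the following sketch, using plain `java.util.Map` as a stand-in for the webMethods `IData` document (field names taken from the sample above; this is an illustration, not IS code):

```java
import java.util.Map;

public class HeaderLookup {

    // Walk document -> http -> requestHdrs -> Authorization.
    // Each level is a keyed lookup, so no loop is actually required.
    @SuppressWarnings("unchecked")
    public static String getAuthorization(Map<String, Object> document) {
        Map<String, Object> http = (Map<String, Object>) document.get("http");
        if (http == null) return null;
        Map<String, Object> requestHdrs = (Map<String, Object>) http.get("requestHdrs");
        if (requestHdrs == null) return null;
        return (String) requestHdrs.get("Authorization");
    }
}
```

The same shape applies inside a Java service, with `IDataUtil.getIData` at each level instead of `Map.get`.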


You can write a nested loop: drag a LOOP step inside a LOOP step. I wouldn't suggest doing that, though; loops are a costly operation.

By the way, from the picture I don't see any document list to loop over, so it may just be a matter of a straight map.

I am actually looking for a code snippet:

Here is an example.
The Document field in webMethods looks something like this:


In webMethods there is a different way to loop through this field.

Something like this: this is the loop to read through the entire document field.

IData AllConvertedHeadersToDocument = IDataUtil.getIData( pipelineCursor, "AllConvertedHeadersToDocument" );
if ( AllConvertedHeadersToDocument != null ) {
    IDataCursor documentCursor = AllConvertedHeadersToDocument.getCursor();
    while ( documentCursor.next() ) {
        String key = documentCursor.getKey();        // e.g. "http"
        Object value = documentCursor.getValue();    // nested IData or String
    }
    documentCursor.destroy();
}

Now I am trying to read the structure below:

"document": {
    "protocol": "http",
    "subprotocol": "HTTP",
    "http": {
        "requestUrl": "/rest/test",
        "method": "GET",
        "requestHdrs": {
            "Accept": "application/json",
            "Authorization": "Basic whatever",
            "Cache-Control": "no-cache",
            "Postman-Token": "0806e356-767c-482d-a1f9-9c7c5636559d",
            "Host": "localhost:5555"
        }
    }
}

I am trying to get to the requestHdrs level.

This is from a built-in feature.
A straight map is not going to work with the built-in feature's output.


Hi Mohammad,

I agree with Mangat that there is no looping needed in this structure.

The fields marked with a Document icon are just for grouping fields.
So you can try to access the source field as “/document/http/requestHdrs/Authorization” and just map it to the target.
As an alternative you might want to explore the WmPublic package, folder "pub.xml". In this folder you will find a service for querying documents.
See the IS Built-In Services Reference for details.
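To illustrate the path-style access, here is a rough Java sketch of what a "/document/http/requestHdrs/Authorization" lookup does, again with plain `java.util.Map` standing in for the pipeline document (this is an illustration of the idea, not the actual IS implementation):

```java
import java.util.Map;

public class PathLookup {

    // Walk a nested map structure following "/"-separated keys.
    @SuppressWarnings("unchecked")
    public static Object get(Map<String, Object> root, String path) {
        Object current = root;
        for (String key : path.split("/")) {
            if (key.isEmpty()) continue;                // skip the leading "/"
            if (!(current instanceof Map)) return null; // path goes deeper than the data
            current = ((Map<String, Object>) current).get(key);
        }
        return current;
    }
}
```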



Folks, I am having a rough time figuring this out.
Here is the rundown:

I am using the built-in getTransportInfo service (screenshot omitted) to get all the headers. The result comes out in a Document field.
I created a Java service with a Document field as input and mapped the headers returned by the built-in service above into it for further processing.




I right-click my Java service and generate code with all fields.
It generates code something like this:

// pipeline
IDataCursor pipelineCursor = pipeline.getCursor();

// GrabbedHeaders
IData GrabbedHeaders = IDataUtil.getIData( pipelineCursor, "GrabbedHeaders" );
if ( GrabbedHeaders != null ) {
    // ...
}
IDataUtil.put( pipelineCursor, "Token", val );

The Document field stores the result as key/value pairs.

What I want is to grab Authorization key and its value.

This is the Document field where all the passed headers are stored:
IData GrabbedHeaders = IDataUtil.getIData( pipelineCursor, "GrabbedHeaders" );
It's called IData.
I don't see any method to grab the key and its value directly in the screen below:


I hope you folks understand what I am getting at.
It's very simple: I need to grab the Authorization key and its value.
Please share a code snippet if you have one.


Mohammad, for this requirement why do you even want to create a Java service? Is there a reason you absolutely need to write one?


I need to decode the token and perform validation. I am a Java developer but a newbie with webMethods.
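For the decoding part: assuming the header really is HTTP Basic (as the "Basic whatever" sample suggests), the Java side could look roughly like this. The encoded credentials below are made up for illustration:

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class BasicAuthDecoder {

    // Split a "Basic base64(user:password)" header into user and password.
    public static String[] decode(String authorizationHeader) {
        if (authorizationHeader == null || !authorizationHeader.startsWith("Basic ")) {
            throw new IllegalArgumentException("Not a Basic Authorization header");
        }
        String b64 = authorizationHeader.substring("Basic ".length());
        String decoded = new String(Base64.getDecoder().decode(b64), StandardCharsets.UTF_8);
        int colon = decoded.indexOf(':');   // split on the first ':' only
        if (colon < 0) throw new IllegalArgumentException("Malformed Basic credentials");
        return new String[] { decoded.substring(0, colon), decoded.substring(colon + 1) };
    }
}
```

For example, `decode("Basic dXNlcjpzZWNyZXQ=")` yields `user` and `secret`.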


Okay… you don't need a Java service for this.
Look into MAP. This is a straight map: input pipeline to output pipeline.


Thanks everyone for the responses, but nothing worked.
I came up with a solution after a lot of playing around with the field itself.

Here is the solution:

// pipeline
IDataCursor pipelineCursor = pipeline.getCursor();

// Headers
IData Headers = IDataUtil.getIData( pipelineCursor, "Headers" );
if ( Headers != null ) {
    IDataCursor HeadersCursor = Headers.getCursor();

    // Headers/http
    IData http = IDataUtil.getIData( HeadersCursor, "http" );
    if ( http != null ) {
        IDataCursor httpCursor = http.getCursor();

        // Headers/http/requestHdrs
        IData requestHdrs = IDataUtil.getIData( httpCursor, "requestHdrs" );
        if ( requestHdrs != null ) {
            IDataCursor requestHdrsCursor = requestHdrs.getCursor();
            String Authorization = IDataUtil.getString( requestHdrsCursor, "Authorization" );
            IDataUtil.put( pipelineCursor, "Result", Authorization );
            requestHdrsCursor.destroy();
        }
        httpCursor.destroy();
    }
    HeadersCursor.destroy();
}
pipelineCursor.destroy();


For those coming upon this thread later, do not follow the Java code shown. Using Java is not necessary for this.

There are a couple of techniques to use in FLOW to easily get the value of any header. Here is a simple example.

Call getTransportInfo as the OP noted.

The trick is to manually add the named var you want to the untyped document in the pipeline. Right-click on the doc and insert a string var named Authorization.

Then create another var in the pipeline and map from the source field to the target field.


Now you can do whatever you want with the value in authorizationToken. And requestHdrs can be dropped.


Sufran, you need to resist almost 100% of the temptations to drop down to Java. Almost everything can be done in FLOW. There are certainly times when Java is needed/useful, but you should endeavor to make that a rare occurrence. This old, old post may have useful guidance/info: Integration Server and Java - #22 by reamon


Yes Reamon, when I came up with this solution I was a newbie at the time; actually, I still am. I started working on webMethods a month or so back. But you are right. I later found the direct-mapping solution you mention above, but at the time I didn't know it.

I want to piggyback on what you just mentioned about limiting Java code and using FLOW more.
This is regarding the JSON schema validation that you provided a way out for me on.
I am on the next level now. The question is: when the schema I am validating against throws errors on fields for format/required checks, they come back as default errors inside a list, in bunches. Is there any way I can convert them into custom errors without using any Java code at all?
Here is an example of the default errors thrown inside the list:

So for example
I want to convert this
[ISC.0164.0030] JSON schema validation failed: required key [logo] not found
to something like
Logo can't be null or empty

another one
[ISC.0164.0030] JSON schema validation failed: string [12345678] does not match pattern \d{9}$

ID must be 10 digits.

I want to convert all of these errors into custom errors without using any Java code at all in this case.
Any clue, Reamon?



Glad you had determined how to do the mapping in FLOW earlier. It does take some time to learn how to use FLOW.

For the errors item, you can create a service that does the translation. Loop over the errors document list and, for each one, do whatever fits how you want to translate. A couple of options:

  • BRANCH on errors/error to create your own message. However, it looks like the error codes may not be granular enough for what you want. E.g. JSV17 is likely the same code used for any missing field, not just logo.

  • BRANCH on errors/message and use regex in the label to determine the specific error that occurred. Then you can return any value you’d like to describe the error. You could possibly parse the message value using pub.string:tokenize to split the string on the : character so you could omit the [ISC… part of the message. Be aware of what tokenize will do if : appears in the error message itself, particularly if payload data will be present. You may need to “reconstruct” the string. You could also use regex with pub.string:replace to strip the message prefix.

  • Define a lookup table. Use pub.string:lookupTable, possibly with regex, to do the translation.

  • One thing you may find helpful, and gets some Java coding in for you :slight_smile:, is to create a couple of utility services. Most companies create their own “public” package, a la WmPublic, to define various common utilities. A couple we have are splitString, which simply exposes String.split(), and matcher, which wraps java.util.regex.Pattern and Matcher. Something similar may be helpful for what you’re doing here.
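Putting the BRANCH-and-regex ideas above together, here is a rough plain-Java sketch of the translation. The patterns and replacement texts are invented examples covering the two errors shown earlier, not a webMethods API:

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.regex.Pattern;

public class ErrorTranslator {

    // Ordered rules: first matching pattern wins.
    private static final Map<Pattern, String> RULES = new LinkedHashMap<>();
    static {
        RULES.put(Pattern.compile("required key \\[logo\\] not found"),
                  "Logo can't be null or empty");
        RULES.put(Pattern.compile("string \\[\\d+\\] does not match pattern .*"),
                  "ID must be 10 digits.");
    }

    public static String translate(String rawError) {
        // Strip the "[ISC.0164.0030] JSON schema validation failed: " prefix.
        String msg = rawError.replaceFirst(
                "^\\[ISC\\.[0-9.]+\\]\\s*JSON schema validation failed:\\s*", "");
        for (Map.Entry<Pattern, String> rule : RULES.entrySet()) {
            if (rule.getKey().matcher(msg).matches()) return rule.getValue();
        }
        return msg; // fall back to the prefix-stripped original
    }
}
```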

I’m obviously not familiar with what your top-level service is intended to do, but I would strongly recommend against doing too much in terms of validation and customizing the error messages. You’ll likely spend an inordinate amount of time on error messaging rather than on functionality.

Is your top-level service providing the functionality directly? Or is it passing the request data on to another server/component that does the work? If the latter I would strongly recommend against doing validation “in the middle.” Let the target component worry about what it does and does not need.


Reamon, that's a lot of information to digest.
However, to answer your question: as a provider, when a JSON request body comes in it has to be validated, and if anything fails you get no further; all the validation errors are thrown back until every input in the request body is fixed. You are stopped there. That's the whole idea.

Is wM IS really the complete “provider” or is it more a front-end providing access/minor transformation to something else? My query is geared toward understanding what is done with the JSON payload. Will it be passed on to another system, then it responds and wM IS passes the response on to the caller? Is all the processing being done on wM IS? Or is it doing some initial work and calling another system/app?

If the wM IS components are passing the data on to another app, via their API, what will that app do in the face of missing/incorrect values? If that app returns an error, you might consider just letting it do the validation.

In other words, if you’re creating components in wM IS to “protect” calls to another application you may not want to do that. Let the application protect itself. It knows its rules. No need to repeat them in wM IS.

But as noted, I’m not familiar with the solution you’re building, so this suggestion may be off base. My intent is to maybe help you avoid current/future pain. :slight_smile: This is definitely in the YMMV category, but in the past I’ve done this very thing, validating every incoming document before handing it off (some transformation perhaps) to another system. Just about every time we’ve done that in wM IS, it came back to bite us; we removed it and had the endpoint handle it. Of course there are times when that isn’t possible, e.g. when the endpoint is a DB table. But if it is an app API, let the app enforce the rules.


As a provider, our first concern is to protect our API. We don't want any garbage or malicious information; we will stop you there. So the plan is to protect/validate and massage the data, pass it to the backend service for further processing, and then convert whatever we receive in order to pass the info back to the service that is calling us. It's like a chaining process. It will be like a microservice; the difference is we are not using any framework like Spring Boot.

So there is a lot to be done here. My next goal is to handle exceptions.
I tried your suggestion.


I got close with this. But like you said, I might have to build some utility services to accomplish the more intricate work.
Glad you are here to usher me along. I appreciate your help very much.
I am going to be putting up a lot more postings in the coming time.
I am new at this, so I am learning and implementing as I go.
It's a process, but candid truth: I really like this webMethods Designer. I want to build on it and get better. Snippets/code/screenshots etc. help a lot with understanding; you have probably seen that in every one of my posts I provide the screenshots/code I am having issues with, so the person trying to understand the problem gets a better picture. I understand most of the stuff I am doing is common/advanced work, but that is how it is in the real world.


Sure. A provider should protect itself. But my question is asking whether or not the wM IS-hosted component is “the provider.” Based upon what you’ve shared, it seems likely that it is not. It seems that it is providing another “interface” to an existing provider. What is the nature of the “backend service?” Are you calling a defined API? What does it do when given bad data? I ask because if it is already “protecting itself” adding “protection” in wM IS is not necessary and simply adds complexity for no real value.

If the “backend service” being used already validates data and returns error messages, leverage that. Don’t redo it. Keep the wM IS interface component as thin and light as possible. Be “pass-through” as much as possible for request/reply interactions. Don’t add redundant capability.

Other random notes:

  • Don’t overuse transformers in a MAP step. Specifically, don’t use a MAP step that has just one transformer call in it. Just INVOKE the service. Helps with readability.
  • Don’t be misled by the wM documentation that implies transformers run in their own thread. They don’t. The caveat given is intended to indicate that the order of execution of transformers in a single MAP step is undefined.
  • Use caution with a hard-coded index. Any change at all to the message format, including the validator adding just one more digit to the error code, will break it.

Wiseman speaketh.

The easiest way to grab such a value is actually to capture the output structure of getTransportInfo, create a document type with that output structure, and map the getTransportInfo output to a reference of that structure. Once you do that, you can visually map specific fields without any looping or Java required.


I am not sure at this time where it will eventually be hosted once it goes to production. We are going to call another Java service that actually calls an Oracle DB. They are established Java EE services. They throw their own custom error codes, and we take them, convert them to our own custom codes, and pass them on. Basically I don't have control over that; those are just the requirements.

That's what we are trying to do, keep everything as thin as possible, but we can only decide as we move further down the line what to keep and what to remove.

Are there other alternatives to MAP that I can leverage, Reamon? What if I want to do some conversion during the looping process or add some checks using branching; is there any other way to attain that?

they throw their own custom error codes and we take them and convert them to our own custom code

What is the value of converting the error codes? Understanding that “requirements” is an overloaded term, we often forget that some “requirements” are really just design decisions. If it were me, I would press for why the error codes/messages need to be changed.

As an example, does changing [ISC.0164.0030] JSON schema validation failed: string [12345678] does not match pattern \d{9}$ to string [12345678] does not match pattern \d{9}$ really matter?

In any case, there are many possible ways to translate. Think about how you would do this in Java (but don’t do it in Java) and translate that to doing the same in FLOW: BRANCH statements (same as switch/case), or a lookup table. You could externalize the lookup table by defining it in an XML or JSON file and loading it at service execution, or in a DB table. You could use regex where appropriate for string matching. Create focused helper functions as needed for matching/replacing. It all depends on how sophisticated and complete you need to be with translating the errors returned by the Java EE components.
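As a sketch of the externalized lookup-table idea: keep "regex=message" rules in a text resource (file, config, or DB) and load them once. The rule syntax here is invented purely for illustration:

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.regex.Pattern;

public class ExternalizedRules {

    // Parse lines of the form "regex=custom message"; '#' starts a comment line.
    public static Map<Pattern, String> parse(String rulesText) {
        Map<Pattern, String> rules = new LinkedHashMap<>();
        for (String line : rulesText.split("\\R")) {
            line = line.trim();
            if (line.isEmpty() || line.startsWith("#")) continue;
            int eq = line.indexOf('=');
            rules.put(Pattern.compile(line.substring(0, eq)), line.substring(eq + 1));
        }
        return rules;
    }

    // Return the custom message for the first rule whose regex is found in the error.
    public static String translate(Map<Pattern, String> rules, String message) {
        for (Map.Entry<Pattern, String> rule : rules.entrySet()) {
            if (rule.getKey().matcher(message).find()) return rule.getValue();
        }
        return message; // no rule matched: pass the original through
    }
}
```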
