Some basic questions on Reverse Invoke

Just wanted help with some doubts about the architecture for Reverse Invoke (RI). The scenario is Integration Servers receiving connections from external B2B partners.

  1. Do RI servers always terminate SSL connections? Does user authentication management (e.g., Basic Authentication IS credentials, client certificates) have to move to the RI servers?

  2. One benefit I’ve heard about RI is that the ‘real IS’ is moved ‘within the enterprise’ (i.e. it gets taken out of the DMZ and the RI takes its place). Is this really a benefit? If the RI server passed on a request that exploited a vulnerability in the internal ‘real IS’ (e.g., in a package installed on the ‘real IS’ only), the ‘real IS’ typically would have more internal access than it would sitting in the DMZ.

Hence, is it better to site both the RI (fully accessible by outside partners) and the ‘real IS’ (completely firewalled from the outside, but permitted to connect only to the RI and, say, the internal EAI IS) in the DMZ?

  1. I believe this is the case but perhaps someone else has better info.

  2. I’ve never been a big fan of the RI server. I usually propose the use of an inbound proxy (reverse proxy) that network folks are more familiar with. You’re right that if the RI server is compromised, then it can arbitrarily invoke any service on the internal IS. But I don’t see any benefit to doubling up the IS, having one “outside” IS and an “EAI IS”–the same problem would still exist, albeit one more level down.

The primary benefit of RI is that there are no firewall holes opened to get to the internal IS. The internal IS establishes the connection with the RI server (outbound). Monitoring this path is key to detecting intrusions. If desired, one could also restrict the IPs that the internal IS can access. But this is usually more trouble than it is worth.
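That connection direction is the whole trick, and it can be sketched with plain sockets. This is a toy illustration only–the line-based “protocol”, method names, and port handling are invented for the demo, not the real (proprietary) RI registration protocol: the internal side dials out to the RI listener, and the RI then pushes a partner request down that already-open connection.

```java
import java.io.*;
import java.net.*;

public class ReverseInvokeDemo {

    // The internal side dials OUT; the "RI" side then sends a request back
    // down that same connection. No inbound hole to the internal host.
    public static String roundTrip() throws Exception {
        ServerSocket riListener = new ServerSocket(0); // stand-in for the RI registration port

        // "Internal IS": makes an outbound connection, then waits for work on it.
        Thread internal = new Thread(() -> {
            try (Socket s = new Socket("127.0.0.1", riListener.getLocalPort());
                 BufferedReader in = new BufferedReader(
                         new InputStreamReader(s.getInputStream()));
                 PrintWriter out = new PrintWriter(s.getOutputStream(), true)) {
                String request = in.readLine();     // request forwarded inward by the RI
                out.println("handled:" + request);  // response travels back out
            } catch (IOException e) {
                throw new UncheckedIOException(e);
            }
        });
        internal.start();

        // "RI": accepts the registration, then forwards a partner request inward.
        String response;
        try (Socket reg = riListener.accept();
             PrintWriter toInternal = new PrintWriter(reg.getOutputStream(), true);
             BufferedReader fromInternal = new BufferedReader(
                     new InputStreamReader(reg.getInputStream()))) {
            toInternal.println("GET /invoke/MyCompany.Receive");
            response = fromInternal.readLine();
        }
        internal.join();
        riListener.close();
        return response;
    }

    public static void main(String[] args) throws Exception {
        System.out.println(roundTrip());
    }
}
```

Note that no listener is ever opened on the internal side–which is exactly why no inbound firewall rule is needed for it.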

Thanks Rob. I’m not a fan of RI either, because the benefits aren’t very clear. A reverse proxy whose platform is not webMethods may be more secure than an RI which shares most of the codebase of the internal IS.

The benefits of RI are the same as with a reverse proxy–a single point of entry to monitor, simplified firewall rules, etc. I’m not sure about the “more secure” part. I’m not aware of how RI would be any less secure than other proxies, but I may be wrong.

My biggest reason for recommending other proxies is because the network people are more familiar with them–they generally have no idea how the IS RI works nor how to configure it, so it ends up being the responsibility of the integration team to manage. And I’m all for pushing off work to other groups! :slight_smile:

Sonam,

Here is my experience/understanding:

I set my RI server in the DMZ, enable HTTPS and tell my customers that the receiving URL is https://server/invoke/MyCompany.Receive. My RI only does a pass-through - it doesn’t do any authentication. The HTTPS guarantees that the communication from the customer to my RI is secured. MyCompany.Receive is a general wrapper to the TN receive service sitting in the internal IS.

In the real internal IS server, you can set the Access Mode for inbound messages to “Deny by default”, while allowing only the MyCompany.Receive service in. Even if the RI server receives a request that exploits a vulnerability in the internal IS, it will be denied.

By the way, I still remember you posted a service here several years ago using wm.tn.doc:recognize and etc… That was a good one! :slight_smile:

Thanks Rob.

Shumin - it’s great to be remembered for that article :slight_smile: Thanks.

Since the RI does a pass-through, is it correct to say it does not terminate any SSL connections (or parse the HTTP requests they carry), but simply passes the HTTPS stream on to the internal IS?

So if the RI server received a request that exploited a vulnerability in an internal package (say, in a WmEDI service), it would be passed into the internal IS. Is this scenario possible?

Sonam,

Here is my understanding: the customer request is passed to the RI via HTTPS. The RI server then decrypts the HTTPS stream and passes the plain request to the internal IS. This is because the digital certificate is installed on the RI server, and the internal IS wouldn’t have the capability to decrypt the HTTPS stream. The internal IS server will in turn check the allowed list; if the service request is allowed, it opens a non-secure socket to communicate with the RI.

If the RI server receives a request that exploits a vulnerability in an internal package, it would be passed into the internal IS. But the IS server makes sure it is in the allowed service list before executing it. That is why I always put a wrapper on the popular services, such as wm.tn:receive, and only allow customers to use the wrappers. I can do any check I want (stream size, username, password, etc.) in the wrapper to block malicious requests.
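Those wrapper checks can be sketched in plain Java. This is a hypothetical sketch, not the poster’s actual service: in a real IS Java service the checks would run against the pipeline IData before handing off to wm.tn:receive, but here a plain Map stands in for the pipeline, and the field names (“username”, “password”, “xmldata”) and size cap are illustrative assumptions.

```java
import java.util.HashMap;
import java.util.Map;

public class ReceiveWrapper {
    static final int MAX_CHARS = 5 * 1024 * 1024; // assumed payload cap

    // Returns null when the request passes the checks, otherwise a rejection reason.
    public static String validate(Map<String, Object> pipeline) {
        // credential check: both fields must be present in the pipeline
        if (pipeline.get("username") == null || pipeline.get("password") == null)
            return "missing credentials";
        // size check: reject oversized payloads before doing any real work
        Object data = pipeline.get("xmldata");
        if (data instanceof String && ((String) data).length() > MAX_CHARS)
            return "payload too large";
        return null; // OK: this is where the real wm.tn:receive would be invoked
    }
}
```

Only requests that pass every check ever reach the wrapped service, so a malformed or oversized request is rejected at the wrapper rather than exercising the internal package.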

In my recent AS2 experience with a customer, the data is transmitted to RI via HTTPS. After it is received by the internal IS, it is still signed and encrypted. The internal IS does the validation and decryption. That is why I think AS2 is a safe way to exchange data even without a customized wrapper.

Regards,

OK, I see - thanks Shumin. So the RI does terminate HTTPS connections… This scenario uses Client Certificate Authentication (i.e., ‘Security > Certificates > Client Certificates’) and has browser-based management tools to import/export certs, do user mapping, etc. - so these tools will need to migrate to the RI.

It’s good you use wrapper services :slight_smile: that’s the right way to do it for directly exposed IS services. The RI offers another level of indirection anyway in the service actually invoked, so it may not be necessary there.

Both the internal IS and the RI server ports will be ‘Deny by default’, so permission-based security will be similarly tight in both scenarios. The issue is with weaknesses in internal services called by the wrapper services (for example, a request setting a $iteration=“3” or access=“validated” datum into the pipeline). BTW, this code can be used in a simple Java service to remove extraneous data from the pipeline (set the ‘filter’ string array):

	// 'filter' is a String[] input naming the pipeline keys to remove
	IDataCursor pipelineCursor = pipeline.getCursor();
	for (int i = 0; i < filter.length; i++) {
		String filterMePlease = filter[i];
		// delete every occurrence of this key from the pipeline
		while (pipelineCursor.first(filterMePlease)) {
			pipelineCursor.delete();
		}
	}
	pipelineCursor.destroy();

I had discussed this here on wmusers.com.

Sonam,

I have the customer cert management and user mapping done in the internal IS server. I don’t think the RI server needs to host anything other than its own digital cert to make HTTPS available.

If the RI registration port is SSL (webMethods/SSLSOCK) and “require client certificate” is selected, then the RI needs the internal IS’s certificate installed and mapped to a user.

Could you please tell me more about “another level of indirection” you mentioned?

Our network engineer set up firewall rules to allow only a specific set of customer IP addresses to access our RI. This is another layer of protection in front of “Deny by default”.

Some thoughts about filtering pipeline:

  • We always need to identify the requester: there should be a variable name/value pair that uniquely identifies the customer. We can build two tables:

Table A:

| VarName | VarValue | ServiceToInvoke | InputParametersID (FK to B) |
|---|---|---|---|
| username | Ariba | CallCXML | 1 |
| ID | Government | CallEDI | 2 |

Table B:

| InputParametersID | ParamName |
|---|---|
| 1 | password |
| 1 | xmldata |
| 2 | EDIDATA |
| 2 | COSTCENTER |
| 2 | URGENCY |

(This is like doing the TN recognize step plus having a processing rule in place :slight_smile: )

We can have a service that checks the incoming pipeline to identify the requester and the parameters that need to be preserved. Then we can dynamically populate the preserve list and run clearPipeline.
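A minimal sketch of that scheme in plain Java, assuming the example rows from Tables A and B above (a Map stands in for the pipeline IData, and keeping the identifying variable itself in the preserve set is my own assumption):

```java
import java.util.*;

public class PipelineFilter {
    // One row of Table A joined with its Table B parameter names.
    record Route(String serviceToInvoke, Set<String> preserveParams) {}

    // Tables A + B flattened: key is "VarName=VarValue".
    static final Map<String, Route> ROUTES = Map.of(
        "username=Ariba", new Route("CallCXML", Set.of("username", "password", "xmldata")),
        "ID=Government",  new Route("CallEDI",  Set.of("ID", "EDIDATA", "COSTCENTER", "URGENCY")));

    // Identify the requester from whichever name/value pair matches Table A.
    static Route identify(Map<String, Object> pipeline) {
        for (Map.Entry<String, Object> e : pipeline.entrySet()) {
            Route r = ROUTES.get(e.getKey() + "=" + e.getValue());
            if (r != null) return r;
        }
        return null; // unknown requester: reject upstream
    }

    // The equivalent of populating the preserve list and running clearPipeline.
    static void filter(Map<String, Object> pipeline, Route route) {
        pipeline.keySet().retainAll(route.preserveParams());
    }
}
```

`identify` finds the matching Table A row, and `filter` drops every pipeline variable not named in Table B for that requester, so nothing extraneous reaches the routed service.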

This sounds a little crazy :-). I just woke up on a sunny Labor Day afternoon.

Thanks Shumin - it’s a relief to know the cert management tools won’t move.

The other level of indirection - please ignore that; it’s just speculation on my part. I’ve only played around with RIs, and I was thinking that as long as the RI URL could be set to whatever you wanted (eg: invoke/myCompanyreceive), it could be mapped directly to wm.tn:receive, or to a wrapper service in the backend, as needed.

That scheme looks complex but may just work, and it resolves one of the drawbacks of my filterPipeline scheme - the need to hardcode the variables needed in the pipeline. :slight_smile: In my case, I had to filter the pipeline prior to authentication (which was done by another system after getting info from the pipeline), so it was pretty important.

New in IS 7.1 – RI is now the “Reverse HTTP Gateway”.

I remember the “proprietary protocol” between the RI and the internal IS being projected as a security feature (though SOCKS is fairly well documented). The connection between the two is now HTTP 1.1 streaming (maybe without parsing the document?). So webMethods RHG (formerly WM RI) now looks more and more like a straightforward HTTP reverse proxy, except that the connection between the proxy and the destination endpoint is initiated by the endpoint.