Has anyone run into this before?
We have a standalone IS 10.3 acting only as an Enterprise Gateway (or “reverse invoke”) and nothing else. It runs on the embedded database.
The issue is not frequent, but it is a bit annoying.
Out of the blue, requests get stuck in the EG, and by the time someone complains, all hell has already broken loose.
There is nothing in the server.log that explains this issue.
There are no custom packages on this server.
Requests will pile up in the gateway when the internal IS has lost connectivity or the connection pool has reached capacity. The gateway then has no connections in the pool on which to forward the requests, so it waits in the hope that an internal IS reconnects so the requests can be sent.
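To make the behavior above concrete, here is a toy model (not actual IS code) of a gateway that can only forward requests over connections the internal IS has registered. When the pool is empty, `forward` blocks and requests pile up, which matches the symptom described.

```python
# Toy model of Enterprise Gateway forwarding. Names are illustrative;
# the real IS internals differ, but the pool-wait behavior is the point.
import queue

class GatewayModel:
    def __init__(self):
        # Connections are created outbound by the internal IS and
        # registered here; the gateway never dials inward.
        self.pool = queue.Queue()

    def register_internal_connection(self, conn):
        """Internal IS registers an outbound connection with the gateway."""
        self.pool.put(conn)

    def forward(self, request, wait_seconds):
        """Forward a request over a pooled connection, or give up after waiting."""
        try:
            # This get() is where requests "pile up" when no internal IS
            # is connected or the pool is exhausted.
            conn = self.pool.get(timeout=wait_seconds)
        except queue.Empty:
            return None  # no internal IS available: request is stuck/failed
        try:
            return f"{request} via {conn}"
        finally:
            self.pool.put(conn)  # return the connection to the pool
```

With no registered connection, `forward("req", 0.01)` returns `None` after the wait; once a connection is registered, requests flow again.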
Things to review:
Connection settings of the internal IS to the gateway, including max pool size.
When this occurs, what is the state of the internal IS?
Network or firewall activity that might be clobbering the connection.
You can set up alerts to notify someone when pending requests are accruing: via MWS, possibly Command Central, or a log watcher.
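For the log-watcher option, a minimal sketch follows. The log pattern and file path here are hypothetical (as noted earlier in the thread, server.log may not log anything explicit for this condition, so you may need to watch stats or thread counts instead); the point is the tail-and-match structure.

```python
# Minimal log-watcher sketch. BACKLOG_PATTERN and the watched file are
# assumptions; adapt them to whatever your environment actually logs.
import re
import time

# Hypothetical symptom patterns -- not guaranteed IS log messages.
BACKLOG_PATTERN = re.compile(r"(pending requests|no registered internal server)", re.I)

def scan_lines(lines, on_alert):
    """Call on_alert(line) for each line matching a backlog symptom; return hit count."""
    hits = 0
    for line in lines:
        if BACKLOG_PATTERN.search(line):
            on_alert(line)
            hits += 1
    return hits

def follow(path, on_alert, poll_seconds=5.0):
    """Tail `path` forever, scanning each new line as it appears."""
    with open(path, "r", errors="replace") as f:
        f.seek(0, 2)  # start at end of file, like `tail -f`
        while True:
            line = f.readline()
            if not line:
                time.sleep(poll_seconds)
                continue
            scan_lines([line], on_alert)
```

`on_alert` would be whatever notification hook you prefer (email, webhook, pager).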
As a temporary workaround, disabling and re-enabling the reverse invoke port should clear the backlog.
Also check the properties below:
Internal Server timeout:
Specifies the time (in seconds) the Internal Server will wait before closing an unresponsive connection to the Reverse Gateway Server. The default is 0, which means do not time out (the connection is held open indefinitely).

If the Reverse Gateway Server does not make a request to the Internal Server on a given connection within a specified amount of time, the Reverse Gateway Server will make a ping request to the Internal Server on that connection. This time period is controlled by the following property on the Reverse Gateway Server.

Reverse Gateway Server ping interval:
Specifies how often (in seconds) the Reverse Gateway Server will send a ping request to the Internal Server. The default is 60 seconds.

The ping interval on the Reverse Gateway Server must be less than the timeout period on the Internal Server. For example, if the timeout period on the Internal Server is 180 seconds, the ping interval must be less than 180 seconds.
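The constraint between the two settings is easy to check mechanically. A small sketch (variable names are illustrative, not the actual watt.* property names):

```python
# Sanity check for the two extended settings described above.
def validate_keepalive(internal_timeout_s: int, gateway_ping_interval_s: int) -> bool:
    """Return True if the gateway's pings arrive before the Internal Server
    would close the connection as unresponsive.

    internal_timeout_s == 0 means the Internal Server never times the
    connection out (the default), so any ping interval passes -- but note
    that a 0 timeout also means dead connections are never recycled.
    """
    if internal_timeout_s == 0:
        return True
    return gateway_ping_interval_s < internal_timeout_s

# Example from the text: a 180 s timeout needs a ping interval under 180 s.
assert validate_keepalive(180, 60)
assert not validate_keepalive(180, 180)
```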
Thanks Dinesh for noting the specific extended settings. It reminded me to check ours, and I noticed we have left ours at 0, which is not a good idea. Adding that to our “to do” list.
@reamon thanks for the direction.
@DINESH_J thanks once again for the detailed settings.