Limit number of concurrent connections to recipient?

I’ve been looking through Empower for a way to do this, and have come up with either cryptic answers or nothing at all.

I’m looking for a way to limit the number of connections to a destination endpoint when sending concurrent transactions to it.

For example: a partner has said they can accept 20 connections from us at one time - I’m looking for where I can set that based on a processing rule, or even via a UM trigger (or both). Is this possible? I’d think it would be, and that it would be something very easy to find, but I’m having difficulty.

This is in a 9.7 environment.

I appreciate any and all help/suggestions.

-Les

Can you provide some details about the current implementation and the source/targets involved, along with any business logic?

What type of connection? HTTP?

What activity causes the service that calls the target to run? A scheduled task? A Broker/UM event via a trigger? Is it invoked by another service that is called by some other system?

These items will help inform possible approaches.

Sure - this is an HTTPS post. There is a processing rule that calls the service that does the posting to the partner.

We process the documents via UM, then route them to TN, where the processing rule is invoked and the HTTPS post is made to the partner.

Let me know if you need more details.

Thanks.

I assume you’re using wm.tn:receive. If not, please share how the document is being given to TN.

How is the processing action configured? Invoke a service or deliver a document?

If invoke a service, is it sync, async or service execution task (async)?

If deliver a document, is it immediate or scheduled?

The intent here is to determine whether async activity is introduced when processing in TN. If so, then controlling the max number of connections becomes a bit of a “cannot guarantee, but can get close enough” exercise.

If you need to control the max number of connections to a given target, then with your setup the simplest approach may be to set the trigger to concurrent with a max of 20 threads or fewer (usually fewer) and have the TN processing be sync. If multiple nodes are processing events from UM, reduce the max threads on each so that the total across all nodes doesn’t exceed the limit you want.

If TN activity is async, you can get close by setting the max threads on the trigger low enough that during “peak” activity, TN is able to keep up with the flow of events without exceeding the 20 max.
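
To make the thread arithmetic concrete (the node count below is purely illustrative, since you haven’t said how many you run):

 Partner’s max concurrent connections : 20
 IS nodes consuming the trigger       : 2 (example only)
 Max threads on the trigger, per node : 20 / 2 = 10, so configure 10 or fewer on each

With a single node, the trigger’s max threads is the whole cap, so 20 or fewer there does it.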

HTH.

The document is given to TN via wm.tn.routeBizDoc.

The processing rule is configured to run a service async. The service contains the details on posting to the partner.

We would like to start out with a lower number, like “5”, and move up to their max of 20. I’ve tried changing the UM trigger max execution threads to 4 or 5, but it seems like UM does not process the documents concurrently.

So you’re doing doc recognition in a separate call. Cool.

What is the reason for having TN run the service async? It would seem that, for what you’re using it for, it does not need to be.

How have you determined that multiple IS threads are not processing documents from UM? (UM doesn’t process the documents; IS retrieves them from UM – it is up to IS to do the concurrent processing, not UM.)

What is driving the 20 as the limit? Do you anticipate that there will actually be 20 calls occurring at one time? I ask because perhaps it is simply a case where the endpoint says “max 20 concurrent” but your system will never even get close to that. Indeed, depending upon what this integration is doing, perhaps it should not be making concurrent calls at all (e.g. maintaining the order of creates/updates may be important).

It may not need to be async at the TN processing rule; I was testing that setting in our lower environment. Currently, in production, we’re sending the transactions synchronously.

I was thinking that by changing the processing rule to async, it would send multiple documents at a time.

The 20 limit was determined by our partner, who is receiving the documents. They have stated that they have 20 receiving threads to receive the documents from us. I feel that’s optimistic, and that it’s probably closer to 10 or even 5 at a time, based on the testing I’ve been doing over the past week.

We are trying to increase the throughput to the partner, as sending all our documents serially is becoming too slow with the volume we are experiencing. When the entire process was designed, the volume was 5 times lower than it is today. The partner has requested that we send certain documents concurrently (channel 1), while keeping the others serial (channel 2). I have been able to split these, but the issue now is sending those in channel 1 concurrently, with a limit of no more than 5 at a time.

I am looking into the UM configuration as well, as it looks like UM can only process so many documents at a time, and it seems that when I flood UM with 200 concurrent and 300 serial documents, it processes the 300 serial ones faster for some reason. I am thinking there might be a setting we need to increase to handle this, but I haven’t found it yet.

That is good additional info.

When setting the rule to async, you’re handing over control of things to the TN service execution engine. You’ll want to dig into details of that to determine how/if it will spawn multiple threads running at the same time.

My guess is that UM is not the bottleneck (though I’m not a big fan of UM so far). It is going to be IS that you’ll need to look at. Again, it is not UM that is processing things. UM puts the docs on the queue/topic very, very quickly. It is IS retrieving and processing them that will be slow (relatively speaking).

In IS Administrator, check the max threads that can be used by trigger management. Overall and per trigger. How many nodes do you have? What are the trigger settings such as queue size and refill level?

Appreciate your help Rob and the quick responses.

In IS Admin, the max threads for trigger management is 400 (100% of the server thread pool).
On the trigger, I have max threads set at 4.

We have only 1 node.

On the trigger that passes the document to the service for TN, I have been tweaking the queue size and refill levels. Initially it was a queue size of 10 and a refill level of 4. Right now, with my latest testing, I have it at a queue size of 20 and a refill level of 8.

Here is what I’d do:

On the trigger, set max threads to 15, queue size to 100 and refill size to 10.

In TN, set everything related to this integration to sync. Let the IS trigger be the “throttle.”
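
If “close enough” ever stops being good enough and you need a hard cap no matter how many trigger threads fire, one option is a small throttle inside the posting service itself. This is only a rough sketch of the idea - the class and method names are made up, nothing here is built into TN or the trigger - and your actual HTTPS post would go where indicated:

    // Hypothetical sketch only: names are illustrative, and the HTTPS call
    // is represented by a Runnable supplied by the caller.
    import java.util.concurrent.Semaphore;
    import java.util.concurrent.TimeUnit;

    public class PartnerPostThrottle {

        // Start with 5 permits (your planned starting point); raise toward 20 later.
        private static final Semaphore SLOTS = new Semaphore(5, true);

        public static void postWithLimit(Runnable httpsPost) throws InterruptedException {
            // Wait up to 60s for a free slot so that no more than 5 posts are
            // in flight at once, regardless of how many trigger threads run.
            if (!SLOTS.tryAcquire(60, TimeUnit.SECONDS)) {
                throw new IllegalStateException("Timed out waiting for a free connection slot");
            }
            try {
                httpsPost.run();   // the existing HTTPS post to the partner goes here
            } finally {
                SLOTS.release();   // always free the slot, even if the post fails
            }
        }
    }

Note this only caps things within a single IS node (JVM). With your one node that’s fine; with multiple nodes you’d be back to dividing the limit across them.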

Trial-and-error will be your main guide. And the fun part is your test environment is likely to be different from your prod environment for this.

Thanks, I’ll give that a shot.

One other question: does UM or TN put a higher priority on serial vs. concurrent processing?

I’m seeing strange results. When I throw a bunch of concurrent transactions at UM, those process quickly - but as soon as a bunch of serial transactions are sent to UM, the concurrent transactions slow down and go through in smaller numbers.

Is there anything to explain this?

It is primarily about IS and how it manages event retrieval, dispatching and acks.

To help figure out what might be happening, can you share info about the doc type(s) and trigger definitions? What I’m thinking is that if you’re using the same doc type but different triggers, then everything is being handled by the same topic – and handling that topic has to account for both serial and concurrent processing, which may slow things down.

Thanks - this is a flow of how the transactions are processed. Note that the basic adapter notification selects 1000 transactions for each group every 5 minutes, and there are 2 versions of this running at the same time: one selects those with channel “1” and the other those with channel “2”. Channel 1 is set to concurrent in each trigger (capacity 100, refill 10). Channel 2 is set to serial in each trigger (capacity 10, refill 4).

Generally it’s the same doc type until the step “Process/map outbound document to specific transaction to be sent” - then it becomes a more specific document.

START
Basic Notification selects records from DB with status “1”, update status to “y”
publish document from adapter to UM

 receive document subscription from UM
 Map to outbound document, set record in DB to status of "2" (in process), and publish to UM

 Receive subscription to Outbound document
 Process/map outbound document to specific transaction to be sent
 Publish specific transaction doc to UM

 Receive subscription of specific transaction document from UM
 Create XML and route to TN
 TN Processing rule (sync) sends via HTTPS to partner
 Receive response from partner
 Process Response, publish to UM

 Receive subscription of response from UM
 Process response, write to DB with status of "3" and whether the record was rejected/accepted.

END

I think it is due to the “same doc type” being used for both paths to the “Outbound document” step. I’m assuming there are 2 triggers defined here, one for concurrent and one for serial. Are the triggers using filters to distinguish between them? Try using different doc types instead.
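
In other words, something along these lines (the doc type names are placeholders for illustration only, and I’m assuming the filters are on the channel value):

 Today (one doc type/topic, two filtered triggers):
  OutboundDoc --> concurrent trigger, filter channel = “1”
              --> serial trigger,     filter channel = “2”

 Suggested (one doc type/topic per channel, no filters needed):
  OutboundDocCh1 --> concurrent trigger
  OutboundDocCh2 --> serial trigger

That way each path has its own topic, and the serial handling on one doesn’t get in the way of the concurrent handling on the other.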

Yes, there are filters on the triggers that receive the outbound document, at the step described as:
" Receive subscription to Outbound document "

The filter on each trigger looks at the channel number in the document.

The same doc type is used for both. Are you thinking that if I used different doc types (e.g. OBDoc1 and OBDoc2), it would resolve the issue?

Wouldn’t I need to change the step where it’s published so that it publishes to the corresponding doc types? (the step named: "Map to outbound document, set record in DB to status of “2” (in process), and publish to UM")

Yes.

Yes.