Scheduler and Server Thread Pool

Hi, can anybody tell me the difference between the scheduler thread pool and the server thread pool?

  1. If I run a process via the scheduler, is it going to utilize the scheduler thread pool only, or the server thread pool as well? What if I run the same process via webMethods Developer?
  2. How do we determine the size of the thread pool? What factors should be kept in mind?
  3. If we increase or decrease the thread pool size, is it going to affect the performance in terms of how much time a service takes to execute?
    [Take a scenario where I am running just one service at a time.]

Thanks in advance!

The scheduler utilizes threads from the server thread pool. However, the percentage of threads that the scheduler can use is controlled by the resource settings. For example, if the scheduler throttle is set to 50%, the scheduler can use at most 50% of the available server threads; it is not a separate pool containing 50% of the initial server threads.

This throttle is common for scheduled system tasks and user tasks.
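To make the arithmetic concrete, here is a minimal sketch (plain Python, not the webMethods API) of how the throttle limit relates to the shared server thread pool:

```python
def scheduler_thread_limit(server_pool_size: int, throttle_percent: int) -> int:
    """Maximum number of server threads the scheduler may borrow.

    The scheduler has no pool of its own: the throttle is a percentage
    of the shared server thread pool, not a separately reserved set of
    threads. (Integer division is used here for illustration only.)
    """
    return server_pool_size * throttle_percent // 100

# With a 200-thread server pool and a 50% scheduler throttle,
# the scheduler may use at most 100 threads at any one time.
print(scheduler_thread_limit(200, 50))  # 100
```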

You can view the size of the thread pool on the resource settings page:

Admin console → Settings → Resources

The sizing factors depend on your application.

It would very well affect performance if the thread settings are not appropriate. For example, if the trigger throttle is set to a very high value and there is heavy publish/subscribe traffic, the dispatcher would use most of the available threads, and other services would not have enough server threads left to utilize.

Thanks and Regards,

Can I pick up this topic again?

Is there a rule of thumb for how many threads are recommended, depending on the load and the power of the machine the IS is running on?

Our customer is running into problems with 200 threads and a 75% scheduler thread throttle not being enough in their environment. The server.log is flooded with errors like this:
“Scheduler: Resources unavailable: Rolling back due to scheduler thread throttle reached:150”

They have a lot of pending tasks in the scheduler (documents waiting for resubmission or forwarding), which seem to cause trouble after a while when they get too many.

Their IS 7.1.2 is running in a cluster environment (with two IS instances IIRC).

Thanks in advance.



Please help :frowning:

We had the client increase the thread pool to 250, but after a few days this still didn’t seem to be enough, as the above-mentioned error was again written massively to the log, this time with a limit of 188 instead of 150 (75% of 250).
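For reference, the limits in the log lines match the configured values. A quick check (plain Python; note that 75% of 250 is 187.5 while the log reported 188, so rounding up is assumed here, though the actual IS rounding rule may differ):

```python
import math

def throttle_limit(pool_size: int, throttle_percent: int) -> int:
    # Rounding up matches the observed log values; the exact
    # rounding behavior of Integration Server is an assumption.
    return math.ceil(pool_size * throttle_percent / 100)

print(throttle_limit(200, 75))  # 150 -> the limit in the first error message
print(throttle_limit(250, 75))  # 188 -> the limit after raising the pool to 250
```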

I believe it does not make much sense to keep increasing the server pool size indefinitely, does it? Therefore, I could use a rough recommendation on how to calculate the size.

Thank you very much in advance.

As you correctly understood, increasing the thread pool is not a permanent solution to your problem. The best solution is to find the root cause: where are all the threads being consumed, and if the application is not releasing them, where is the problem? You might have some bottlenecks in your design. Troubleshoot all these possible questions and then conclude whether the allotted thread count is really not enough, or whether there is a more severe problem due to bad code, design, or a client application.


Thank you for your feedback.

The trouble is that we cannot reproduce this on our dev environment. We don’t have the same hardware setup as our client, which makes it really difficult to analyze where our application might be consuming (and potentially not releasing) that high number of threads. Besides, “it was working yesterday” :slight_smile: With the previous version of our application we didn’t encounter this kind of problem.

I don’t know if it is relevant, but the most significant change in this version is that all services and adapters have been marked as “stateless”, whereas they were mostly running as stateful services before, stateful being the default setting when you create a new service/adapter in wm-Developer.

Anyway, thank you for your input. We will surely have to investigate where the leak is, but that won’t be fixed before the next scheduled release, while this is currently an acute issue :-/