Load balancing

You can use the Apache HTTP Server with the Apache Tomcat Connector (mod_jk) to distribute the workload across several JBoss or Tomcat instances and give users transparent access to them through a single URL.

In this example I use Apache 2.2.10, mod_jk 1.2.30, and either two instances of Tomcat 6.0.26 or two instances of JBoss 4.2.2 with NJX 8.1.2, all on a Windows XP machine.

Install Apache:

  • Just unzip the downloaded archive.

Install mod_jk:

  • Copy mod_jk-1.2.30-httpd-2.2.3.so into the Apache modules directory.
  • Add a file mod-jk.conf to the Apache conf directory (see example contents below).
  • Add a file workers.properties to the Apache conf directory (see example contents below).
  • Reference the file mod-jk.conf in the Apache httpd.conf file with the line
Include conf/mod-jk.conf

Contents of file conf/mod-jk.conf:
(detailed information on the parameters can be found at http://tomcat.apache.org/connectors-doc)

# Load mod_jk module
LoadModule jk_module modules/mod_jk-1.2.30-httpd-2.2.3.so

# Where to find workers.properties
JkWorkersFile conf/workers.properties

# Where to put jk logs
JkLogFile logs/mod_jk.log

# Set the jk log level [debug/error/info]
JkLogLevel info

# Select the log format
JkLogStampFormat "[%a %b %d %H:%M:%S %Y]"

# JkOptions: forward the SSL key size
JkOptions +ForwardKeySize +ForwardURICompat -ForwardDirectories

# JkRequestLogFormat
JkRequestLogFormat "%w %V %T"

# Mount your applications
JkMount /* loadbalancer

# Add shared memory
JkShmFile logs/jk.shm
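With JkMount /* loadbalancer above, every request goes through the balancer. If Apache should serve static content itself, you can instead mount only the application contexts (the paths below are illustrative) and optionally expose the status worker for monitoring:

```apache
# Forward only the web application contexts to the balancer;
# everything else is served by Apache directly
JkMount /cis/* loadbalancer
JkMount /myapp/* loadbalancer

# Expose the status worker (defined in workers.properties) for monitoring
JkMount /jkstatus status
```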

Contents of file conf/workers.properties
(detailed information on the parameters can be found at http://tomcat.apache.org/connectors-doc):

# Define list of workers
worker.list=loadbalancer,status

# Define Node1
worker.node1.port=8009
worker.node1.host=localhost
worker.node1.type=ajp13
worker.node1.lbfactor=1

# Define Node2
worker.node2.port=8109
worker.node2.host=localhost
worker.node2.type=ajp13
worker.node2.lbfactor=1

# Load balancing behaviour
worker.loadbalancer.type=lb
worker.loadbalancer.balance_workers=node1,node2
worker.loadbalancer.sticky_session=1

# Status worker for managing load balancer
worker.status.type=status

  • The ports specified in worker.node1.port and worker.node2.port are the AJP13 ports of your JBoss/Tomcat instances, not the HTTP ports.
  • In JBoss the set of ports to be used is defined in conf/jboss-service.xml
  • In Tomcat the ports are defined in conf/server.xml
  • You must use sticky sessions. Due to its connection to a Natural server session, a session created on a given JBoss or Tomcat instance must not be transferred to a different instance.
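Sticky sessions work because each instance appends its jvmRoute to the session id (e.g. ABC123.node1), and the balancer routes requests on that suffix; only new sessions are spread across the nodes according to the lbfactor weights. A minimal Python sketch of that routing decision (names and weights are illustrative, and mod_jk's actual lbfactor scheduling is approximated here with a weighted random pick):

```python
import random

# Worker name -> lbfactor, mirroring workers.properties (illustrative values)
WORKERS = {"node1": 1, "node2": 1}

def pick_worker(jsessionid=None):
    """Return the worker that should handle a request."""
    if jsessionid and "." in jsessionid:
        # Sticky sessions: the jvmRoute suffix of the session id
        # pins the request to the instance that created the session.
        route = jsessionid.rsplit(".", 1)[1]
        if route in WORKERS:
            return route
    # No session yet (or unknown route): weighted pick by lbfactor.
    names = list(WORKERS)
    return random.choices(names, weights=[WORKERS[n] for n in names])[0]

print(pick_worker("ABC123.node2"))  # prints "node2" (sticky)
print(pick_worker())                # prints "node1" or "node2"
```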

Configure your JBoss worker instances:

In deploy\jboss-web.deployer\server.xml add the jvmRoute name to the Engine element (corresponding to the worker names you used in the file workers.properties), for instance:

<Engine name="jboss.web" defaultHost="localhost" jvmRoute="node1">

In deploy\jboss-web.deployer\META-INF\jboss-service.xml set the attribute UseJK to true:

<attribute name="UseJK">true</attribute>

Configure your Tomcat worker instances:

In conf\server.xml add the jvmRoute name to the Engine element (corresponding to the worker names you used in the file workers.properties), for instance:

<Engine name="Catalina" defaultHost="localhost" jvmRoute="node1">
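If both Tomcat instances run on the same machine, each needs its own set of ports in conf\server.xml; the second instance might look like this (the port values are illustrative, but the AJP port must match worker.node2.port in workers.properties):

```xml
<!-- conf\server.xml of the second instance -->
<Connector port="8109" protocol="AJP/1.3" redirectPort="8443" />
<Engine name="Catalina" defaultHost="localhost" jvmRoute="node2">
```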

Configure Application Designer:

In cis/cisconfig/cisconfig.xml, set the attribute createhttpsession to true:

<cisconfig ... createhttpsession="true" ...>

Now start the Apache HTTP server and the worker instances, and access your application through the HTTP port of the Apache HTTP server.


Hi Thomas

Thanks for the information on load balancing. I have some follow-up questions:

  1. Do you have an idea or suggestion on how to deploy our user-developed applications to the various instances of JBoss? Should we basically copy the subdirectory with our application from one instance to another? Alternatively, can all the instances of JBoss have a link in them pointing to a single instance of the application subdirectory (I am thinking not, but want to ask)?

  2. Do you perhaps have a suggestion on the number of simultaneous users to plan for a single instance of JBoss? From what I read, it seems that 250 threads per instance is on the high side. For example, if we have 1000 users that might be using the system at the same time (say 8am), should we have at least 4-5 instances of JBoss? I understand that threads do not translate directly to the number of users, but am trying to think of what the maximum load per server might be.

  3. Is having a single instance of Apache as the load balancing server a single point of failure? Do you perhaps know of a way to spread the risk over more than one Apache server?