I would like to know what is the best practice for my requirement:
I have a cluster of 2 IS (active/active) and I have developed a JDBC polling notification to trigger a flow service. This flow service MUST be executed on both IS each time the JDBC notification fires. What is the best practice then?
My idea was to configure my polling notification to send a message to a JMS topic, develop 2 flow services (let's call them ServiceA and ServiceB), and configure one JMS trigger with routes to both services (one route with no message selector and a specific durable subscriber, and another route with no message selector and a different durable subscriber).
My only concern with this mechanism (please stop me if I’m wrong) is that I cannot use durable subscribers.
So it means that if one of my nodes is not connected at the moment it was supposed to receive a notification, it will never process it. So I can lose messages.
Oliver,
You could have the polling notification send the message to a JMS topic. From there, you could have two different JMS triggers that invoke ServiceA and ServiceB independently. These JMS triggers should be set up with two different durable subscribers. In effect, this creates two client queues that each receive messages from a single topic.
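If it helps, here is a minimal plain-JMS sketch of that idea (the connection factory, topic, and subscription names below are placeholders, not your actual aliases): two durable subscriptions with different names on the same topic each receive their own copy of every published message, and messages published while a subscription is disconnected are retained for it.

```java
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.Message;
import javax.jms.MessageConsumer;
import javax.jms.Session;
import javax.jms.Topic;
import javax.naming.InitialContext;

public class TwoDurableSubscribers {
    public static void main(String[] args) throws Exception {
        // Placeholder JNDI lookups; the real names depend on the JMS provider configuration.
        InitialContext jndi = new InitialContext();
        ConnectionFactory factory = (ConnectionFactory) jndi.lookup("ConnectionFactory");
        Topic topic = (Topic) jndi.lookup("dbChangeTopic");

        Connection connection = factory.createConnection();
        connection.setClientID("cacheRefresh"); // durable subscriptions are scoped by client ID
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);

        // Two durable subscriptions with different names on the same topic:
        // the provider keeps a separate "client queue" for each, so each one
        // receives its own copy of every message, including messages published
        // while that subscription was offline.
        MessageConsumer subscriberA = session.createDurableSubscriber(topic, "subscriberA");
        MessageConsumer subscriberB = session.createDurableSubscriber(topic, "subscriberB");
        connection.start();

        Message forA = subscriberA.receive(5000);
        Message forB = subscriberB.receive(5000);
        // forA and forB carry the same published notification, delivered once to each subscription.
    }
}
```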
Yes, sure, but I would need to create and manage two different packages (I can't parameterize the name of my durable subscriber, so to deploy this mechanism in production I would have to create two packages in development and duplicate the code).
When you have two services, ServiceA and ServiceB, each would normally contain different logic…
Is your requirement that the same code should be executed twice, once on every server?
Maybe I am not understanding the actual requirement you are describing…
Is it the same service that must be executed on both ISes, or different services on different ISes?
No problem. Actually I had in mind to implement a local cache mechanism (the wM service caching is not appropriate for my requirement). So I want to manage a local Map on each IS and expose services to consume the data.
So whenever there is a change in my database, I want to propagate that event to both ISes so that the local caches stay synchronized.
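Roughly, the cache I have in mind looks like this (just an illustrative sketch; the class name and key/value types are not final):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Minimal per-IS cache sketch: one Map held locally on each Integration Server.
// The service invoked by the JMS trigger would call refresh() whenever the
// polling notification reports a change in the database table.
public class LocalReferenceCache {

    private static final Map<String, String> CACHE = new ConcurrentHashMap<>();

    // Called by the JMS-triggered service on each node when a change event arrives.
    public static void refresh(String key, String value, boolean deleted) {
        if (deleted) {
            CACHE.remove(key);
        } else {
            CACHE.put(key, value);
        }
    }

    // Exposed to the services that consume the cached data.
    public static String lookup(String key) {
        return CACHE.get(key);
    }
}
```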
Well, that would be a workaround, not a real solution. I can't understand why it would not be possible to fulfill such a simple requirement: one notification (JMS message) = one guaranteed execution on each IS of the cluster.
I agree with Holger that Terracotta distributed caching would be a better option for you. However, I also understand that Terracotta may not be an option for whatever reason, extra licensing cost being one of them.
With this said, try this: one package, one service, one trigger, one JMS connection alias, BUT …
On server A, set the Connection Client ID for the alias to one value, and
On server B, set it to a different value.
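For what it's worth, the reasoning behind this (expressed in plain JMS terms, with placeholder names): a durable subscription is identified by the combination of client ID and subscription name, so keeping the same subscription name in the package but giving each server its own client ID should yield two independent durable subscriptions, each receiving every message.

```java
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.MessageConsumer;
import javax.jms.Session;
import javax.jms.Topic;
import javax.naming.InitialContext;

// Sketch of the idea behind per-server client IDs: a durable subscription is
// identified by (client ID, subscription name). With the same subscription name
// on both servers but different client IDs, each server ends up with its own
// independent durable subscription and therefore receives every message.
public class PerServerClientId {

    static MessageConsumer subscribe(String clientId) throws Exception {
        InitialContext jndi = new InitialContext();               // placeholder JNDI setup
        ConnectionFactory factory = (ConnectionFactory) jndi.lookup("ConnectionFactory");
        Topic topic = (Topic) jndi.lookup("dbChangeTopic");

        Connection connection = factory.createConnection();
        connection.setClientID(clientId);                         // e.g. "serverA" on one node, "serverB" on the other
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        connection.start();

        // Same subscription name on both servers, but different client IDs,
        // so the provider treats them as two separate durable subscriptions.
        return session.createDurableSubscriber(topic, "cacheRefreshSubscriber");
    }
}
```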
We never had such a requirement on the clustered ISes in our project.
Our current project is not using active/active clustering for ISes.
We currently only do hardware-based clustering (two boxes where the mount points switch from one box to the other if the first one fails, i.e. high-availability mode). There is a second set of boxes available in another data center in case the main data center fails completely (disaster-recovery mode).
Not really. This capability (or lack thereof, depending on how you look at it) has been consistent from the beginning. In other words, it has always been implied that, for most situations, a service should be executed only once in a cluster.
There have been relatively recent changes to the IS that allow the same service to be executed in parallel on multiple servers in the cluster, though, such as the fact that the scheduler now lets you create a task that runs on all servers. That scheduler capability doesn't seem to fit this particular use case well, though (or perhaps it does; I'm not sure how current the caches really have to be).
Over the years, I have seen other implementations of custom, distributed caches within the Integration Server. The approach in those was slightly different from the one discussed in this thread. In those implementations, a list of all Integration Servers (or remote server aliases) in the cluster was maintained in a configuration file, and whenever a trigger event occurred (e.g. a polling notification document was received), a service would loop through this list and invoke a specific service on each server to cause the caches to be refreshed.
A slight variation on this is to retrieve the list of clustered servers from the IS itself (e.g. by calling wm.server:getClusterNodes). I must say, however, that I think a solution that leverages messaging, as Olivier is attempting, is more robust.
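As a rough sketch of that looping approach (the host list, credentials, and refresh service name below are placeholders, not values from any real project; the same loop could also be built in flow around pub.remote:invoke):

```java
import com.wm.app.b2b.client.Context;   // IS client API (wm-isclient.jar)
import com.wm.data.IData;
import com.wm.data.IDataFactory;

// Sketch of the "loop over all cluster nodes" approach: the hosts, credentials,
// and the refresh service name are assumptions for illustration only.
public class ClusterCacheRefresher {

    private static final String[] HOSTS = { "isnode1:5555", "isnode2:5555" };

    public static void refreshAll() {
        for (String host : HOSTS) {
            Context context = new Context();
            try {
                context.connect(host, "refreshUser", "refreshPassword");
                IData input = IDataFactory.create();              // inputs for the refresh service, if any
                context.invoke("my.cache", "refreshLocalCache", input);
            } catch (Exception e) {
                // A node that is down simply misses this refresh; this is exactly
                // the weakness compared to durable messaging noted in the thread.
                System.err.println("Refresh failed on " + host + ": " + e.getMessage());
            } finally {
                context.disconnect();
            }
        }
    }
}
```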
Sorry but your advice does not work:
On server A, set the Connection Client ID for the alias to one value, and;
On server B, set it to a different value
This simply causes round-robin distribution between the JMS triggers.
Sorry for the delayed response. I’ve been very busy and I haven’t had time to look through the forums in a little while. I had some time today so I decided to catch up on some of the posts I had missed.
I gave my own suggestion a try and I must tell you that it does work for me as I had expected. I am using IS 9.6 with the Broker as my JMS provider.
I have a cluster with two Integration Servers containing the exact same package, the exact same JMS trigger, the exact same JMS connection, etc. I started by leaving the connection client ID exactly the same on both servers. I published the test document multiple times and confirmed that the documents were being distributed to the ISes in round-robin fashion, as they should be. From MWS, I also confirmed that the topic only had one client listed in the Clients tab, with the name ##.
I then changed the connection client ID on one of the servers, and when I published new documents to the topic, both servers started receiving them simultaneously. From MWS, I also confirmed that the topic now had two clients listed in the Clients tab.
You may have come up with a different solution by now, but I wanted to make sure I closed the loop here for others who may come across this thread in the future. If you haven’t come up with a different solution, please give this another shot because it should give you what you’re looking for.