I wanted to run some options by the group that are being discussed around the use of a publish-and-subscribe model.
Here is the scenario:
Transactions are received into TN. These need to be processed into various back-end systems. Currently, none of the back-end systems can connect to webMethods; webMethods has to push the transactions to them.
Approach 1:
A TN processing rule invokes a service that converts the transaction into a canonical format and publishes it to the broker. Multiple trigger clients read the canonical back into IS, and each processes the document into its respective back-end system (see the publish sketch after the pros/cons).
Pros:
In the future, if a back-end system can pull transactions, it can create a broker client and subscribe to the documents.
Cons:
Too many hops and broker overhead for no real benefit in the current situation.
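For illustration, here is a minimal Java-service sketch of the publish step in Approach 1. The document type name (MyCanonicals.docs:OrderCanonical) and the assumption that the canonical has already been mapped into the pipeline are mine, not part of the original design; pub.publish:publish is the standard WmPublic publishing service.

```java
import com.wm.data.*;
import com.wm.app.b2b.server.Service;
import com.wm.app.b2b.server.ServiceException;
import com.wm.lang.ns.NSName;

public static final void publishCanonical(IData pipeline) throws ServiceException {
    // Assumes an upstream mapping step has placed the canonical in the pipeline.
    IDataCursor pc = pipeline.getCursor();
    IData canonical = IDataUtil.getIData(pc, "canonical");
    pc.destroy();

    // Build the input for pub.publish:publish.
    IData pubInput = IDataFactory.create();
    IDataCursor c = pubInput.getCursor();
    IDataUtil.put(c, "documentTypeName", "MyCanonicals.docs:OrderCanonical"); // assumed publishable doc type
    IDataUtil.put(c, "document", canonical);
    c.destroy();

    try {
        // Hands the document to the broker for delivery to subscribed triggers.
        Service.doInvoke(NSName.create("pub.publish:publish"), pubInput);
    } catch (Exception e) {
        throw new ServiceException(e);
    }
}
```

Each subscribing trigger would then invoke its own processing service to map the canonical into its back-end system.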
Approach 2:
Maintain a DB table of services to execute. A TN processing rule invokes a main service, which reads this table (looked up by some sort of key) and executes every service it lists (see the dispatcher sketch after the pros).
Each of these services contains the code for processing into one back-end system.
Pros:
No broker overhead or extra hops.
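For illustration, a sketch of what that main service might look like. The DISPATCH_RULES table, its columns, and the JDBC connection details are invented for the example; only Service.doInvoke is the real IS API.

```java
import java.sql.*;
import com.wm.data.*;
import com.wm.app.b2b.server.Service;
import com.wm.app.b2b.server.ServiceException;
import com.wm.lang.ns.NSName;

public static final void dispatch(IData pipeline) throws ServiceException {
    IDataCursor pc = pipeline.getCursor();
    String docType = IDataUtil.getString(pc, "docType"); // the lookup key
    pc.destroy();

    String url = "jdbc:oracle:thin:@dbhost:1521:ESB"; // hypothetical connection details
    try (Connection con = DriverManager.getConnection(url, "wmuser", "secret");
         PreparedStatement ps = con.prepareStatement(
             "SELECT SERVICE_NAME FROM DISPATCH_RULES WHERE DOC_TYPE = ?")) {
        ps.setString(1, docType);
        try (ResultSet rs = ps.executeQuery()) {
            while (rs.next()) {
                // Invoke each target-system service with the shared pipeline.
                // A failure here aborts the loop -- per-target error handling,
                // retries, and tracking are exactly where this design starts to grow.
                Service.doInvoke(NSName.create(rs.getString("SERVICE_NAME")), pipeline);
            }
        }
    } catch (Exception e) {
        throw new ServiceException(e);
    }
}
```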
Is there a better way? Please give input on each of the above and on other approaches.
Is there any relationship among the target systems? Can one of them fail while all the others succeed? Or if one fails, must they all fail? Do you need to track the progress of any or all of the targets?
You’re probably overestimating the effect of the “too many hops” and “broker overhead.” Both are most likely meaningless in the scheme of things. And you’re probably underestimating the impact of a “main service” that reads a DB and does the dispatch work. This will likely become complex as it evolves to address flexibility, tracking, scalability, etc. and you’ll need to add administrative processes to update/manage the list.
The decision about whether or not to introduce the broker and pub/sub should not be driven by performance considerations. Pub/sub is useful:
When there are multiple, independent systems that need the same data, and the list of interested systems actually changes from time to time. There should be an immediate or near-term need; don’t introduce pub/sub just so “maybe a system in the future can be added.” If the need isn’t there now, it most likely never will be, or it will be such that the current implementation would need to change anyway. Creating reusable canonical documents is hard: if only one system is being connected at present, the canonical will most likely be overly influenced by that one system, making it hard to add new systems without requiring changes.
As the communication basis for a higher-level capability, such as BPM. BPM focuses on managing the process and can deal with system interdependencies more readily in a model. In this case, pub/sub becomes a behind-the-scenes player.
Another possibility is to use the profiles in TN to indicate “subscriptions” to specific document types. You can use the TN Console to set extended fields that record the subscriptions and indicate the appropriate processing service.
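A rough sketch of that idea, with the profile lookup stubbed out since the exact WmTN profile services vary by version; the pipeline field names (receiverID, docTypeName) and the helper are assumptions:

```java
import com.wm.data.*;
import com.wm.app.b2b.server.Service;
import com.wm.app.b2b.server.ServiceException;
import com.wm.lang.ns.NSName;

public static final void routeBySubscription(IData pipeline) throws ServiceException {
    IDataCursor pc = pipeline.getCursor();
    String partnerID = IDataUtil.getString(pc, "receiverID");
    String docType   = IDataUtil.getString(pc, "docTypeName");
    pc.destroy();

    // Assumed helper; a real version would read the partner profile's
    // extended fields through the WmTN profile services.
    String svc = lookupSubscriptionService(partnerID, docType);
    if (svc != null) {
        try {
            Service.doInvoke(NSName.create(svc), pipeline);
        } catch (Exception e) {
            throw new ServiceException(e);
        }
    }
}

private static String lookupSubscriptionService(String partnerID, String docType) {
    // Stub for illustration only -- replace with a TN profile extended-field lookup.
    return null;
}
```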
Another possibility is to use Modeler/Designer to model the integrations. This would be most useful if there is some interdependence between the target systems.
I agree with Rob, very good points. I would add that the broker is well suited for most things; however, long-term storage of data isn’t one of them. The interface into your transactions, as you know, is MwS :eek: . Once folks want to keep transactions around in the integration layer, I generally turn toward TN; in my opinion it does a better job of handling that requirement. I don’t like the mega-service idea either: too much complexity and a lot of maintenance headaches. Keeping things small and simple has a lot of advantages.
As far as canonical documents go, I personally don’t like spending too much time coming up with the perfect master model; it can become the very problem it was trying to solve. I like assembling documents from smaller reusable pieces versus the big-bang approach. That seems to lessen the impact of the change that is not supposed to happen.
Since the requirement is only that IS A publishes and IS A also subscribes, how come no one wants to go with the local pub/sub approach? What are the pros and cons of that approach?
One con I would raise with that approach is performance. The IS local publish option does not perform nearly as well as the broker. If you have high transaction volumes or throughput requirements, I would go with the broker rather than IS.
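For reference, local publishing is the same pub.publish:publish call with the local flag set, so the document never leaves the IS. A minimal sketch, with the document type name assumed as before:

```java
import com.wm.data.*;
import com.wm.app.b2b.server.Service;
import com.wm.app.b2b.server.ServiceException;
import com.wm.lang.ns.NSName;

public static final void publishLocal(IData pipeline) throws ServiceException {
    IDataCursor pc = pipeline.getCursor();
    IData canonical = IDataUtil.getIData(pc, "canonical");
    pc.destroy();

    IData pubInput = IDataFactory.create();
    IDataCursor c = pubInput.getCursor();
    IDataUtil.put(c, "documentTypeName", "MyCanonicals.docs:OrderCanonical"); // assumed doc type
    IDataUtil.put(c, "document", canonical);
    IDataUtil.put(c, "local", "true"); // keep the publish inside this IS, bypassing the broker
    c.destroy();

    try {
        Service.doInvoke(NSName.create("pub.publish:publish"), pubInput);
    } catch (Exception e) {
        throw new ServiceException(e);
    }
}
```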