One-to-many distribution list using TN

Our environment is IS 6.1 on Solaris 8. Is there some TN configuration we can use to generically FTP a single file from one sender to a distribution list of receivers? (We have reasons for not using Broker in this scenario.)

Thanks in advance,
Julie

Hello,
The short answer is no, since a delivery transport is a sequential action. But if you don’t mind using local triggers, you can do what you might do with a Broker without actually sending information out to it: have an FTP service that sends to each individual recipient (provided the same rules apply to all) and run it against the trigger in whichever fashion you like.
Good day.

Yemi Bedu

Although only one processing rule will fire when a document is submitted to TN, there is no reason why the flow service that gets invoked cannot re-submit the document to TN multiple times (for example, with routeXML) with different recipients. Since you are not using the Broker, this would allow you to log and manage the delivery status of those re-submissions to the different recipients independently.
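To make that concrete, here is a minimal Java-service sketch of such a fan-out. TN_parms and wm.tn.doc.xml:routeXml are standard TN mechanisms, but the pipeline field names and the assumption that the recipient list is already in the pipeline are illustrative:

```java
import com.wm.app.b2b.server.Service;
import com.wm.data.IData;
import com.wm.data.IDataCursor;
import com.wm.data.IDataFactory;
import com.wm.data.IDataUtil;

public class FanOut {
    // Re-submit one XML document to TN once per recipient. Assumes the
    // pipeline already holds "node" (the parsed XML) and "recipients"
    // (TN partner IDs); both names are illustrative.
    public static void fanOutToRecipients(IData pipeline) throws Exception {
        IDataCursor pc = pipeline.getCursor();
        Object node = IDataUtil.get(pc, "node");
        String[] recipients = IDataUtil.getStringArray(pc, "recipients");
        pc.destroy();

        for (String receiverId : recipients) {
            // TN_parms lets the submitter override routing attributes.
            IData tnParms = IDataFactory.create();
            IDataCursor tc = tnParms.getCursor();
            IDataUtil.put(tc, "ReceiverID", receiverId);
            tc.destroy();

            IData input = IDataFactory.create();
            IDataCursor ic = input.getCursor();
            IDataUtil.put(ic, "node", node);
            IDataUtil.put(ic, "TN_parms", tnParms);
            ic.destroy();

            // Each routeXml call creates its own TN document record,
            // so delivery status can be tracked per recipient.
            Service.doInvoke("wm.tn.doc.xml", "routeXml", input);
        }
    }
}
```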

But try whenever possible not to bypass the Broker, as sending canonical messages through the Broker is the key to realizing the strategic value of your entire integration platform.

Julie,

You can do this with a processing rule in Trading Networks. You can submit the document into TN using either routeXML or tn:receive. Either way, a processing rule will pick it up.

At this point, you’ve recorded the base (original) document in TN for processing. The receiving flow service could query either an external database or the TN database for the recipient information it needs. In any case, the flow service would then resubmit the document back into TN for delivery.

For this scenario to work, you will typically have a status field in the document that the processing service keys off. Once you submit it back, that field must be different; otherwise TN would treat it like the original, creating a continuous, never-ending loop.
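For illustration, the loop guard in the processing service might look like this Java sketch; the status values and the downstream service names (myApp.distribution:fanOutToRecipients, myApp.distribution:deliverViaFtp) are all made up:

```java
import com.wm.app.b2b.server.Service;
import com.wm.data.IData;
import com.wm.data.IDataCursor;
import com.wm.data.IDataUtil;

public class StatusGuard {
    // Processing-rule service: branch on a status field so a re-submitted
    // copy is delivered rather than fanned out again (avoiding the loop).
    public static void process(IData pipeline) throws Exception {
        IDataCursor pc = pipeline.getCursor();
        String status = IDataUtil.getString(pc, "status");
        pc.destroy();

        if ("ORIGINAL".equals(status)) {
            // First pass: look up recipients and re-submit one copy per
            // recipient, flipping status so this branch won't match again.
            Service.doInvoke("myApp.distribution", "fanOutToRecipients", pipeline);
        } else {
            // Re-submitted copy: hand it to the actual delivery service.
            Service.doInvoke("myApp.distribution", "deliverViaFtp", pipeline);
        }
    }
}
```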

By taking these steps, you will have a record of the document as it flows in/out of your system.

Another possibility is to have a flow service query a database that contains the FTP data, structure each message, and then routeXML the doc into TN. You would need a way to determine which messages had already been sent out in the case of a failure, so that TPs don’t receive the same message twice (unless they are using TN, which should produce an error, since duplicates are usually not allowed). In any case, you would need a way to make sure all of the recipients receive the message, and that they receive it only once (replicating TN functionality here).
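One way to do that bookkeeping is a per-recipient row with a sent flag that is claimed atomically; this JDBC sketch uses invented table and column names:

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

public class SendLog {
    // Returns true if this call claimed the send; false if the row was
    // already marked sent (or doesn't exist), so the caller should skip
    // the recipient. Note the trade-off: claiming before sending means a
    // failed send needs the flag reset by recovery logic.
    public static boolean claimSend(Connection conn, String msgId, String receiverId)
            throws SQLException {
        String sql = "UPDATE distribution_log SET sent = 'Y', sent_at = CURRENT_TIMESTAMP "
                   + "WHERE msg_id = ? AND receiver_id = ? AND sent = 'N'";
        try (PreparedStatement ps = conn.prepareStatement(sql)) {
            ps.setString(1, msgId);
            ps.setString(2, receiverId);
            return ps.executeUpdate() == 1; // exactly one row claimed
        }
    }
}
```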

The IS/TN configuration is pretty old school, but it works very well.

Good luck.

Ray

Perhaps this e-zine article would be helpful:
wmusers.com

It describes how to do subscription-style delivery with TN.

I respectfully disagree with Mark–my position is that you should bypass the broker early and often. :wink: Using canonical formats is doable with IS/TN too. I may sound like a broken record (I’ve posted this viewpoint many times) but IMO the majority of integrations do not need the pub/sub facilities of Broker, truly the only value-add that it has.

IMO, TN is a broker too (tn:receive is the publish, rules are the subscriptions)–it simply has an easily-overcome restriction of matching only one subscription (just one rule). With the process described in the e-zine article, the limitation is eliminated.

I won’t go so far as to say that TN and the Broker are functionally equivalent, but if you can do the job with one tool, why use two? There are times when Broker should be used. I simply contend that most of the time, it doesn’t need to be used.

Rob

Hi Rob,

I see the Broker as delivering key strategic benefits that are often overlooked in the rush to “get integrations done”:

  1. Enable and reduce the cost of change. By de-coupling the sender from the receiver, when your client replaces a receiving application, the sender can remain unaware. In the current excitement over web services, I see a lot of people building point-to-point integrations. (Good old inter-application spaghetti!) I’d go so far as to say that web services have made it easier to build bad integrations than ever before.

  2. Robustness and performance. The Broker implements a high-performance store-and-forward engine that can be scaled up onto multiple computers over a dispersed geographic area. I have always had concerns about the one-TN-instance-per-Integration-Server limitation. Configuring multiple TN delivery hops to get a message across an ocean doesn’t sound real good to me.

  3. Implements an instantly probe-able “data highway” (credit Rockwell Automation) for business control data in a way that is architecturally identical to manufacturing plant “data highways” for plant control data that are the basis of current manufacturing process control. This is where things get really exciting, leveraging your investment in webMethods for Business Process Monitoring and control.

As the e-zine article points out, you can write an IS service to simulate this behavior using “generic” TN rules and development of some TN aware logic. But the Broker is specifically designed for this and delivers it “out-of-the-box”.

Coding a getSubscribers service against profile fields assumes that you have a somewhat stable understanding of the things that make a business document interesting to a potential subscriber. Trigger filters defer the “what is interesting” decision until the last possible moment… when you are creating the subscriber, perhaps years after the publisher was developed.
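To make the contrast concrete, a hypothetical getSubscribers might be nothing more than a query whose notion of “interesting” is frozen at development time (all table and column names below are invented):

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.util.ArrayList;
import java.util.List;

public class Subscriptions {
    // Look up who should receive a given document type. The "what is
    // interesting" decision lives in this table's contents and shape,
    // fixed back when the publisher side was built.
    public static List<String> getSubscribers(Connection conn, String docType)
            throws SQLException {
        List<String> receivers = new ArrayList<>();
        try (PreparedStatement ps = conn.prepareStatement(
                "SELECT receiver_id FROM doc_subscriptions WHERE doc_type = ?")) {
            ps.setString(1, docType);
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    receivers.add(rs.getString("receiver_id"));
                }
            }
        }
        return receivers;
    }
}
```

A trigger filter, by contrast, is just an expression attached to the subscription itself, so each new subscriber can decide what is interesting without touching code like this.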

So, respectfully disagreeing: considering that the Broker requires one service invoke to publish and one trigger object to subscribe, using the Broker is no more difficult than simulating pub/sub with TN, and maybe even easier.

I’m guessing that we would both agree that webMethods still has a way to go in seamlessly blending the Integration Server and Broker technologies.

:sunglasses:

I will be the happiest camper on the block when the functions of the Broker are much more transparent to the Integration Server environment.

Regards, and thanks for all your great work on this forum. You have taught me a great deal over the years!

You’re too kind! I fear I get on a soapbox a bit too often and I appreciate the group tolerating my diatribes–well, at least publicly they seem to be tolerated since no one has yet told me to quiet down.
:wink:

You make some very valid points. Here are some additional thoughts to ponder (playing devil’s advocate to stir discussion/debate).

  1. The “decoupling” and “location transparency” items are almost always brought up when talking about broker vs. IS. I agree fully that Broker provides these things. What I think people tend to forget is that IS provides these things too. Simply by virtue of a sender talking to IS instead of directly to a receiver. IS is a broker in the general sense. It is an intermediary between two or more applications. The sender of a doc has no idea where the real recipients are, what format they really want, whether they are up and running, whether they are in the same server rack or half a world away.

I agree that web services are contributing more to the point-to-point approach–much the way IS services tend to get coded. But that isn’t always bad. Much is made about the m*n potential interfaces vs. m+n potential interfaces. In many cases, real value is achieved in pursuing m+n. It’s been my experience, however, that m*n is never a real possibility. A typical app needs to talk to 1-3 different apps, and that’s it. Not only that, 1-2 years after one is done implementing the cool integration, it gets scrapped for something else, so what’s the point of implementing flexibility (at the cost of complexity) when the shelf-life is short anyway?

Don’t misunderstand me: there are cases where m*n is a real issue to deal with and the use of “canonicals” and other techniques is essential–I just contend that the need for that really isn’t that often.

Does using Broker automatically provide a greater level of decoupling than IS/TN? My assertion is no, it does not.

The coolness of pub/sub is that implementers don’t have to do much of anything to replicate a document to multiple receivers (configure an adapter or two). Beyond that, IS/TN and Broker can do just about the same things, though in different ways.

  2. There is no doubt that Broker kicks major booty when it comes to messaging throughput. For some integrations, it is a must.

From another viewpoint, the Broker has to have high throughput cuz it doesn’t do jack squat otherwise. It doesn’t do transformations. It doesn’t do much logic (filtering, message ordering). It does one thing–accepts published documents and distributes them to queues for each subscriber. That’s about it. Everything else is left as an exercise for the adapters. So-called “client-side queueing” (which is a debate for another thread), transformation, logic, business process management, database access, http/ftp communication, web services, etc., etc. are all the domain of IS. All that high-powered throughput gets throttled by an interpreted adapter run-time environment. Broker is great if it needs to get a lot of docs to a lot of different adapters–but if there are just a few adapters and all the docs go to them, then it’s a “hurry-up-and-wait” scenario as docs sit on queue waiting to be processed by relatively slow adapters.

  3. Interesting, but yet again, not unique to (nor actually done by) Broker. Broker just moves the data amongst queues. The power of BPM is provided by something else, not Broker. Again, don’t get me wrong–I’m not saying Broker doesn’t provide benefit. A BPM engine can easily subscribe to docs of interest. And wM has written their stuff to use docs published to Broker, not IS. But they could just as easily have done so with IS.

I agree that managing

Thanks Rob,

As always, great food for thought!

I agree that you can get decoupling and location transparency with IS. You can convert in and out of canonical document formats entirely within IS services. A canonical message strategy is important independent of message transport mechanism. I hope to back up that assertion below.

I would disagree that implementing a canonical strategy on the IS makes it a kind of “Broker” providing optimal location transparency and decoupling. A message broker is defined by its store-and-forward and pub/sub capabilities. (And as you point out, its design patterns don’t require much else.) It delivers the messages reliably with high speed, filters them, and delivers them to client queues.

I agree that the adapters are the bottleneck. We scale by adding additional Integration Servers. In that situation, the capability of having multiple Integration Servers doing pub/sub through a common high-speed messaging component looks even better.

I’ve never had any problems with Broker administration adding any significant headaches. The Broker has always seemed like the “never breaks” part of the integration environment. It’s almost always the Integration Server that I’ve seen getting bounced for one type of problem or another. (Witness the plethora of “graceful shutdown” scripts available on wmUsers.) And as you say, the job the Broker does is conceptually much simpler than that of the Integration Server.

I have to admit that using the Broker and Integration Server together in an efficient manner was not intuitive until webMethods 6. And as we agreed before, there still needs to be more work in blending the 2 technologies, for instance:
The syntax rules need to be the same in both environments.
There should not be 2 places for filters.
IS/Broker document synchronization should be utterly transparent and invisible to developers.

As you point out, you can make the IS acquire store-and-forward and pub/sub capabilities with some clever configuration and development. But it’s the Broker that was designed for this. If you worked hard at it, you could build an entire e-commerce website entirely on the Integration Server. You’d be using less technology and fewer servers. Why not do it? I know you’d agree that while it might be technically possible to do it, the architectural “power curve” of the Integration Server is not in serving up general web site content or implementing large and complex business logic functions.

I’m not sure I understand what you are referring to when you mention the “extra layer of stuff to deal with” regarding the Broker. If you mean the occasional document type synchronization issues, then I’d agree. Hopefully, this will get fixed in the next major release. I really don’t care if the Broker disappears entirely as a “separate” entity. As long as there is some high-performance component providing scalable and reliable store-and-forward, and publish-subscribe (out-of-the-box) I’ll be happy.

While a business problem may look like a point-to-point solution today, over the years we’ve seen business environments change constantly, often within a few months. These days, it seems that management (much less technical types) cannot predict what the business will look like in a few years. Use pub/sub today to expose your order update process to your e-commerce website. Your company may add a new application in 6 months that needs visibility of that information. Why commit to rigidity when building in flexibility requires so little additional effort? I’ve gotten payback from this approach many times over the years.

Now the really exciting stuff coming along, i.e. Business Activity Monitoring, and Business Process Optimization, technologies like webMethods Optimize, depend on free access to th

Mark wrote:
“I would disagree that implementing a canonical strategy on the IS makes it a kind of “Broker” providing optimal location transparency and decoupling.”

Sorry if I implied that using canonicals makes IS act like a broker. I’ll clarify: IMO, IS is a broker (lower-case ‘b’ is intentional). No qualifiers on that statement. It provides location transparency out of the box–a sender doesn’t know where any of the receivers are and vice versa. It decouples senders from receivers (they don’t know about each other). Senders send docs to IS in a particular format and a “miracle occurs in step 2” to send the document in possibly another form to 1 or more recipients.

Pub/sub isn’t necessary to provide location transparency/decoupling. It makes establishing data paths easier to deal with for sure, but it doesn’t provide “more” location transparency/decoupling. Those things are achieved in full as soon as message senders talk to an intermediary instead of directly to recipients.

Many Broker implementations don’t end up using canonical message formats–thus while they’re not point-to-point at the communication level the end-points are essentially tightly coupled at the message level and thus more or less a point-to-point implementation, despite the use of pub/sub.

I guess the reason I keep debating this side of the argument is this: there seems to be a prevalent point of view that Broker is absolutely necessary to achieve location transparency/decoupling and IS doesn’t provide any without a bunch of work. IMO, this is incorrect. Broker isn’t a miracle pill and IS isn’t inherently a point-to-point solution.

“Your company may add a new application in 6 months that needs visibility of that information.” Absolutely, and an IS-only solution can be used to address that too. Pub/sub is not the only way to solve this (and I contend is rarely the way to solve it).

Wrt the “extra layer of stuff to deal with” comment, to use Broker one needs to install an additional set of software, usually on another server. The extra layer of stuff pertains to the administration associated with installing, configuring, monitoring, etc. of Broker.

Kind readers out there, please don’t misunderstand my POV on this topic. Broker is a good product and should be used in many instances. I have taken a full drink from the “pub/sub punch bowl”. My first integration projects used Broker (at the time known as ActiveWorks from Active Software) and it provides a great deal of flexibility (and complexity at times) for solution design.

My main points are: 1) location transparency/decoupling can be achieved without pub/sub; 2) pub/sub provides great flexibility, but sometimes at the cost of more complexity than is needed for the particular scenario being addressed; 3) IS is not inherently a point-to-point solution; 4) Broker is not inherently a fully-decoupled solution (refer to message format comment above); 5) most business integrations don’t need pub/sub (the Broker definition of such).

Rob wrote:

“Many Broker implementations don’t end up using canonical message formats–thus while they’re not point-to-point at the communication level the end-points are essentially tightly coupled at the message level and thus more or less a point-to-point implementation, despite the use of pub/sub.”

Absolutely! I think this is worse than point-to-point with just the Integration Server because it conveys the illusion that de-coupling has occurred when it has not.

I would agree that many (I wouldn’t go so far as most, at least with my recent work) integrations do not require pub/sub when they are originally specified (Broker definition or otherwise).

My reason for doing pub/sub as a general rule is that:

  1. Business conditions change quickly.
  2. The additional work for doing pub/sub with the Broker vs. point-to-point has been an hour or less per integration for me.

Specifically (a rough sketch of these steps follows the list):
Add 1 flow step to publish the canonical.

Create a trigger and trigger-handler service with 1 map step and 1 invoke to de-envelope the publishable document and invoke the receiving service.

Use save/restore pipeline in the trigger handler to allow you to debug (step through) the flow after receiving the published document.
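Here is that pattern as a Java sketch; pub.publish:publish and pub.flow:savePipelineToFile are standard WmPublic services, while the document type, field names, and receiving service are invented for illustration:

```java
import com.wm.app.b2b.server.Service;
import com.wm.data.IData;
import com.wm.data.IDataCursor;
import com.wm.data.IDataFactory;
import com.wm.data.IDataUtil;

public class PubSubSketch {
    // Step 1: the single added flow step, publishing the canonical.
    public static void publishOrder(IData order) throws Exception {
        IData input = IDataFactory.create();
        IDataCursor ic = input.getCursor();
        IDataUtil.put(ic, "documentTypeName", "myApp.docs:orderCanonical");
        IDataUtil.put(ic, "document", order);
        ic.destroy();
        Service.doInvoke("pub.publish", "publish", input);
    }

    // Step 2: the trigger-handler service. The published document arrives
    // in the pipeline under its document reference name.
    public static void handleOrder(IData pipeline) throws Exception {
        // Step 3: snapshot the pipeline so you can restorePipelineFromFile
        // later and step through the receiving flow in Developer.
        IDataCursor pc = pipeline.getCursor();
        IDataUtil.put(pc, "fileName", "orderCanonical.pipeline.xml");
        pc.destroy();
        Service.doInvoke("pub.flow", "savePipelineToFile", pipeline);

        // De-envelope: pull the document out of the published envelope.
        pc = pipeline.getCursor();
        IData order = IDataUtil.getIData(pc, "orderCanonical");
        pc.destroy();

        // One map step's worth of work, then the one invoke.
        IData input = IDataFactory.create();
        IDataCursor ic = input.getCursor();
        IDataUtil.put(ic, "order", order);
        ic.destroy();
        Service.doInvoke("myApp.orders", "processOrder", input);
    }
}
```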

With such a low cost for doing pub/sub, why not spend the extra few minutes and create something that can easily adapt to business conditions that weren’t originally anticipated?

Beyond that Rob… I guess we will have to agree to disagree on this one.
:sunglasses:

Thanks again for your feedback and hope to see you some time on an interesting project!

We really do agree on principle.

I don’t use Broker by default, though I almost always do the conceptual equivalent of pub/sub using something–with IS I use TN. The steps to doing that are almost exactly the same as what you describe with Broker.

So why not use Broker you ask? One reason is because there are more moving parts with a Broker-based solution, which has to be weighed against the benefits gained. Using TN as a pseudo-pub/sub engine cuts the number of moving parts and is by far easier to deal with on many fronts.

“…that can easily adapt to business conditions that weren’t originally anticipated?”

The underlying assumption being that this sort of flexibility can only be achieved using Broker? I do not agree with that assumption–which is the basis for this whole thread! :slight_smile:

Using Broker is not a low incremental cost. Installing and managing a Broker server is not trivial.

I hope that I’m not being perceived as saying “I’m right, you’re wrong” in any way. The approach you have of using Broker by default is an absolutely valid thing to be doing–not that I’m in any position to be handing out seals of approval :wink: My intent is to point out that IS alone is quite capable of providing flexible, robust solutions that in many ways are simpler than using Broker. Pub/sub is a Good Thing. But it’s not Every Thing. :wink:

Rob wrote:

"Using Broker is not a low incremental cost. Installing and managing a Broker server is not trivial. "

I can only assume you’ve had some bad experiences here. My experience on the 6.1 platform (Windows) has been that the broker server installs with the rest of the webMethods platform and creates a default broker with no additional effort. Just configure the Broker name, port, and client prefix after install and I’m done.

Configuring the Integration Server’s JDBC connection pools after (or during) install takes much more time than setting up the Broker.

Broker and monitor are automatically set up as Windows services, so nothing to worry about on machine reboot.

Where did you encounter any significant effort with this? What were your management headaches?

Regards

Mark,

Rob Eamon has been around the broker technology for a long time. I first started on this board back in 2002 and I looked to Rob for insight, help and plain good advice.

While it may seem easy to install and maintain, you’ve probably never experienced a Broker meltdown. I have, and so have my customers. The only way to safely process your messages with Broker is through hardware clustering, and even then it can be very tricky.

One recent client used Broker in a Win2000 environment. We had nothing but trouble in their particular environment. Engagements prior to and after that one turned out trouble-free.

As you pointed out, the infrastructure relies heavily on Broker for auditing purposes. One thing you may not know is the extreme number of internal messages that are passed around on the Broker. If you get a chance, download the old Manager from the Advantage site and take a look at the traffic activity. Even when the Broker is doing “nothing”, something is going on.

Also, take note of how easy it is for you to connect to the Broker with the Manager. What does this tell you?

I agree with Rob on this one.

Ray

“I can only assume you’ve had some bad experiences here.”

Not at all. It’s just that, as Ray ably points out, engineering a robust production environment to include Broker as a component is more involved than just the automatic install that wM provides. The first step in most cases is to put Broker on its own box, as having it run on the same box as IS (adapters) has a significant impact on performance as the processes compete for CPU and memory. One quickly gets to having multiple Broker machines and multiple IS machines to provide at least rudimentary failover capability.

I don’t mean to appear to be bashing Broker. I’ve used it successfully in several projects (though it’s been a while now) and will likely use it again when the situation calls for it. Where we differ is that you prefer to use Broker by default while I prefer to not use Broker by default–two very valid approaches.

About all I can add is that I have been using the Broker since Active Software days, running it on the same machine as the “main” Integration Server and processing many thousands of messages daily. I do run adapters on a separate server. This environment serves a Fortune-200 sized company and still has a lot of headroom left.

It’s not the Broker that has been the management challenge. It’s been the Integration Server… getting locked out of the Admin console, scheduler hangs, JVM memory leaks, etc, etc. Through them all, the Broker just keeps on running without stopping.

I don’t let these headaches dissuade me from using the Integration Server though! webMethods product management folks have told me it is their goal to make the Integration Server as stable as the Broker!

I’ve never seen a Broker meltdown. Can you elaborate? Perhaps the type of loads I’ve been throwing at the Broker are in its “sweet spot”.

Anyway, given my experience, (easy to use, never breaks) you can see why I always use it.

Regards

Mark,

You bring up some memories for me:

  1. The days when webMethods didn’t have an installer and everything was manual.

  2. JVM memory leaks. I have spent a great deal of time installing, uninstalling and tuning JVMs depending on the environment. Solaris (imagine that) has the tightest integration between the JVM and OS of any platform. I have had less trouble on Solaris than any other platform. AIX had the best support (IBM) of any platform that I have experienced. HP-UX sux. Oops, sorry. I’ve had nothing but headaches on EVERY HP-UX deployment. Windows has been fairly trouble-free for me, but the memory leaks relating to competing processes at the OS level leave me frustrated and the boxes locked and ready for reboot.

  3. I have audited every single engagement/deployment/project that I have participated in over the last several years to tweak/tune/optimize the Integration Server so that it didn’t crash. The devil is in the details, I suppose. Many times, a JVM change and a bump up in the server threads do the trick.

  4. I’m a late bloomer with Broker. I took training in the Old Stuff but never really used any of it. I worked on a very stable system that didn’t require my intervention. When webMethods introduced 6.x, it brought with it the abstraction layer for broker implementation. I still use the old manager and document tracker apps.

  5. JDBC Adapter: I still use WmDB when I can to verify/validate and do quick and dirty stuff. My clients who have used webMethods prior to the 6.x release normally have a substantial code investment in WmDB. I like it.

In my opinion, Integration Server is much more stable than Broker, provided it isn’t running renegade code (endless loops, etc.).

Thanks.

Ray

Thanks for sharing your experiences. I’ve got a few hpux-java scars deploying some apps on Tomcat, and absolutely agree.

I once had an opportunity to talk with someone in the HP Java lab. I was complaining about one of the differences in their JVM versus Sun. ( Ever heard of the “doCloseWithReadPending” HP JVM option?) Anyway, this guy had the nerve to assert that it was more important for HP’s Java to do things the “right” way than to exactly mimic the Sun JVM. “Write Once, Run Everywhere”… sure! :sunglasses:

The 1.4.2 IBM JVM bundled with webMethods doesn’t include all the new classes from the Sun 1.4.2 collections framework - clients get nervous about switching JVMs, so… more adjustments by platform.

I also think it took a while for webMethods to get serious about HP-UX.

Threads… always seem to get conflicting configuration advice from different people. Sometimes from the same person on different days!

Still interested in hearing any Broker meltdown war stories. I and others would benefit by learning about the traps.

Regards