Trading Networks vs Optimize


My team members and I have proposed a ‘concept’ for a project that consists mainly of a pool of webMethods translation and delivery services managed by Trading Networks processing rules. The services will be grouped into ‘interfaces’, and each interface follows this principle:

• Any interface has exactly:

  - one TN document type
  - one TN processing rule (the one rule assigns both the translation service and the delivery service)
  - one translation service
  - one delivery service

When we presented this concept, we were asked whether it would utilize Optimize for Process in any way. It does not, as this is not a model-based solution (which, to my understanding, is a requirement for utilizing Optimize).

If we were to change to a model-based concept instead of utilizing Trading Networks processing rules, what benefits/advantages would the model-based/Optimize solution provide that our current Trading Networks approach can’t?

Thanks for your time!

You can have TN interact with a model-based translation service. A document received by TN can kick off a process.


Thanks for the reply. Your suggestion was one of the possible ‘concepts’ we had considered at one point.

The reason we chose to keep the processing logic inside Trading Networks only is to have a streamlined solution involving as few webMethods components and as little build effort as possible, since we would not need to create a model. Another reason is that we have limited webMethods knowledge overall but are familiar with Trading Networks. We created a ‘sample’ of our approach, and so far it seems to be working.

If we were to create a similar solution utilizing a model (whether or not kicked off by TN) together with wM Optimize for Process, what advantages would this approach have over our current solution?

I forgot to mention that Optimize for Infrastructure can monitor TN activity. And since TN is an IS component, IS monitoring applies as well.

For Designer/PE, the logic is usually split between the model and the model steps (IS services). Generally, the logic in the model is coarse-grained and fairly minimal. Simple branches, maybe some looping.

The value in creating a model for the processing logic (consider the model to be the “top-level” service invoked by the TN rule) is that step transitions are logged to the PE DB. Depending on how granular you make each model step, you can better see the progress made within the model execution without needing to write a bunch of logging code yourself.

The benefits of using Designer/PE (my opinion only):

  • A visual representation of the steps (marginal benefit).
  • Automatic tracking of the model execution as each step is logged. Tools to view and restart.

A TN-only approach can achieve similar logging if you place logging calls in your services at the right places.
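As a rough illustration of what “logging calls at the right places” might look like, here is a generic Java sketch of hand-rolled step logging, mimicking the step transitions PE records automatically. The `StepLog` class and all names are hypothetical stand-ins, not webMethods APIs; in a real IS service you would write the entries to a database table or the server log instead of an in-memory list.

```java
import java.util.ArrayList;
import java.util.List;

// Hedged sketch: manual step logging for a TN-only pipeline, so you can
// trace progress the way PE logs model step transitions automatically.
// Illustrative only; not a webMethods API.
public class StepLog {
    private final List<String> entries = new ArrayList<>();

    // Record one step transition: which interface, which step, what status.
    public void logStep(String interfaceName, String step, String status) {
        entries.add(interfaceName + "/" + step + ":" + status);
    }

    public List<String> entries() {
        return entries;
    }

    // Example pipeline: translation then delivery, each logged at
    // start and completion so partial progress is visible on failure.
    public static List<String> processDocument(String interfaceName) {
        StepLog log = new StepLog();
        log.logStep(interfaceName, "translate", "STARTED");
        // ... translation work would happen here ...
        log.logStep(interfaceName, "translate", "COMPLETED");
        log.logStep(interfaceName, "deliver", "STARTED");
        // ... delivery work would happen here ...
        log.logStep(interfaceName, "deliver", "COMPLETED");
        return log.entries();
    }
}
```

The trade-off is exactly as described: PE gives you this tracking (plus viewing/restart tools) for free per model step, while a TN-only approach requires you to place and maintain these calls yourself.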

I would offer that switching from a TN-only approach to one using Designer/PE wouldn’t make much of a difference. TN is document-oriented; PE is process/model-oriented. But they end up boiling down to the same thing: set up your processing and track what happens.