Why use webMethods for batch interfaces?

My boss asked me to give the pros and cons of using webMethods for batch interfaces. For example, a company FTPs a file to our company and we need to upload it into a mainframe product’s database. We currently have a script that FTPs the file to a server; webMethods then detects the file’s arrival, converts the data into the format the mainframe product requires, and FTPs it to the mainframe. We could instead have FTP’d the file directly to the mainframe and written a COBOL program running there to convert the data into the correct format.
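For what it’s worth, here’s a rough sketch of what that intermediary step amounts to. It’s plain Python rather than FLOW, and the inbound directory, mainframe host, credentials, dataset name, and fixed-width layout are all invented for illustration; it only shows the shape of the job (pick up the inbound file, convert it to the mainframe’s format, and FTP the result on). In webMethods the “detect the file arrival” part would be a file-polling port rather than the loop at the bottom.

```python
import csv
import io
from ftplib import FTP
from pathlib import Path

# Hypothetical locations and credentials -- placeholders only.
INBOUND_DIR = Path("/data/inbound")          # where the partner's file lands
MAINFRAME_HOST = "mainframe.example.com"
MAINFRAME_USER = "batchuser"
MAINFRAME_PASS = "secret"

def to_fixed_width(row: dict) -> str:
    """Convert one CSV record into a fixed-width layout
    (column names and widths are made up for the example)."""
    return f"{row['account']:<10}{row['amount']:>12}{row['date']:<8}\n"

def process_file(path: Path) -> None:
    # 1. Translate the partner's CSV into the mainframe format.
    converted = io.StringIO()
    with path.open(newline="") as f:
        for row in csv.DictReader(f):
            converted.write(to_fixed_width(row))

    # 2. FTP the converted file on to the mainframe.
    with FTP(MAINFRAME_HOST) as ftp:
        ftp.login(MAINFRAME_USER, MAINFRAME_PASS)
        ftp.storlines("STOR BATCH.UPLOAD",
                      io.BytesIO(converted.getvalue().encode("ascii")))

if __name__ == "__main__":
    # Stand-in for the "detect file arrival" step: process whatever
    # is currently sitting in the inbound directory.
    for inbound in INBOUND_DIR.glob("*.csv"):
        process_file(inbound)
```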

Besides the fact that we wanted to use webMethods because it is more interesting than mainframe COBOL programming, the main reason was that we wanted a common way to implement all interfaces, whether real-time or batch, so we didn’t have to be experts in every language and end system.

That may be the only reason we need to give, but if anyone has any other good reasons, please post for all to see.

Thanks!
Richard

My two bits on the use of an intermediary for integration (none of these points are wM-specific). Whether the interaction is batch or not wouldn’t seem to matter, IMO.

Pros

A point where flexibility can easily be leveraged. If the exchanged data needs to go to more than one place, the intermediary can do so without impacting the end points. Basically, a place to accommodate change.

A change in one of the end points (communication, format, etc.) can be insulated from the other end point(s).

A common exit/entry point for all applications. They don’t talk to each other directly. They talk only to the intermediary. This provides a degree of decoupling for the apps, at least at the communication layer.

Can decouple the data format from each end point. The intermediary knows how to translate between them, and the end points have no idea. A COBOL program can do that too, but typically it’s lumped in with the rest of the “application,” at least logically. Thus, the “application” has dozens of interfaces that the app team has to support, instead of a few. The app teams should strive to keep the number of interfaces they support to a minimum; the ideal would be a single interface that communicates with the intermediary. The integration team handles the rest.

A location to record the exact contents of the data submitted by or transmitted to the partner. This can help with troubleshooting and may be needed for business/regulatory purposes (non-repudiation). I can’t remember if you guys ended up with TN, but it’s a good place to keep exactly what was sent, in either direction.

Auditing the interactions can be consistent and simplified.

Common error-handling, monitoring, and reporting facilities are more likely.

A well-known place where all integrations are “done.” Thus, analysis of change impact can be simplified.

Governance of solution design and implementation can be easier.

The intermediary insulates end points from outages in the other end points. In the strictest sense, using an FTP server to exchange data between two apps is using an intermediary, but people don’t usually think of an FTP server that way. (Plus, it doesn’t provide format decoupling.)

If the use of an intermediary is part of the integration architecture (don’t know if that’s the case in your circumstance), then it would seem that not using the intermediary would be the exception. Thus, justification to not use the intermediary would need to be compelling, rather than needing to justify the use of one. This falls in line with the notion of consistent implementation approach.

The intermediary as a focal point for integration development provides a better opportunity for code reuse. Have a service in IS that parses that intelligent account number? Use it in all 10, 100, or 1,000 integrations. If an intermediary isn’t used for translation, then reuse can be a bit more difficult.
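To make the reuse point concrete, here’s a minimal sketch of the kind of small parsing utility meant here. It’s Python rather than an IS service, and the “intelligent” account-number layout is invented for the example; the point is simply that once this logic lives in one shared service, every integration calls it instead of re-implementing it.

```python
from dataclasses import dataclass

@dataclass
class AccountNumber:
    region: str    # which region issued the account
    product: str   # product line code
    serial: str    # the sequential portion

def parse_account_number(raw: str) -> AccountNumber:
    """Parse a hypothetical 'intelligent' account number such as
    'NE-CHK-0012345', where the segments encode region, product,
    and serial. (The layout is invented purely for illustration.)"""
    region, product, serial = raw.strip().split("-", 2)
    return AccountNumber(region=region, product=product, serial=serial)

# Every integration that receives an account number calls the same
# parser, so a change to the layout is made in exactly one place.
print(parse_account_number("NE-CHK-0012345"))
```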

Cons

For some integrations, point-to-point is just fine. The need for flexibility may not be there. The amount of code that could be reused, minimal. If the data is very specific to one app, and the processing of it needs to be atomic (all data succeeds, or none succeeds), then point-to-point is probably a better approach.

If the transformation requires lookups or sophisticated translations then it may be simpler to implement a point-to-point solution, where the end-point may have “better” access to the needed data. The intermediary may not have ready access, and thus the solution can become more administratively complex, or run-time intensive, than the benefits it brings.

Using an intermediary generally means more hardware and more administration.

For IS specifically, knowledge of FLOW is far scarcer than knowledge of COBOL, PL/I, or Natural.

When using Broker, the communication mechanism is proprietary. One can now use the JMS interface but that isn’t as integrated with IS as the “normal” IS-Broker communication.

The intermediary is often a misunderstood, mysterious black box. It becomes the scapegoat for many, many issues, even when the error didn’t occur there. It’s often the victim and the reporter of issues in other systems, but gets the blame because the data “didn’t get there.” This can be a huge distraction for the integration team, the app teams, and production support.

Bottom line: IMO, the characteristic of being batch doesn’t by itself indicate whether or not to use an intermediary. All of the factors above should probably be considered.

Why are we using webMethods?

What is the difference between webMethods and other EAI tools?

It is all a single word (webMethods). Of course, it’s similar to other tools in the EAI/B2B middleware category. webMethods is now owned by Software AG (explore API Integration Platform | Software AG for more info).

What other tools are you referring to, and which are you trying to compare specifically?

HTH,
RMG
