Adapters and Intelligent Adapters

Hello all,

What is the difference between a static adapter and an intelligent adapter? How do you distinguish between these two? Are there any intelligent adapters for Oracle that can be downloaded for trial?


I think you may be referring to “standard” adapters and intelligent adapters. These are Enterprise Server (webMethods Broker) concepts and do not apply to Integration Server.

If wM Broker is what you’re truly interested in, then you’ll want to contact your wM sales rep to arrange for a trial. The basic difference between standard and intelligent adapters is that intelligent adapters are scriptable. Standard adapters have fixed functions and cannot be extended/scripted.

Thanks for the information. You are right that by “static adapter” I meant “standard adapter.” But the scope of my question is not limited to webMethods. Nowadays everyone in EAI terminology uses these terms, so I want to know what differentiates these two types of adapters. Some people also use “technology adapters” and “application adapters.” Sometimes all of these are confusing. Is there an informative article available that clarifies all these doubts?


As far as I know, the “standard” vs. “intelligent” terminology is webMethods specific. As for the second set of terms, from my experience in EAI, I think of technology adapters as those that enable you to communicate with a technical platform (e.g. IP sockets or e-mail), while an application adapter is one that understands the protocols of a particular application (e.g. SAP or PeopleSoft). Which leaves the question of where a DB adapter (other than ODBC/JDBC, which I believe are technology adapters) fits.

IMO, distinctive names for different types of adapters are pure marketing drivel. It really is immaterial whether an adapter is a “platform”, “technology”, “language”, “application”, etc. adapter.

The thing that is important: can I connect to ‘X’ using vendor Y’s toolset? Whether connecting to ‘X’ is supported directly in a broker/server or through an adapter is certainly important to know for implementation details, but otherwise, who cares?

For example, if I need to push a document/message/event to application Z using WebSphere MQ, I know that usually means using a piece of software that is referred to by some name the vendor has chosen (e.g. adapter, connector, bridge, gateway, etc.).

Does knowing that an adapter is a technology adapter help in any way? Not that I know of. An adapter is an adapter. Any adjectives in front of the word adapter are pure fluff IMO.

Just my 2 bits. Sorry for overstating my case.

Rob, I agree with you 100% on this concerning the marketing hype. I do, however, disagree on whether it matters to know which type you are dealing with.
With what I would consider a technology adapter, I do not expect to need to understand much more than the technology involved, and this is seldom a problem since technologies are fairly clearly defined (somewhere :-). However, with an application adapter I must take into account the level of understanding of the application that will be required, or the level of involvement of application specialists who are not middleware aware. Either way, if these issues are not considered at an early stage, the project can come off the rails.

But yes, all adapters basically do the same thing (act as enabling middleware for the middleware ?).

Hi all,
Thanks for the various information, but when I consider any adapter, it should not only connect to a database but should also do a lot more, like identifying events such as inserts, updates, and deletes (from an application). It should also be able to aggregate multiple events and data sources. For example, if I need order information about a customer, the details may exist in multiple systems, and the adapter will have to connect to these, get the data, and preserve it until all the details are obtained. Do all adapters really do this, or is it just marketing hype? Also, are the webMethods and CrossWorlds adapters capable of doing this?


I believe that the webMethods DB adapter can be configured to trigger on inserts, deletes, etc. Also, though I have never done this, I believe that a trigger (within webMethods, not in the DB) can be configured to fire on a combination of multiple events. From your query it seems that where multiple sources are involved, you are looking at the queries to these sources being triggered from within webMethods, possibly initiated by an insert into another DB. All of this is possible, according to the documentation that I have seen, though, as I said, I haven’t done all of this, so you will have to read the manuals. Unless somebody else can help.

To eai: the combining of data from multiple systems gets to be your job. I know of no adapters that perform as you describe (but I’ve certainly been wrong before).

“Packaged integrations” (a wM term, I believe) seem to match what you’ve described. A packaged integration is a collection of components that implement a particular process or processes. CrossWorlds is the thought leader in this space with their “Collaborations.” wM came out with packaged integrations some time back in the ActiveWorks days. Not sure where those are at now (may require a Professional Services engagement?). These have what you describe: event definitions, process models, etc.

For Mike: Good points. However, consider that “application adapter” doesn’t really tell you anything about the adapter. In particular, it does not indicate how much inherent understanding of the application the adapter has.

Many adapters, particularly early ones, did nothing more than provide connectivity to their respective apps. You could move things in and out of the environment, but the specifics of what gets moved and when are left as an exercise for the reader.

For example, the early PeopleSoft adapter did nothing more than hook into the screen-scraping interface that PS provided (I hope I’m not remembering wrong and confusing the PS adapter with something else). Driving the screens and matching up display panels with events was an implementation task for the developer. In this sense, this “application” adapter was no different than what you describe as a “technology” adapter. It truly had no clue about the processes and objects that PS offered behind the scenes.

Bottom line: one needs to dig into the detailed operation of an adapter to determine its capabilities. The adjective prefixed to the word adapter does not imply a standard level of capability.

The DB adapter can indeed be configured to create triggers in the DB. The trigger is not “in” webMethods, but created in the DB by wM on behalf of the user. Other threads have covered why letting the DB adapter handle the create and drop of triggers (and possibly the buffer tables) is a Bad Thing.

One can most assuredly pull data from multiple sources to create a single document–but you get to put that together using multiple adapters and scripts. That’s what gives us all great job security! :wink:

I read somewhere that adapters from Saga Software have the functionality that I described earlier. Has anyone tested these? Also, can the webMethods Oracle adapter identify events like inserts and updates triggered from an application on its own? I don’t want to use insert/update triggers on a table to catch the new data.

Saga Systems, with their Java-based SagaVista product, was acquired by Software AG last year. SagaVista has been rolled into EntireX.

Take a look at that product line to see if it does what you’re after.

You can use the wM Oracle DB adapter without triggers but I’m not sure why you’re hesitant to use them. Without triggers, you’ll need to construct/configure operations to poll the tables of interest (run a select statement) every N seconds/minutes. The trick here is that you have to create a select statement that can identify only new or updated rows. Sometimes that’s easy, sometimes that’s impossible–it depends on what columns exist that could be used to determine new and updated rows.
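To make the trigger-less polling approach concrete, here is a minimal sketch in Python with SQLite. The ORDERS table and its LAST_MODIFIED column are invented for illustration (a real adapter would generate the select from its operation configuration), and the sketch only works when such a column exists, which is exactly the catch described above:

```python
import sqlite3

# Hypothetical schema: an ORDERS table with a LAST_MODIFIED column
# that the application maintains on every insert/update.
conn = sqlite3.connect(":memory:")
conn.execute("create table orders (id integer primary key, amount real, last_modified integer)")
conn.executemany("insert into orders values (?, ?, ?)",
                 [(1, 10.0, 100), (2, 20.0, 105), (3, 30.0, 110)])

def poll(conn, watermark):
    """Return rows changed since the last poll plus the new watermark."""
    rows = conn.execute(
        "select id, amount, last_modified from orders where last_modified > ?",
        (watermark,)).fetchall()
    if rows:
        watermark = max(r[2] for r in rows)
    return rows, watermark

rows, watermark = poll(conn, 100)  # previous poll saw up to timestamp 100
# rows -> [(2, 20.0, 105), (3, 30.0, 110)]; watermark advances to 110
```

Note that this catches inserts and updates but, as discussed below, never deletes: a physically deleted row simply stops appearing in the result set.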

Deletes are another monster entirely. Unless you only mark a row as deleted/expired, and don’t really delete it, you’ll never be able to detect deletes.

The triggers simply add rows to a buffer table for the adapter (the structure of the table, and how much data from the record is copied, depend on which adapter you’re using and its configuration). The adapter polls this table and doesn’t have to do any figuring to know whether records are new, updated, or deleted.

It has been my experience that once the triggers are on the tables (and depending on the DBA group, that can take a while!), they don’t change.


Actually, the requirement is to capture changed data only, and there are no timestamp fields in the table that can be used to find only new or updated records. The inserts, updates, and deletes are done through an external application. Also, the table has a huge structure, like 70 or 130 fields, and updates can happen to any field, so many triggers would be required to handle this. Further, the buffer tables have to be cleared immediately after picking up data to avoid the same data being fetched again. Suppose the data is picked up (select *), the buffer table is cleared, and there is some network problem and the data is lost. Now I won’t get the data again since the table is already cleared. So if the inserts, updates, and deletes are identified by the adapter automatically as and when they happen (most important), then we can eliminate these triggers and other custom coding.

Polling the table is a better idea compared to triggers. Also, records are physically deleted in most cases, not marked for deletion. How exactly can we use adapters in such a scenario?

This is my thinking. Do correct me if I am wrong in assuming these things or if there is a more efficient way of doing this.


The ATC manual describes the process in detail (as do the Intelligent DB adapter docs) but here’s an overview of how the ATC and trigger mechanism work together. I’ll use an insert example.

  1. An application performs an insert to a table.

  2. A trigger is configured to fire after insert. Here’s an example trigger:

create or replace trigger MY_TRIGGER_AI 
after insert on MY_TABLE 
for each row 
begin 
    insert into MY_TABLE_BUF 
        (business_object, component, level_num, operation, keyId, aw_rowid) 
    values 
        ('OraObject', 'PaymentAdvice', 0, 'I', :new.KEY_ID, MY_TABLE_BUF_SEQ.nextval); 
end;
  3. The trigger writes a record to the buffer table. The keyId can be anything that uniquely identifies the row in the table. I’ve used single fields, concatenated fields, and ROWID (Oracle) successfully. The buffer table has the fields indicated in the trigger above.

  4. The notifier is an adapter running on some machine, often on the broker server. It is configured to poll the buffer table every N seconds. It does a select on the buffer table and copies all the records to a staging table. The staging table has the exact same structure as the buffer table.

  5. Using configuration data from event definitions, the notifier creates an event for each row in the staging table and publishes them. Upon successful publish, the staging table is cleared. These notification events only have key data to identify the row in the database. An ATC will do a request/reply (select statement) via a database adapter to retrieve the data from the real table. (Intelligent database adapters work a little bit differently.)
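The steps above can be sketched end to end as a toy model in Python with SQLite. All table, trigger, and column names here are invented, the buffer and staging tables are collapsed into one for brevity, and real Oracle trigger syntax differs slightly; the real adapter generates all of this from its event configuration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
create table my_table (key_id integer primary key, payload text);
create table my_table_buf (operation text, key_id integer);

-- Step 2: an after-insert trigger records only key data in the buffer.
create trigger my_trigger_ai after insert on my_table
begin
    insert into my_table_buf (operation, key_id) values ('I', new.key_id);
end;
""")

# Step 1: the application inserts a row; the trigger fires (step 3).
conn.execute("insert into my_table values (1, 'hello')")

# Steps 4 and 5: poll the buffer, build one notification per row, fetch
# the full record by key (the ATC's request/reply), and clear the buffer
# only once the events are safely handed off.
def poll_and_fetch(conn):
    events = conn.execute("select operation, key_id from my_table_buf").fetchall()
    docs = [(op, conn.execute("select * from my_table where key_id = ?",
                              (key,)).fetchone())
            for op, key in events]
    conn.execute("delete from my_table_buf")
    return docs

docs = poll_and_fetch(conn)
# docs -> [('I', (1, 'hello'))]
```

The key design point is that the buffer row carries only the key, not the data; the current record is always re-read from the real table at publish time.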

“Further, the buffer tables have to be cleared immediately after picking up data to avoid the same data being fetched again”

The buffer table will have an entry for each database activity. If a row is changed twice, then there will be two notifications fired. Depending on the changes made to the row, you may lose one of the changes (e.g. column B changed to ‘ABC’ and then to ‘XYZ’ right away before the adapter has picked up the change–both update notices will show column B as being ‘XYZ’). Often, duplicate updates such as this are not a problem. If your integration requires publishing each and every change, then you’ll need to devise a scheme to capture those–some sort of trigger and supplemental table.
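The lost-intermediate-update case can be demonstrated with a small Python/SQLite sketch (names invented): two updates land before the poll, so two buffer entries exist, but both key-based lookups see only the final value.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
create table t (key_id integer primary key, b text);
create table t_buf (operation text, key_id integer);
create trigger t_au after update on t
begin
    insert into t_buf (operation, key_id) values ('U', new.key_id);
end;
""")
conn.execute("insert into t values (1, 'orig')")

# Two updates to column B land before the adapter's next poll.
conn.execute("update t set b = 'ABC' where key_id = 1")
conn.execute("update t set b = 'XYZ' where key_id = 1")

# Two buffer entries were written, but both key-based lookups now see
# the final value; the intermediate 'ABC' is gone.
buf = conn.execute("select key_id from t_buf").fetchall()
seen = [conn.execute("select b from t where key_id = ?", k).fetchone()[0]
        for k in buf]
# seen -> ['XYZ', 'XYZ']
```

If every intermediate value must be published, the trigger itself would have to copy the changed columns into the supplemental table, as suggested above.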

“Suppose the data is picked up (select *), the buffer table is cleared, and there is some network problem and the data is lost.”

The staging table is not cleared until a successful publish to the Broker. The data is safe in the database. The notifier and the Broker use the publish sequence number to make sure no event is published twice.

“Polling the table is a better idea compared to triggers.”

I think you’ve already seen that for your case this is impossible unless you want to change your schema to add columns for insert date, update date, and a logically-deleted flag. Triggers are an alternative to changing the schema, and often the only way this can be done, since DB changes are oftentimes impractical/disallowed.

For deletes, a delete trigger can record the key fields of the deleted record (account number, part ID, sku, whatever) so that your event can inform other systems of the delete.
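A sketch of such a delete trigger, again in Python with SQLite (the parts table and sku key are invented; an Oracle trigger body would reference :old.SKU instead):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
create table parts (sku text primary key, descr text);
create table parts_buf (operation text, sku text);

-- The delete trigger can still see the old row's key fields, so the
-- delete is captured even though the row itself is gone.
create trigger parts_ad after delete on parts
begin
    insert into parts_buf (operation, sku) values ('D', old.sku);
end;
""")
conn.execute("insert into parts values ('A-100', 'widget')")
conn.execute("delete from parts where sku = 'A-100'")

deleted = conn.execute("select operation, sku from parts_buf").fetchall()
# deleted -> [('D', 'A-100')]
```

The event built from the buffer row carries only the key, which is all the downstream systems need to mirror the delete.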

“if the inserts, updates, and deletes are identified by the adapter automatically as and when they happen (most important), then we can eliminate these triggers”

Fundamentally, triggers are really the only way databases let you capture these activities reliably. There is no magic way for an adapter, which is really just a client running SQL statements, to detect inserts, updates, and deletes on its own.

Thanks, Rob. Great answer. Many thanks for explaining with an example.
As you say, we can’t alter our table structures to introduce new columns, so triggers are the only way for us. As per your answer (item 5), how do intelligent adapters handle events differently compared to the database adapter? Can we use intelligent adapters with IS Partner Edition, or are they only meant for ES? Do we have to create this staging table, or does the adapter create it automatically?

For item 5, I think you may have misunderstood. The comment “Intelligent database adapters work a little bit differently” pertained to the operation of the ATC. The operation of the intelligent DB adapters is described quite nicely in the docs.

“Can we use intelligent adapters with IS Partner Edition, or are they only meant for ES?”

At this point, the intelligent adapters are a wM Broker thing only. Look for this to change in the very near future.

“Do we have to create this staging table or adapter creates it automatically?”

The adapter can create its tables. DBAs often get a little weirded out granting create access, so you will need to get them involved to grant the proper rights, at least temporarily.

Dear All,

Thank you very much for the information you provided regarding adapters and intelligent adapters, and Rob in particular for explaining with an example how an adapter works. The discussion was very useful and quite informative.

Thanks again