Our environment is very strict about the database permissions issued to application servers. The JDBC insert notification creates/drops a trigger and a table in the database every time the notification is enabled/disabled. While this is not a big issue in the dev environment, we decided not to grant such permissions in test and prod, and to create the db objects using db scripts instead.
Will this create problems for notifications? Would it affect the ability to enable/disable notifications?
We have a similar situation in our environment and have actually found some benefit in doing what you propose - writing the triggers and buffer tables outside of wM. It works fine, but you need to use basic notifications. The most significant benefit we have seen is when you disable the notification: the triggers continue to "queue up" work in the buffer table because they do not get dropped. You don't get this when wM manages the buffer table and trigger. It is, however, slightly more work to manage the db objects yourself.
Tim,
your comments indicate that the current webMethods JDBC Adapter notification technique is not the preferred approach, and that instead it is a good idea to use basic notifications that poll custom buffer tables created outside of wM - similar to what we did in earlier versions of Enterprise Server.
Have I understood correctly? Please correct me if I am wrong.
TIA,
Tim, thanks for the feedback - you confirm our thoughts on that.
We will go with basic notifications if the DBAs won't let us have the permissions in production.
RMG, the technique seems less than perfect, to say the least. The requirement to have permissions to create triggers AND tables seems excessive. Moreover, enabling/disabling notifications (not basic ones) creates/drops database objects - too much coupling.
I realize that the goal was maximum transparency and ease of use for webMethods developers, but an enterprise solution could be more flexible and a bit more demanding on developers for the sake of enterprise standards.
We will probably end up creating a dedicated database for the JDBC notification tables so that the required permissions can be well isolated. Of course, such an approach is not very flexible.
Thanks, Greg, for the detailed explanation.
RMG, I would agree with Greg. It seems to be less than perfect, at least for us. I really like the fact that wM is trying to “shield” the developer from having to do some of the db work, but it is really not practical in our environment.
I would be interested in knowing how others handle the case where a notification needs to be shut off and the buffer table and triggers are dropped from the database. How can you know what data was changed during this time frame?
I insist on the developers in my area making the triggers, buffers and sequences static once we get to our QA/Test environments.
And when we go against our OLTP/ERP database, or any database that my team doesn't "own", we have the DBAs create the objects for us (after we've modeled them in a sandbox/dev environment).
What that means is that we use ALL Basic Notifications. I do however encourage my developers to use the Packaged Insert/Update/Delete Notifications to model their objects in order to save some development time, but nothing goes to QA or Prod using them.
Basically, if you use Automatic Notifications (that automatically create triggers and buffer tables) you cannot capture events while the Adapter Notification is disabled. However, the buffer tables/triggers created by automatic notifications are only dropped when the Adapter Notification is disabled - not when the Adapter Connection is disabled or when the database connection is down. You have to ask yourself “how often am I likely to disable an Adapter Notification against a Production database?”
The best method in my opinion (and the one most favoured by our customers) is to use Basic Notifications (as described in previous messages). You then simply create your own trigger that populates your own buffer table, which is outside the reach of the wM adapter. This removes the possibility of "missing" database events because your adapter notification is disabled.
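For illustration, here is a minimal sketch of that pattern, assuming a hypothetical ORDERS source table - all object names below are made up, nothing here is generated by the adapter:

-- Hypothetical buffer table and sequence, owned by you rather than by wM.
CREATE TABLE orders_buffer (
    buffer_id   NUMBER  PRIMARY KEY,
    order_id    NUMBER  NOT NULL,
    created_ts  DATE    DEFAULT SYSDATE
);

CREATE SEQUENCE orders_buffer_seq;

-- Trigger on the application table that queues each new row
-- for the basic notification to pick up.
CREATE OR REPLACE TRIGGER orders_buffer_trg
AFTER INSERT ON orders
FOR EACH ROW
BEGIN
    INSERT INTO orders_buffer (buffer_id, order_id)
    VALUES (orders_buffer_seq.NEXTVAL, :NEW.order_id);
END;
/

The basic notification then simply polls orders_buffer and deletes rows as it publishes them. Because wM never owns these objects, disabling the notification leaves the trigger in place and events keep queuing.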
Architectural purists would argue that this creates a tight coupling between the integration and the application, but you have to be pragmatic about these issues and address them on a case-by-case basis.
Personally, I think this kind of flexibility is what sets webMethods apart from its Tier 1 competitors. But then, that's just my opinion.
Andy Withers
Solutions Architect
Glue Ltd:
www.glueltd.com
Here is another blow to the wM implementation of insert notification.
During an initial load we get hundreds of thousands of records inserted into the table. The notification was publishing documents to the Broker at a rate of 1/sec (or slower). That was crazy. It turned out to be the performance of deleting a single record from the buffer table (that's how it seems to work: after each publish it deletes the corresponding record). The delete was slow because it required a table scan over hundreds of thousands of records.
As soon as we introduced an index on the wm_rowid column in the buffer table we got the expected performance back. Now, the index will be gone any time we disable the notification - not good. So, back to the basic notification implementation, but following the pattern of the insert notification.
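For the record, the fix itself was a single statement (the buffer table name below is made up; wm_rowid is the column the adapter uses to delete each row after it publishes the document):

-- Index so the per-row delete after each publish is an index lookup
-- instead of a full scan of the buffer table.
CREATE INDEX wm_orders_buffer_ix ON wm_orders_buffer (wm_rowid);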
Thank you, Tim!
Glad to have helped.
You will be interested to know that we also add an index on the "rowid" column, both for the reason you listed above and for performance when there are no rows in the buffer table. On all of our basic notifications, we add a where clause that says where rowid > 0. We found that buffer tables with lots of inserts/deletes create a very high water mark in Oracle. Without the where clause, the polling does a full table scan. Even though there may not be any data in the table, Oracle can still have physical storage allocated to the table and do thousands of reads to get zero rows returned. The index prevents us from having to frequently rebuild the buffer tables. Hence another reason we only use basic notifications. In fact, we have found that in 6.x you can't even use basic notifications, because you don't get to specify a where clause! We are writing our own select and delete adapter services, along with a flow, to implement this. Kinda too bad, but we haven't found another way to do it yet.
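As a sketch of what our own select and delete services do (reusing the hypothetical orders_buffer table from earlier in the thread, with buffer_id playing the role of the indexed "rowid" column):

-- Polling select: the always-true buffer_id > 0 predicate steers Oracle
-- to the index, so an empty-but-bloated buffer table costs an index probe
-- rather than a full scan of the blocks below the high water mark.
SELECT buffer_id, order_id
  FROM orders_buffer
 WHERE buffer_id > 0;

-- After the document is published, delete the processed row by its key,
-- again through the index (:processed_buffer_id is the service input).
DELETE FROM orders_buffer
 WHERE buffer_id = :processed_buffer_id;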
Tim,
You may try creating a view on the buffer table with the necessary where condition. The view should still be updatable, and you can certainly use views from the standard JDBC Adapter. Have you tried this?
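Something like this (again using the hypothetical orders_buffer table from above); a simple single-table view stays updatable:

-- View with the index-friendly predicate baked in; the basic notification
-- can select from and delete through the view instead of the base table.
CREATE OR REPLACE VIEW orders_buffer_v AS
    SELECT buffer_id, order_id, created_ts
      FROM orders_buffer
     WHERE buffer_id > 0;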
Hey guys,
I have an even more annoying thing.
I tried to disable and enable my polling notification, but for some reason the buffer table was not dropped. So at the moment I am not able to recreate the polling notification. Deleting the buffer table manually is not possible because some process is locking the table…
Can anyone tell me why it didn't delete the buffer table and sequence?
Secondly, I had the following problem: while the Integration Server was down, I inserted data into the table on which the trigger is created, so the buffer table was filled. Then I started the Integration Server and nothing happened…
Can anybody give me a possible reason/solution?
Kindest regards
Wout
Hello Wout,
These are my thoughts on your problems (but don't shoot me if I'm way off).
Maybe the reason why you can't delete the buffer table manually is the same reason why it isn't dropped? Some process is locking it (but I wouldn't know how to check things like that).
About your second problem: are you sure the trigger was 'active' on the AS400 file while the Integration Server was down? Did you check your buffer table to see if the values were in it? I tried the same thing over here, and the values were in the buffer table and were caught when IS went back online…
We run on SQL Server (you are obviously on Oracle) so I should be careful not to translate things literally.
One case where no notifications will fire is when I have a process inserting a bunch of records into the notification table. Only after it is done do the notifications start firing - obviously it's related to locking in the database. I suspect (never tested) that if I change the property of the JDBC connection I use for the notification from LOCAL_TRANSACTION to NO_TRANSACTION, then this should disappear.
If you were on SQL Server I could give the following explanation: there is a process inserting records into the table. The process gets blocked indefinitely (or for a very long period of time), or just never stops updating. It holds its locks and prevents the notifications from firing. Now, you just need to re-think all of this in Oracle…
HTH
Gregory
Greg K.,
According to the 6.0.1 docs, you can only use LOCAL_TRANSACTION, not NO_TRANSACTION or XA_TRANSACTION, for adapter notifications. It would not let me enable the notification when I had it set to NO_TRANSACTION, and the documentation explicitly says it won't work with XA. So the commit has to go all the way through for the whole set.
Yemi,
We have successfully used the JDBC Adapter notification with the NO_TRANSACTION type in IS 6.01.
It was fast. Sorry, it is LOCAL_TRANSACTION only - too many things going on…