Porting JDBC Information

Hello All

How do I port JDBC information from one box to another?

Thx

Kiran

If you need to port the adapter information, you can archive the package containing the connections and install it on the other box.

Thanks,
Brain.

And that's why, Kiran,
it's good practice to keep individual adapter connections in separate packages. This helps avoid creating inter-dependencies between adapters.

Also ensure that you always have package dependencies set up correctly, which helps determine the package load sequence.

Please read the "Package Management" chapter (pp. 44-45) of the JDBC User Guide for more information.

  • Saurabh.

I was asking about the JDBC connection profile we set up through the admin web page. How do I export this?

( Adapters > JDBC Adapter > Connections
& Adapters > JDBC Adapter > Polling Notifications )

(In another thread, RMG mentioned exporting jobs.cnf and port.cnf for file/FTP polling and scheduled job configurations.)

Thx

Both Connections and Polling Notifications require you to select a package on your Integration Server. As the other posters have said, export that package and you'll have the connection information.

Kiran,
JDBC adapter connection and polling notification information is not stored in .cnf files in v6.x.

They are part of the packages which you select while configuring them.

Hope this makes it clear to you.

  • Saurabh.

Kiran,

In that post I was talking about moving ports and scheduler jobs; don't compare the two.

All the created JDBC connections are stored in a package (you can see this in Developer).

So to move JDBC connections to another IS, you have to export the package that was specified when creating the JDBC adapter connections (on the IS admin page).

HTH,

webMethods Deployer lets you build deployable packages that contain an IS package as well as many server configuration settings such as users, ACLs, ports, and… JDBC Adapter connections.

You can specify overrides for the settings that are unique to each target environment, such as usernames and JDBC URLs.
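Deployer configures these overrides through its UI rather than through code, but the underlying idea is just connection settings keyed by target environment. A minimal Java sketch of that idea only (every environment name, user, and URL below is hypothetical, not Deployer's actual format):

```java
import java.util.Map;

// Conceptual sketch only: Deployer itself manages overrides through its UI.
// All environment names, users, and URLs here are made up.
public class EnvOverrides {
    // Per-environment connection settings, keyed by target environment.
    static final Map<String, Map<String, String>> OVERRIDES = Map.of(
            "DEV", Map.of("user", "pkms_dev",
                          "url", "jdbc:oracle:thin:@devhost:1521:DEV"),
            "PROD", Map.of("user", "pkms_prod",
                           "url", "jdbc:oracle:thin:@prodhost:1521:PROD"));

    public static void main(String[] args) {
        String env = args.length > 0 ? args[0] : "DEV";
        Map<String, String> cfg = OVERRIDES.get(env);
        System.out.println("Deploying connection with user=" + cfg.get("user")
                + ", url=" + cfg.get("url"));
    }
}
```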

Deployer might be overkill for very simple environments, but administrators in larger environments will find its learning curve well worth the effort, IMHO.

Information on Deployer can be found in the Bookshelf section of Advantage.

Mark

Hello All,
I am using the JDBC Adapter (with Oracle 9.1) in webMethods 6.1.
In our case, the schema in the database differs between Dev, Test, UAT, and Prod. For example, in Dev the schema is PKMSDEV, whereas in Prod it is PKMSPROD. Using Deployer I can override all the parameters Mark mentioned (username, password, DB URL, etc.) except the schema name. Is there any way to update the schema automatically as code is migrated across Test, UAT, and Prod? Thanks for your help.
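Just to illustrate the problem in plain JDBC terms: if the schema name is hard-coded into the SQL, it has to change with every migration, whereas reading it from a property leaves the code identical across Dev, Test, UAT, and Prod. A minimal sketch, assuming an Oracle thin driver on the classpath and invented table and credential names:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class SchemaPerEnvironment {
    public static void main(String[] args) throws Exception {
        // The schema comes from the environment, e.g. -Ddb.schema=PKMSDEV
        // in Dev and -Ddb.schema=PKMSPROD in Prod.
        String schema = System.getProperty("db.schema", "PKMSDEV");

        try (Connection con = DriverManager.getConnection(
                "jdbc:oracle:thin:@dbhost:1521:ORCL", "integ_user", "secret")) {
            // A schema name cannot be a bind variable, so it is spliced
            // into the statement text; only the property differs per box.
            String sql = "SELECT order_id FROM " + schema + ".orders";
            try (PreparedStatement ps = con.prepareStatement(sql);
                 ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    System.out.println(rs.getString("order_id"));
                }
            }
        }
    }
}
```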

Hello to all,

Well, I must first say that I don't know Deployer at all, so what I will say is perhaps only valid if you're not using Deployer.
As far as I know, there is no way to change the schema name when deploying, except by manually editing all the services.
So, as I see it, you basically have two alternatives:

  • You use a user that has its own schema, so that the temporary tables are not in your "production" schema, but then for each deployment you will have to manually override the schema (I mean using Developer).
  • You use a user that connects by default to the schema you want. In that case, you don't need to modify anything when delivering, but your wM temp tables will sit in the same schema as your "production" data. (A plain-JDBC sketch of this default-schema idea follows this list.)
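One standard Oracle mechanism behind the "connects by default to the schema you want" idea is ALTER SESSION SET CURRENT_SCHEMA: it changes how unqualified names resolve, without changing the session's identity or privileges. A minimal plain-JDBC sketch (credentials and table names are invented; whether this suits the adapter connection itself is a separate question):

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class CurrentSchemaSketch {
    public static void main(String[] args) throws Exception {
        try (Connection con = DriverManager.getConnection(
                "jdbc:oracle:thin:@dbhost:1521:ORCL", "integ_user", "secret");
             Statement st = con.createStatement()) {

            // Standard Oracle: from here on, unqualified names resolve
            // against PKMSPROD, while the session still runs as integ_user
            // with integ_user's privileges.
            st.execute("ALTER SESSION SET CURRENT_SCHEMA = PKMSPROD");

            // "orders" now means PKMSPROD.orders.
            try (ResultSet rs = st.executeQuery("SELECT COUNT(*) FROM orders")) {
                rs.next();
                System.out.println("rows: " + rs.getLong(1));
            }
        }
    }
}
```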

I personally really prefer the second one, but I give my customers the choice, explaining both of them.

Best regards,

Chtig

Have you tried using <current> in your JDBC Adapter service’s table specification? You could then use Deployer to change the username specified in your connection, thereby changing the schema to which you connect. HTH.

Thank you Chtig & Michael,
I tried the options; they work well if the username is tied to the particular schema. In our implementation, the schema and the username (integration user) cannot be linked, for security reasons.

Combining your suggestions I intend to follow these steps:

1) Create the (integration) user so it connects to its own (integration) schema by default.
2) Create a stored procedure in the integration schema for data access in the second (application) schema.
3) For each deployment, manually change the (application) schema name in the stored procedure in the database.

This way the schema name is separated from the webMethods environment and stored in the database. All the data-access stored procedures can be combined in a single package in the database, so a global search-and-replace of the schema name in that package will do the job.
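To make the idea concrete: with this setup the caller only ever references the integration schema's procedure, and the application schema name lives entirely inside the procedure body. A plain-JDBC sketch of such a call (the procedure name and parameters are invented):

```java
import java.sql.CallableStatement;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Types;

public class CallWrapperProc {
    public static void main(String[] args) throws Exception {
        try (Connection con = DriverManager.getConnection(
                "jdbc:oracle:thin:@dbhost:1521:ORCL", "integ_user", "secret")) {
            // The caller only knows integ.get_order_status; inside the
            // procedure body, the DBA points at PKMSDEV or PKMSPROD.
            try (CallableStatement cs =
                     con.prepareCall("{call integ.get_order_status(?, ?)}")) {
                cs.setString(1, "ORD-1001");               // IN: order id
                cs.registerOutParameter(2, Types.VARCHAR); // OUT: status
                cs.execute();
                System.out.println("status: " + cs.getString(2));
            }
        }
    }
}
```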

  • Kolappan

Hello Kolappan,

That sounds good, but I guess you are not using JDBC notifications, right? I never found a correct solution for those, other than using the current schema or modifying the code on every delivery.
Well, just out of curiosity, when you said "for security reasons", what did you mean exactly? I mean, what reasons did the project manager or the DBA give for saying it was not secure to connect to the data schema? (Well, I'm no DB specialist, but I never found a justification that satisfied me, so in case you have one, I would be interested.)

Best regards,

Chtig

Hi Chtig,
In our environment, there are two separate schemas, one for the application and a second one for integration. The reasons for having two separate schemas are:

  1. The integration user is NOT allowed to perform certain operations in the application schema, such as creating tables, deleting rows, or dropping tables (updating tables is the only modification allowed).

  2. The integration schema deals with interface tables and triggers and is independent of any application (something like a middle tier), whereas the application schema deals with the actual application.

  3. If there is a need to do data manipulation activities, we can certainly do it using the integration schema.

  4. For notifications (a plain-JDBC sketch of steps d and e follows this list):
    a) Create an interface (cache) table in the integration schema.
    b) Create a trigger on the base table in the application schema (it must be created by the DBA).
    c) Have the trigger populate the cache table (based on a flag, say data_sent = 'N').
    d) Create a "Basic Notification" in IS with the delete option to poll the cache table periodically and invoke the flow service.
    e) Finally, update the base table with data_sent = 'Y'.
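For clarity, here is roughly what steps d) and e) amount to in plain JDBC. This is a sketch of the mechanics, not the adapter's actual implementation, and all table and column names are the hypothetical ones from the steps above:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class CachePollSketch {
    public static void main(String[] args) throws Exception {
        try (Connection con = DriverManager.getConnection(
                "jdbc:oracle:thin:@dbhost:1521:ORCL", "integ_user", "secret")) {
            con.setAutoCommit(false);

            // d) read the rows the trigger placed in the cache table ...
            try (PreparedStatement sel = con.prepareStatement(
                     "SELECT order_id FROM integ.orders_cache");
                 ResultSet rs = sel.executeQuery()) {
                while (rs.next()) {
                    String id = rs.getString("order_id");
                    // ... each row would be handed to the flow service here.

                    // d) delete-style notification: remove the cache row.
                    try (PreparedStatement del = con.prepareStatement(
                             "DELETE FROM integ.orders_cache WHERE order_id = ?")) {
                        del.setString(1, id);
                        del.executeUpdate();
                    }
                    // e) mark the base-table row as sent (UPDATE is the one
                    // modification the integration user is allowed).
                    try (PreparedStatement upd = con.prepareStatement(
                             "UPDATE pkmsprod.orders SET data_sent = 'Y' "
                                     + "WHERE order_id = ?")) {
                        upd.setString(1, id);
                        upd.executeUpdate();
                    }
                }
            }
            con.commit();
        }
    }
}
```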

I hope I am clear; let me know if you need any clarification on these.

  • Kolappan