Change database schema at runtime

Hi Experts,

We have a service that fetches data (via a JDBC SQL Select adapter) from an Oracle DB. When we run the service in our dev environment it should fetch data from a schema called e.g. schemadev; in test from schematest; and in prod from schemaprod.

We have tried solving this with the pub.jdbcAdapter:updateServiceSchema service, where we first check the environment and then set the correct schema according to its value. This works as long as the adapter service is in the “Lock for edit” state, but when we release the lock we get these messages:

[ADA.1.608] Cannot update schema.
[ART.117.4050] Adapter Runtime (Adapter Service): Unable to update adapter service GUEAFMS_Organisation_PAOMS.MAP.adapters:sql_selectAgressoOrganizations. Service either needs to be checked out or lock for edit before updating

In our development environment this is fine, but how can we use this in a test/production environment? We do not allow developers into the test and production environments.

Has anyone run into the same problem? Is there another service to use for this? Can this be set automatically during deployment of the package, maybe?

Our environment:
Oracle 12c
webMethods 9.9

Kind regards

Niklas Ottosson
IT-Developer
University of Gothenburg

An adapter service is tied to a JDBC connection, which is where you specify the server and schema details. When you deploy the assets to a higher environment, you have to modify the details in that JDBC connection. If you want to change the connection at runtime, please see below:

Review “Dynamically Changing a Service’s Connection at Run Time” from “9-10_Adapter_for_JDBC_Install_and_Users_Guide.pdf”

Hope this helps.

You can run a service using a connection other than the default connection that was
associated with the service when the service was created.

To override the default connection, you must code your flow to pass a value through the
pipeline into a service’s $connectionName field.
For example, you have a flow whose primary purpose is to update a production
database. However, you want the flow to have the capability to update a test database,
with the decision of which database to update to be made programmatically at runtime.
The output signature of the flow’s first service contains a field called Target. The flow
could branch based on the value in Target. If Target contains the value Production,
the second service in the flow would ignore $connectionName, thus using its default
connection to connect to (and then update) the production database. However, if
Target contains the value Test, the second service in the flow would use the value in the
$connectionName from the pipeline and connect to (and then update) the test database.
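
The branch described above can be sketched in plain Java, with the pipeline modeled as a simple Map. This is only an illustration of the logic, not Integration Server code; the connection alias name is invented for the example:

```java
import java.util.HashMap;
import java.util.Map;

public class ConnectionOverride {
    // Hypothetical connection alias; the real name comes from your IS configuration.
    static final String TEST_CONNECTION = "MyPackage.connections:testDb";

    /**
     * Mimics the documented branch: when Target is "Test", put the override
     * alias into $connectionName; when it is "Production", leave
     * $connectionName absent so the adapter service falls back to its
     * default connection.
     */
    public static Map<String, String> preparePipeline(String target) {
        Map<String, String> pipeline = new HashMap<>();
        pipeline.put("Target", target);
        if ("Test".equals(target)) {
            pipeline.put("$connectionName", TEST_CONNECTION); // override kicks in
        }
        // for "Production", $connectionName stays unset -> default connection
        return pipeline;
    }
}
```

In a real flow you would implement the same decision with a BRANCH on Target and a MAP step that sets (or drops) $connectionName before the adapter service is invoked.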

When looking into the documentation I read the following

===================================================================
Keep in mind these restrictions when using dynamic connections:
“Both connections, the default and override, must use the same database schema”

Using the same schema on both connections does not help me, I’m afraid. I need to change the schema (and I really only need one connection, too).

Hi,

Do you need to change the database schema at runtime or only during installation?

In my case, I do not want servers in one environment to be able to talk to servers in another, so the artifacts that have environment-dependent properties are decoupled into their own packages, making them easy to configure on install.

If you are using WmDeployer, there are mechanisms to set these properties at installation time. Other deployment tools should also have this feature.

Best Regards,

@lisboa

Changing them during deployment would be ideal. As you say, a server in one environment should not have contact with servers in other environments.

According to the Deployer documentation, we use Runtime-Based Deployment. What mechanisms are you talking about here? I’m having trouble finding anything that could help, I’m afraid. Or does your solution require Repository-Based Deployment with ABE?

Kind regards

Hi,

Can you try defining the schema in the adapter service as “current schema”?
Then it refers to the schema bound to the user specified in the adapter connection.
In this case you will only need to change the connection on the target accordingly.
When using Deployer, this is defined in the map under “Variable Substitution”.

Regards,
Holger


Niklas,

Years and years ago, I ran into the same exact issue you described. Back then, the solution that I settled on was to use the Dynamic SQL template to create my adapter services with SQL statements that looked sort of like this:

SELECT * FROM ${schema}. …

The actual schema name was stored in a properties file, and I would simply pass it in as an argument when calling the adapter services.
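
As a rough sketch of that substitution (the template and property key here are made up for the example; the real Dynamic SQL template would live in the adapter service itself):

```java
import java.util.Properties;

public class DynamicSchemaSql {
    // Example Dynamic SQL template; ${schema} is replaced per environment.
    // The table name ORGANIZATIONS is an illustrative placeholder.
    static final String TEMPLATE = "SELECT * FROM ${schema}.ORGANIZATIONS";

    /**
     * Resolves the schema from a per-environment properties object and
     * substitutes it into the SQL template.
     */
    public static String resolveSql(Properties props) {
        String schema = props.getProperty("db.schema", "schemadev"); // dev fallback
        return TEMPLATE.replace("${schema}", schema);
    }
}
```

Each environment then ships its own properties file (db.schema=schemadev, schematest, or schemaprod) and the flow passes the resolved value into the Dynamic SQL adapter service.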

This is one way to accomplish your goal, but I must admit, it’s not great. I tend to avoid Dynamic SQL if possible.

So what other alternatives are there?

First, let me confirm this: even though the schema name changes from environment to environment, I’m assuming that each environment has its own separate database server in which each respective schema resides, correct? In other words, we’re not talking about a single database server that hosts all three schemas, are we?

Assuming separate database servers, one option is to do what Holger has suggested and simply not specify a schema in your adapter services; instead, rely on the current schema. Now, I imagine that your challenge is that the default schema for the account you’re connecting with is not the same schema that has the tables you want to query, is that true?

In that case, then my suggestion is to ask the DBA to create a logon trigger for that account that changes the default schema for that user. This is simply done via the SQL:

ALTER SESSION SET CURRENT_SCHEMA =
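
For illustration, the logon trigger DDL the DBA would run might look roughly like the string built below. The account name and target schema are placeholders, and the exact trigger the DBA creates may differ:

```java
public class LogonTriggerDdl {
    /**
     * Builds DDL for an AFTER LOGON trigger that switches the default
     * schema for a given database account. User and schema names are
     * illustrative; a DBA would run the resulting statement in Oracle.
     */
    public static String buildTrigger(String user, String schema) {
        return "CREATE OR REPLACE TRIGGER set_default_schema\n"
             + "AFTER LOGON ON " + user + ".SCHEMA\n"
             + "BEGIN\n"
             + "  EXECUTE IMMEDIATE 'ALTER SESSION SET CURRENT_SCHEMA = " + schema + "';\n"
             + "END;";
    }
}
```

With this in place, every session opened by that account (including the JDBC adapter's pooled connections) resolves unqualified table names against the target schema.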

If your DBA does not want to do this, then yet another alternative is to use synonyms. Synonyms are sort of like symbolic links. In other words, you could create synonyms in your default schema that point to the actual tables. Then, instead of pointing the adapter services at the actual tables, you can point them at the synonyms. Makes sense?
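
The synonym DDL is straightforward; sketched here as generated strings (schema and table names are placeholders for the example):

```java
import java.util.ArrayList;
import java.util.List;

public class SynonymDdl {
    /**
     * Builds CREATE SYNONYM statements so that names in the connecting
     * account's default schema point at the tables in the schema that
     * actually owns the data.
     */
    public static List<String> forTables(String owningSchema, List<String> tables) {
        List<String> ddl = new ArrayList<>();
        for (String table : tables) {
            // Synonym gets the same name as the table it points to.
            ddl.add("CREATE SYNONYM " + table + " FOR " + owningSchema + "." + table);
        }
        return ddl;
    }
}
```

The adapter services then reference the unqualified synonym names and never need to know which environment-specific schema owns the real tables.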

If none of these options are viable, let us know and we’ll take another stab at it.

Percio


I want to thank you all for the ideas on how to solve this problem!

We have now decided to take another approach to the problem and add a manual step to the deployment process of these specific packages, where we lock the adapters, run pub.jdbcAdapter:updateServiceSchema as a utility to set the schema, and then unlock the adapters before enabling them in test and production. This seems to set the schema correctly, and the setting is remembered over time (we could even do this before we build our packages in the dev environment; the schema change is then carried over into the test and prod environments via the built packages).

Again a huge thanks for the support!

@Holger - Interesting solution that we will look into more
@Percio - Yes, your assumptions are all correct and we will look into these solutions more over time. Synonyms are probably the solution down the line, and for all new integrations with this type of DB setup.

Kind regards
