We’re creating connections in our development environment, pushing changes to source control and then using Jenkins to deploy our packages to both a test and a live server.
The issue we’re running into is that when deploying the same connection from one system to another (this is usually only done once, but we still want to use Jenkins to do the deployment) we have to manually reconfigure the connection details.
Is there any way we could set up the connection details in environment variables and have the connections use those, so that our source holds the same information for all environments and we don’t have to do annoying variable substitution at the Jenkins layer?
If we can’t do it on the Integration Server itself then we’ll fall back to the deployer doing variable substitution, but this feels too clunky to be the preferred method.
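For what it’s worth, the fallback doesn’t have to be much more than a small script that the Jenkins job runs before the deployment step. Below is only a minimal sketch, assuming a hypothetical ${NAME} placeholder convention in an exported connection file and connection values exposed as environment variables on the build agent; the file names and placeholder names are made up for illustration, not anything the product prescribes.

```python
#!/usr/bin/env python3
"""Sketch of the 'variable substitution at the Jenkins layer' fallback."""
import os
import re
import sys

# Placeholders follow a ${NAME} convention purely for illustration.
PLACEHOLDER = re.compile(r"\$\{([A-Z0-9_]+)\}")

def substitute(text: str) -> str:
    """Replace ${NAME} tokens with values taken from the environment."""
    def lookup(match: re.Match) -> str:
        name = match.group(1)
        value = os.environ.get(name)
        if value is None:
            raise KeyError(f"No environment variable set for placeholder {name}")
        return value
    return PLACEHOLDER.sub(lookup, text)

if __name__ == "__main__":
    # e.g. python substitute_connection.py connection-template.xml connection.xml
    src, dst = sys.argv[1], sys.argv[2]
    with open(src, encoding="utf-8") as fh:
        rendered = substitute(fh.read())
    with open(dst, "w", encoding="utf-8") as fh:
        fh.write(rendered)
```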
Maybe it’s something Software AG could think about, as they’re pushing CI and CD more with each version.
IMO this is the only possible way for now, since Jenkins just executes some command-line scripts from Deployer; Jenkins itself does not add any functionality.
(Apologies that this response and the one below are to a thread that is over a month old – but perhaps they will be helpful in some way.)
IME, the use of variable substitution is error prone and we try to avoid it whenever possible.
Connections and custom configuration files are usually inherently environment-specific. That’s why (at present) we never auto-deploy connection definitions or configuration files. We create them manually via IS Administrator (we have custom .dsp pages for config files) and update them manually when needed.
This works out fairly well – after all, connection pools and configs don’t change often. We’ve found that the effort to “automate” deployment of such artifacts FAR exceeds the occasional manual edits. And the likelihood of errors (“oh, I forgot to update that script/file during check-in”) that comes with setting things up for automation is eliminated.
Another thought is defining the environment-specific components (connection pools, wM and custom config files, etc.) as separate entities in the VCS. Then have Jenkins set up to deploy the artifacts per environment. E.g. your JDBC connection pool for app XYZ has 3 items checked in to VCS, one each for dev, test and prod.
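If you go that route, the Jenkins job only has to pick the right artifact for the target environment. Here is a minimal sketch, assuming a hypothetical repository layout of connections/&lt;app&gt;/dev.xml, test.xml and prod.xml and a TARGET_ENV variable set by the Jenkins job – all of those names are assumptions for illustration only.

```python
#!/usr/bin/env python3
"""Sketch of selecting the per-environment artifact checked in to VCS."""
import os
import shutil
from pathlib import Path

# Hypothetical repo layout: connections/<app>/dev.xml, test.xml, prod.xml
REPO_ROOT = Path(os.environ.get("WORKSPACE", "."))
TARGET_ENV = os.environ["TARGET_ENV"]  # e.g. "dev", "test" or "prod", set by the Jenkins job

def stage_connection(app: str, staging_dir: Path) -> Path:
    """Copy the environment-specific connection definition into the staging area."""
    source = REPO_ROOT / "connections" / app / f"{TARGET_ENV}.xml"
    if not source.exists():
        raise FileNotFoundError(f"No {TARGET_ENV} definition checked in for {app}: {source}")
    staging_dir.mkdir(parents=True, exist_ok=True)
    target = staging_dir / f"{app}.xml"
    shutil.copy(source, target)
    return target

if __name__ == "__main__":
    staged = stage_connection("XYZ", REPO_ROOT / "staging")
    print(f"Staged {staged} for {TARGET_ENV}")
```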
The above is exactly what we’ve opted to do. We’ve only got a dozen or so different connections to worry about, so what we’ll probably do is push the connections out via CI and then manually configure them for each environment before enabling them.
Our general experience is that every manual activity is error prone, and configuring connections is no exception. Hence we deploy everything, including connections etc. (of which we have many), using Deployer and Var Subst. Needless to say, the descriptions of the projects to create are also generated automatically (see the first sentence) from files checked in separately to VCS. So far, it works quite well for us.
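In case it helps to picture it: one way to keep the checked-in files small is a shared defaults file plus a tiny per-environment value file, merged at build time into the substitution values the deployment uses. This is only a sketch under those assumptions – the key=value format and the file names are illustrative, not something Deployer itself mandates.

```python
#!/usr/bin/env python3
"""Sketch of generating per-environment substitution values from VCS files."""
import sys
from pathlib import Path

def load_properties(path: Path) -> dict:
    """Parse simple key=value lines, ignoring blanks and # comments."""
    values = {}
    for line in path.read_text(encoding="utf-8").splitlines():
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue
        key, _, value = line.partition("=")
        values[key.strip()] = value.strip()
    return values

if __name__ == "__main__":
    # e.g. python build_varsub.py defaults.properties test.properties test-resolved.properties
    defaults, env_file, out_file = (Path(p) for p in sys.argv[1:4])
    merged = load_properties(defaults)
    merged.update(load_properties(env_file))  # environment-specific values win
    out_file.write_text(
        "\n".join(f"{k}={v}" for k, v in sorted(merged.items())) + "\n",
        encoding="utf-8",
    )
```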
If you deploy a connection via CI it seems that the password details are not maintained, as they’re not held in source control (in the model that we’re using), so the deployed connection won’t enable without errors until it’s correctly configured.
The issue we ran into was that VarSub isn’t a simple process to get your head around, and once we understood what we were trying to achieve we ran into security concerns about where the usernames and passwords for the database users would live.
Writing a CI solution that ticked all our security requirement boxes just wasn’t possible without a lot of changes. Maybe when the product better supports connections moving between environments we’ll review the process.
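For anyone with the same concern: one way to keep credentials out of the repository is to check in only the non-secret values and have the CI job inject the secrets from its credential store at deploy time. A minimal sketch, with hypothetical DB_* variable names standing in for whatever your Jenkins job actually provides:

```python
#!/usr/bin/env python3
"""Sketch of keeping secrets out of VCS and injecting them at deploy time."""
import os

# Non-secret settings can live in the per-environment files checked in to VCS.
NON_SECRET_KEYS = ("DB_SERVER", "DB_PORT", "DB_NAME")

# Secrets are only ever read from the environment of the deploy job,
# e.g. injected from the Jenkins credential store, never committed to VCS.
SECRET_KEYS = ("DB_USER", "DB_PASSWORD")

def collect_substitutions() -> dict:
    """Gather substitution values, failing fast if a secret is missing."""
    values = {key: os.environ.get(key, "") for key in NON_SECRET_KEYS}
    for key in SECRET_KEYS:
        if key not in os.environ:
            raise RuntimeError(f"{key} must be provided by the CI credential store")
        values[key] = os.environ[key]
    return values

if __name__ == "__main__":
    subs = collect_substitutions()
    # Never print the secret values themselves, only which keys were resolved.
    print("Resolved substitution keys:", ", ".join(sorted(subs)))
```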
Agreed on the “every manual activity is error prone.”
Have you had cases where var substitution has been wrong? We had that a couple of times in the past (with a different toolset - not wM/Deployer).
Or cases where the checkin was wrong?
For me, creating/editing connection pools and config files directly in IS Administrator is preferable to someone editing something else hours, days or weeks before it is to be deployed. When deployment time comes it’s “oh, the connection config is wrong,” and then we get to chase things down to find out where the mistake is. Bad check-in? Script error? Bad substitution var? No substitution var?
For me, this is a case where automation adds time to the overall effort, rather than reducing it. Definitely a YMMV item though.
Do you have specific thoughts on how “better supports connections moving between environments” might work? Without some sort of templating or var substitution?
Which method have you used to make Var Subst work with CI (I use Jenkins)?
To avoid security problems I would have the connections/notifications/listeners packages created from the DEV environment without usernames and passwords; this way, even if they were activated by mistake they wouldn’t work and wouldn’t create an “account locked” problem.
We already have all the connections/notifications/listeners in their own packages, separated from the code packages, so as to remove them from the automated deployment cycles.
Additionally, the SAP Adapter notifications are still not available for deployment using Deployer with Var Sub; only the connections are.
Therefore I follow a similar approach to Gerardo’s, creating the Connections Package manually in each environment (in most cases this is a one-time task; sometimes we have to modify the connections to point to another instance on the partner’s side) and only deploying the dependent packages.