I am currently working on improving the CI/CD pipeline for webMethods. We have been using Broker and we are planning to upgrade the environment soon. During the upgrade I want to prepare a UM server so that we can start migrating our triggers. We don't have a local service development (LSD) license, but we have an unlimited contract, so we can still use local IS, and we can use Git locally, as we have been doing for a long time. Buying an LSD license is probably not an option. So my plan is to configure Git for packages and jars, but I have never configured Git for CI/CD before.
My questions are:
As far as I know, Git is essentially an external directory that keeps history, like SVN (the most basic explanation). So I think I can sync packages on both the target machine and the local development machine without an issue. If the changes aren't reflected, we can just reload the package after the sync. I can probably figure that out. But since Git mostly acts as a directory system, it won't have any idea about external assets, like publishable documents. What are my options for these IS external assets? Do I need to manually trigger the services that do the sync? Is there a simpler or safer way to do this?
The packages are already in a Git repo. In order to sync these packages with the local IS, our current approach is to sync them into a separate folder and import them using the Eclipse import. I thought we could simplify this by syncing them into the replicate/import directory and then pushing them from the replicate/archive folder. Is this a good idea? Is there anything I need to be careful about?
Did anyone ever implement a similar CI/CD without an LSD license? If so, can you explain to me how you did that?
We don't use BPM right now, so we don't need to consider it in the requirements, but we may use it in the future, so it is nice to have but not a must.
I am not speaking from experience, as we still use Deployer runtime deployment, but perhaps you could use pub.publish:syncToProvider or pub.utils.messaging:syncDocTypesToUM as a package startup service?
That's what I thought of using as well. Do you know if there is a service to discover all the publishable documents in a package? Otherwise I need to discover these services manually and keep the list up to date by hand, too. I am looking for a solution with as few manual tasks as possible.
I tested this and it works. All I need to provide is the package names as a list. Currently my only challenge will come after migrating to a Kubernetes cluster. I don't want this task to run on each and every server; it only needs to run once, from any one of them, after deployment.
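For illustration, invoking the sync from a script after deployment could look roughly like this. It is only a sketch: the host, credentials, and the packageNames input name are assumptions, and the service signature should be verified against your IS version.

```shell
#!/bin/sh
# Sketch only: build the invoke URL for the doc type sync service.
# IS_HOST, IS_USER/IS_PASS, and the packageNames input name are assumptions.
IS_HOST="${IS_HOST:-localhost:5555}"
PACKAGES="MyOrdersPkg,MyInvoicesPkg"   # packages containing publishable doc types

SYNC_URL="http://$IS_HOST/invoke/pub.utils.messaging/syncDocTypesToUM?packageNames=$PACKAGES"
echo "Doc type sync endpoint: $SYNC_URL"

# One-shot invocation after deployment (uncomment to run for real):
# curl -s -u "$IS_USER:$IS_PASS" "$SYNC_URL"
```

Running something like this once from the pipeline (or a Kubernetes Job) after deployment, rather than as a package startup service, would also avoid the sync firing on every pod.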
I will keep the topic open for a while to discover how others solved this.
Are you planning on deploying the packages to the target by simply checking them out to the target server and then activating or reloading the package? If so, how do you plan on handling environment-specific items like connections?
I have worked with CI/CD quite a bit, and in most scenarios, it involved using Deployer to deliver the packages to the target servers. The reason being that Deployer can handle those environment-specific items via its varsub feature. If you use Deployer, it will take care of syncing the doctypes for you.
Regarding #2:
I'm not sure I follow your comment regarding packages/import and packages/archive. I typically clone the Git repo somewhere on my machine and I create a symlink from each package directly into the packages directory. I suppose you could directly clone into your packages directory, but then you have to be sure to ignore Wm* packages, for example. In a nutshell though, the package you actively work on is the same package that is in your Git repo. There's no need to copy or archive anything.
Regarding #3:
Yes, but we ended up developing our own local development plugin to make the whole thing more seamless. Our plugin wraps an Eclipse Java project around each package and sets up the build paths, making Java service development much easier.
There are basically a few things that need to be done if not using an existing plugin though:
You have to move the package into the packages directory, for example, via a symlink
You have to compile and frag the package in case it has Java services, because *.class and java.frag shouldn't be added to version control. You can use the jcode utility to accomplish this.
You have to reload the package (if it already exists) or activate it if it doesn't.
Our plugin does these things, and the Software AG plugin does some similar things if I'm not mistaken.
As far as I know, the Software AG local development plugin doesn't do anything with publishable doctypes, and to be honest, I've never found out-of-sync doctypes to be an issue when doing local development. Worse comes to worst, you will attempt something that will tell you your document is out of sync, and then you can simply sync it.
Having said this, if you must create an automated doctype sync, then the service suggested by Dave seems promising because the input documentTypeNames is optional, implying perhaps that you can sync all doctypes. When it comes to the Broker, there's a service in WmRoot called wm.broker.sync:listOutOfSyncs that should come in handy.
This is one of the options I am considering. Adapter connections are stored in different packages. These packages don't contain anything other than the connection itself. So I plan to create a startup service that updates these values with the values from the vault. I will store these services in the connection packages directly. I might use the @jahntech.cj connection updater as well. I haven't decided what to use yet; I need to test it first.
This is an excellent idea. I will probably do the same. There are two sides to this CI/CD configuration. One side is LSD, and the other is deployment and synchronization. I am trying to improve both parts. If my understanding is correct, as long as the Git ignores are properly configured, committing code can be achieved with just the git commit and git push commands. We don't even need to use Eclipse for Git. So I think that part will be pretty easy. In the CI/CD pipeline I plan to implement the same steps you noted.
Do I need to manually compile Java services? Would reloading a package compile these Java services as well, like it does for flow services? I hope it happens automatically, but I think this is not possible, right? If my memory is correct, when a Java service malfunctioned, I always needed to make a change, like adding a whitespace, to make it compile again. I believe this part is already scripted; I can probably take that part of the script and move that functionality to the CI/CD pipeline.
I will probably have to use Deployer as well in order to trigger a dependency check. If I can do it another way, I don't think I will need to use Deployer at all. But even if I do need Deployer, my plan is to use it on the server directly, so everything will be processed on the local servers (I hope). We have too many deployment servers right now and I plan to decommission all of them. I might create a single active Deployer pod to do the deployment on shared persistent storage, but we decided to divide the upgrade project into phases, and right now the containers are part of phase 2.
After I posted my reply, I saw your post about Kubernetes. Do you happen to be using MSR containers? With MSRs, there's a new and improved way of handling environment-specific configurations via an application.properties file. I haven't had a whole lot of hands-on experience with it, but it seems much simpler and it works relatively well.
Yep, I completely agree. In fact, when introducing new webMethods customers to CI/CD, I also break it up into those two phases: pre-commit and post-commit. I sometimes find myself using EGit (the more commonly used Eclipse plugin for Git) when doing webMethods local dev, but it's not required and I'm not a huge fan. I tend to use TortoiseGit myself for most of my Git interactions.
The most convenient method for compiling and/or frag'ing many Java services at once is the jcode utility. Reloading the package won't do it. There's a section in the Services Development Guide called "Using the jcode utility" that you will find useful, but in a nutshell, these are the commands:
# Compile all Java services in a package
jcode.[bat|sh] makeall <package>
# Create fragment files for all Java services in a package
jcode.[bat|sh] fragall <package>
You could quite easily create a simple bat or shell script that takes care of checking out a package, creating the symlink, running jcode against it, and then activating/reloading the package as needed. In the very first project in which I was exposed to webMethods local dev and CI/CD in 2008, believe it or not, this was precisely how we did it at first.
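A rough sketch of such a script might look like this. All paths are assumptions, and the packageReload service name is taken from what the IS Administrator pages invoke, so verify it against your version before relying on it:

```shell
#!/bin/sh
# Sketch of a local-dev helper: symlink a package from a Git clone into
# the IS packages directory, compile/frag its Java services, then reload it.
# REPO_DIR, IS_DIR, credentials, and the reload service name are assumptions.
set -e

sync_package() {
  pkg="$1"
  repo_dir="${REPO_DIR:-$HOME/git/is-packages}"
  is_dir="${IS_DIR:-/opt/softwareag/IntegrationServer}"

  # 1. Link the package from the clone into the packages directory
  [ -e "$is_dir/packages/$pkg" ] || ln -s "$repo_dir/$pkg" "$is_dir/packages/$pkg"

  # 2. Compile and frag Java services (*.class and java.frag stay out of Git)
  "$is_dir/bin/jcode.sh" makeall "$pkg"
  "$is_dir/bin/jcode.sh" fragall "$pkg"

  # 3. Reload the package over HTTP (activate instead if it is new)
  curl -s -u "$IS_USER:$IS_PASS" \
    "http://localhost:5555/invoke/wm.server.packages/packageReload?package=$pkg"
}

# Example: sync_package MyPackage
```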
If you're creating MSR Docker images to be deployed into Kubernetes, then you will likely not need Deployer at all. During your image build step, you will simply add your full packages to your Docker image directly from your version control system. When doing full package deployments like this, dependency checking is not as critical. You simply need to ensure that all dependent packages go together, which is typically easily accomplished by keeping related packages in the same repository, but you could certainly organize them in separate repos if it makes sense (e.g. one repo with common utility packages, other repos supporting specific business functions). After your Docker image is ready, you will deploy that same image across environments and you will leverage application.properties to ensure that environment-specific values are properly set in each environment.
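As a rough illustration of that image-build step (the base image tag, registry name, and paths below are made-up placeholders, not something from a real project):

```shell
#!/bin/sh
# Sketch: generate a Dockerfile that bakes full packages from the Git
# checkout into an MSR image. Image names and paths are placeholders.
BUILD_DIR="$(mktemp -d)"

cat > "$BUILD_DIR/Dockerfile" <<'EOF'
FROM sagcr.azurecr.io/webmethods-microservicesruntime:10.15
# Full packages straight from version control, no Deployer involved
COPY packages/ /opt/softwareag/IntegrationServer/packages/
# Environment-specific values are resolved from application.properties at runtime
COPY application.properties /opt/softwareag/IntegrationServer/application.properties
EOF

echo "Dockerfile written to $BUILD_DIR"
# docker build -t my-registry/my-msr:1.0.0 "$BUILD_DIR"
```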
If you do end up needing Deployer though, I recently worked on a CI/CD solution where I dockerized the Asset Build Environment and Deployer, which allowed me to execute deployments in a serverless Gitlab pipeline. In other words, ABE and Deployer existed nowhere else other than in those Docker images and those containers were brought up and torn down each time. No need to maintain a running Deployer server. I have more info there too if you need it.
Great discussion and a lot of interesting thoughts!
I would like to throw in an additional, and in my view critical, aspect here:
It is not about the details that have been discussed so far, but on a conceptual level; and the latter obviously has implications for the implementation details.
A VCS (like Git or Subversion) is not meant to be used as a direct(!) source for deployments. There are a lot of reasons for that, and many fall into the category of compliance or even legal requirements.
What needs to be done instead is to create a release from the VCS, and that release artifact is what gets deployed. Ask yourself this question: Would I download the source code for this new JVM version onto my PROD server, compile it there, and then install it? Of course not. What you do instead is install the latest release from the vendor.
The JVM comparison above is not 100% accurate relative to custom packages for IS. But the core aspect remains. So here is what the process should look like for me (which is by the way how commercial software is usually released today):
You need to create a release on the CI server, once the unit tests on the CI environment have passed. In the case of IS this means exporting the package into a ZIP file. Since the unit tests have passed, this package is known to be generally operable.
The package ZIP file needs to be stored in a binary repository. Usually you take something like Nexus, Artifactory, etc. for this. Both have open source versions that are not that difficult to set up. If you want to start really(!) small, you can even just use a designated directory on the file system. Although I would not recommend it, since you will need to re-develop functionality that is already there (would you develop an RDBMS on your own?).
You can also store the build in binary form in Git. The critical aspect here is that in this case Git is being used as a binary repo and not a VCS. As long as you are aware of this fundamental shift in the nature of its use, that is ok. But please try to make sure NOT to simply speak about Git then, because it will confuse people a lot. Talk about something like a "binary repo based on Git" instead.
When storing the ZIP file into the binary repo, you choose the path such that it contains the version number and also indicates whether this is just a snapshot triggered by the latest commit. Or you mark it as an official release that is supposed to be deployed into PROD. There are also approaches to promote a release from snapshot to official release, once it has gone through the required test stages.
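As a concrete sketch of that naming scheme (the repository URL, repo name, and layout are invented for illustration):

```shell
#!/bin/sh
# Sketch: derive a versioned binary-repo path for a package ZIP.
# The Nexus URL, repository name, and path layout are assumptions.
PKG="MyPackage"
VERSION="1.4.2"
KIND="snapshots"    # switch to "releases" once the build is promoted

ARTIFACT="$PKG-$VERSION.zip"
TARGET="https://nexus.example.com/repository/is-packages/$KIND/$PKG/$VERSION/$ARTIFACT"
echo "Upload target: $TARGET"

# Typical upload from the CI job (e.g. to a Nexus raw repository):
# curl -u "$NEXUS_USER:$NEXUS_PASS" --upload-file "build/$ARTIFACT" "$TARGET"
```

Promotion from snapshot to release can then be as simple as copying the same ZIP to the releases path once it has passed the required test stages.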
The point here is that there is exactly one place where builds happen and that is the CI server. This one build is then deployed to the various stages.
I have been using this approach for more than 10 years now. It is conceptually very simple and that also makes it robust and easy to explain to auditors.
Most importantly, through the clear separation of concerns it is easy to adjust to the exact requirements. So you can use this process for IS classic and MSR; for VMs and containers; with any kind of binary repo (incl. the file system). The approach stays 100% the same.
For me this model was a game changer, once I had arrived there. It gave me guidance and greatly reduced complexity.
I will stop here, before this post gets even longer. But there is obviously more, so please let me know if you are interested in particular aspects.
I agree, and I too have always been a big fan and proponent of "build once, deploy many" (or "build once, deploy anywhere"). I will say though that I worked with a team in my current project that implemented a CI/CD pipeline based on "Gitlab flow". Each target environment had a dedicated branch (e.g. development, test, production) and a deployment was triggered by a merge into that branch. The binary was built (and deployed) each time from that target branch. I resisted the idea at first, but I was impressed with how well they implemented it. I was also pleased with the simplicity of having the same pipeline apply across environments and of not having to maintain that artifact.
Not saying it's a better approach, but I have seen folks implement it surprisingly well.
Unfortunately no, we don't have a license for MSR.
This is likely how I am going to deploy it as well. The other option is to use central persistent storage for packages, but I am not sure whether that is a good idea or not. By the way, I am not worried about the package dependencies. What I am worried about are the service dependencies that Deployer checks.
We already have a setup similar to this. So removing Artifactory from the picture for the sake of simplicity is not an option (it already wasn't, because it turns out I can't push jars to Git due to company policy). So after pushing the package to Artifactory, do we download it to the packages/inbound folder and execute a package install service? Or unzip it before the container starts, if it is containerized? And what about the service dependencies? Where and how should we check service dependencies (not package dependencies)?
For deployment into a running VM, you would download into ./replicate/inbound and install with some scripting.
For deployment into a VM where IS is not running, you can just unzip it into the ./packages folder and then start IS.
For a container image you unzip into ./packages as part of the image building process.
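The container-image variant of those steps could be sketched like this (directory names are assumptions, and it assumes one ZIP per package named after the package):

```shell
#!/bin/sh
# Sketch: during the image build, unzip each release ZIP from a staging
# directory into the IS packages directory. All paths are assumptions.
prepare_packages() {
  stage_dir="$1"      # downloaded package ZIPs from the binary repo
  packages_dir="$2"   # .../IntegrationServer/packages in the build context
  for zip in "$stage_dir"/*.zip; do
    pkg="$(basename "$zip" .zip)"
    # An IS package ZIP holds the package contents at its top level,
    # so each ZIP gets its own directory named after the package
    mkdir -p "$packages_dir/$pkg"
    unzip -q "$zip" -d "$packages_dir/$pkg"
  done
}

# Example: prepare_packages ./staging ./IntegrationServer/packages
```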
The dependency check of Deployer does not make much sense in this kind of scenario. It was always nothing more than a poor man's approach anyway. You cover this and many more aspects by test automation.
Can you elaborate how this is actually a thing? I am not sure whether I have ever had this issue in the last 15+ years.
That's what I meant as well; I just didn't want to check the folder while typing. I usually don't memorize most of the directories. I find them by muscle memory.
It is pretty easy to break dependencies when you have tons of packages. I am not a developer in my company right now, but in the past I have broken dependencies by simply renaming a service during a refactor and accidentally forgetting to update its dependents. I don't think it will happen here, because I believe the devs don't refactor by renaming services.
Do you have a way of implementing unit tests without the WmTestSuite package? I believe we don't have a license for it.
It is of course possible to come up with a home-grown replacement for WmTestSuite. But it would cost a decent amount of money to do so.
If your organization really does not have a license for WmTestSuite, then it does not make much sense to use containers and Kubernetes. At least that is my personal opinion. Of course it is possible to come up with something. But the main value proposition of containers here is business agility for me. Without proper CI/CD it is not possible to achieve that.
I respectfully disagree. Although I'm a big fan of WmTestSuite, I believe you can get most of what you need done with free tools if the Test Suite isn't available.
In my past few CI/CD implementations, all we used for automated testing was newman, Postman's command-line tool. This is possible because, as you know, IS services (most of them at least) can be invoked via http/s using the invoke directive, so even if your services aren't exposed via RESTful APIs, you can still execute and test them with tools like Postman. It's, of course, important to write your services in a manner that makes them well-suited for automated testing (e.g. they must return meaningful output).
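For instance, even a bare-bones smoke test of a plain IS service through the invoke directive can be done with curl alone. The host, credentials, service name, and JSON field below are all made up for illustration:

```shell
#!/bin/sh
# Sketch: call an IS service via the /invoke directive and assert on the
# JSON it returns. Host, credentials, service, and field are made up.
check_order_status() {
  order_id="$1"
  resp="$(curl -s -u "$IS_USER:$IS_PASS" \
    -H "Accept: application/json" \
    "http://is:5555/invoke/acme.orders.api/getOrderStatus?orderId=$order_id")"
  # Meaningful output is what makes the service testable:
  # fail the check if the response carries no status field
  echo "$resp" | grep -q '"status"'
}

# Example: check_order_status 12345 || echo "smoke test failed"
```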
Now, you wouldn't have things like service mocking available to you, but I still believe the vast majority of the benefits are there. Plus, if mocking is something you're interested in, you can certainly spin up dummy Docker containers to act as those mocked endpoints. Things start to get a bit more complex there, but it is certainly possible.
Thanks, Percio. I must admit that I hadn't thought of this.
How do you handle the organizational side? I mean the configuration to drive the tests must be structured during development, so that the CI server can run all tests easily.
I recommended implementing something similar, but I am not a developer here. All I can do is recommend things to the devs; I can't enforce anything by myself. I think I should implement a demo and show others how it can be achieved. What I had in mind was to implement a test endpoint for each package and make a curl call to that service with no inputs. The service would return the results: basically a true or false with an errors output. But this requires services to be implemented in such a way that they can be called without writing anything anywhere.
I've done it a few different ways. I've stored the collections inside their respective packages, but then if you don't want the collections to get deployed with the package, you have to filter them out first, so I've stayed away from this option lately. I've stored the collections in a separate repo so certain folks can have access to the collections but not the code, but then you have to keep track of the revision relationship between the "test" and "code" repos. Finally, I've stored the collections in the same repo as the packages but under a separate folder (e.g. resources/test/postman/collections). I tend to like this last approach the most.
I highly recommend using a tool from the beginning rather than just curl. It will make your life much easier and you will get a lot more out of it. For example, Postman allows you to embed your test cases in the collection itself, so when you run the collection using newman, it will also run your test cases. Not only that, you can report your test results using different built-in reporters, including the junit reporter, which most CI tools will automatically parse for you and turn into a user-friendly representation.
For example, here's a set of tests in a Postman collection for a silly webMethods package I created called PcGreetings that returns a "hello" message to the caller:
This collection is executed by my Gitlab pipeline using newman with the following parameters:
--reporters cli,junit --reporter-junit-export
This causes the test results to be written in plain text to standard out, which results in the following output in my CI/CD pipeline:
newman

pc-greetings

❏ api

↳ get greetings via rad
  GET http://is:5555/rad/pc.greetings.api:greetings/greeting?name=Percio
  200 OK ★ 47ms time ★ 283B↑ 158B↓ size ★ 7↑ 2↓ headers ★ 0 cookies

  │ ★ application/json ★ text ★ json ★ utf8 ★ 72B
  │ {"greeting":"Hello from the development environment, P
  │ ercio!","count":1}
  │

  prepare  wait  dns-lookup  tcp-handshake  transfer-start  download  process  total
  41ms     8ms   2ms         588µs          26ms            8ms       684µs    88ms

  ✓ Received success reponse (URL: http://is:5555/rad/pc.greetings.api:greetings/greeting?name=Percio)
  ✓ Received greeting (URL: http://is:5555/rad/pc.greetings.api:greetings/greeting?name=Percio)
  ✓ Greeting matches pattern (URL: http://is:5555/rad/pc.greetings.api:greetings/greeting?name=Percio)

↳ get greetings via url alias
  GET http://is:5555/api/greetings?name=Percio
  200 OK ★ 6ms time ★ 257B↑ 158B↓ size ★ 7↑ 2↓ headers ★ 0 cookies

  │ ★ application/json ★ text ★ json ★ utf8 ★ 72B
  │ {"greeting":"Hello from the development environment, P
  │ ercio!","count":2}
  │

  prepare  wait  dns-lookup  tcp-handshake  transfer-start  download  process  total
  1ms      647µs (cache)     (cache)        3ms             2ms       69µs     7ms

  ✓ Received success reponse (URL: http://is:5555/api/greetings?name=Percio)
  ✓ Received greeting (URL: http://is:5555/api/greetings?name=Percio)
  ✓ Greeting matches pattern (URL: http://is:5555/api/greetings?name=Percio)

❏ utilities

↳ get count
  GET http://is:5555/invoke/pc.greetings.utils.sequence:getGreetingsCount
  200 OK ★ 10ms time ★ 335B↑ 97B↓ size ★ 8↑ 2↓ headers ★ 0 cookies

  │ ★ application/json ★ text ★ json ★ utf8 ★ 11B
  │ {"count":3}
  │

  prepare  wait  dns-lookup  tcp-handshake  transfer-start  download  process  total
  1ms      236µs (cache)     (cache)        8ms             1ms       48µs     11ms

  ✓ Received success reponse (URL: http://is:5555/invoke/pc.greetings.utils.sequence:getGreetingsCount)
  ✓ Received count (URL: http://is:5555/invoke/pc.greetings.utils.sequence:getGreetingsCount)
  ✓ Count is a number (URL: http://is:5555/invoke/pc.greetings.utils.sequence:getGreetingsCount)
  ✓ Count is greater than zero (URL: http://is:5555/invoke/pc.greetings.utils.sequence:getGreetingsCount)

┌─────────────────────────┬───────────────────┬───────────────────┐
│                         │          executed │            failed │
├─────────────────────────┼───────────────────┼───────────────────┤
│              iterations │                 1 │                 0 │
├─────────────────────────┼───────────────────┼───────────────────┤
│                requests │                 3 │                 0 │
├─────────────────────────┼───────────────────┼───────────────────┤
│            test-scripts │                 6 │                 0 │
├─────────────────────────┼───────────────────┼───────────────────┤
│      prerequest-scripts │                 4 │                 0 │
├─────────────────────────┼───────────────────┼───────────────────┤
│              assertions │                10 │                 0 │
├─────────────────────────┴───────────────────┴───────────────────┤
│ total run duration: 249ms                                       │
├─────────────────────────────────────────────────────────────────┤
│ total data received: 155B (approx)                              │
├─────────────────────────────────────────────────────────────────┤
│ average response time: 21ms [min: 6ms, max: 47ms, s.d.: 18ms]   │
├─────────────────────────────────────────────────────────────────┤
│ average DNS lookup time: 2ms [min: 2ms, max: 2ms, s.d.: 0µs]    │
├─────────────────────────────────────────────────────────────────┤
│ average first byte time: 12ms [min: 3ms, max: 26ms, s.d.: 9ms]  │
└─────────────────────────────────────────────────────────────────┘
But it also causes the test results to be exported using the standard junit output, which allows Gitlab to parse it and display this: