How to automatically sync Documents for CI/CD git Version Control

I am currently working on improving the CI/CD pipeline for webMethods. We have been using Broker and we are planning to upgrade the environment soon. During the upgrade I want to prepare a UM server so that we can start migrating our triggers. We don't have an LSD license, but we have an unlimited contract, so we can still use a local IS, and we have been using git locally for a long time. Buying an LSD license is probably not an option. So my plan is to configure git for packages and jars, but I have never configured git for CI/CD before.

My questions are:

  1. As far as I know, git is just like an external directory that keeps history, like svn (most basic explanation). So I think I can sync packages on both the target machine and the local development machine without an issue. If the changes aren't reflected, we can just reload the package after the sync. I can probably figure that out. But since git mostly acts as a directory system, it won't have any idea about the external assets, like publishable documents. What are my options for these IS external assets? Do I need to manually trigger the services that do the sync? Is there a simpler or safer way to do this?

  2. Packages are already in a git repo. In order to sync these packages with the local IS, our current approach is to sync them into a separate folder and import them using the Eclipse import. I thought we could simplify this by syncing them into the replicate/import directory and then pushing them from the replicate/archive folder. Is this a good idea? Is there anything I need to be careful about?

  3. Has anyone implemented a similar CI/CD setup without an LSD license? If so, can you explain how you did it?

We don't use BPM right now, so we don't need to consider it in the requirements, but we may use it in the future, so it is a nice-to-have but not a must.

I am not speaking from experience, as we still use Deployer runtime deployment, but perhaps you could use pub.publish:syncToProvider or pub.utils.messaging:syncDocTypesToUM as a package startup service?

1 Like

That's what I thought of using as well. Do you know if there is a service to discover all the publishable documents in a package? Otherwise I need to discover them manually and keep the list up to date manually too. I am looking for a solution with as few manual tasks as possible.

Haven’t tried it, but the documentation of the syncDocTypesToUM packageNames parameter suggests it will do all the documents in a package

1 Like

I tested this and it works. All I need to provide is the package names as a list. My only remaining challenge will come after migrating to a Kubernetes cluster: I don't want this task to run on each and every server; it only needs to run once, from any instance, after deployment.
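
For the run-once part, what I have in mind (just a sketch; the host, the credentials, and the assumption that repeated packageNames query parameters arrive as a string list all need to be verified) is to call the sync service over HTTP from the pipeline itself, or from a one-off Kubernetes Job, instead of running a startup service on every instance:

# Hypothetical post-deployment step, executed once by the pipeline against a single instance
curl -u "$IS_USER:$IS_PASS" \
  "http://is-host:5555/invoke/pub.utils.messaging:syncDocTypesToUM?packageNames=MyPackageA&packageNames=MyPackageB"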

I will keep the topic open for a while to discover how others solved this.

@engin_arlak ,

Regarding #1:

Are you planning on deploying the packages to the target by simply checking them out to the target server and then activating or reloading the package? If so, how do you plan on handling environment-specific items like connections?

I have worked with CI/CD quite a bit, and in most scenarios, it involved using Deployer to deliver the packages to the target servers. The reason being that Deployer can handle those environment-specific items via its varsub feature. If you use Deployer, it will take care of syncing the doctypes for you.

Regarding #2:

I'm not sure I follow your comment regarding packages/import and packages/archive. I typically clone the Git repo somewhere on my machine and I create a symlink from each package directly into the packages directory. I suppose you could directly clone into your packages directory, but then you have to be sure to ignore Wm* packages, for example. In a nutshell though, the package you actively work on is the same package that is in your Git repo. There's no need to copy or archive anything.
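
As a rough illustration of that setup (the repository URL, paths, and package name are placeholders, and on newer versions the packages directory sits under the instance directory):

# Clone the repo somewhere outside the IS installation
git clone git@gitserver:integration/is-packages.git ~/work/is-packages
# Link the package you are working on into the IS packages directory (Windows: mklink /D)
ln -s ~/work/is-packages/MyPackage /opt/softwareag/IntegrationServer/packages/MyPackage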

Regarding #3:

Yes, but we ended up developing our own local development plugin to make the whole thing more seamless. Our plugin wraps an Eclipse Java project around each package and sets up the build paths, making Java service development much easier.

There are basically a few things that need to be done if not using an existing plugin though:

  1. You have to move the package into the packages directory, for example, via a symlink
  2. You have to compile and frag the package in case it has Java services, because *.class and java.frag shouldn't be added to version control. You can use the jcode utility to accomplish this.
  3. You have to reload the package (if it already exists) or activate it if it doesn't.

Our plugin does these things and the Software AG plugin does some similar things if I'm not mistaken.

As far as I know, the Software AG local development plugin doesn't do anything with publishable doctypes, and to be honest, I've never found out-of-sync doctypes to be an issue when doing local development. Worse comes to worst, you will attempt something that will tell you your document is out of sync and then you can simply sync it.

Having said this, if you must create an automated doctype sync, then the service suggested by Dave seems promising because the input documentTypeNames is optional, implying perhaps that you can sync all doctypes. When it comes to the Broker, there's a service in WmRoot called wm.broker.sync:listOutOfSyncs that should come in handy.

HTH,
Percio

This is one of the options I am considering. Adapter connections are stored in different packages. These packages don't contain anything other than the connections themselves. So I plan to create a startup service that updates these values with the values from the vault. I will store these services directly in the connection packages. I might use the connection updater from @jahntech.cj as well. Haven't decided what to use yet. I need to test it first.

This is an excellent idea. I will probably do the same. There are two sides to this CI/CD configuration: one side is LSD, and the other is deployment and synchronization. I am trying to improve both parts. If my understanding is correct, as long as the git ignores are properly configured, committing code can be achieved just with the git commit and git push commands. We don't even need to use Eclipse for git. So I think that part will be pretty easy. In the CI/CD pipeline I plan to implement the same steps you noted.
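
For the git ignores, I am thinking of something minimal per package along the lines of your point about compiled artifacts (just a starting point, not a complete list):

# Generated by jcode during the build, so keep out of version control
code/classes/
ns/**/java.frag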

Do I need to manually compile Java services? Would reloading a package compile these Java services as well, like it does for flow services? I hope it happens automatically, but I don't think it does, right? If my memory is correct, when a Java service malfunctions, I always needed to make a change, like adding a whitespace, to make it compile again. I believe this part is already scripted, so I can probably take that part of the script and move that functionality into the CI/CD pipeline.

I will probably have to use Deployer as well in order to trigger a dependency check. If I can do that another way, I don't think I will need Deployer at all. But even if I do need Deployer, my plan is to run it on the server directly, so everything will be processed on the local servers (I hope). We have too many deployment servers right now and I plan to decommission all of them. I might create a single active Deployer pod to do the deployment on shared persistent storage, but we decided to divide the upgrade project into phases, and right now the containers are part of phase 2.

After I posted my reply, I saw your post about Kubernetes. Do you happen to be using MSR containers? With MSRs, there's a new and improved way of handling environment-specific configurations via an application.properties file. I haven't had a whole lot of hands-on experience with it, but it seems much simpler and it works relatively well.
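
Just to illustrate the idea (the exact property keys vary by version, so treat these as made-up examples and check the Microservices Runtime documentation), the file lets you externalize server settings and pull values from environment variables at startup:

# Illustrative application.properties fragment; keys shown are examples only
settings.watt.debug.level=$env{IS_LOG_LEVEL}
settings.watt.server.threadPool=$env{IS_MAX_THREADS}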

Yep, I completely agree. In fact, when introducing new webMethods customers to CI/CD, I also break it up into those two phases: pre-commit and post-commit. I sometimes find myself using EGit (the more commonly used Eclipse plugin for Git) when doing webMethods local dev, but it's not required and I'm not a huge fan. I tend to use TortoiseGit myself for most of my Git interactions.

The most convenient method for compiling and/or frag'ing many java services at once is the jcode utility. Reloading the package won't do it. There's a section in the Services Development Guide called "Using the jcode utility" that you will find useful, but in a nutshell, these are the commands:

# Compile all Java services in a package
jcode.[bat|sh] makeall <package>
# Create fragment files for all Java services in a package
jcode.[bat|sh] fragall <package>

You could quite easily create a simple bat or shell script that takes care of checking out a package, creating the symlink, running jcode against it, and then activating/reloading the package as needed. In the very first project in which I was exposed to webMethods local dev and CI/CD in 2008, believe it or not, this was precisely how we did it at first.
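
Something along these lines, where the paths, credentials, and especially the reload call are assumptions you would adapt to your environment (the reload can just as well be done from the Administrator UI):

# Rough local-dev helper: pull, link, compile/frag, reload one package
REPO_DIR=~/work/is-packages
IS_HOME=/opt/softwareag/IntegrationServer
PACKAGE=$1

git -C "$REPO_DIR" pull
ln -sfn "$REPO_DIR/$PACKAGE" "$IS_HOME/packages/$PACKAGE"

# jcode location varies by version (often under the IS or instance bin directory)
"$IS_HOME/bin/jcode.sh" makeall "$PACKAGE"
"$IS_HOME/bin/jcode.sh" fragall "$PACKAGE"

# Assumed WmRoot service for the reload; verify the name on your version
curl -u Administrator:manage \
  "http://localhost:5555/invoke/wm.server.packages:packageReload?package=$PACKAGE"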

If you're creating MSR Docker images to be deployed into Kubernetes, then you will likely not need Deployer at all. During your image build step, you will simply add your full packages to your Docker image directly from your version control system. When doing full package deployments like this, dependency checking is not as critical. You simply need to ensure that all dependent packages go together, which is typically easily accomplished by keeping related packages in the same repository, but you could certainly organize them in separate repos if it makes sense (e.g. one repo with common utility packages, other repos supporting specific business functions). After your Docker image is ready, you will deploy that same image across environments and you will leverage application.properties to ensure that environment-specific values are properly set in each environment.
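
A very simplified illustration of such an image build (the base image name, paths, and application.properties location are placeholders to check against the official image documentation):

# Hypothetical Dockerfile: bake the packages from the cloned repo into an MSR image
FROM registry.example.com/webmethods-microservicesruntime:10.15
COPY ./packages/ /opt/softwareag/IntegrationServer/packages/
# Environment-specific values get resolved at runtime (e.g. from environment variables)
COPY ./resources/application.properties /opt/softwareag/IntegrationServer/application.properties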

If you do end up needing Deployer though, I recently worked on a CI/CD solution where I dockerized the Asset Build Environment and Deployer, which allowed me to execute deployments in a serverless Gitlab pipeline. In other words, ABE and Deployer existed nowhere else other than in those Docker images and those containers were brought up and torn down each time. No need to maintain a running Deployer server. I have more info there too if you need it.

Take care,
Percio

1 Like

Great discussion and a lot of interesting thoughts!

I would like to throw in an additional, and in my view critical, aspect here:

My point is not about the details that have been discussed so far, but about the conceptual level; and the latter obviously has implications for the implementation details.

A VCS (like Git or Subversion) is not meant to be used as a direct(!) source for deployments. There are a lot of reasons for that, and many fall into the category of compliance or even legal requirements.

What needs to be done instead is to create a release from the VCS, and that release artifact is what gets deployed. Ask yourself this question: Would I download the source code for this new JVM version onto my PROD server, compile it there, and then install it? Of course not. What you do instead is install the latest release from the vendor.

The JVM comparison above is not 100% accurate relative to custom packages for IS. But the core aspect remains. So here is what the process should look like for me (which is by the way how commercial software is usually released today):

  • You need to create a release on the CI server, once the unit tests on the CI environment have passed. In the case of IS this means exporting the package into a ZIP file. Since the unit tests have passed, this package is known to be generally operable.
  • The package ZIP file needs to be stored in a binary repository. Usually you take something like Nexus, Artifactory, etc. for this. Both have open source versions that are not that difficult to set up. If you want to start really(!) small, you can even just use a designated directory on the file system, although I would not recommend it, since you would need to re-develop functionality that is already there (would you develop an RDBMS on your own?). A minimal sketch of such a build-and-upload step follows this list.
  • You can also store the build in binary form in Git. The critical aspect here is that in this case Git is being used as a binary repo and not as a VCS. As long as you are aware of this fundamental shift in the nature of its use, that is ok. But please try to make sure NOT to simply speak about Git then, because it will confuse people a lot. Talk about something like a "binary repo based on Git" instead.
  • When storing the ZIP file into the binary repo, you choose the path such that it contains the version number and also indicates whether this is just a snapshot triggered by the latest commit. Or you mark it as an official release that is supposed to be deployed into PROD. There are also approaches to promote a release from snapshot to official release, once it has gone through the required test stages.
  • The point here is that there is exactly one place where builds happen and that is the CI server. This one build is then deployed to the various stages.
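
To make the build-and-publish step concrete, here is a deliberately minimal sketch; the repository URL, layout, and credentials are placeholders, the $IS_HOME/$VERSION variables are assumed to be set by the pipeline, and a real setup would typically go through the Nexus/Artifactory REST API or CLI:

# On the CI server, after the unit tests have passed:
# 1. Create the release artifact by zipping the package contents
(cd "$IS_HOME/packages/MyPackage" && zip -r "/tmp/MyPackage-${VERSION}.zip" .)

# 2. Store it in the binary repo under a versioned path
curl -u "$REPO_USER:$REPO_PASS" \
  -T "/tmp/MyPackage-${VERSION}.zip" \
  "https://repo.example.com/repository/is-packages/MyPackage/${VERSION}/MyPackage-${VERSION}.zip"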

I have been using this approach for more than 10 years now. It is conceptually very simple and that also makes it robust and easy to explain to auditors.

Most importantly, through the clear separation of concerns it is easy to adjust to the exact requirements. So you can use this process for IS classic and MSR; for VMs and containers; with any kind of binary repo (incl. the file system). The approach stays 100% the same.

For me this model was a game changer, once I had arrived there. It gave me guidance and greatly reduced complexity.

I will stop here, before this post gets even longer. But there is obviously more, so please let me know if you are interested in particular aspects.

2 Likes

I agree and I too have always been a big fan and proponent of "build once, deploy many" (or "build once, deploy anywhere"). I will say though that I worked with a team in my current project where they implemented a CI/CD pipeline based on "Gitlab flow". Each target environment had a dedicated branch (e.g. development, test, production) and a deployment was triggered by a merge into that branch. The binary was built (and deployed) each time from that target branch. I resisted the idea at first but I was impressed with how well they implemented it. I was also pleased with the simplicity of having the same pipeline apply across environments and of not having to maintain that artifact.

Not saying it's a better approach, but I have seen folks implement it surprisingly well. :slight_smile:

Percio

Unfortunately no, we don't have a license for MSR.

This is likely how I am also going to deploy it. The other option is to use central persistent storage for the packages, but I am not sure whether that is a good idea or not. Btw, I am not worried about the package dependencies. What I am worried about is the service dependencies that Deployer checks.

Definitely would like to learn more about it.

We have a similar setup like this already. So removing Artifactory from the picture for the sake of simplicity is out of the question (it already was, because it turns out I can't push jars to git due to company policy). So after pushing the package to Artifactory, we download it to the packages/inbound folder and execute a package install service? Or unzip it before the container starts, if it is containerized? What about the service dependencies? Where and how should we check service dependencies (not package dependencies)?

This was my initial plan as well, but I can't do that because of the policies. I will probably do it for the configuration repo though.

For deployment into a running VM, you would download into ./replicate/inbound and install with some scripting.

For deployment into a VM where IS is not running, you can just unzip it into the ./packages folder and then start IS.

For a container image you unzip into ./packages as part of the image building process.
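
For the running-VM case, the "some scripting" can be as small as the sketch below; the artifact URL is a placeholder and the install service name is an assumption to verify against your IS version (the Administrator UI works as a manual fallback):

# Fetch the released package archive into the inbound folder of the running IS
curl -u "$REPO_USER:$REPO_PASS" \
  -o "$IS_HOME/replicate/inbound/MyPackage.zip" \
  "https://repo.example.com/repository/is-packages/MyPackage/${VERSION}/MyPackage-${VERSION}.zip"

# Install and activate it (assumed WmRoot service; inputs may differ per version)
curl -u Administrator:manage \
  "http://localhost:5555/invoke/wm.server.packages:packageInstall?file=MyPackage.zip"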

The dependency check of Deployer does not make much sense in this kind of scenario. It was always nothing more than a poor-man's approach anyway. You cover this and many more aspects by test automation.

Can you elaborate on how this is actually a thing? I am not sure whether I have ever had this issue in the last 15+ years.

That's what I meant as well, I just didn't want to check the folder while typing. I usually don't memorize most of the directories; I find them with muscle memory.

It is pretty easy to break dependencies when you have tons of packages. I am not a developer in my company right now, but in the past I have broken dependencies by simply renaming a service during a refactor and accidentally forgetting to update the dependencies. I don't think it will happen here, because I believe the devs don't refactor by renaming services.

Do you have a way of implementing unit tests without the WmTestSuite package? I believe we don't have a license for it.

It is of course possible to come up with a home-grown replacement for WmTestSuite. But it would cost a decent amount of money to do so.

If your organization really does not have a license for WmTestSuite, then it does not make much sense to use containers and Kubernetes. At least that is my personal opinion. Of course it is possible to come up with something. But the main value proposition of containers here is business agility for me. Without proper CI/CD it is not possible to achieve that.

I respectfully disagree. Although I'm a big fan of WmTestSuite, I believe you can get most of what you need done with free tools if the Test Suite isn't available.

In my past few CI/CD implementations, all we used for automated testing was newman, Postman's command line tool. This is possible because, as you know, IS services (most of them at least) can be invoked via http/s using the invoke directive, so even if your services aren't exposed via RESTful APIs, you can still execute them and test them with tools like Postman. It's, of course, important to write your services in a manner that makes them well-suited for automated testing (e.g. they must return meaningful output).
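
For example, a plain service with no REST API in front of it can be exercised like this (the service name and credentials are made up; the response format depends on your content handlers and headers):

# Query parameters arrive as string inputs in the service pipeline
curl -u tester:secret -H "Accept: application/json" \
  "http://is:5555/invoke/my.package.api:validateOrder?orderId=12345"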

Now, you wouldn't have things like service mocking available to you, but I still believe the vast majority of the benefits are there. Plus, if mocking is something you're interested in, you can certainly spin up dummy Docker containers to act as those mocked endpoints. Things start to get a bit more complex there, but certainly possible.

Percio

2 Likes

Thanks, Percio. I must admit that I hadn't thought of this.

How do you handle the organizational side? I mean the configuration to drive the tests must be structured during development, so that the CI server can run all tests easily.

I recommended implementing something similar, but I am not a developer here. All I can do is recommend things to the devs; I can't enforce anything by myself. I think I should implement a demo and show others how it can be achieved. What I had in mind was to implement a test endpoint for each package and make a curl call to that service with no inputs. The service would return the results, basically a true or false plus any errors. But this requires services to be implemented in such a way that they can be called without writing anything anywhere.

I've done it a few different ways. I've stored the collections inside their respective packages, but then if you don't want the collections to get deployed with the package, you have to filter them out first, so I've stayed away from this option lately. I've stored the collections in a separate repo so certain folks can have access to the collections but not the code, but then you have to keep track of the revision relationship between the "test" and "code" repos. Finally, I've stored the collections in the same repo as the packages but under a different folder (e.g. resources/test/postman/collections). I tend to like this last approach the most.

I highly recommend using a tool from the beginning rather than just curl. It will make your life much easier and you will get a lot more out of it. For example, Postman allows you to embed your test cases in the collection itself, so when you run the collection using newman, it will also run your test cases. Not only that, you can report your test results using different built-in reporters, including the junit reporter, which most CI tools will automatically parse for you and give you a user-friendly representation.

For example, here's a set of tests in a Postman collection for a silly webMethods package I created called PcGreetings that returns a "hello" message to the caller:

[screenshot: the PcGreetings Postman collection with its requests and test cases]

This collection is executed by my Gitlab pipeline using newman with the following parameters:

--reporters cli,junit --reporter-junit-export
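
In full, the invocation looks roughly like this (the collection path, variable, and export location are placeholders):

# Run the collection and emit both console output and a JUnit report for the CI tool
newman run resources/test/postman/collections/pc-greetings.postman_collection.json \
  --env-var "baseUrl=http://is:5555" \
  --reporters cli,junit \
  --reporter-junit-export newman/results.xml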

This causes the test results to be written in plain text to standard out, which results in the following output in my CI/CD pipeline:

newman
pc-greetings
❏ api
↳ get greetings via rad
  GET http://is:5555/rad/pc.greetings.api:greetings/greeting?name=Percio
  200 OK ★ 47ms time ★ 283B↑ 158B↓ size ★ 7↑ 2↓ headers ★ 0 cookies
  ┌ ↓ application/json ★ text ★ json ★ utf8 ★ 72B
  │ {"greeting":"Hello from the development environment, P
  │ ercio!","count":1}
  └
  prepare   wait   dns-lookup   tcp-handshake   transfer-start   download   process   total 
  41ms      8ms    2ms          588µs           26ms             8ms        684µs     88ms  
  ✓  Received success reponse (URL: http://is:5555/rad/pc.greetings.api:greetings/greeting?name=Percio)
  ✓  Received greeting (URL: http://is:5555/rad/pc.greetings.api:greetings/greeting?name=Percio)
  ✓  Greeting matches pattern (URL: http://is:5555/rad/pc.greetings.api:greetings/greeting?name=Percio)
↳ get greetings via url alias
  GET http://is:5555/api/greetings?name=Percio
  200 OK ★ 6ms time ★ 257B↑ 158B↓ size ★ 7↑ 2↓ headers ★ 0 cookies
  ┌ ↓ application/json ★ text ★ json ★ utf8 ★ 72B
  │ {"greeting":"Hello from the development environment, P
  │ ercio!","count":2}
  └
  prepare   wait    dns-lookup   tcp-handshake   transfer-start   download   process   total 
  1ms       647µs   (cache)      (cache)         3ms              2ms        69µs      7ms   
  ✓  Received success reponse (URL: http://is:5555/api/greetings?name=Percio)
  ✓  Received greeting (URL: http://is:5555/api/greetings?name=Percio)
  ✓  Greeting matches pattern (URL: http://is:5555/api/greetings?name=Percio)
❏ utilities
↳ get count
  GET http://is:5555/invoke/pc.greetings.utils.sequence:getGreetingsCount
  200 OK ★ 10ms time ★ 335B↑ 97B↓ size ★ 8↑ 2↓ headers ★ 0 cookies
  ┌ ↓ application/json ★ text ★ json ★ utf8 ★ 11B
  │ {"count":3}
  └
  prepare   wait    dns-lookup   tcp-handshake   transfer-start   download   process   total 
  1ms       236µs   (cache)      (cache)         8ms              1ms        48µs      11ms  
  ✓  Received success reponse (URL: http://is:5555/invoke/pc.greetings.utils.sequence:getGreetingsCount)
  ✓  Received count (URL: http://is:5555/invoke/pc.greetings.utils.sequence:getGreetingsCount)
  ✓  Count is a number (URL: http://is:5555/invoke/pc.greetings.utils.sequence:getGreetingsCount)
  ✓  Count is greater than zero (URL: http://is:5555/invoke/pc.greetings.utils.sequence:getGreetingsCount)
┌─────────────────────────┬───────────────────┬──────────────────┐
│                         │          executed │           failed │
├─────────────────────────┼───────────────────┼──────────────────┤
│              iterations │                 1 │                0 │
├─────────────────────────┼───────────────────┼──────────────────┤
│                requests │                 3 │                0 │
├─────────────────────────┼───────────────────┼──────────────────┤
│            test-scripts │                 6 │                0 │
├─────────────────────────┼───────────────────┼──────────────────┤
│      prerequest-scripts │                 4 │                0 │
├─────────────────────────┼───────────────────┼──────────────────┤
│              assertions │                10 │                0 │
├─────────────────────────┴───────────────────┴──────────────────┤
│ total run duration: 249ms                                      │
├────────────────────────────────────────────────────────────────┤
│ total data received: 155B (approx)                             │
├────────────────────────────────────────────────────────────────┤
│ average response time: 21ms [min: 6ms, max: 47ms, s.d.: 18ms]  │
├────────────────────────────────────────────────────────────────┤
│ average DNS lookup time: 2ms [min: 2ms, max: 2ms, s.d.: 0µs]   │
├────────────────────────────────────────────────────────────────┤
│ average first byte time: 12ms [min: 3ms, max: 26ms, s.d.: 9ms] │
└────────────────────────────────────────────────────────────────┘

But it also causes the test results to be exported using the standard junit output, which allows Gitlab to parse them and display them in the pipeline's test report view.

HTH,
Percio

2 Likes