Containers are not a new thing – they have been a hot topic for years. Docker, the company whose name became synonymous with the technology and which now has a huge community worldwide, was founded more than a decade ago. Despite this, we see rather slow adoption of containers both in our customer base and in our own R&D teams. This is strange considering the technology has so many clear advantages over standard VMs. The reason may lie in the fact that techies mostly stick to familiar practices. Many would like to leverage container technology, but it is perceived as something that requires a change in the way we do infrastructure, along with a new toolset, new development processes, new monitoring processes, etc. Suddenly the bar gets too high, and many people don’t know where to start. It feels like one has to spend a month or two figuring out the technology first.
As is often the case in life, we can use something without knowing every detail about it. Many people drive a car with no knowledge of how the combustion engine works or how exactly the gearbox delivers traction to the wheels. We don’t care, as long as it is usable enough – it is enough to unlock the car, fill the tank, shift into gear and use the steering wheel. In a similar manner, this article will suggest small, practical examples of how to use containers in a familiar context, so that everybody can achieve small wins while getting to know the technology.
Everybody uses VMs, and containers are often seen as their next generation. Let’s briefly check the differences in multiple categories.
Figure 1: Source: Docker.com
Figure 1 shows the runtime differences between a container stack and a VM stack. Notice the overhead every VM brings – each one carries a thick layer of its own guest operating system, whereas a container shares the host operating system through the container runtime in a much more efficient way. Because of this, containers have a much smaller footprint in terms of memory and disk allocation, and a much faster boot-up time. Their performance is also better and more consistent.
A VM is a general-purpose computer – you can run desktop and server applications on it. As with a normal computer, there is no limitation or best practice for the number of applications one can run.
A container uses the host operating system in isolation. It is small and fast, but per best practice it runs only a single application. Further, this application can only be headless, as a container, contrary to a VM, does not have a UI console for working with desktop applications. Headless means the application is accessible only via network interfaces – TCP endpoints, REST endpoints, web UIs, web services, etc. These two facts explain why containers are often used in microservices architectures – each container holds a single service-type application.
VMs are an abstraction of a normal computer and, as such, they persist everything written by the processes running inside them. We can revert to an earlier snapshot of a virtual machine, but no data gets lost on a simple restart. Contrary to this, containers are treated as immutable: any data generated in the writable layer of a container is lost when the container is removed and recreated. Data that needs to survive has to be persisted in a volume mapped to the host operating system.
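The volume mechanism can be sketched with the standard Docker CLI. This is a minimal, illustrative example (the container name, volume name and password are assumptions, not from the article); the official `postgres` image and `-v` flag are standard Docker usage:

```shell
# Start a PostgreSQL container whose data directory is mapped to a volume.
# "pgdata" is a named volume; Docker stores it on the host filesystem,
# so the data survives the container being removed and recreated.
docker run -d --name mydb \
  -e POSTGRES_PASSWORD=secret \
  -v pgdata:/var/lib/postgresql/data \
  postgres:15

# Destroy the container – everything in its writable layer is gone...
docker rm -f mydb

# ...but a new container attached to the same volume sees the old data.
docker run -d --name mydb \
  -e POSTGRES_PASSWORD=secret \
  -v pgdata:/var/lib/postgresql/data \
  postgres:15
```

Anything written outside the mapped volume disappears with the container, which is exactly the immutability described above.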
Virtual machines are not designed to be shared between developers. While this does happen – everybody has had a project VM called something like “dev setup virtual machine revision 32” – it is always a cumbersome process involving copying via hard drives, network shares, etc. In contrast, containers are designed to be shared. The mechanism for sharing them is the container registry – a server component, similar to a VCS, that can hold many types and versions of container images. Like source code, those images can be pushed, pulled and tagged with a version, enabling sharing between developers or promotion to an environment like test or production. Unlike code, and much like VMs, containers provide a fully packaged solution that contains all dependencies – OS settings, environment variables, tools, libraries, etc. In summary, containers solve the “it worked on my machine” problem without imposing the problem of sharing or promoting huge virtual machines.
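In practice, the sharing workflow really does resemble a VCS. A hedged sketch using the standard Docker commands, with an assumed team account `myteam` and image name `dev-setup` (both illustrative):

```shell
# Tag a locally built image with a registry path and a version,
# much as you would tag a release in source control.
docker tag dev-setup:latest myteam/dev-setup:1.32

# Push it to the registry so colleagues can fetch it...
docker push myteam/dev-setup:1.32

# ...and anyone on the team pulls the identical, fully packaged environment.
docker pull myteam/dev-setup:1.32
```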
So, what are the quick wins for the containers with webMethods? Let me propose multiple scenarios here – some are easy, some are more advanced.
Go and explore Docker Hub (hub.docker.com). Our products are there along with many others. With a single command line and a download of an optimized image, you can test the new version of our software – Integration Server, API Gateway, Terracotta. The same goes for services like databases (including NoSQL databases), caches and third-party app servers. It has never been easier to download an application with all the required settings and tools (like Java, Python, etc.), spin it up and start using it. You can start an application, use it for a test, and then destroy it minutes later without any cumbersome installation and configuration. A good example would be to set up a database, observe how your application stores data in it, and then destroy it.
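The database example takes only two commands. This sketch uses the official `mongo` image from Docker Hub and its standard port; the container name is an assumption:

```shell
# Pull and start a MongoDB instance in one line – no installer, no setup.
docker run -d --name trial-db -p 27017:27017 mongo:6

# Point your application at localhost:27017, run your experiments...

# ...then remove the whole thing, data and all, in seconds.
docker rm -f trial-db
```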
Do not maintain all products on the dev machines as on-prem installations – going forward, you may do so only for the Designer. The rest can be used as containers. Here is a description of how the local service development environment can be set up with containers: Reverb.
When a fix for the Integration Server is needed, you won’t use SUM to apply it on every machine; instead, a developer will pull the latest image that already contains the fix and continue developing.
Here are some benefits of this approach:
- Set up the dev environment in minutes, not hours
- Do not wait for the cumbersome installation process of the runtimes, nor create complex installation scripts that must be modified whenever fixes become available
- Easily move an environment from one machine to another
Containers allow you to create a test environment for specific tests without a slow installation. Everyone who has set up CI/CD for BPMS processes remembers how difficult it is to clean up the environment afterward, as the processes write information to the Integration Server, My webMethods Server and the database. If your environment is disposable, as containers are when recreated, this problem practically disappears. Another quick win is scaling the environment by creating as many containers as you need for a particular test scenario.
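Scaling such a disposable environment can be as simple as starting several identical containers. A sketch, assuming a hypothetical `my-service:test` image that listens on port 8080 (both names illustrative):

```shell
# Start three identical instances of the service under test,
# each mapped to its own host port (8081, 8082, 8083).
for i in 1 2 3; do
  docker run -d --name "svc-$i" -p "808$i:8080" my-service:test
done

# When the test scenario is done, throw the whole environment away –
# nothing to clean up inside IS, MWS or the database.
docker rm -f svc-1 svc-2 svc-3
```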
These usages are quick wins that leverage some of the containers’ properties without being a proper container promotion pipeline – here the containers are simply used as disposable, easy-to-run VMs.
To speed up the evaluation process when introducing a new fix or a new major version, just get the latest container image and test it out. This saves you hours of installation and provides a very quick test of whether the fix works or whether your code must be modified for the new version.
Promotion is not a quick win, but it is worth mentioning. One of the biggest benefits of containers is that they package everything your service needs – all external libraries, OS dependencies, environment variables, etc. The term “container” was adopted in software because the mechanism resembles shipping: somebody packs a container on the other side of the world, and it can travel by ship, train and truck to its final destination while still holding the very same content at the receiving end. Such mobility and immutability are very powerful in the technology world, as they ensure the same piece of software with all its dependencies runs on every stage of the promotion pipeline. If applied correctly, you’ll never again hear “it worked on my machine” or “the prod system had a different fix level than the test environment.” However, this result requires a complete rewiring of the CI/CD methodology, as it now has to test and promote containers rather than source code. This is not an easy task, but it comes with many rewards.
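Promoting a container rather than source code means the same immutable image moves through the stages; only its tags change. A hedged sketch of such a pipeline step, with an illustrative registry address, service name and stage tags:

```shell
# CI builds the image once, from the tested commit, and versions it.
docker build -t registry.example.com/orders-service:1.4.0 .
docker push registry.example.com/orders-service:1.4.0

# Promotion to the next stage re-tags the very same image –
# the bits that passed the tests are byte-for-byte what reaches production.
docker tag registry.example.com/orders-service:1.4.0 \
           registry.example.com/orders-service:prod
docker push registry.example.com/orders-service:prod
```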
Once you’ve achieved the containerized promotion, you might fully leverage the mobility of the software by deploying it everywhere – on-prem, cloud or hybrid scenario. Migrating existing solutions to the cloud when they run in containers may be simpler than using a migration infrastructure or reimplementing the services in SaaS.
Software AG is going to bet more and more on containers for the delivery of its classical on-premises products. Soon, we will launch our own public container registry with regularly updated container images. You should also monitor our official GitHub account for any samples around the CI/CD promotion and container orchestration.
For a deeper understanding of how to deploy container solutions with webMethods, take a look at the more comprehensive blog post on containers and container orchestration. The topic is also discussed in the following webinar.