CONTAINERS – An Overview
Containers, also known as operating-system-level virtualization, are a lightweight approach to virtualization that provides only the bare minimum an application needs to run and function as intended. In a way, they can be considered super-minimalist virtual machines that don't need or run on a hypervisor.
If we allow our imagination to run wild, we can assume that developers, frustrated with OS-level dependencies and cross-platform challenges on Linux and Windows, decided to create a world where any application can run in any environment, irrespective of whether the underlying OS is Windows or Linux.
So, the basic idea is to isolate the "working code" completely from the outside world, its environment, and its dependencies, and to package it as a self-contained executable unit that can run entirely on its own.
Items usually bundled into a container include:
* Application code & runtime
* System libraries
* Configuration files
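As a concrete sketch, a hypothetical Dockerfile for a small Python service might bundle those items like this; the base image, file paths, and names here are illustrative assumptions, not part of any real project:

```dockerfile
# Illustrative Dockerfile (hypothetical paths and names)
FROM python:3.12-slim             # base image: runtime + system libraries
COPY requirements.txt .           # dependency manifest
RUN pip install -r requirements.txt
COPY app/ ./app                   # application code
COPY config/app.yaml /etc/app/    # configuration files
CMD ["python", "-m", "app"]       # the command the container runs
```

Building this file produces a single image that carries the code, its runtime, its libraries, and its configuration together, which is exactly the "self-contained unit" idea described above.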
Major Competition – Virtual Machines & Clouds:
Before the advent of containers in the enterprise software landscape, we used to host our applications on VMs.
People, especially infrastructure teams and policy-makers, liked this approach because they could take a big server (hardware), slice it into multiple smaller chunks known as VMs, and use those to create a network of multiple independent servers/systems connected over the same network. But now, with containers on the rise, VMs look like a dated idea, because we can achieve a better level of abstraction and independence using containers.
Another contender, in a way, is cloud platforms. Some argue that we wouldn't need Docker/containers at all if we analyzed the requirements and roadmaps properly, selected the most suitable cloud platform, and used its PaaS (Platform as a Service) offerings, which provide a comparable level of abstraction. But enterprises taking this approach become tightly bound to that particular cloud platform provider.
A middle ground is to select a cloud platform provider, create our application containers, and run those containers on top of the cloud layer, so that the application no longer depends on the provider's native Linux or Windows environment.
Below is a diagram, depicting the difference between VMs & Containers:
(Ref Docker Website)
Each virtual machine contains its own operating system and application(s).
Containers running on the same host use/share the kernel of the host OS.
VMs use hypervisors to share and manage hardware.
Containers share the host OS kernel to access the hardware.
VMs have their own kernel and don't use/share the host OS kernel, so there is deep isolation between VMs.
Containers use the same kernel as the host OS, so they are not 100% isolated and may present some security/operational issues.
VMs running on the same server can run different OSs, e.g. one VM on Ubuntu, another on Windows Server.
Containers, by contrast, are bound to the host kernel, so all the containers on the same server share the same OS kernel (though their packaged userlands may differ).
Below is a list of various container solutions and container platforms presently available in the market:
- Docker
- Amazon Elastic Container Service
- Google Kubernetes Engine
- IBM Cloud Kubernetes Service
- Red Hat OpenShift Container Platform
- BSD Jails
- Solaris Zones
Among those mentioned above, Docker, Amazon ECS & GKE are easily the most popular and widely accepted/used solutions.
To understand how containers work, let's revisit a concept that has been around software engineers for a long time and that most of us are well versed in: namespaces.
Years ago, the Linux kernel gained a feature called 'namespaces' (Google later contributed the related cgroups mechanism for limiting resource usage).
A namespace is a set of symbols used to organize objects of various kinds so that those objects can be referred to by name. In Java, for example, a namespace (a package) ensures that all identifiers within it have unique names so that they can be easily identified.
In the world of containers, the idea is to place system resources into namespaces and to allow a process to use a resource only if both belong to the same namespace. You tell each process what its namespace is and which resource namespaces it can access, thereby creating a level of isolation where each process can see only the resources in its own namespace.
This is essentially how Docker works. Each container runs in its own set of namespaces, and all containers use the same kernel to manage those namespaces. Because the kernel is running the show and knows which namespace was assigned to each process, it makes sure a process can only access resources in its own namespace.
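The permission check the kernel performs can be modelled with a toy sketch. This is illustrative application code only, not real kernel logic, and every name in it (the `Kernel` class, the pids, the paths) is invented for the example:

```python
# Toy model of namespace-based isolation (illustrative only; real
# isolation is enforced by the Linux kernel, not by application code).

class Kernel:
    def __init__(self):
        self.process_ns = {}   # pid -> namespace name
        self.resource_ns = {}  # resource -> namespace name

    def assign_process(self, pid, ns):
        self.process_ns[pid] = ns

    def assign_resource(self, resource, ns):
        self.resource_ns[resource] = ns

    def access(self, pid, resource):
        # Grant access only when the process and the resource
        # live in the same namespace.
        return self.process_ns.get(pid) == self.resource_ns.get(resource)

kernel = Kernel()
kernel.assign_process(101, "container-a")
kernel.assign_process(202, "container-b")
kernel.assign_resource("/data/app-a", "container-a")

print(kernel.access(101, "/data/app-a"))  # True  - same namespace
print(kernel.access(202, "/data/app-a"))  # False - different namespace
```

Process 202 cannot even "see" the resource of container A, which is the isolation property the paragraph above describes.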
Key benefits of containers:
- Containers are small compared to VMs.
Container images start at a few megabytes, compared to VMs, which usually weigh in at gigabytes. So a given server can host more containers than VMs.
- Containers use fewer resources.
Since containers don't carry a full OS and share the host kernel, they are pretty light.
- Fast Boot.
It's just a matter of seconds to set up a new container. So in case of a sudden surge in load, or some mishap in production with the existing running system, we can easily provision a new container and debug the existing system in parallel.
- Removes the Environmental Risk.
Many a time, a developer's code works in one environment and then has issues in the next promoted environment. Containers reduce this risk to a minimum, because the actual Prod/QA/SIT/UAT environments run the same image as the one in which the code was developed.
Another benefit: because additional containers are so quick to spin up once the basic image is ready, containers help immensely in scaling load-handling capacity during sudden request surges.
A fault-tolerant or load-balanced system can also be created very easily with containers, helping to achieve high availability, especially for customer-centric solutions where even a second of downtime may result in monetary losses.
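As a hedged sketch of that scaling pattern, a minimal Docker Compose file can run several identical replicas of a service behind whatever load balancer you choose; the service and image names below are assumptions for illustration:

```yaml
# docker-compose.yml (illustrative service and image names)
services:
  web:
    image: my-app:latest   # hypothetical application image
    ports:
      - "8080"             # let Docker assign host ports so replicas don't clash
```

With this file in place, `docker compose up --scale web=3` starts three identical containers of the same service, giving a simple form of horizontal scaling and redundancy.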
Drawbacks:
- Security
Since containers provide a lower level of isolation than VMs, they increase security threats (read: hacking), because vulnerabilities in the shared host kernel are exposed to every container.
- Resource Consumption
Since all containers share the kernel of the same host OS, a containerized system needs to be designed with proper efficiency and planning, or else we might end up with an overloaded system.