Introduction to Containers
Imagine you need to implement an end-to-end application stack consisting of a web server using Node.js, a database such as MongoDB, a messaging system such as Redis, and an orchestration tool like Ansible. Following the usual approach, you would spin up a heavy set of VMs, one per application or component, and make sure everything is compatible with the underlying OS, binaries, and libraries.
This approach requires heavy testing and research, since all these applications have different dependencies and requirements. And even if you manage to find a suitable OS, you won't be able to update any of the applications or components without going through the compatibility hassle all over again. This is a developer's nightmare.
The solution is simple. Use Containers.
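As a sketch of what this looks like in practice (assuming Docker and Docker Compose as the container engine; the image tags, service names, and port mappings below are illustrative, not prescriptive), the stack from the example above could be declared in a single docker-compose.yml:

```yaml
# Hypothetical docker-compose.yml for the stack described above.
# Image versions and ports are assumptions for illustration only.
version: "3"
services:
  web:
    image: node:18        # Node.js web server
    ports:
      - "3000:3000"
  db:
    image: mongo:6        # MongoDB database
    volumes:
      - mongo-data:/data/db
  cache:
    image: redis:7        # Redis messaging/cache
volumes:
  mongo-data:
```

Each service runs as its own container with its own libraries and dependencies, and a single `docker compose up` would start all three side by side on one host, with no per-component VM or OS compatibility research required.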
Containers and Virtual Machines
Containers are isolated environments with their own libraries and dependencies that share the kernel of the OS they run on. For example, imagine you have a CentOS machine and install a container engine on it. You then spin up some services as containers on that machine. Each container has its own libraries and dependencies; the only thing they have in common is the underlying OS. The following image gives you a high-level view of the architecture and usage of a container:
Virtual machines run on a hypervisor and have their own OS, binaries/libraries, and their own set of software and applications. Each of these components uses resources, and only what is left after allocating for this basic infrastructure is available to the user's actual application.
Therefore, the more VMs you have, the more resources you need to assign. The following diagram shows how containers share the most resource-intensive part of the OS and let the user run many lightweight applications on the same OS. The apps will sometimes even share binaries and libraries if they are compatible with each other, making them lighter still.
Usage and Benefits
There is a lot of overhead and wasted resources when using VMs in the modern world. Containers rectify this by providing a portable way to package and distribute apps and services anywhere, anytime, and in any number of instances you want. A few of the advantages of using containers are listed below:
- Lightweight – while VMs are usually discussed in terms of GBs, containers take up far less space and are usually measured in MBs.
- Faster boot time – due to their small size, containers boot up faster than a traditional VM-based application.
- Resource utilization – since hundreds or even thousands of containers can run in a single VM, the resource overhead is significantly lower than in a traditional VM-based application stack.
Because of these benefits, many companies are now adding containerization engines to their portfolios to increase productivity and efficiency. However, we must not think that containers are the only way forward. It is not a choice between containers and virtual machines; we should consider the advantages of both technologies and combine them to take the best of both worlds.
Virtualization offers total isolation between VMs, while containers provide only a limited degree of isolation because they share the host kernel. Therefore, choose virtualization when you need to isolate environments, and containers when you need efficiency and time savings. Combine the two and you only need a small number of VMs to gain isolation, efficiency, and time savings all at once.
History of containers
Containerization is not a new technology or concept. It has been around since the early 2000s, and several giants in the IT industry have introduced their own variations on the idea.
2000 – FreeBSD Jails
The idea was to change the root directory for a set of processes, which would then run in their own isolated environment.
2001 – Linux VServer
This is similar to FreeBSD Jails: the system partitioned resources (file systems, network addresses, memory) between isolated units. The solution was implemented by patching the Linux kernel; however, the last patch was released in 2006.
2005 – OpenVZ (Open Virtuozzo)
Here, virtualization was done at the operating-system level by patching the Linux kernel to provide virtualization, resource management, and checkpointing.
2008 – LXC
Also known as Linux Containers, LXC was the first complete Linux container manager. It used cgroups and Linux namespaces to implement containers and worked on a standard Linux kernel without requiring any patches.
2013 – LMCTFY
Let Me Contain That For You (LMCTFY) was an open source version of Google's container stack, providing Linux application containers. Few people knew it, but they were using this technology whenever they accessed Google services, each of which ran inside a container for each user.
2013 – Docker
Docker used LXC during its initial stages and later replaced that container manager with its own library, libcontainer. Containerization gained huge acceptance in the community because Docker offered a complete ecosystem, which set it apart from other technologies.
Today Docker is the most widely used containerization technology, thanks to its resilient nature and excellent tooling.
In the coming articles I will go in depth into Docker and all its functionality to show why it holds such popularity in the community. Learning Docker is the best starting point for any DevOps learner and a gateway to container orchestration tools like Docker Swarm and Kubernetes.
If you want to learn more about Containerization and how VMs and Containers play together, you should check out Zero, our container-based App-Platform.
With over 10 years of hands-on experience in software development, infrastructure, and cloud architecture, we offer a broad range of expertise in DevOps, Infrastructure as Code, and release management.
We advise startups on topics such as technical infrastructure, team building and development workflows, code optimization, recruiting, and automation.
We offer workshops and talks on topics such as Docker, Docker Swarm, GitLab CI, monitoring with Prometheus and Grafana, automation, and change management.
We develop open source software and provide consulting on a wide range of open source topics.
We coach team and project leads in agile methods for team and release management, and we support the implementation of the technical foundations for a future-proof business model.
Our team of consultants, IT experts, and coaches is at your side for all questions around digital transformation and becoming a software-enabled company.
Challenge us
If you're thinking "there's more we could do" but don't know what or how, let's talk! In a no-obligation initial conversation, we can explore your options together, completely risk-free.