In class, I am continually asked for a simple explanation of the difference between an LXC container and a virtual machine. The diagram below is used in several of my classes to explain different aspects of the x86_64 architecture and how Linux works. KVM is a hypervisor built into the Linux kernel, and LXC is a container system that is also built into the Linux kernel.

[Diagram: the x86_64 privilege rings, with the Linux kernel in Ring 0 and the rest of the system in Ring 3]

The x86 architecture has a security mechanism called rings. Only software running in Ring 0 has direct access to the hardware: the privileged instructions that interact with the hardware can execute only in Ring 0. In a Linux system, the only code that runs in Ring 0 is the Linux kernel, the part of the operating system whose primary function is to manage the hardware and the execution of processes. Rings 1 and 2 are not used by Linux. Ring 3 contains the rest of the software, including the parts of the operating system (OS) that are not directly involved with the hardware or with the management of processes.

At its core, KVM provides a virtual node (computer system) by virtualizing all of the hardware present on a node. KVM sets up the virtual node, and then some OS, called the guest OS, is executed on top of the KVM software. Application software is then executed on the virtualized node by the guest OS. As you would imagine, there are some very difficult problems to be solved for this to work correctly. How are the guest OS's privileged instructions translated so that the host's kernel can execute them safely? How does the guest OS get its time slice? These are just two of a large number of problems, and how each problem is solved divides hypervisors into different types.
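Modern CPUs help solve the privileged-instruction problem with hardware virtualization extensions, which KVM depends on. A quick way to check for them on a Linux host (a sketch; the flag names are the ones Linux reports in /proc/cpuinfo, where vmx is Intel VT-x and svm is AMD-V):

```shell
# Check whether the CPU advertises hardware virtualization extensions,
# which KVM uses to run guest privileged instructions safely.
# vmx = Intel VT-x, svm = AMD-V.
if grep -q -E '(vmx|svm)' /proc/cpuinfo; then
  echo "KVM hardware support: yes"
else
  echo "KVM hardware support: no"
fi
```

If the answer is yes, the kvm kernel module can use these extensions rather than translating every privileged instruction in software.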

What is the payback for all of this complexity? We can run virtually any OS as a guest OS on a Linux system using the virtualized node provided by KVM.

A use case for this? I work in Linux; I do not own a computer with a Microsoft OS. However, when I need to deliver files in Microsoft Word or Microsoft PowerPoint format, those applications do run on my Linux system in a virtual guest. On my Linux host, I use the KVM hypervisor to run Windows 7, on which I run Microsoft Word and Microsoft PowerPoint. It works well because I am using a laptop with two quad-core i7 CPUs and 16 GiB of memory.

A Linux container does not virtualize hardware. At its core, a Linux container provides an isolated environment with its own process space and network space while sharing the host's kernel. In simpler terms, this means four things. First, the container must contain a Linux OS, minus the kernel, that will work with the host's kernel; in practice, this means a Linux OS that will work with a Linux kernel of version 3.0 or later. Second, the container runs a single process (yes, it can be made to run multiple processes) within the context of the Linux OS in the container. Third, the container has any additional software needed to run the process, and should have only that software. And fourth, the container has its own network addresses and ports; there are multiple methods to expose container networking to the host, the Internet, and other containers.
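The isolation just described is built from kernel namespaces, and the kernel makes each process's namespace memberships visible under /proc. As a small illustration (assuming a Linux host with /proc mounted), two processes share a process space or network space only if they report the same namespace IDs; processes inside one container share these IDs, while the host and other containers see different ones:

```shell
# Print the PID and network namespace IDs of the current shell.
# Two processes report the same ID only if they share that namespace.
readlink /proc/$$/ns/pid /proc/$$/ns/net
```

The output is a pair of identifiers such as pid:[...] and net:[...]; the numbers themselves are arbitrary kernel inode numbers and differ from system to system.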

What do I get for running my process in a Linux container? I can move my Linux container to any Linux system running kernel 3.0 or later that has the correct hardware for my process, and I can be confident that it will run properly.
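Before moving a container to a new host, a minimal sanity check (a sketch, assuming the target host is Linux) is to confirm that the kernel meets the 3.0 floor:

```shell
# Compare the running kernel's major version against the 3.0 minimum
# required for the container to share this host's kernel.
major=$(uname -r | cut -d. -f1)
if [ "$major" -ge 3 ]; then
  echo "kernel OK for containers"
else
  echo "kernel too old for containers"
fi
```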

The use case? I do not have to make my process work in the development environment, then in the test environment, and then in the production environment. I build, test, and deploy in the environment of the Linux container, and that environment travels with the process, already inside the container, all the way to production.

There usually follows a vigorous discussion. How does this affect program design? What does this do to DevOps? How does this affect Agile development? What technology is used to distribute containers? How do you update and manage containers?

These are topics for another time.

ROI currently has an introductory course titled Essential Docker, which discusses the current status of Docker and similar technologies and shows one set of answers for building, maintaining, and distributing containers. A second course, currently titled Programming with Linux Containers, is planned for release in April 2016.
