Devs are from Venus, Ops are from Mars, Containers: Docker

June 2nd, 2015

If you’re just joining this column, it addresses the gap between how development and operations view technology and measure their success – it is wholly possible for development and operations to be individually successful, but for the organization to fail.

So, what can we do to better align development and operations so that they speak the same language and work towards the success of the organization as a whole? This article series attempts to address a portion of this problem by giving operations teams insight into how specific architecture and development decisions affect the day-to-day operational requirements of an application.

This article continues our series on containers by examining Docker, an emerging standard for lightweight containers.

Introduction to Docker

Docker is a relatively new container technology, launched in March 2013, which enables you to build, ship, and run software anywhere. Docker can be provisioned on any infrastructure, which means that it can run on your laptop or it can run in a production cloud environment. It provides its own lightweight runtime environment to which you deploy your applications.

All of this is to say that Docker enables developers to run their application locally on their development machine and have it behave exactly the same on a production server: it eliminates the “but it works on my machine” excuse that you’re so used to hearing.

In practical terms, Docker provides two key components:

  • Docker Engine: a lightweight application runtime environment and packaging tool
  • Docker Hub: a cloud-based service for sharing Docker applications

Currently the Docker Engine runs natively on Linux; on Windows and Mac it can be run through the boot2docker tool, which launches it inside a small Linux virtual machine. The containers the Docker Engine runs are somewhat analogous to virtual machines, but they start up in seconds rather than minutes and are far lighter on resources.
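
For example, getting a working environment on a Mac or Windows machine might look roughly like this (a sketch; the exact commands vary by boot2docker version):

boot2docker init          # create the small Linux VM that hosts the Docker Engine
boot2docker up            # start the VM
$(boot2docker shellinit)  # point the docker client at the engine running in the VM
docker version            # verify the client can reach the engine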

The Docker Hub is a public, cloud-based registry that contains Docker images. If you need a web server, rather than starting with a blank virtual machine, installing one, and configuring it, you can instead find an already configured web server image with all of the latest patches applied. You can access the Docker Hub at hub.docker.com.

When you click on “Browse &amp; Search” you’ll see the latest and most popular images. For example, at the time of this writing, sorting by most downloaded shows BusyBox (23 million downloads; the Swiss Army knife of embedded Linux), Redis, Ubuntu, MySQL, nginx, WordPress, MongoDB, Postgres, Node.js, CentOS, Ruby, Rails, Python, Java, RabbitMQ, PHP, and many more.
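
You can also explore the Hub from the command line; for instance (using nginx purely as an example image):

docker search nginx   # search Docker Hub for nginx images
docker pull nginx     # download the official nginx image without running it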

The process to use one of these images is simple: install the Docker command-line tool (or boot2docker if you’re on Windows or Mac) and then execute a command like:

docker run image-name

Docker will then download the image for you (if it isn’t already cached locally) and launch a container from it. No more creating a new virtual machine, downloading the software you want to install, reading through the manual to figure out how to configure it properly, and then managing the virtual environment.
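
A slightly fuller sketch, again using the nginx image as an example (the port mapping and container name here are illustrative, not required):

docker run -d -p 8080:80 --name web nginx   # run nginx in the background, host port 8080 -> container port 80
docker ps                                   # list running containers
docker logs web                             # view the web server's output
docker stop web                             # stop the container when finished
docker rm web                               # remove it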

Docker vs Virtual Machines

Docker is a container technology, so how is it different from a virtual machine? Figure 1 shows a comparison of virtual machines and Docker that I extracted from the Docker web site.

[Figure 1: Docker versus virtual machines]

The primary difference between Docker and traditional virtual machines is the requirement for a full guest operating system. Traditional virtual machines require not only the application and its supporting libraries and binary files, but also a full guest operating system in which these applications run.

Docker, on the other hand, only requires the application and its support libraries and binary files. As their website states, Docker “runs as an isolated process in userspace on the host operating system, sharing the kernel with other containers. Thus, it enjoys the resource isolation and allocation benefits of VMs but is much more portable and efficient.”

Stated another way, rather than creating completely isolated full operating system virtual machines, the Docker Engine shares the host’s kernel with each running container, but puts facilities in place to ensure that each application environment is isolated from the others.
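
You can see this kernel sharing for yourself. As a rough illustration on a Linux host (assuming the ubuntu and centos images are available locally or on Docker Hub):

docker run --rm ubuntu uname -r   # prints the host's kernel version
docker run --rm centos uname -r   # same kernel version, different distribution userspace
docker run --rm ubuntu hostname   # yet each container gets its own hostname, filesystem, and process space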

This is how Docker is able to start containers in a matter of seconds once the Docker Engine is running. Now that we know how to run a Docker container, it is important to understand just why we want to do this. Join us in our next post as we dive into the importance of the container model.

Image source: www.docker.com (Featured logo image), Docker versus VM by Steven Haines
