You may have seen the name Kubernetes out on the internet and in your social media streams. Kubernetes, sometimes abbreviated as K8S (the letter K, then eight letters, then S), is an open source orchestration framework for containerized applications that was born in the Google data centers. Kubernetes grew out of Borg, the massive Google-scale cluster management system that is reported to run much of Google's internal infrastructure.
In an effort to share with the community and grow this orchestration toolkit, Google open sourced the project, and interest and development are now growing quickly across the industry. Some say the challenge with Kubernetes is that most of us have no reason to think at "Google scale", but the reality is that orchestration applies at any scale, and Kubernetes may be the right way for your organization to bring application orchestration into its IT portfolio.
Applications = Infrastructure’s Ultimate Goal
While technology for technology’s sake is cool, the real reason we have all of this interesting and fun infrastructure is to run applications. The challenges many organizations face usually come down to application lifecycle management, even if they appear under many other disguises. Being able to deploy and manage applications in a containerized infrastructure is what makes containers a viable option for consumption in the enterprise. This is only one of many challenges around the use of containers, so we will explore how Kubernetes can help with embracing them.
The Kubernetes environment is centered around a few core concepts. These terms are used a lot as we discuss Kubernetes, so it’s a good place to start.
A Kubernetes cluster represents a pool of compute, network, and storage resources. The networking within a Kubernetes cluster is flat, ensuring East-West communication between pods. Clusters can range in size, so your choice of cluster size will depend on the physical or virtual resources available. Clusters can run directly on bare metal, nested within hypervisors, or even nested within containers.
Pods are a group of containers that run in a shared context. This means they are treated much like a group of applications on a single virtual machine or physical server in the past. The pod is the abstraction of this layer, providing a logical host that is application based rather than machine based. Not every pod will contain multiple containers, but the potential is there to use a microservices architecture within the pod.
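As a sketch of what this looks like in practice, here is a minimal pod manifest with two containers sharing a volume (the names and images are illustrative, not from any particular environment):

```yaml
# A hypothetical pod: an nginx container and a log-tailing sidecar
# share the same network namespace and an emptyDir volume.
apiVersion: v1
kind: Pod
metadata:
  name: web-pod
  labels:
    app: web
spec:
  volumes:
  - name: shared-logs
    emptyDir: {}
  containers:
  - name: frontend
    image: nginx:1.9.1
    ports:
    - containerPort: 80
    volumeMounts:
    - name: shared-logs
      mountPath: /var/log/nginx
  - name: log-tailer
    image: busybox
    command: ["sh", "-c", "tail -F /var/log/nginx/access.log"]
    volumeMounts:
    - name: shared-logs
      mountPath: /var/log/nginx
```

Both containers are scheduled onto the same node, share one IP address, and live and die together, which is exactly the "logical host" behavior described above.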
What good is a pod if only a single instance of it is running, right? This is where replication controllers come into play. A replication controller ensures that a specified number of replicas of a pod are running at any one time. In the event that a replica fails, another is spun up in its place to keep the pre-defined number of replicas active. There are also restart policies that dictate how your pod replicas behave inside the cluster.
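A replication controller definition embeds a pod template along with the desired replica count. A minimal sketch, again with hypothetical names, might look like this:

```yaml
# A hypothetical replication controller keeping three copies of a
# web pod running; pods matching the selector count toward "replicas".
apiVersion: v1
kind: ReplicationController
metadata:
  name: web-rc
spec:
  replicas: 3
  selector:
    app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: frontend
        image: nginx:1.9.1
        ports:
        - containerPort: 80
      restartPolicy: Always
```

If a node dies and takes a replica with it, the controller notices the count has dropped below three and creates a replacement from the template.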
Labels are key-value pairs used to tag objects, such as pods, within Kubernetes. They let us label and select objects using references that are meaningful to the application environment. Every pod has a UID which must, as the acronym would indicate, be unique; labels, by contrast, let us attach meaningful names that can be shared across objects in the overall environment.
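To make this concrete, here is a sketch of the metadata section of a pod, with illustrative label keys and values (nothing here is required by Kubernetes itself):

```yaml
# Hypothetical labels on a pod's metadata. A selector such as
# "app=web,tier=frontend" would match this object, and every other
# object carrying the same labels, regardless of each object's UID.
metadata:
  name: web-pod
  labels:
    app: web
    tier: frontend
    environment: production
```

Replication controllers and services both use label selectors like this to decide which pods they manage or route to.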
Service discovery is an important part of Kubernetes: services provide stable names and addresses for sets of pods, selected using labels. Policies also come into play within services, and we will discuss these concepts in much more detail here and in future articles.
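A service ties the pieces above together: it uses a label selector to find pods and exposes them behind one stable address. A minimal sketch, assuming the hypothetical `app: web` label from earlier, could be:

```yaml
# A hypothetical service giving the web pods a single stable
# address; traffic to port 80 is spread across matching pods.
apiVersion: v1
kind: Service
metadata:
  name: web-service
spec:
  selector:
    app: web
  ports:
  - port: 80
    targetPort: 80
```

Because pods come and go as replication controllers replace them, clients talk to the service rather than to individual pod addresses.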
Inside the Kubelet
The Kubelet is the agent running on each node. It handles registering the node, reporting the node’s health status, and watching the Kubernetes API for scheduled creations and deletions of pods. It runs as a binary and works with a combination of configuration files and the etcd servers to manage the pods on each node.
There are a lot of moving parts involved as we can see, and this is why understanding the basic terminology is helpful. This architecture diagram will give you a good hint as to just how many moving parts we are dealing with:
In our next article we will go through the life of instantiating a pod and see how it behaves as different actions and state changes occur. There are a lot of interesting capabilities, but the important thing is to understand the core concepts so you can apply them to the challenges in your application environment. Kubernetes, like any application environment, is not the be-all and end-all, nor is it without its own specific challenges. We will touch on this throughout our discussion.
Image source: http://kubernetes.io/v1.0/docs/design/architecture.html , screenshot from http://kubernetes.io/ (Featured image)