In our first article on Containers in an Amazon Web Services environment, we talked about the fundamentals required to get us started. Now that we know the moving parts involved in AWS, it’s time to take the next step!
There are two primary strategies for deploying applications to a cloud-based environment:
- Set up the VM on the fly
- Create an application-specific image snapshot
There are mature tools on the market, such as Chef and Puppet, that help you automatically configure and deploy applications to a server. The first strategy is to set up a base image that contains the infrastructure you need, such as the operating system, JVM, and application server, and then use one of these tools to create an instance of that image and deploy your application to it in a reproducible manner. This is a good strategy, but it has two potential issues:
- Because the image is built on the fly, you have no guarantee that it will be free of unexpected issues
- The time required to start up a new instance and add it to your cluster can be significant
Automated tools like Chef and Puppet largely mitigate the first issue: because the process is automated, it is reproducible and the results should be consistent. But there is always the risk that a problem snuck in during a file transfer or a configuration step.
The second issue is that it takes time to start an instance and configure it on the fly. When you are trying to add a new instance to a cluster to satisfy increased user load, time is often of the essence, and an extra two to three minutes can be substantial.
The alternative strategy is to start from a base image and create an application-specific image that contains your deployed artifacts. This enables you to quickly start up new instances that are already configured and ready to go. The drawback to this approach is that you will end up with a whole lot of images, so you're going to have to put some process in place to remove old ones. Furthermore, your AWS costs include storage space, so your costs could be marginally higher.
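The image-cleanup process mentioned above can be sketched as a small selection function. This is a minimal sketch under stated assumptions: the function name and retention policy are hypothetical, and in a real script the `images` list would come from boto3's `describe_images` call and the returned IDs would be passed to `deregister_image`.

```python
from datetime import datetime, timedelta, timezone

def _created_at(img):
    # CreationDate is an ISO-8601 string like "2020-01-01T00:00:00Z"
    # in the describe_images response shape.
    return datetime.fromisoformat(img["CreationDate"].replace("Z", "+00:00"))

def select_stale_images(images, keep_latest=3, max_age_days=30):
    """Return the IDs of images to deregister: everything except the
    newest `keep_latest`, provided it is older than `max_age_days`.

    `images` is a list of dicts shaped like boto3's describe_images
    output, each with an "ImageId" and a "CreationDate".
    """
    cutoff = datetime.now(timezone.utc) - timedelta(days=max_age_days)
    # Newest first, so the most recent builds are the ones we keep.
    ordered = sorted(images, key=_created_at, reverse=True)
    return [
        img["ImageId"]
        for img in ordered[keep_latest:]
        if _created_at(img) < cutoff
    ]
```

Keeping the selection logic separate from the AWS calls makes the retention policy easy to test without touching a real account.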
Taking a Hybrid Approach
The best solution is probably a combination of the two: use an automated tool to start from a base image and configure an application-specific image, execute a set of tests to validate the image, snapshot it, and then use that snapshot to augment your environment. This is summarized in figure 3.
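The bake-test-snapshot cycle of this hybrid approach can be sketched as a small orchestration function. The function and parameter names here are hypothetical, not part of any AWS API: in practice, `launch` and `snapshot` would wrap boto3's `run_instances` and `create_image`, and `configure` would kick off a Chef or Puppet run.

```python
def bake_image(launch, configure, validate, snapshot, terminate):
    """Hybrid bake cycle: start from a base image, configure it,
    validate it, and only then snapshot an application-specific AMI.

    Each step is a callable so the AWS/Chef specifics stay pluggable:
      launch()            -> instance id booted from the base AMI
      configure(instance) -> deploys the application artifacts
      validate(instance)  -> True if the smoke tests pass
      snapshot(instance)  -> new application-specific AMI id
      terminate(instance) -> cleans up the build instance
    """
    instance = launch()
    try:
        configure(instance)
        if not validate(instance):
            raise RuntimeError(f"validation failed on {instance}")
        return snapshot(instance)
    finally:
        # The build instance is disposable either way; only the
        # validated snapshot (if taken) joins the fleet.
        terminate(instance)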
Before leaving this introduction to AWS, I want to comment on how AWS supports continuous deployment. Continuous deployment is a software engineering approach in which developers build software in short cycles and deploy it into production as soon as it is ready. It requires robust automation tools, such as Chef and Puppet, that can reliably set up a server, configure it, and deploy artifacts to it.
AWS lends itself well to continuous deployment because it provides a robust API that allows you to create AMI instances, capture snapshots, and so forth as part of an automated process. In the next article I'll detail a strategy for setting up continuous deployment and executing zero-downtime releases.
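To make the API concrete, here is a minimal sketch of the two EC2 calls at the heart of such a pipeline: booting an instance from a base AMI and capturing it as a new image. The `ec2` argument is assumed to be a boto3 EC2 client; the function name, instance type, and the elided configuration step are illustrative choices, not prescribed by AWS.

```python
def build_release_image(ec2, base_ami, name):
    """Boot an instance from `base_ami`, then capture it as a new AMI.

    `ec2` is a boto3 EC2 client; the keyword arguments below match
    the real run_instances and create_image signatures.
    """
    resp = ec2.run_instances(
        ImageId=base_ami,
        InstanceType="t3.micro",  # illustrative choice
        MinCount=1,
        MaxCount=1,
    )
    instance_id = resp["Instances"][0]["InstanceId"]
    # ... configuration, deployment, and validation happen here ...
    image = ec2.create_image(InstanceId=instance_id, Name=name)
    return image["ImageId"]
```

Because the client is passed in rather than created inside the function, the pipeline logic can be exercised against a stub client before it ever touches a live account.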
Containers really are the future of production deployment. Containers allow you to start from a pre-configured and working set of images and rapidly deploy your applications to a production environment. Containers come in many forms, from virtual machines to micro-containers that start up in a matter of seconds.
This article presented Amazon Web Services, which was one of the forerunners in promoting the concept of containers and still serves as the basis for many container technologies. We focused primarily on Elastic Compute Cloud (EC2), which allows you to create a virtual machine instance from an Amazon Machine Image (AMI), because it is a raw virtual machine-level container infrastructure.
In addition to introducing AWS and AMIs, this article provided some advice on deployment strategies and briefly reviewed how AWS contributes to continuous deployment.
The next article will show you, hands-on, how to set up an AWS environment and provide strategies for zero-downtime deployments.