Now that we have reviewed some of the fundamentals of Big Data from an Ops point of view, it’s time to dive in and see where all these platforms come into play for us. With the choices in front of us, as highlighted by Steven Haines, we can see some clear requirements we need to be aware of when deciding on the “right” Big Data platform.
Hadoop, HBase, MongoDB, Cassandra, CouchDB, Neo4j, Redis, and Riak have all been nicely summarized here and here, which gives us a great place to begin. The important part of this process is that this is where we begin to define our requirements.
As a fan of Agile and iterative approaches, though, I don’t mean that we will spend weeks in the lab kicking the tires on each of these solutions. We may already be using some of them in our Dev environments without realizing it, because our development team is doing its best to drive innovation there.
Picking the “right” platform
When I’m asked about any technology, and the ever-classic “which is the right platform to use?”, I give the same consistent answer: “the one that works for your requirements.”
What is the reach of our Big Data consumer? By that I mean the systems that will consume the data, rendering it either as more data or as content for customer- or employee-facing applications. This is another very important component of the requirements, because it may tell us that we need to host portions of our Big Data platform in a public cloud to give data locality to web applications.
Rate of growth, rate of change, rate of access, and much more will drive our decision. Each of the products we named at the start of the article has its own specific advantages, disadvantages, and limitations that can be critically important to know.
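To make “rate of growth” concrete, here is a minimal back-of-the-envelope sketch. All of the figures in it (starting data size, growth rate, cluster capacity) are illustrative assumptions, not benchmarks of any particular platform:

```python
# Rough capacity projection: how long until our current cluster fills up?
# All figures below are illustrative assumptions, not real benchmarks.

def months_until_full(current_tb, monthly_growth_rate, cluster_capacity_tb):
    """Return the number of months until data outgrows the cluster."""
    months = 0
    data = current_tb
    while data <= cluster_capacity_tb:
        data *= 1 + monthly_growth_rate  # compound monthly growth
        months += 1
    return months

# Example: 10 TB today, growing 15% per month, on a 50 TB cluster.
print(months_until_full(10, 0.15, 50))  # → 12 (about a year of runway)
```

Even a crude projection like this can reframe the platform decision: a system that is comfortable at today’s volume may need to scale out, or be replaced, well before the end of a typical budgeting cycle.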
We will have defined the specific needs of our application in advance of the build, but we also need to be adaptive to the actual needs during operation. This is where things get adventurous.
Past performance is not indicative of future results
You have probably seen that classic disclaimer on a financial prospectus that states that “past performance is not indicative of future results” which may seem like a rather generic waiver, but it is there for a reason. The same holds true when we talk about technology platforms.
At launch, and in development, the results of performance and scalability may differ greatly from the operational requirements. This may include spike traffic during peak periods, or a sustained rapid growth that wasn’t planned for.
Having too much customer traffic may be viewed as a good problem to have, but not if the results are application timeouts, transaction failures, and substandard application performance. Apple runs what could be one of the largest Big Data and storage platforms to service its iTunes application for millions of devices, but even the media giant has been a victim of “too much of a good thing” when software releases and iTunes downloads are in high demand.
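To put numbers on spike traffic, a simple capacity check like the one below can be part of the planning exercise. The baseline rate, peak multiplier, per-node throughput, and headroom figure are all hypothetical values chosen for illustration:

```python
# Will our provisioned capacity survive a traffic spike?
# Baseline rate, peak multiplier, and per-node throughput are hypothetical.

def nodes_needed(baseline_rps, peak_multiplier, per_node_rps, headroom=0.25):
    """Nodes required to serve peak traffic with a safety margin."""
    peak_rps = baseline_rps * peak_multiplier
    target_rps = peak_rps * (1 + headroom)   # keep 25% headroom above peak
    return int(-(-target_rps // per_node_rps))  # ceiling division

# Example: 2,000 req/s baseline, a 5x launch-day spike,
# and nodes that each sustain 1,500 req/s.
print(nodes_needed(2000, 5, 1500))  # → 9 nodes
```

The point is not the arithmetic itself, which is trivial, but that the peak multiplier has to come from somewhere: if it is only a guess made in development, the “past performance” disclaimer applies in full.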
We have to be careful to separate traditional storage and download from the Big Data area, but we know that the information feeding all of your recommendations is driven by massive Big Data engines, which are also subject to incredible demand.
There is a reason that when you do a Google search for Big Data, the top results are related to Big Data analytics. Performance is something we need to bring to the desired state for application demand, and for the real desired result, which is satisfying customer demand.
One person’s trash is another person’s treasure
Another thing that we have to remember is that we need to look at real results in our own deployments to define the real value proposition for a platform. If you run one type of Big Data platform, you may find someone else in the industry with a horror story on that particular tool. The same goes for the reverse. We may achieve great success on a platform that has had failures in other situations.
Steven did a great roundup of what each Big Data platform is about, and your requirements will vary greatly, so that is why we aren’t getting too specific here on advantages and disadvantages. There will be more discussion as we delve into Steven’s Dev view on application patterns to help understand some more about the platform features that come into play.
This is why it is important that we don’t forget about the Dev team during this process. Remember that we are building towards a DevOps culture, so the Ops and Dev teams may have different requirements and preferences that we have to all be aware of. The application environments that are being built or adapted into are also very important in how they will consume Big Data as there are some application specific hooks that may affect decisions on which platform to use.
As Ops staffers, we have to be acutely aware of the real application requirements, which means that our Dev team will be a very close and important friend in all of this.
What’s next for Big Data and the Ops team?
We will evaluate the strategy of deployment and configuration management next. Platform choice is one thing, but being able to deploy and manage your Big Data platform on an ongoing basis is critical to adoption and continued operational success.