Devs are from Venus, Ops are from Mars: Test-Driven Development, Part I

November 4th, 2014

If you’re just joining this article series, it is one part of a response to the gap between how development and how operations view technology and measure their success – it is entirely possible for development and operations to be individually successful, yet for the organization to fail. So what can we do to better align development and operations so that they speak the same language and work towards the success of the organization as a whole? This series addresses a portion of that problem by giving operations teams insight into how specific architecture and development decisions affect the day-to-day operational requirements of an application.

I would like to welcome my partner in crime in this series: Eric Wright. Eric is working on the other side of this gap: he is presenting an operations view of technology to developers. Be sure to check out his column on about:virtualization, where we’re building a good conversation between development and operations (with Eric and me standing in for each group).

This article is the first in a two-part series on Test-Driven Development (TDD); after that, I’ll dive into Continuous Integration (CI). This first part presents an overview of TDD, the process, and some of the tools that enable it. In part two, I’ll talk about code coverage and the benefits that we, as developers, derive from it, as well as how TDD comes together not only for development but also for operations to reduce production bugs and unplanned outages.

Introduction to Test-Driven Development

Test-Driven Development (TDD) was introduced in the late 1990s as an Extreme Programming (XP) software-development practice called “test-first” programming and was later developed, or “rediscovered,” by Kent Beck in 2003. The basic premise behind TDD is to write test cases for your application components before writing the components themselves. The motivation is that writing test cases first helps you more clearly identify the functionality you’re developing, which means fewer side effects in your code, and, if done comprehensively, results in a test suite that can thoroughly validate your code and detect problems that you might inadvertently introduce while modifying it. In short, building a comprehensive test harness and executing it as part of your build process gives you confidence that your application is functionally correct.

A purist view of TDD requires that test cases be written prior to writing your code, although many developers prefer to write test cases after writing code. The motivation behind writing test cases first is that doing so validates your test harness: the test should fail until you properly implement the functionality you are developing. If you write your test cases afterward and they pass, you do not have confidence that they will truly detect problems. Regardless of the approach, the important thing is to write the test cases!

Test-Driven Development Process

Test-Driven Development defines the following five steps:

  1. Add a new test to your test harness
  2. Execute the test harness and validate that the new test fails
  3. Develop your code: this may not be your final implementation, but you want to develop just enough to make the test case pass
  4. Re-execute the test harness and validate that the new test passes
  5. Refactor your code as needed, knowing that if you inadvertently break your functionality, your test harness will detect it

This process is shown in figure 1.


Fig. 1 – TDD Process

Following this process will help to ensure that all of the code you develop is well tested and will equip you with a comprehensive test harness that can regression test your entire application. Regression testing ensures that if a change to one part of your application inadvertently breaks another part, you’ll actually know about it. A comprehensive test harness is the key to enabling continuous delivery, which we’ll talk about in a future article.
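To make the steps concrete, here is a minimal sketch of one pass through the cycle in Java with JUnit (one of the testing frameworks discussed in the next section). The ShoppingCart class and its methods are hypothetical names used purely for illustration.

// Step 1: add the test first (ShoppingCartTest.java).
import org.junit.Test;
import static org.junit.Assert.assertEquals;

public class ShoppingCartTest {

    @Test
    public void totalIsTheSumOfItemPrices() {
        ShoppingCart cart = new ShoppingCart();
        cart.addItem("book", 10.00);
        cart.addItem("pen", 2.50);
        assertEquals(12.50, cart.total(), 0.001);
    }
}

// Step 2: run the harness; the test fails because ShoppingCart does not exist yet.

// Step 3: write just enough code to make the test pass (ShoppingCart.java).
public class ShoppingCart {
    private double total;

    public void addItem(String name, double price) {
        total += price;
    }

    public double total() {
        return total;
    }
}

// Steps 4 and 5: re-run the harness (green), then refactor with the tests as a safety net.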

Test-Driven Development Tools

The tool that you choose to build your test cases will depend on the programming language in which you are writing your application: for Java we have JUnit, for .NET we have NUnit, and for PHP we have PHPUnit. These testing frameworks are designed to operate at a very fine-grained level of your code, typically against individual methods, and they are meant to test not only positive scenarios but also negative ones. For example, does the method behave correctly when the caller provides invalid input? What happens if a method that it calls fails? Or if the network throws an exception?

Writing test cases for positive scenarios is pretty straightforward: if I pass in value X I expect the method to return value Y. This is shown in figure 2.


Fig. 2 – Positive Test Case
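In code, a positive test case of this shape might look like the following JUnit sketch (the Discounter class and its applyDiscount method are hypothetical, used only to illustrate the pass-in-X, expect-Y pattern):

import org.junit.Test;
import static org.junit.Assert.assertEquals;

public class DiscounterTest {

    // Given a known input (X), assert the expected output (Y).
    @Test
    public void tenPercentOffOneHundredIsNinety() {
        Discounter discounter = new Discounter();
        assertEquals(90.00, discounter.applyDiscount(100.00, 0.10), 0.001);
    }
}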

Negative test cases that simulate failure conditions are more challenging because you need a mechanism to simulate the failure. Figure 3 shows an example in which a dependent method throws an exception.


Fig. 3 – Negative Test Case

In order to execute this test case we have to introduce a new concept: mock objects. Mock objects are “fake” versions of dependent objects that you can configure to behave differently for different scenarios. They can be used in both positive and negative test scenarios. For example, mock objects can be used in a positive scenario to isolate a tier of your application, as shown in figures 4 and 5.


Figure 4: Original Workflow


Figure 5: Workflow in which DAO is “mocked” out

Figure 4 shows the original workflow in which a service method invokes a DAO method that talks to a database. Figure 5 replaces the DAO method with a mock object. The mock object can be configured so that when a specific method is called with a specific set of values it returns a prescribed response.

For example, when the DAO method is called it can return a list of response objects or, to simulate a negative scenario, it can even throw an exception. This enables us to test the service method more fully because not only can we pass parameters into the service method, but we can also change the behavior of its back-end dependencies. If the DAO method throws an exception because the database is not available, how do we expect the service method to behave? How about when it returns an empty list? Or just one value? The point is that the service method may have logic that depends on the data set returned by the DAO method, so changing that data set enables us to test the service method more thoroughly.
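As a sketch of how this might look in Java, the tests below use Mockito to stand in for the DAO. The OrderDao, OrderService, and Order types and their methods are hypothetical names invented for illustration; the pattern of configuring the mock’s return values is the point.

import static org.junit.Assert.assertEquals;
import static org.junit.Assert.assertTrue;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;

import java.util.Arrays;
import java.util.Collections;
import org.junit.Test;

public class OrderServiceTest {

    @Test
    public void reportIsEmptyWhenTheCustomerHasNoOrders() {
        // Replace the real DAO (and the database behind it) with a mock.
        OrderDao dao = mock(OrderDao.class);
        when(dao.findOrdersForCustomer("customer-1"))
                .thenReturn(Collections.<Order>emptyList());

        OrderService service = new OrderService(dao);

        // The service logic is exercised against a controlled, empty data set.
        assertTrue(service.buildReport("customer-1").isEmpty());
    }

    @Test
    public void reportContainsOneLinePerOrder() {
        OrderDao dao = mock(OrderDao.class);
        when(dao.findOrdersForCustomer("customer-1"))
                .thenReturn(Arrays.asList(new Order("A"), new Order("B")));

        OrderService service = new OrderService(dao);

        assertEquals(2, service.buildReport("customer-1").size());
    }
}

Each test drives the mocked DAO to return a different data set, so the service method’s handling of each case can be verified without ever touching a database.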

One final note on negative scenario testing: there are scenarios that would otherwise be very difficult to test without leveraging mock objects. Consider a network exception: how would you simulate it? Run your application and hope that you pull out a network cable at just the right time? With mock objects you can configure the object to throw a network exception when a dependent method is invoked. Mock objects are a very powerful testing construct.
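As an illustration, continuing with the hypothetical OrderDao and OrderService from the sketch above, Mockito lets you configure the mock to throw when the dependent method is invoked; the outcome asserted here (an empty report) is simply an assumption about how this imaginary service should degrade.

import static org.junit.Assert.assertTrue;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;

import java.net.SocketTimeoutException;
import org.junit.Test;

public class OrderServiceNetworkFailureTest {

    @Test
    public void reportIsEmptyWhenTheNetworkIsDown() {
        OrderDao dao = mock(OrderDao.class);

        // Simulate a network failure without pulling any cables: the mock
        // throws as soon as the service calls it.
        when(dao.findOrdersForCustomer("customer-1"))
                .thenThrow(new RuntimeException(new SocketTimeoutException("connection timed out")));

        OrderService service = new OrderService(dao);

        // Assert whatever "graceful" means for your design; an empty report is assumed here.
        assertTrue(service.buildReport("customer-1").isEmpty());
    }
}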

In the second part of this article we’ll review code coverage, the benefits that we, as developers, derive from TDD, and how TDD can help operations teams reduce production bugs and unplanned outages.
