Automated Test Planning Models for Agile Development

Automated testing is often touted by Agile experts as an essential component in the Agile development process. Including automated testing in your Agile projects seems like a no-brainer, but knowing how and when to use automation in any given project requires a lot of thought and planning, especially in the cost-conscious world of digital advertising where digital applications must be developed at breakneck speed. In this blog post, I want to present a couple of Agile testing models that we’ve used as a guide in our automated test planning at Click Here Labs.

Mike Cohn’s Test Automation Pyramid

I had the pleasure of attending an Agile user stories workshop led by popular Agile author Mike Cohn a few years ago. Cohn covered all aspects of Agile methods in his workshop, including a discussion of his Test Automation Pyramid diagram, which is well known in the Agile testing community. Introduced in Cohn’s book, Succeeding with Agile, the pyramid is intended to show programmers and testers where to place the most emphasis when applying test automation to Agile development projects:

[Figure: Mike Cohn’s Test Automation Pyramid – “UI” at the top, “Service” in the middle, “Unit” at the base]

The base of the Test Automation Pyramid indicates the type of automated testing that should receive the most emphasis, according to Cohn. It is labeled “Unit” for unit tests – automated tests (usually written by programmers) that exercise small pieces of code as the code is being written. When the tests are written before the code they verify, this practice is known as Test-Driven Development (TDD); either way, the goal is to reduce or eliminate defects in code before it is fully integrated into a digital application.
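To make the idea concrete, here is a minimal sketch of a pyramid-base unit test in Ruby (the language our own test suite is written in), using the Minitest framework that ships with Ruby. The PriceCalculator class is a hypothetical example invented for illustration, not code from an actual project:

```ruby
require "minitest/autorun"

# Hypothetical class under test: a simple price calculator.
class PriceCalculator
  def total(subtotal, tax_rate)
    raise ArgumentError, "subtotal cannot be negative" if subtotal.negative?
    (subtotal * (1 + tax_rate)).round(2)
  end
end

# Unit tests written alongside (or before) the code, TDD-style.
class PriceCalculatorTest < Minitest::Test
  def setup
    @calc = PriceCalculator.new
  end

  def test_applies_tax_rate
    assert_equal 108.25, @calc.total(100.00, 0.0825)
  end

  def test_rejects_negative_subtotal
    assert_raises(ArgumentError) { @calc.total(-1, 0.0825) }
  end
end
```

Because tests like these run in milliseconds and touch no browser or network, a team can afford to run them on every code change.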

The very top of the pyramid indicates the type of automated testing that should receive the least emphasis. “UI” represents automated tests that are run directly through the application’s user interface. These tests are easy to produce with “record and playback” automation tools, such as Selenium IDE, which let the test engineer record interactions with the user interface and play them back as new application builds are produced, verifying that the application returns the same results. The main drawback of automated UI tests, however, is that they often break with the slightest change to the user interface and must be debugged or re-recorded entirely, increasing the cost of maintaining them. And while an automated UI test may detect that something in the code is not correct, it often takes human beings conducting manual exploratory testing to evaluate the full extent of the problem.
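The brittleness problem can be illustrated without any browser at all. The toy sketch below (plain-Ruby string matching, not a real Selenium locator API) shows how a locator recorded against an incidental attribute breaks when the markup is restyled, while a locator tied to a stable id survives; both markup snippets are invented for illustration:

```ruby
# Markup as it looked when the UI test was recorded...
ORIGINAL_MARKUP   = '<div><button class="btn-blue">Submit</button></div>'
# ...and after a cosmetic redesign that changed classes and nesting.
REDESIGNED_MARKUP = '<div><span class="wrapper">' \
                    '<button id="submit-btn" class="btn-green">Submit</button>' \
                    '</span></div>'

# A recorded script typically captures whatever attribute was handy at record time.
def find_by_class(html, css_class)
  html[/<button[^>]*class="#{Regexp.escape(css_class)}"[^>]*>/] ? :found : :not_found
end

# A locator tied to a stable id survives cosmetic changes.
def find_by_id(html, id)
  html[/<button[^>]*id="#{Regexp.escape(id)}"[^>]*>/] ? :found : :not_found
end

find_by_class(ORIGINAL_MARKUP, "btn-blue")    # locator works today...
find_by_class(REDESIGNED_MARKUP, "btn-blue")  # ...but breaks after the restyle
find_by_id(REDESIGNED_MARKUP, "submit-btn")   # id-based locator still resolves
```

Record-and-playback tools tend to capture whatever attribute happened to be present at recording time, which is why a cosmetic redesign can invalidate an entire recorded suite overnight.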

The middle layer of the pyramid, labeled “Service,” is described by Cohn as “something the application does in response to some input or set of inputs.” For most web applications, Service tests include tests run through the application’s API layer, along with automated integration tests and automated component tests. Cohn contends that the middle of the pyramid is often overlooked by automated testers, even though it yields much more reliable results than automated testing through the UI. Another reason Service tests are sometimes omitted is that they often require more programming skill to create.
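As a sketch of what a Service-level test can look like, the hypothetical recommender below (loosely in the spirit of a dining “recommender” feature, but entirely invented for illustration) is exercised directly through its Ruby API – no browser, no HTML parsing:

```ruby
require "minitest/autorun"

# Hypothetical service object sitting behind the UI: given a party size and
# a budget, it returns dining recommendations as structured data.
class DiningRecommender
  MENU = [
    { name: "Cafe Terrace", price_per_person: 18 },
    { name: "Grill House",  price_per_person: 32 },
    { name: "Wine Cellar",  price_per_person: 55 }
  ].freeze

  def recommend(party_size:, budget:)
    per_person = budget / party_size.to_f
    MENU.select { |r| r[:price_per_person] <= per_person }
        .map { |r| r[:name] }
  end
end

# Service-level tests exercise "what the application does in response to
# inputs" directly, below the user interface.
class DiningRecommenderTest < Minitest::Test
  def test_filters_by_budget
    result = DiningRecommender.new.recommend(party_size: 2, budget: 70)
    assert_equal ["Cafe Terrace", "Grill House"], result
  end

  def test_returns_empty_when_nothing_fits
    assert_empty DiningRecommender.new.recommend(party_size: 4, budget: 40)
  end
end
```

Because the assertions target structured data rather than rendered markup, tests like these keep passing through UI redesigns that would break a recorded UI script.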

Lisa Crispin and Janet Gregory’s Test Quadrants

Another model for conceptualizing automated testing in Agile development was presented in the popular 2009 book, Agile Testing. The authors, Lisa Crispin and Janet Gregory, used a test quadrants chart in their discussion of how to plan for automated testing. The test quadrants concept is based on a test matrix first introduced by Agile pioneer Brian Marick in 2003. Here is an updated version of Crispin and Gregory’s test quadrants chart from their more recent book, More Agile Testing.

[Figure: Crispin and Gregory’s Agile testing quadrants model]

The test quadrants chart is less prescriptive than Cohn’s pyramid in that it doesn’t suggest how much emphasis should be placed on each type of automated test. Instead, it classifies all types of Agile testing in a single taxonomy to support test planning for the project as a whole and for each individual Agile sprint. The chart does not represent a process workflow: the quadrants may be visited in any order, and all or only some of the tests may be used in any given sprint. It is noteworthy that manual exploratory testing is included in one of the quadrants. Crispin suggests that Agile teams work through the quadrants rapidly for each feature under development in a sprint, applying automated or manual exploratory testing until the feature is considered “done”; the process of picking the appropriate tests then repeats for the next feature to be developed.

Automated Testing at Click Here Labs

The programming and quality control teams at Click Here Labs have had success using automated testing on some of our more high-profile Agile projects. The key has been to use automated testing judiciously and primarily on large-scale projects.

A few of our test automation highlights have included:

In one Agile project, we implemented automated testing on a large website (500+ pages) for a historical park. The website featured several complex “recommender” applications for dining, fine wine purchases, and shopping. Our test engineer created a custom automated test suite composed of “Service” and “UI” automated tests, enabling the manual testers to focus on testing new site features and saving countless hours that would otherwise have been spent on manual regression testing.

In another Agile project, we produced a website for a firm that owns over 400 retail shopping centers nationwide. Each of the 400 locations is represented on the website by an aerial map that shows individual store locations and detailed leasing information for each storefront. The retail location and leasing data are updated by a nightly feed process. In the initial manual testing phase, testers spent 5 minutes testing each of the 400 shopping center locations, which took about 35 hours to complete. Subsequent regression passes were automated with “Service”-level tests and could be run overnight for all 400 locations in a matter of a few hours.

For website maintenance testing for large sites, we’ve produced a suite of automated tests that can be used as-is or may be customized, depending on the client’s needs. Written in Ruby and powered by Selenium WebDriver, the test suite offers a simple user interface, which can be run by programmers or manual testers when new builds are deployed. The suite includes:

  • Site navigation and link verification tests
  • Form verification tests
  • Form penetration tests
  • The ability to compare server environments (e.g., staging against production) following build deployments
  • Automated test results notifications
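To give a flavor of the first item on that list, here is a heavily simplified, self-contained sketch of link verification in Ruby. The real suite drives Selenium WebDriver and issues live HTTP requests against the deployed site; this offline stand-in only checks that every href on a page resolves to a well-formed, usable URL:

```ruby
require "uri"

# Simplified link check: collect every href in the markup and flag any that
# fail to parse as an http(s) URL. (A production version would also fetch
# each URL and verify its HTTP status.)
def broken_links(html, base_url)
  base = URI.parse(base_url)
  html.scan(/href="([^"]+)"/).flatten.reject do |href|
    begin
      uri = URI.join(base, href)            # resolve relative links
      %w[http https].include?(uri.scheme)   # keep only well-formed web URLs
    rescue URI::InvalidURIError
      false                                 # unparseable => flag as broken
    end
  end
end

# Invented sample markup for illustration.
PAGE = <<~HTML
  <a href="/about">About</a>
  <a href="https://example.com/leasing">Leasing</a>
  <a href="ht!tp://bad url">Broken</a>
HTML

broken_links(PAGE, "https://example.com")  # flags only the malformed third link
```

Running checks like this on every deployment catches the most common regression on large content sites – navigation that silently breaks when pages are moved or renamed.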

Our next step is to integrate the automated test suite into our continuous integration process, so that automated tests launch whenever new builds are promoted to staging or production.

In summary, the Click Here Labs quality control group has had positive experiences implementing automated testing best practices on some of our Agile projects here at the agency, but in the fast-paced digital advertising business you have to know where to pick your spots. For the many “quick-turn” small digital executions that we produce, manual exploratory testing is still the most time- and cost-efficient approach; we’ve learned that if a test is only going to be used once or twice, it’s often not worth the time and cost to automate it. For larger applications that lend themselves to an iterative development and testing approach, however, we’ll continue to integrate automated testing into our Agile development process based on the models I’ve described in this post, and we’ll keep refining our automated testing practices as new testing technologies and techniques emerge.

