Monday, April 6, 2009


All about testing

Introduction

Here is a matrix from Brian Marick, later enhanced by Mary Poppendieck, that captures everything about testing in an agile context. It is simple, and it clarifies the intent of the different kinds of tests, which I have found valuable when communicating with QA people.

I have used the matrix for a couple of years and decided to blog about it since I like it so much. Others have blogged about this too, but I will try to dig a bit deeper.

[Image: the agile testing matrix. One axis runs from business facing to technology facing, the other from supporting programming to critiquing the product, giving four quadrants: acceptance tests, usability and exploratory tests, unit tests, and property tests.]

The matrix has two axes. The first describes the goal of the tests: whether they support programming, or critique the end result as in traditional QA. Some QA people find the idea of tests whose primary purpose is not testing but something else a bit odd, but that is a key point: certain kinds of tests exist to let a development team go faster, not so much to find bugs.

The second axis describes the level and vocabulary of the tests. Business-facing tests are understandable to business stakeholders and end users and operate at their abstraction level (think black-box testing). Technology-facing tests, on the other hand, sit at a lower abstraction level and use technical jargon. They test either a small part of the system or some property of the system, such as performance.


Unit tests

Unit tests test the smallest possible units of code. In a (semi) object-oriented language like Java, that means individual functions/methods or classes. Some tests may involve a couple of classes, but those are rare exceptions.

Actually, my definition is a bit of a simplification: unit tests exercise individual responsibilities of the code, not the code itself. This is an important distinction, as responsibilities are one abstraction level above the code, but I digress.

Note that due to semantic diffusion, unit testing can mean a variety of things. I know at least two organizations that use the term for any kind of testing done by the development team itself ("Has the database designer unit tested his database schema?"). Let's stick with the original meanings of terms, shall we? A unit test tests a unit of code responsibility, period.

The purpose of unit tests is to drive the design of the code through test-first development and refactoring. They act as a regression test harness that allows developers to change the codebase with impunity. A secondary purpose is to document the developer's intent: what the code should do and how it should be used. Good unit tests can almost serve as API usage examples.

Unit tests are always automated and executed in batches (often called test suites). They should run quickly enough so that developers can run them every couple of minutes. Several frameworks exist for unit testing, most based on Kent Beck's SUnit framework. Ward's Wiki has a comprehensive list of xUnit frameworks.
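To make the responsibility idea concrete, here is a minimal sketch of a unit test using JUnit 4. The Stack class and its API are invented for illustration; note how each test is named after a single responsibility rather than after a method.

    import org.junit.Test;
    import static org.junit.Assert.*;

    // A minimal JUnit 4 sketch. Stack is a hypothetical class;
    // each test pins down one responsibility, not one method.
    public class StackTest {

        @Test
        public void newStackIsEmpty() {
            assertTrue(new Stack().isEmpty());
        }

        @Test
        public void popReturnsTheLastPushedElement() {
            Stack stack = new Stack();
            stack.push("first");
            stack.push("second");
            assertEquals("second", stack.pop());
        }

        @Test(expected = IllegalStateException.class)
        public void poppingAnEmptyStackFails() {
            new Stack().pop();
        }
    }

A suite like this runs in milliseconds, which is what makes the every-couple-of-minutes rhythm possible.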


Acceptance tests

Acceptance tests are functional, system-level, black-box regression tests. Traditionally they are the bread and butter of QA and testing departments, though agile suggests that acceptance tests should be automated to free testers from slavery so they can do more useful things like exploratory testing. Whether acceptance tests are written by end users, as XP advocates, or by someone else, I don't really care.

The purpose of acceptance tests is to verify that the fully integrated, complete, up-and-running system works as expected from the end user's (man or machine) point of view. They communicate the business intent of the system and document its usage scenarios. Like unit tests, acceptance tests allow developers to change the system at will. And if unit tests drive the design of the code, then acceptance tests drive the design of the entire system.

Acceptance tests should be automated as far as possible. Several frameworks exist for testing different kinds of user interfaces, from the browser-based Selenium for web applications to OCR-based frameworks that drive the mouse and keyboard like a user would. Acceptance tests often require much more work to set up the state of the world before a test can execute: databases have to be cleaned and initialized, systems started up, and so on. Custom code is almost always needed, even with fancy tools. It is not uncommon to see acceptance tests executed on top of an xUnit testing framework, but they are acceptance tests regardless of the tool used.
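As a sketch of what a browser-driven acceptance test might look like with Selenium's WebDriver API and JUnit (the URL, page structure, and element names are all invented for illustration):

    import org.junit.After;
    import org.junit.Before;
    import org.junit.Test;
    import org.openqa.selenium.By;
    import org.openqa.selenium.WebDriver;
    import org.openqa.selenium.firefox.FirefoxDriver;
    import static org.junit.Assert.assertTrue;

    // A business-facing regression test driven through a real browser.
    // The application URL and form field names are hypothetical.
    public class LoginAcceptanceTest {

        private WebDriver browser;

        @Before
        public void openBrowser() {
            // A real suite would also reset the database to a known state here.
            browser = new FirefoxDriver();
        }

        @Test
        public void registeredUserCanLogIn() {
            browser.get("http://localhost:8080/login");
            browser.findElement(By.name("username")).sendKeys("alice");
            browser.findElement(By.name("password")).sendKeys("secret");
            browser.findElement(By.name("loginButton")).click();
            assertTrue(browser.getPageSource().contains("Welcome, alice"));
        }

        @After
        public void closeBrowser() {
            browser.quit();
        }
    }

Notice that even this toy example hints at the "state of the world" problem: the test is only as trustworthy as the environment it runs against.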

When it comes to functional testing, acceptance tests are The Truth. They tell you whether your system works as intended; your unit tests do not. If acceptance tests could be executed in seconds instead of minutes or hours, I wonder if we would bother with unit tests at all? Some even claim that unit testing is overrated.


Usability and exploratory tests

Usability testing means evaluating the usability of a system by observing users in the act of using it, usually in a study where users carry out tasks under observation. Typical measurements include the time needed to perform a task, the number of errors made, and so on.

Usability testing is an art form of its own and, needless to say, impossible to automate. I will not go deep into it, since it is not my specialty and since there is nothing special about usability testing in agile software development, except that it cannot be done test-first. :)

While lots of fancy things have been written about exploratory testing, it basically amounts to a bunch of ruthless, evil-minded testers running amok, trying to intentionally break your system. It requires creativity and insight and, surprise surprise, cannot be automated. This is what testers should be doing instead of brainlessly executing manual test cases.


Property testing

Property testing investigates the emergent properties of a system. These can include performance, scalability, security or other SLAs. The goals of property testing can vary from ensuring that the system can cope with peak loads to ensuring that it cannot be hacked in certain ways.

Most property tests require tools of some sort to create load, set up the system, and so on. Of all the kinds of testing, property testing probably requires the most expertise and the deepest knowledge of the system under test. Tools must always be accompanied by a thinking brain: tests can be automated to an extent, but in performance testing, for example, analysing the results and troubleshooting problems often take the most time, and those cannot be fully automated.
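To give a flavour of the tooling, here is a crude load-test sketch that fires concurrent HTTP requests and reports the average latency. The target URL, thread count, and request count are all invented; a real performance test would use a dedicated tool and far more careful analysis than an average.

    import java.net.HttpURLConnection;
    import java.net.URL;
    import java.util.concurrent.Callable;
    import java.util.concurrent.CompletionService;
    import java.util.concurrent.ExecutorCompletionService;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;

    // A deliberately crude load generator: THREADS concurrent workers,
    // REQUESTS requests in total, average latency printed at the end.
    // The numbers and the target URL are hypothetical.
    public class CrudeLoadTest {

        private static final String TARGET = "http://localhost:8080/";
        private static final int THREADS = 50;
        private static final int REQUESTS = 1000;

        public static void main(String[] args) throws Exception {
            ExecutorService pool = Executors.newFixedThreadPool(THREADS);
            CompletionService<Long> results =
                    new ExecutorCompletionService<Long>(pool);

            for (int i = 0; i < REQUESTS; i++) {
                results.submit(new Callable<Long>() {
                    public Long call() throws Exception {
                        long start = System.nanoTime();
                        HttpURLConnection connection = (HttpURLConnection)
                                new URL(TARGET).openConnection();
                        connection.getResponseCode(); // forces the request
                        connection.disconnect();
                        return System.nanoTime() - start;
                    }
                });
            }

            long totalNanos = 0;
            for (int i = 0; i < REQUESTS; i++) {
                totalNanos += results.take().get();
            }
            pool.shutdown();
            System.out.printf("Average latency: %.1f ms%n",
                    totalNanos / (double) REQUESTS / 1000000.0);
        }
    }

The interesting work starts after a run like this: was the average skewed by outliers, did the system degrade over time, where is the bottleneck? That analysis is the part that cannot be automated.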


More information

Brian Marick's original blog entry (read the follow-ups too)
Mary Poppendieck: Competing On The Basis Of Speed (the testing segment starts at 18 minutes into the video)