Here are the slides for a presentation I gave with Marko today at OO Day 2009 in Tampere, Finland. We had around 400 people in the audience - more than there were attendees at Scan-Agile 2009!
The presentation was in Finnish, so the slides are too. I might write more on the subject later on.
Wednesday, December 9, 2009
Paper on TDD and architecture
A couple of years ago Marko Taipale and I wrote an experience report on our experiment in using TDD (in the strict XP sense) to build a game platform. The idea was to see whether an architecture would emerge as XP claims (it did not).
Originally the paper was intended as an experience report for the XP2008 conference but was rejected due to insufficient research data. It is fair to say that the paper is our take on the TDD controversy that raged at that time, and we did write it at Jim Coplien's suggestion.
The paper is a couple of years old but since at least two people asked for a copy I decided to put the paper online. Perhaps you will find it interesting.
xp2008_experience_report_marko_taipale_ari_tanninen.pdf
Wednesday, December 2, 2009
Testing In Agile presentation
I gave a thirty-minute talk about testing in agile projects at December's Agile Dinner in Helsinki. Here are the slides.
It was nice to have so many participants from TestausOSY visiting our dinner! Too bad not many stayed for the beers (read: actual conversation) afterwards.
Tuesday, November 10, 2009
Dr. Deming's Management's Five Deadly Diseases
Here is an Encyclopaedia Britannica Film from 1984 featuring Dr. Deming himself: Management's Five Deadly Diseases. Awesome.
Tuesday, October 27, 2009
Open Space Session: Think!
Intro & thinking principles
This blog entry is about the open space session of the same title that I held at Scan-Agile 2009. It took me a while to write since I have never actually organized my thoughts on this. My intention was not to host a session, but after hearing fifteen sessions announced about tools and methods I felt compelled to, since a very important ingredient was missing: thinking and the principles that guide thinking.
Photo courtesy of my fellow conference organizer Ari Tikka.
I will start this blog like I started my open space session: let me share with you the three principles of the most productive Scrum team I have ever been in. These principles guided our everyday life from choosing technologies to improving our ways of working. Agile architecture absolutely requires them. But even though these principles have arisen from agile software development, they are valid for most aspects of life. I exercise them daily in my personal life.
The thinking principles are:
1) Do only what is needed
2) First the problem, then the solution
3) Challenge everything
Wednesday, October 14, 2009
Leadership and self-organization
"The only truly self-organizing team I know of was Apollo 13 after the pop."
(Thanks for that one, Mom!)
Teams do not spontaneously materialize out of thin air and just by accident happen to pick a goal that by coincidence is in line with the surrounding organization's objectives. Teams need to be formed, and they need to be given a purpose. Sometimes that is done by a member of the team and sometimes by an external party.
I call the process of forming a team and giving it purpose leadership.
Leadership is not only vital to self-organization, it is usually a prerequisite for it. Without leadership there is no team, and without a team there is no self-organization. Apollo 13 teams that are driven by a single, overwhelming imperative pushing them to self-organize are rare in the modern corporate world.
Besides team-forming and goal-setting, leadership can serve other purposes in teamwork: guiding the team's daily work, keeping the team on the right course, removing team dysfunctions and in general all activities that help the team. Whether this kind of leadership is needed, and whether it should be internal or external to the team, depends entirely on circumstance. What is important is that leadership can serve a purpose and is probably needed in one form or another during the life-span of a team.
The idea that leadership is not necessary because teams self-organize is wishful thinking at best.
It will be interesting to hear what Mary Poppendieck has to say about this subject in her keynote tomorrow at Scandinavian Agile Conference 2009.
Friday, September 11, 2009
The essence of leadership
I have been thinking about leadership - especially the effects of its absence - for a long while now. After exchanging a few messages with Ola Ellnestam on Twitter about it last week, I have finally decided to write down my thoughts.
Leadership is a difficult concept to define. The word is overloaded, subject to interpretation and Wikipedia alone lists over ten theories to explain it:
Types of leadership and other theories: Agentic Leadership, Coaching, Communal Leadership, Max Weber's Charismatic authority, Antonio Gramsci's theory of Cultural hegemony, Ethical leadership, Islamic leadership, Ideal leadership, Leader-Member Exchange Theory (LMX), Leadership Character Model, Leadership development, Servant leadership, Toxic Leadership, Youth leadership, Collaborative leadership, Outstanding leadership theory
Mind boggling. Surely it cannot be such a difficult concept?
What makes things more interesting is the emotional and cultural load associated with leadership. For some the word conjures up images of charismatic leaders like Patton or Winston Churchill, for others the images are of micro-managers from hell, breathing down their necks constantly.
The Finnish culture has a special antipathy towards the concept, causing people to grumble when you even mention leadership in a corporate context. For that I blame Lieutenant Lammio, the pompous, by-the-book, spit-and-polish caricature of a career officer from the Finnish national epic The Unknown Soldier.
I am not really interested in getting entangled in all that; rather I want to dig into the problem that leadership solves - the reason for its existence. My interest? I believe lack of leadership is the greatest problem in the industry, and the root cause of most failures companies have. (Especially so in agile methods with their emphasis on self-organization, but I'll get to that later.)
Note that I only talk about leadership and will consciously avoid entering the "management vs. leadership" territory. Nor do I want to speculate whose primary job it is to lead in a modern organization. I merely want to point out that leadership is absolutely necessary, much neglected, and much simpler than you think. So without further ado...
The purpose of leadership is to get a bunch of people to accomplish something together.
The essence of leadership consists of three parts:
1) Organizing doers
2) Deciding objective(s), communicating to doers
3) Helping doers succeed in achieving objective
Miss one of these, and all the theories on motivation, fancy pep-talks by management, team building events, company parties by HR and the rest of the fluffy stuff associated with "leadership" become completely irrelevant. No amount of motivation will work on a group that has lost its purpose for existence!
My list works at any level, by the way, whether leading an individual, a team, a department or a division. If the goal is to get something done with a bunch of individuals, all three things need to happen. The bunch must be formed, an objective chosen and - usually - the bunch needs to be supported in their pursuit of the objective. Miss one of the three and likely nothing will ever get done. Simple?
Why is it, then, that so many organizations spend energy on schemes to motivate their employees while one or more of the three parts is missing? I don't know, but my bet is because they have lost the reason for their existence, and are trying to keep themselves busy earning money with no greater purpose. It is very difficult to set goals locally for a group if there is no overall strategy nor greater goals to serve.
Here is another point I want to drive home:
If a team, group or any organizational unit has lost its goal or the reason for its existence, it is not being led. Simple as that.
Did I mention that leadership is vital for happiness and contentment? I will get to that later.
Monday, August 24, 2009
Blog reboot
For the last few months I have been busy either enjoying the sunshine or thinking about my goals in life, and trying to decide where to take my career. I haven't had time to focus on writing my blog, and the time has come to remedy that.
This blog initially had two purposes. It was a notebook for things I found interesting and a way to make me a smartass who supposedly knows something about agile.
The problem in writing good blog entries is the time required for making a balanced argument. If you don't take that time, you end up writing flamebait or simply repeating what others have said. Where's the fun in that?
So what's next? I will start organizing my thoughts and start writing about the things that I really find interesting, the stuff that keeps me motivated and awake at nights. Some of those things include agile, lean, software architecture, leadership, and what makes people happy.
I need a new name for this blog, though, since it won't really be about agile anymore. Suggestions?
Monday, June 8, 2009
Specialization and generalization
This blog entry was inspired by Vasco Duarte's blog entry Why specialization in Software development is bad for business.
I maintain that specialization is crucial for software development, and that overgeneralization can be just as harmful as overspecialization. The following figure will illustrate my point.
At the top of the figure is a team of three, each member having a specialty skill. Each team member happily sits in their own comfort zone and there is very little understanding of adjacent specialities. Since finishing an item of work needs all three specializations there will be handoffs, and the theory of constraints applies. Even if the team is working on more than one item, time is wasted on waiting for bottleneck resources and so on. The good news is that since there is a lot of expertise in the team, the finished product will reflect that expertise (if it ever gets finished).
Now let's look at the other extreme at the bottom of the figure. A team with nothing but generalists will work most efficiently since everyone is able to do everyone else's job. Very little time is wasted since there are no handoffs and everyone is able to support each other. The caveat is that since everyone is a generalist, the team lacks mastery in any field, and therefore can build mediocre products at best. Jack of all trades, master of none applies.
In summary, pure specialists produce excellent products late and pure generalists deliver sub-standard products on time.
How about the middle road then, the "generalizing specialists"? The guys who are specialists in their field, but know enough of their surroundings to be able to work effectively with other specialists. Perhaps they have the social and teamworking skills to also pick up new skills and broaden their specialty. Such a team should be able to work efficiently and still utilize the expertise of individuals to build good products.
A mixture of specialization and generalization is needed, though I will grant that overspecialization may be a bigger problem in the industry.
By the way, how does generalization work in a truly cross-functional team that includes other disciplines besides software development? Is it feasible to teach graphic designers software design, and vice versa? What about marketing and sales? In the long run it probably is beneficial to send all developers on a marketing crash-course, but I wonder whether it is feasible in a project environment.
Thursday, May 28, 2009
Theory and Practice
Experience by itself teaches nothing... Without theory, experience has no meaning. Without theory, one has no questions to ask. Hence, without theory, there is no learning.
- W. Edwards Deming: The New Economics for Industry, Government, Education
Theory by itself teaches nothing. Application by itself teaches nothing. Learning is the result of dynamic interplay between the two.
- Peter Scholtes: The Leader’s Handbook: A Guide To Inspiring Your People and Managing the Daily Workflow
Monday, April 6, 2009
All about testing
Introduction
Here is a matrix from Brian Marick enhanced by Mary Poppendieck that has everything about testing in an agile context. It is simple and clarifies the intent of different kinds of tests which I have found valuable when communicating with QA people.
I have used the matrix for a couple of years and decided to blog about it since I like it so much. Others have blogged about this too, but I try to dig in a bit deeper.
The matrix has two axes. The first describes the goal of tests: whether it is to support programming or to critique the end result like traditional QA. Some QA people find the idea of tests whose primary purpose is not testing but something else a bit odd, but that is a key point. Certain kinds of tests exist to allow a development team to go faster, not so much to find bugs.
The second axis describes the level and vocabulary of the tests. Business facing tests are understandable to business stakeholders and end users and operate at their abstraction level (think black box testing). Technology facing tests on the other hand are at a lower abstraction level and use technical jargon. They test either a small part of the system or some property of the system like performance.
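Roughly, the matrix places the kinds of tests covered below like this (my own plain-text sketch of it):

                     Support programming        Critique the product
Business facing:     acceptance tests           usability and exploratory tests
Technology facing:   unit tests                 property tests (performance, security, ...)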
Unit tests
Unit tests test the smallest possible units of code. In a (semi) object-oriented language like Java that means individual functions/methods or classes. Some tests may involve a couple of classes, but those are the rare exception.
Actually my definition is a bit of a simplification. Unit tests test individual responsibilities of code and not the code itself. This is an important distinction as responsibilities are an abstraction level higher than code, but I digress.
Note that due to semantic diffusion unit testing can mean a variety of things. I know at least two organizations that use the term for any kind of testing done by the development team themselves ("Has the database designer unit tested his database schema?"). Let's stick with the original meanings of terms, shall we? A unit test tests a unit of code responsibility, period.
The purpose of unit tests is to drive the design of the code through test-first development and refactoring. They act as a regression test harness that allows developers to change the codebase with impunity. A secondary purpose is to document the developer's intent about what his code should do and how it should be used. Good unit tests can almost be used as API usage examples.
Unit tests are always automated and executed in batches (often called test suites). They should run quickly enough so that developers can run them every couple of minutes. Several frameworks exist for unit testing, most based on Kent Beck's SUnit framework. Ward's Wiki has a comprehensive list of xUnit frameworks.
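To make this concrete, here is roughly what a unit test looks like in JUnit 4 - the Money class and its add method are invented for the example, the point is the shape: one small responsibility, one test.

import org.junit.Test;
import static org.junit.Assert.assertEquals;

public class MoneyTest {

    // Money is a hypothetical value class used only for this illustration.
    @Test
    public void addingTwoAmountsYieldsTheirSum() {
        Money five = new Money(5, "EUR");
        assertEquals(new Money(10, "EUR"), five.add(new Money(5, "EUR")));
    }
}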
Acceptance tests
Acceptance tests are functional, system-level, black-box regression tests. Traditionally the bread and butter of QA and testing departments, though agile suggests that acceptance tests should be automated to free testers from slavery to do more useful things like exploratory testing. Whether acceptance tests are written by end users as XP advocates, or by someone else, I don't really care.
The purpose of acceptance tests is to verify that the fully integrated, complete, up-and-running system works as expected from the end user's (man or machine) point of view. They communicate the business intent of the system, and document usage scenarios for the system. Like unit tests acceptance tests allow developers to change the system at will. And if unit tests drive the design of the code, then acceptance tests drive the design of the entire system.
Acceptance tests should be automated as far as possible. Several frameworks exist for testing different kinds of user interfaces, from the browser-based Selenium for Web applications to OCR-based frameworks that use the mouse and keyboard like a user would. Often acceptance tests require much more work to set up the state of the world before executing a test. Databases have to be cleaned and initialized, systems started up, and so on and so forth. Custom code is almost always needed even with fancy tools. It is not uncommon to see acceptance tests executed over an xUnit testing framework, but they are acceptance tests regardless of the tool used.
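As a sketch of what "acceptance tests over an xUnit framework" can mean in practice, here is a JUnit test driving a browser through the Selenium WebDriver API - the URL, field names and expected text are invented for the example:

import org.junit.Test;
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.firefox.FirefoxDriver;
import static org.junit.Assert.assertTrue;

public class LoginAcceptanceTest {

    @Test
    public void registeredUserCanLogIn() {
        WebDriver driver = new FirefoxDriver();  // opens a real browser
        try {
            driver.get("http://localhost:8080/login");               // invented URL
            driver.findElement(By.name("username")).sendKeys("alice");
            driver.findElement(By.name("password")).sendKeys("secret");
            driver.findElement(By.name("login")).submit();
            assertTrue(driver.getPageSource().contains("Welcome, alice"));
        } finally {
            driver.quit();
        }
    }
}

Note how the test speaks in business terms (a registered user can log in) even though it runs on developer tooling.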
When it comes to functional testing, acceptance tests are The Truth. They tell you if your system works as intended; your unit tests do not. If acceptance tests could be executed in seconds instead of minutes or hours, I wonder if we would bother with unit tests at all. Some even claim that unit testing is overrated.
Usability and exploratory tests
Usability testing is evaluating the usability of a system by observing users in the act of using the system. It is usually done with a study where users carry out tasks under observation. Measured things can be the time to perform a task, the number of errors, and such.
Usability testing is an art-form on its own, and needless to say impossible to automate. I will not go deep into it since it is not my specialty and since there is nothing special about usability testing in agile software development, except that it cannot be done test-first. :)
While lots of fancy things have been written about exploratory testing it basically is a bunch of ruthless, evil-minded testers running amok trying to intentionally break your system. It requires creativity and insight and, surprise surprise, cannot be automated. This is what testers should be doing instead of brainlessly executing manual test cases.
Property testing
Property testing investigates the emergent properties of a system. These can include performance, scalability, security or other SLAs. The goals of property testing can vary from ensuring that the system can cope with peak loads to ensuring that it cannot be hacked in certain ways.
Most property tests require tools of some sort to create load, set up the system and so on. Of all the kinds of testing property testing probably requires the most expertise and knowledge of the system being tested. Tools must always be accompanied by a thinking brain. Tests can be automated to an extent, but in performance testing for example analysing results and troubleshooting problems often takes the most time and those cannot be fully automated.
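As a toy illustration of the load-generation side - real tools do far more, and the target URL and numbers below are invented - even a few dozen lines of plain Java give you concurrent users and an average latency figure. Analysing why the latency is what it is remains the hard, human part.

import java.net.HttpURLConnection;
import java.net.URL;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicLong;

public class TinyLoadTest {

    public static void main(String[] args) throws Exception {
        final String target = "http://localhost:8080/";  // invented target URL
        final int users = 20;
        final int requestsPerUser = 50;
        final AtomicLong totalMillis = new AtomicLong();

        // Each "user" is a thread firing requests in a loop.
        ExecutorService pool = Executors.newFixedThreadPool(users);
        for (int i = 0; i < users; i++) {
            pool.execute(new Runnable() {
                public void run() {
                    for (int r = 0; r < requestsPerUser; r++) {
                        try {
                            long start = System.currentTimeMillis();
                            HttpURLConnection c =
                                (HttpURLConnection) new URL(target).openConnection();
                            c.getResponseCode();  // block until the response arrives
                            c.disconnect();
                            totalMillis.addAndGet(System.currentTimeMillis() - start);
                        } catch (Exception e) {
                            e.printStackTrace();
                        }
                    }
                }
            });
        }
        pool.shutdown();
        pool.awaitTermination(10, TimeUnit.MINUTES);
        System.out.println("Average latency: "
                + totalMillis.get() / (users * requestsPerUser) + " ms");
    }
}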
More information
Brian Marick's original blog entry, read the follow-ups too
Mary Poppendieck: Competing On The Basis Of Speed, the testing segment starts at 18 minutes in the video
Tuesday, March 31, 2009
Remarkable error message
YouTube search crashed for me two minutes ago.
500 Internal Server Error
Sorry, something went wrong.
A team of highly trained monkeys has been dispatched to deal with this situation.
Also, please include the following information in your error report:
h3Uu0TiSf93ZIfuwp-CZYrMc-0pjnkweTODRkB21PG_E7kj0dypM_0LBebqg
5UWTJZ5oMaZaHimqIuZRjbe27wVZhjKQ3iNeCoFS3pye8FB3fUMj4rnYoekV
...
Who said software has to be boring?
Friday, March 27, 2009
Shu Ha Ri
Shu Ha Ri is a Japanese martial arts concept that describes the three levels of learning.
I have always liked the concept since it provides a simple model that can help with communicating with people. The idea is to be aware of the level of your audience and match your message to their level. For example, if you are an expert trying to explain to a beginner how to do agile software development, quoting principles only frustrates him; he needs practical advice for practical problems - not spiritual guidance.
Shu: Imitation, learning rules and individual techniques, tradition
Ha: Understanding, learning exceptions to rules, adapting techniques, breaking from tradition
Ri: Mastery, transcending the rules; flow, intuitive use of techniques
The Shu level is all about following the master and learning the rules. You don't really understand the big picture yet so you follow the book as best as you can.
At the Ha level you have gained deep understanding. You understand why certain techniques work and can choose the best one for the situation. You also understand the limitations of techniques and when their use is not appropriate.
By the time you reach Ri level you apply techniques naturally without thinking. You have transcended the rules.
Last year at Nääsvillen Oliopäivät Alistair Cockburn began his keynote by describing Shu Ha Ri. After his talk a member of the audience asked a longish question about a problem in applying agile in his organization. Having explained Shu Ha Ri just moments before, Alistair began his reply:
You expect a Shu level answer to a Ri level question.
Further reading:
Shu Ha Ri by Alistair Cockburn
Three Levels of Audience in Ward's Wiki
What is Shu Ha Ri? an excellent blog entry by Kevin E. Schlabach
Saturday, March 21, 2009
Ken Schwaber on Google Tech Talk
Here is an excellent video where Ken Schwaber (co-creator of Scrum) explains what Scrum is and where it came from. He also explains how everyone can create their own design-dead software and how to avoid it.
I have summarized and paraphrased some of the key points.
Scrum is not a methodology - you are on your own
Scrum isn't a methodology and as such does not have answers on how to do things. It frees us from the belief that someone else can tell us what to do in every circumstance and that it'll work. Scrum's assumption is that you are intelligent and that you will use that intelligence and your experience to come up with the best solution for whatever circumstance you are in right then.
Most Scrum implementations fail because of denial
Scrum makes all of the news visible, whether it's good or bad. It assumes that intelligent people will want the news so they can do what is best for the entire organization. Because of this only about 30-35% of organizations manage to implement Scrum successfully. Most organizations don't want to be faced with something they don't want to see.
The Core problem
A very common problem in organizations is Core or infrastructure software that all of the company's software depends on. Usually such software is fragile, does not have a test harness, and there are only a few developers in the company who know it and are willing to work on it. The Core software then effectively constrains all development in the company. If you don't have enough money to rebuild your Core and the competition is breathing down your neck, you should shift into another market or sell your company.
You don't have to buy your Core, you can make your own. Squeezing a few more "must have" features into your releases works in the short term, but leads to cutting quality. Soon your team's capability to deliver starts dropping because they are working on a worse codebase. That leads to more squeezing, and after five years you have your own design-dead product that will kill your business.
There are two main problems here. First, when developers are told to do more they cut quality without telling a soul. Second, product management believes in magic: that to get more done, all they have to do is tell the developers.
The solution is incremental and iterative development, and managing releases by scope rather than by time. In Scrum, every team also has a Scrum Master (the prick), whose job is to make sure the team does not cut quality.
Friday, March 20, 2009
The wrong question
Ask not: what can agile do to my organization?
Rather ask: how can I change my organization to become more agile?
In a similar vein, an excellent blog entry from Ron Jeffries: Context my Foot.
Levels of consulting
Here is a quick categorization of consulting work that some friends and I came up with a while ago.
Level 1: Introduction
Basic concepts, sales pitches, two-day training course, lectures and exercises
Level 2: Consulting
Discussing or planning how a customer should use the material in his/her context
Level 3: Coaching
Implementing changes with the daily or weekly help of a coach, may include an intensive training period
Level 4: Follow-up
Coach periodically returns to customer after a couple of months to see how things are going and adjust as necessary
What has this to do with agile? When "going agile" many companies only buy level 1, then go and run their heads into the wall. I have heard both Jeff Sutherland and Jim Coplien state that "70% of Scrum implementations fail". Wonder why.
By the way, Certified ScrumMaster training is level 1.
Thursday, March 19, 2009
Turku Agile Day 2009
Just came back from Turku Agile Day 2009. Good to see the agile movement on the rise in Turku. Thanks to the organizers, keep up the good work, guys! Next time I am staying for the party.
A few tidbits stuck in my mind:
- "The human execution of a process cannot be copied." said Pekka Abrahamsson when he was discussing the difficulty of duplicating successful practices in projects.
- Agilists often add the fourth dimension of quality to the holy trinity of software projects: features, time, and resources. Ola Ellnestam added a fifth, forming the iron pentagon of software development: features, time, resources, quality, customer satisfaction.
- Ola also suggested viewing software development as a system from which customers pull features out. Contrast this with the traditional way of pushing features into the system as requirements, which then pushes them out to the user. Pull instead of push is the starting point for Lean and all kinds of good things come from it, but I have not previously thought of pull as a way of making sure you deliver only what is needed. How silly is that?
- Something I enjoyed was Petri Taavila's frank, down to earth presentation about Nokia Siemens Network's agile transformation with wrinkles and everything. Wonderful to have an honest presentation about the good, the bad and the ugly of turning agile. Thanks, Petri!
Wednesday, March 11, 2009
The key to test automation
The key to test automation is called software engineering.
I have met many testing teams and Quality Assurance people searching for the silver bullet that would speed their team or department on its way to test automation and all it promises. Usually these people have no software development background, and while their teams may have some scripting knowledge they often lack proper software development skills. As such, they are heavily dependent on tools, preferably those that come with a GUI or that can be extensively configured. If push comes to shove they can maybe manage with a scripting language, but that's about it.
Good tools are essential to software development, and I daresay even more so in agile software development. What is the problem with depending on tools, then?
First of all, many commercial testing tools are expensive and complex. Since they cater to the needs of a wide audience they have many features, and often come with elaborate XML configuration languages or custom scripting languages. Complexity steepens the learning curve and slows down troubleshooting. While troubleshooting may not be much of a problem in functional testing, it is a major issue in performance testing where it can take more time than executing the actual tests.
Once you have chosen and bought an expensive tool you are pretty much stuck with it. In choosing a tool you have to anticipate all your future testing needs which increases the likelihood of you choosing the most complex tool with the most features. The vicious circle is complete.
Another issue is that all tools have their limitations. No enterprise testing solution is as flexible or powerful as a good programming library or a language and you cannot iteratively and incrementally develop and improve a commercial tool. You must choose one in a big bang and then stick with it. It is impossible to start with something simple that you can evolve with time as your needs become greater.
There is an entire industry building testing solutions designed around the idea that programming skills are not needed for test automation. Hogwash! Automation of any kind implies a certain level of design skill that cannot be substituted with a tool! The meaning of "test automation" is "automated test case execution and reporting", not "automated test case design".
If you have recently started doing agile software development and (still) have a dedicated testing team that is confused about test automation my advice is this: send your testers to a programming course, and introduce a software developer or two to the team. The developers can then devise any tools your team needs, and help your testers with programming.
Adding developers to a testing team also brings testers and developers closer which you should be doing in the first place if you are trying to become agile.
Monday, March 9, 2009
Agile software development summarized in Agile Dinner
Last month I organized the monthly Agile Dinner in Helsinki with the theme "what have you done right in your previous software projects?". I specifically asked people to forget literature and just focus on what they have directly experienced. The outcome could be the table of contents of any agile software development book!
Form a cross-functional team, put them in one room where they are surrounded by information radiators and whiteboards and can work in peace. Provide them direct access to business people and end users and make sure they understand the goals of the project. Maintain the big picture and provide an environment that facilitates learning and try to involve experienced developers and maybe a coach. Use metrics to measure progress. The technical environment should allow for continuous integration and automated regression testing. Consider laying down basic architecture. Be honest, deliver frequently and think a lot.
Friday, March 6, 2009
Challenge everything
Challenge everything, accept nothing at face value.
Drive this philosophy into everyone in your software organization from marketing and sales to project management and individual team members and good things will follow.
But why? Isn't challenging people's decisions and choices destructive? Isn't it mistrusting your colleagues and slowing down work? Instead of getting things done, don't you spend time discussing the same issues over and over again while everybody has their round of challenging? Doesn't it set up a kind of environment where people constantly have to defend their ideas, and provoke confrontation?
That is exactly the point! Sparring over ideas, working them on the whiteboard and discussing them over coffee is cheap. Spending months implementing the wrong features or a bad marketing strategy is expensive.
Challenging everything is really a safety net for your organization. It prevents brainfarts from destroying your business or technology. It is also a personal safety net since it allows you to test your ideas in a safe environment. It is better to have a trusted colleague torpedo your cool idea right from the start than to fail publicly six months later, is it not? And imagine the confidence with which you can proceed with your Next Big Thing after you have had the first five sparring sessions about it.
Building an environment and culture that allows ideas to be challenged may be difficult, though. It requires a certain maturity from those involved: they have to understand that it is people's ideas that argue, not the people. It helps to keep in mind that the word challenge is not a negative word; it is just a request for more information or a sanity check. Challenging is not a destructive or distrustful process but a process for creating mutual understanding and confidence. It is the starting point for a conversation, not the end of one.
Avoiding ruffling other people's feathers is tricky, but the actual act of challenging could not be easier: just keep asking "why" until things make sense to you.
A real-life example
Here is a (pretty convoluted) example. In a past project I was redesigning the public interface of an accounting system that, among other things, processed monetary transactions. One function of that interface transferred money to a user's account. The function had four parameters: transaction ID, account ID, and two monetary amounts.
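To make the setup concrete, here is a rough sketch of what such an interface might have looked like. The names and types are my own guesses for illustration, not the actual system's API:

    import java.math.BigDecimal;

    // Hypothetical sketch only - the real interface looked different.
    public interface AccountingService {

        // Transfers money to a user's account. Note the two separate
        // monetary amounts for a single transfer.
        void transferToAccount(String transactionId,
                               String accountId,
                               BigDecimal firstAmount,
                               BigDecimal secondAmount);
    }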
Two moneys? Why?
"Because for taxation reasons - or something like that - certain transactions cannot be deposited in a lump sum" said the architect of the accounting system. OK, fair enough, but sounds like he is not completely sure. I checked with the architect of the business system using the accounting system, who told me that the accounting team demanded the feature for taxation reasons, and that the reporting team also has a stake. So I call up the reporting people, who tell me that the feature is a pain in the ass but that it is very important for the customer and the accounting team requires it. They also mentioned that a fourth team who monitor the transactional integrity of all system has a stake. So I contact the monitor team, who are a bit confused about my question and in the end could not care less. At the same time a business person - after double-checking with the customer - confirms that the feature, in fact, is not needed.
Very interesting, I thought. Everyone took the splitting of lump sums for granted. After all, it was a feature supported by three different systems, so it had to be needed by someone! But no one knew by whom.
In the end I spent two months bothering fifteen people from four teams to remove a single parameter from a function. Of course, the functionality considered "a pain in the ass" could now be removed from three systems, which would simplify the lives of four teams.
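Under the same hypothetical naming, the simplified interface collapses to a single amount - one parameter fewer, and three systems with one less feature to carry:

    import java.math.BigDecimal;

    // The same hypothetical sketch after the redundant split was removed.
    public interface AccountingService {

        // Transfers a single amount to a user's account.
        void transferToAccount(String transactionId,
                               String accountId,
                               BigDecimal amount);
    }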
But why was this rather expensive feature that no one needed implemented in the first place? Certainly there was a fault in the software process, but would this have happened if any of the people involved had challenged the need for the feature?
Moral of the story: it may take more effort to challenge the rationale for doing something than to just do it. But it probably pays off in the long term.
Which brings us to...
Impact on software architecture
Simple software is cheaper, runs faster, is easier to use, document, test, deploy, operate and maintain, and is generally more pleasant than complex software. Common sense dictates that software should be as simple as possible (but not simpler). But how do you make simple software?
Using the latest programming paradigm? Buying the fanciest Enterprise Platform on the market? Grabbing the hottest open source framework where you don't have to write code anymore, just configure XML, use annotations, or let the built-in artificial intelligence run the show?
Maybe not. As a function of features, complexity increases exponentially, not linearly. Therefore the key to keeping things simple (stupid) is to minimize the number of features and moving parts in your software. Disciplined processes for capturing user needs and designing software can help, but the key thing is simply to challenge the necessity of every feature and every nut and bolt before adding it to your system.
In short, challenging everything
- Is a cheap way of improving ideas, and weeding out bad ones
- Builds mutual understanding and trust, and kills assumptions (the mothers of all fuck-ups)
- Is a safety net that prevents your organization from doing stupid things
- Is a personal safety net that prevents you from doing stupid things
- Can be challenging :)
- Is the basis for good software architecture
The lesser primate committee thinking experiment
Start with a cage containing five apes.
In the cage, hang a banana on a string and put stairs under it. Before long, an ape will go to the stairs and start to climb towards the banana, but as soon as he touches the stairs, spray all of the apes with cold water. After a while, another ape makes an attempt with the same result: all the apes are sprayed with cold water. Turn off the cold water. If, later, another ape tries to climb the stairs, the other apes will try to prevent it even though no water sprays them.
Now, remove one ape from the cage and replace it with a new one. The new ape sees the banana and wants to climb the stairs. To his horror, all of the other apes attack him. After another attempt and another attack, he knows that if he tries to climb the stairs, he will be assaulted.
Next, remove another of the original five apes and replace it with a new one. The newcomer goes to the stairs and is attacked. The previous newcomer takes part in the punishment with enthusiasm.
Again, replace a third original ape with a new one. The new one makes it to the stairs and is attacked as well. Two of the four apes that beat him have no idea why they were not permitted to climb the stairs, or why they are participating in the beating of the newest ape. After replacing the fourth and fifth original apes, all the apes that were sprayed with cold water have been replaced. Nevertheless, no ape ever again approaches the stairs.
Why not?
"Because that's the way it's always been around here."
Sound familiar?
Wednesday, March 4, 2009
Fundamental attribution error
The fundamental attribution error is a phenomenon in social psychology: our tendency to explain the behavior of others in terms of their personal qualities rather than their circumstances. We "attribute" a person's actions to their persona rather than to the immediate situation or social context.
I think this is quite a natural tendency. It is a bit like the energy minimum principle: we are always looking for simple explanations to complex issues. It is much easier to label others lazy or stupid than to spend energy investigating the circumstances they are operating in.
And by the way, this door swings both ways. Other people are likely to think we are incompetent or uncaring rather than bother to consider that perhaps we are overworked, or having the worst hangover of our lives.
So the next time you are aggravated by your compatriots, give them the benefit of the doubt. You might be missing something from the big picture that makes their (in)actions perfectly justified.
Tuesday, February 3, 2009
A bicycle story
Here is a favorite story of mine often told by Hannu Lehessaari, the long-time CEO of one of the oldest software companies in Finland.
You are riding your bicycle in a great hurry to reach your destination. You are expected, and already late. Suddenly, your bicycle's chain breaks. What do you do?
Waste no time, throw your cycle over your shoulder and start running like crazy?
Or stop and fix the chain, and continue cycling?