It's not the unsolved problems that trip you up, it's the ones you solved too easily.

Moneyball is a book and movie about baseball: about fielding, pitching, and especially scoring. But it's also about building great teams, and about recognizing systemic errors in human thought processes. That's where it has relevance for software development. Understanding these errors and how to overcome them can help you build better software teams, and manage them more effectively.

Moneyball came into being because the Oakland Athletics major league baseball team had a big problem. It had a payroll that was a fraction of the size of the richest teams in the league, and it had to field a winning team in order to remain profitable in its small market. Its general manager, Billy Beane, started looking deeply into the characteristics that produced winning teams, and came to a surprising conclusion.

Beane discovered that the baseball experts were all wrong. The prototypical player that scouts prized simply didn't have much impact on a team's ability to win. So Beane embarked on a detailed examination of the characteristics that were actually correlated with winning games.

In short, he found a set of systemic errors in the way that baseball experts evaluated and used talent. The experts evaluated individual players by projecting their own opinions on the value of individual abilities to team goals, rather than by looking at characteristics that directly related to wins.

We can apply this to software development and testing teams. To do so, we have to look at problems the way Billy Beane did, and delve more deeply into the realm of human error. The best place to start is with Daniel Kahneman's new book, Thinking, Fast and Slow. Kahneman describes two approaches to human thought and decision-making, which he labels System 1 and System 2.

System 1 is immediate, reflexive thinking that is for the most part unconscious. We do this sort of thinking many times a day. It keeps us functioning in an environment where we have numerous external stimuli, some important, and many not.

But System 1 thinking is sometimes fooled. Consider this question: a baseball and bat together cost $1.10. The bat costs $1.00 more than the ball. What does the ball cost? If you answered without thinking, you likely said ten cents, which is incorrect. That's System 1 talking.
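Engage System 2 and the arithmetic works out differently: if the ball costs x, then the bat costs x + $1.00, and together they must total $1.10.

x + (x + 1.00) = 1.10
2x = 0.10
x = 0.05

The ball costs five cents and the bat costs $1.05; the intuitive answer of ten cents would put the total at $1.20.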

System 2 is more deliberate thought. It is engaged for more complex problems, those that require mental effort to evaluate and solve. System 2 makes more accurate evaluations and decisions, but it can't respond instantly, as is required for many types of day-to-day decisions. And it takes effort, which means it can tire out team members.

We need both systems. Most of our normal reactions in life are governed by System 1. Our senses pick up a cue, and we respond to it in a familiar way. We do System 1 thinking "without thinking." But when we are presented with a new problem, or have to deal with an unfamiliar situation, we have to think it through. That is where System 2 kicks into action. The two modes of thinking work together to enable us to function across a wide range of sensory inputs and subsequent decisions.

But this very natural way of thinking can result in systemic errors that cause us to make mistakes on software projects. Those mistakes are related to our very expertise on such projects. By relying on our beliefs based on past practices, we are sometimes right, but often wrong. And that hurts our decision-making throughout the project.

How We Make Mistakes in Thinking

Most errors in decision-making are the result of faulty System 1 thinking. These reflexive, System 1 errors can be classified in a number of ways, including:

  • Priming

  • Heuristics

  • Anchoring

  • Regression to the mean

Priming is probably the oddest of these errors. It refers to the fact that information presented before a problem, even information completely unrelated to the problem, influences the decision. For example, Kahneman describes an experiment in which a rigged "wheel of fortune" stopped at either 10 or 65. The number people obtained on the wheel influenced their answers to a follow-on question, an estimate of the percentage of African nations in the United Nations, even though that question had nothing to do with the wheel.

Anchoring is related to priming, in that a previous value affects a subsequent decision. But in this case, the previous number is explicitly offered as a reference point for the decision that follows. I might ask the question "Was John Kennedy older or younger than 40 when he was killed?" Whatever your answer, you are now anchored to age 40 if you are subsequently asked how old he was when he died.

All of us use heuristics, or rules of thumb, in our decision-making. Heuristics enable us to categorize situations and make fast, preconceived decisions, and they are useful and valuable. They can be formed from as little as a single prior experience with a similar situation. That's quick learning, and therein lies the danger.

Because heuristics can fool us. Years ago, when I was training to be a pilot, my instructor put me "under the hood" to practice recovering from unusual attitudes. With the hood down, he put the plane into an unusual flying position from which I had to recover as quickly as possible. When he brought the hood up, I could see only the instrument panel. I rapidly developed a heuristic that enabled me to quickly identify and correct an unusual attitude. Quick response, without any thought.

I continued to use my heuristic in my next lesson, but my instructor had figured out what I was doing. Now things weren't working out as I expected. My turn indicator and artificial horizon were centered, but I was still losing over 1000 feet a minute! I was stumped. My instructor had "crossed" the controls, leaving me in a slip that my heuristic couldn't account for. Now my heuristic was working against me. I was worse than wrong; I couldn't follow through with any critical thought at all once my heuristic failed.

Regression to the mean is a particularly insidious flaw in our thinking. In a general sense, regression to the mean denotes that extreme values are highly uncommon, so subsequent values are likely to fall back toward the average. We might praise an exceptionally good performance, then be dismayed when subsequent attempts aren't as good. Likewise, criticizing a poor performance will likely be followed by a better performance the next time, not because of the criticism, but because the poor performance had more to do with bad luck than with poor ability or a lack of trying. We fallaciously conclude that praise makes us complacent and criticism makes us try harder.

Regression to the mean is not itself an error or a mode of thinking; it is a statistical phenomenon. It simply notes that long-tail events in a probability distribution are rare. If we happen to observe one, then random fluctuation is likely to make subsequent observations closer to the average. The System 1 error in thinking comes in the form of an inability to grasp the fact of regression to the mean.

Regression to the mean, in practical experience, means that performance is made up of ability and luck. It's possible, though unusual, for a person to have exceptional ability. And exceptional ability usually leads to exceptional performance. In all likelihood, though, any particular exceptionally good performance is the result of competent ability plus very good luck in this specific instance. While we know this intellectually, it is very difficult to realize it in practice.
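To make the ability-plus-luck idea concrete, here is a minimal Python simulation sketch. It is my own illustration rather than anything from Kahneman or Moneyball, and the population size and noise levels are arbitrary assumptions. It scores a group of imaginary performers twice, picks the top finishers from the first round, and shows the same people landing closer to their underlying ability the second time.

import random

# A minimal sketch: model each observed performance as fixed ability plus random luck.
random.seed(42)

NUM_PEOPLE = 1000
abilities = [random.gauss(100, 10) for _ in range(NUM_PEOPLE)]


def observe(ability):
    """One observed performance: underlying ability plus a sizable luck term."""
    return ability + random.gauss(0, 20)


round1 = [observe(a) for a in abilities]
round2 = [observe(a) for a in abilities]

# Pick the top 5% of round-1 performances and follow the same people into round 2.
cutoff = sorted(round1, reverse=True)[NUM_PEOPLE // 20]
top = [i for i in range(NUM_PEOPLE) if round1[i] >= cutoff]

avg_ability = sum(abilities[i] for i in top) / len(top)
avg_round1 = sum(round1[i] for i in top) / len(top)
avg_round2 = sum(round2[i] for i in top) / len(top)

print(f"Top performers' underlying ability: {avg_ability:.1f}")
print(f"Their round 1 performance:          {avg_round1:.1f}")
print(f"Their round 2 performance:          {avg_round2:.1f}")

# Round 2 lands much closer to their underlying ability than round 1 did -- not
# because anyone got complacent, but because round 1 selected for good luck as
# well as good ability.

No praise or criticism is involved anywhere in the simulation; the drop-off falls out of the arithmetic alone.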

Back to Software Teams

What does human error have to do with software project teams? It turns out that we make all these kinds of errors in planning and executing our application projects. And this leads to familiar problems.

Having preconceived notions about our project's success is a particular problem, similar to what we know as the Pygmalion effect, or self-fulfilling prophecy. Our expectations can ultimately influence the final result, even if we aren't aware of the influence, or would swear that it doesn't exist.

Even worse than the self-fulfilling prophecy is the bandwagon effect, where most people on the project know better but subscribe to the views of a minority in order to go along with the crowd. There is genuine belief and buy-in within some segment of the team, but that belief doesn't extend to everyone.

When evaluating project data, we recognize that some data points and performances fall outside what we expect, but rather than attributing them to luck, good or bad, we usually attempt to explain them. The result is conclusions that are both inaccurate and misleading.

How do we overcome these biases? We have to bring System 2 thinking into play, and that means we have to be cognizant of the biases. Reminding ourselves that exceptional performances, either good or bad, are probably based largely on luck helps us maintain our perspective in the face of extreme results.

Another way to overcome these errors is to understand how we can make decisions on a project for peak effectiveness. Many of our biases come from System 1 thinking, where we react and respond immediately, and where prior events play a significant role in our decisions. But as managers, our decisions affect people beyond ourselves, and decisions that affect others often need to be questioned and thought through.

So we should strive to engage System 2 thinking more frequently in projects. Of course there is a downside to this. System 2 thinking is hard work; too much of it causes us to mentally tire and make mistakes. System 1 thinking has value in many situations, because of its efficiency and relative accuracy.

Mental mistakes aren't the sort of thing we keep at the top of our minds when we assess our software projects and make decisions concerning direction, resource allocation, or priorities. Many of these decisions are made with System 1 thinking, and are often both efficient and correct.

But such decisions are often wrong, too. This seems to present a conundrum. If we fear the errors of System 1 thinking, and System 2 thinking is laborious and time-consuming, what is the solution?

One way to overcome some of the errors I've mentioned is to pause to think through the implications of an immediate decision. If the decision comes automatically, and influences team direction or use of resources, a minute to examine it isn't a lot to ask. Another method is to assign one or more trusted team members to rethink all of your impulse decisions. Feedback from others can be an important check on unthinking responses to daily decisions.

Thinking errors can be critical to the outcome of software development projects. Managers, leads, and team contributors all look at information on the project and make decisions daily. By understanding how we evaluate these decisions, how our own thought processes work, and where we are likely to make thinking errors, we can train ourselves and our teams to make better decisions regularly.

I have developed a presentation around this topic that I would be happy to share. Just email me at peter@petervarhol.com, tell me something about what you do and what kind of software projects you're involved in, and I'll send it to you.

Peter Varhol is a well-known writer and speaker on software development and testing topics. He has over 20 years of experience in software and software development, and has been a college professor, software product manager, and technology journalist. He is currently tools evangelist at Seapine Software, and a consultant to lifecycle tools vendors and users.
