Learn how to be a wildly successful small business programmer

The ONE chart every developer MUST understand

Our industry is famous for delivering projects late and over budget. Many projects are cancelled outright and many others never deliver anything near the value we promised our customers. And yet, there is a subset of software development organizations that consistently deliver excellent results. And they’ve known how to do it since the 1970s. In this post I’ll tell you their secret.

(As an Amazon Associate I earn from qualifying purchases.)

It all starts with understanding this one chart from Steve McConnell. It describes the relationship between defect rates and development time.

Chart showing the relationship between defect rate and development time.

This chart says that most teams could deliver their software projects sooner if they focused more effort on defect prevention, early defect removal, and other quality issues.

But is this chart true?

Steve McConnell published this chart in a blog post titled Software Quality at Top Speed in 1996. This chart (and the blog post) summarizes some of the data in his excellent book Rapid Development (paid link). And that book is based, in part, on the research of Capers Jones up to the early 1990s. I’m throwing in all these dates because I want you to know just how long we’ve known that:

In software, higher quality (in the form of lower defect rates) and reduced development time go hand in hand.

Anyway, Capers Jones kept doing his research, and in 2011 he released another book with co-author Olivier Bonsignour, titled The Economics of Software Quality (paid link). They analyzed over 13,000 software projects from over 660 organizations between 1973 and 2010 and collected even more evidence that:

… high quality levels are invariably associated with shorter-than-average development schedules and lower than average development costs.

In other words, Steve McConnell’s chart is true.

So what’s the problem then?

There are three problems.

Problem 1: we’re ignoring the research

The majority of projects are run as if this chart isn’t true. Hardly a day goes by when I don’t hear of some project or someone exercising poor judgement and then predictably getting smacked down by the universe for it. Literally billions of dollars are lost every year to this foolishness. It’s been going on since we started programming computers. Every developer has experienced it. And there’s no end in sight.

For example, pressuring yourself (or succumbing to external pressure) to go faster by cutting corners is almost guaranteed to increase your defect rate and slow down your project. Yet it happens all the time!

But the problem runs deeper than that. Managers are responsible for the worst project disasters. We have people running these projects who, while well-intentioned, have little idea what they are doing. Many of their projects are headed for disaster from the outset (see the “classic” software mistakes below). And by the time they realize that their project is in trouble–usually months after the developers reached the same conclusion–it’s often too late to do much about it.

Problem 2: small project development practices don’t scale well

The development practices that work relatively well for small projects don't scale to large, real-world projects. Small projects are the only kind most students work on. So they graduate with the false impression that they know how to develop software. But they've been building the equivalent of garden sheds when we're trying to hire them to build the equivalent of skyscrapers. A skyscraper isn't just a really big garden shed–the two are completely different things.

garden sheds aren't the same as really big skyscrapers

And because so few organizations do software development well, many teams employ garden shed-appropriate methods to tackle skyscraper-sized problems. So these poor developers think chaos, confusion, bugs, conflicting requirements, endless testing cycles, missed deadlines, stress, piles of rework, and death marches are all normal parts of software development.

Problem 3: many teams don’t have the required skills

You need more than raw technical skills to achieve low defect rates in real-world projects. You need a whole suite of organizational, managerial, and technical strategies and tactics to pull this off. You'll almost certainly need additional training for almost everyone in your organization, and you'll need to embrace different development practices.

What does it take to achieve that 95% pre-release defect removal rate?

For most organizations, it will take quite an adjustment to achieve that 95% pre-release defect removal rate. But the good news is that even modest improvements in pre-release defect rates will positively impact the economics of your project.
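
For concreteness, "pre-release defect removal rate" (often called defect removal efficiency, or DRE) is the fraction of all known defects that were found before release. Here's a minimal sketch of the calculation; the defect counts are invented for illustration:

```python
def defect_removal_efficiency(found_before_release: int, found_after_release: int) -> float:
    """Fraction of all known defects that were removed before release."""
    total = found_before_release + found_after_release
    return found_before_release / total

# Hypothetical project: 380 defects found before release,
# 20 reported by users afterwards.
dre = defect_removal_efficiency(380, 20)
print(f"DRE: {dre:.0%}")  # prints "DRE: 95%"
```

In practice the post-release count is usually taken over a fixed window (e.g. the first 90 days after release), since defect reports keep trickling in indefinitely.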

With that in mind, I suggest the following steps:

  1. Accept the truth that high-quality software is faster and cheaper to build than low quality software
  2. Be aware of Steve McConnell’s “classic” software mistakes
  3. Use memory-safe languages whenever possible
  4. Start improving your development practices

Let’s dive in.

Accept the truth that high-quality software is faster and cheaper to build than low quality software

If you need more evidence than you already have, read The Economics of Software Quality (paid link) to truly convince yourself and your teammates that this chart is telling the truth. The authors of this book leave very little doubt that:

The best available quality results in 2011 are very good, but they are not well understood nor widely deployed because of, for one reason, the incorrect belief that high quality is expensive. High-quality software is not expensive. High-quality software is faster and cheaper to build and maintain than low quality software, from initial development all the way through total cost of ownership.

Furthermore:

If state-of-the-art combinations of defect prevention, pretest defect removal, and formal testing were utilized on every major software project, delivered defects would go down by perhaps 60% compared to 2011 averages.

Why is that?

Low quality projects spend much more time on testing, debugging, fixing, and rework than high quality projects. In fact, low quality projects contain so many defects that testing often takes longer than construction. And low quality projects frequently stop testing long before they run out of bugs to find.

On waterfall projects, teams either release the software as is or cancel the project, because otherwise the testing and fixing would go on forever. On agile projects, increments of work are completed quickly at first and then slow to a glacial pace as more and more problems are discovered in existing code. Eventually, low quality agile projects face the same options as waterfall projects: release as is or cancel.

High quality projects invest in defect prevention and pretest defect removal activities so that when they do get to testing, there are many fewer defects to find and fix. High quality projects are released sooner and cost less than low quality projects because they have much shorter testing phases and much less rework. And high quality projects also have fewer post-release issues to fix. So when they do need to make changes, the code in high quality projects is easier and cheaper to modify.

Let’s look at some key points from The Economics of Software Quality:

  • Overall quality levels have not changed much between 1973 and 2010. IDEs, new languages, interpreted languages, automated testing tools, static analysis tools, better libraries, frameworks, continuous integration, thousands of books, Agile, Scrum, XP, OOP, TDD, and the whole fricking web haven’t moved the needle! That’s just depressing. (I know someone is going to argue that this point can’t be true. Feel free to look it up on pages 538 and 539 of The Economics of Software Quality).
  • In low quality software projects, nearly 50% of the effort is devoted to finding and repairing defects and to rework.
  • Defect rates rise faster than project size. That means the things you need to do to ensure a 95% pre-release defect removal rate in a 5 KLOC project are completely different than the things you need to do in a 500 KLOC project. Bigger projects not only need more QA activities but they also need different QA activities. Remember, a skyscraper isn’t just a really big garden shed.
  • Testing has been the primary form of defect removal since the software industry began and for many projects, it’s the only form used. That’s a shame because testing is not that effective. Even if you combine 6 or 8 forms of testing you can’t expect to remove more than 80% of the defects in large systems.
  • Defect prevention methods include reuse, formal inspections, prototyping, PSP/TSP, static analysis, root cause analysis, TDD, and many others. The factors that hurt defect prevention the most are excessive requirements changes, excessive schedule pressure, and no defect or quality measures.
  • The most effective forms of pretest defect removal are formal inspections and static analysis. But the authors also discuss 23 other methods of pretest defect removal, their range of expected results, and when you might want to use them.
  • The ROI on the better forms of pretest defect removal is more than $10 for every $1 spent.
  • This book discusses 40 kinds of testing, their range of effectiveness, and when you might want to use them.
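
To see why stacking testing stages runs into diminishing returns, consider a naive model in which each stage removes a fixed fraction of the defects that reach it. Both the 35% per-stage figure and the independence assumption below are illustrative, and the independence assumption is optimistic: in practice, different test stages tend to find the same kinds of defects, so measured combined efficiencies come in lower.

```python
def combined_removal_efficiency(stage_efficiencies):
    """Overall fraction of defects removed by a chain of removal stages,
    assuming each stage acts independently on the defects that remain."""
    residual = 1.0
    for e in stage_efficiencies:
        residual *= 1.0 - e  # fraction of defects that survive this stage
    return 1.0 - residual

# Six testing stages at 35% each, under the optimistic independence model:
print(f"{combined_removal_efficiency([0.35] * 6):.0%}")  # prints "92%"
```

Even this optimistic model leaves defects behind, which is why the book pairs testing with defect prevention and pretest defect removal rather than relying on testing alone.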

I think this book’s true value comes from the evidence it provides to help you argue against ideas and practices that have an especially negative effect on the cost and schedule of your project. For example, after reading this book it’s hard to argue that you don’t have time for code reviews.

Be aware of Steve McConnell’s classic software mistakes

In chapter 3 of Rapid Development (paid link), Steve McConnell lists 36 “classic” software mistakes.

Falling victim to even one of these mistakes can condemn your project to slow, expensive development. You need to avoid all of them if you want to be efficient.


Update: 2021-08-17

Check out Steve McConnell’s updated and improved list of “classic software mistakes.”


Use memory-safe languages whenever possible

This suggestion didn’t make it into either book but I think it’s an important point in 2019. Around 70 percent of all the vulnerabilities in Microsoft products addressed through a security update each year are memory safety issues.

It’s pretty clear at this point that humans just aren’t capable of programming large systems in memory-unsafe languages like C and C++ without making an astounding number of mistakes or spending embarrassing amounts of money to find and remove them.

If Microsoft can’t keep those errors out of their software, it’s unreasonable to think you can do any better. So, if you’re starting a new project choose a memory-safe language. There are memory-safe languages for even the most demanding domains so don’t let your concerns about the performance implications or compatibility issues stop you from checking them out.

Start improving your development practices

Okay. So, you now believe the chart is correct and you further believe you are to the left of the optimum. What now? Try to get your organization moving down the curve towards the optimal point on the chart, of course.

Your first step is to get buy-in to the fact that low quality is a problem on your project. Since quality is a problem on most projects, it shouldn't be hard to come up with evidence to support your case. Chapter 2 of The Economics of Software Quality will help you set up a defect tracking system to track the right things and avoid common measurement pitfalls. Having hard data about the cost of low quality will help you make your case.
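
If you have no measurements yet, even a crude log of where each defect was found and how long it took to fix will start to make the cost visible. A sketch of that idea (the phase names and hours here are invented for illustration):

```python
from collections import defaultdict

# Hypothetical defect log: (phase where the defect was found, hours to fix).
defect_log = [
    ("code review", 0.5),
    ("code review", 1.0),
    ("system test", 6.0),
    ("system test", 9.5),
    ("production", 40.0),
]

# Total up the fix effort by the phase in which each defect was found.
hours_by_phase = defaultdict(float)
for phase, hours in defect_log:
    hours_by_phase[phase] += hours

for phase, hours in hours_by_phase.items():
    print(f"{phase}: {hours:.1f} fix hours")
```

A table like this typically shows fix cost rising sharply for defects found late, and that pattern is exactly the hard data you need to make your case.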

Your next step is to convince yourself and your team to stop taking shortcuts, because they almost always backfire. Print out the chart and hang it on your wall if you have to.

Next, I suggest you implement a small change across your team, try it for a while, evaluate your results, keep or discard your change, and then choose something else to improve and repeat the cycle.

Not all ideas are equally helpful so I’d start with the suggestions in Software Quality at Top Speed. The three ideas in that blog post are bang on. Start by eliminating shortcuts, replacing error-prone modules, and adopting some kind of code review process. You’re welcome to use my code review checklist as a starting point.

Then I’d move on to Rapid Development (paid link). This book is all about getting your project under control and delivering working software faster. The 36 classic mistakes to avoid are super important. Go there next, then tackle the topics in the remainder of the book.

The Economics of Software Quality (paid link) isn’t really a how-to book but it will come in handy as a reference. It will help you determine the optimal quality target for your particular project. And it offers you a menu of options to help you choose the right combination of practices to achieve it.

What if nobody’s interested in improving quality?

Sadly, this is going to happen to many of you. It’s hard to shake the myth that quality is expensive. And it’s even harder to convince people to change how they work. Unless you are the top dog on your team you may not have the influence required to spearhead this kind of change.

So you have three options:

  1. Forget about improving quality at the project level and conform to the norms of your team.
  2. Continue advocating for quality improvements and hope that people will eventually agree with you.
  3. Find a new place to work that already cares about quality.

I can’t decide for you but let me just say that there are many more developer positions available than qualified developers to fill them. And many employers do care about quality and are actively recruiting people at all levels of experience who can help them improve it.

Wrapping Up

Many of you are just as alarmed and distressed by the quality of the average software project as I am. We know how to do better but we just need to get the ball rolling on more software development teams. And I believe the first step is to show people this chart and the evidence behind it.

I know this problem isn’t going to go away overnight. But if you feel the same way I do please do what you can to spread the word, share this post, improve the results in your own projects, and help improve the collective results of our industry.

Agree or disagree? Have a story to share? Let me have it in the comments.
