This post is part of a series on The Effective Executive (by Peter F. Drucker). You can find the first post here. In this post I’m going to tackle chapter 7: effective decisions.

Chapter 7: effective decisions

This is the second chapter on decision-making in The Effective Executive. Click here to read my post on chapter 6: the elements of decision-making if you haven’t already read it.

In the real world, any decision worth making is going to be messy. You won’t find 100% right answers for most problems. Instead, you’ll be faced with various alternatives that rank differently on different criteria, and much of the information you’d like to have to help you make the decision won’t be available.

Yet you need to make a decision because decision-making is part of your job. In chapter 7 Drucker gives us some advice for making responsible decisions using good judgement.

Some decisions I’m facing

Let me give you a few examples of decisions I’m facing:

  • How can we produce high quality software faster?
  • Would hiring another programmer increase our overall productivity enough to offset the cost of having another programmer?
  • Manual testing through the UI is really slow and error-prone. But automated testing through the UI won’t be easy with our code base. Many of the paths we need to test require code changes to get the system into the correct state to run the tests (example: non-admin IP address while the shopping cart is off). Plus, we want to reskin our website in the near future, which would likely break many of the automated UI tests we write. What’s the “best” course of action?
  • We have lots of low quality legacy code. Should we spend time refactoring our code to increase its quality? Should we refactor our code to a framework? Or should we keep moving ahead with new features and ignore the old code whenever possible?

In university I learned the classic seven-step decision-making process: identify the decision, gather relevant information, identify the alternatives, weigh the evidence, choose among the alternatives, take action, and review your decision. It seems like a reasonable process. Each step is clear and understandable. But if you try to apply it to any of the questions I posed above, you quickly run into trouble.

The normal decision-making processes are problematic

How can we “gather relevant information” for my question: “should we continue with manual testing or move to automated UI testing?”

The more you look at the “gather relevant information” step, the more problematic it becomes. There isn’t going to be a study that conclusively settles the question of whether companies in our exact situation should automate their UI testing. And it is impractical to do the same project once each way with different teams to see which is better.

Everything has a context

“Facts” are meaningless without criteria of relevancy. Even if you could gather a bunch of “facts” about the decision you’re facing, they won’t necessarily share your context. For example, just because x other companies automated their UI testing and felt that it was the right thing to do, that isn’t evidence that you should move to automated UI testing.

Does adopting automated UI testing still make sense if your company is:

  • going to lose a big source of income if you can’t fix a major defect in your software before the end of the month?
  • racing to release a product before you run out of cash?
  • planning to replace your software in two months?
  • understaffed to the point of not being able to keep your existing software functioning?

Context matters.

Drucker’s approach

Drucker cautions us against using the usual decision-making approaches for all of the reasons listed above and more. Instead he asks us to start with opinions.

Gather opinions

Opinions are natural and usually plentiful. If we think of opinions as untested hypotheses about reality, then for each one we should ask: what would we expect to see in the real world if this hypothesis were correct? Drucker wants us to seek opinions from knowledgeable people and stakeholders, but we should also make them responsible for explaining what factual findings we’d expect to observe if their opinions/hypotheses are correct.

For example, if Bill says that automated UI testing will “save us tons of time,” he should be able to direct us to several companies similar to our own that made the switch to automated UI testing and measured specific benefits. Or maybe Bill can point us to high quality studies that have repeatedly found automated UI testing provides ‘x’ specific benefits.

“Facebook or Google or NASA or whoever does it, so we should do it too” isn’t good enough. “It’s the right thing to do” and “It will pay for itself eventually” aren’t good evidence either.

Choose a measure

We need a new measure to evaluate these opinions. This is where context is important. We can think of many possible measures for the question of “automated UI testing”:

  • how will automated UI testing fit in with our overall objectives and software development processes?
  • do we have a better chance of meeting our project deadline if we do manual or automated UI testing?
  • how will automated UI testing affect our story points completed per sprint?
  • how many sprints will it take to break even on the installation, setup, and learning of UI test automation software?
  • can we get a better return on investment with an alternate quality practice such as code reviews or automated testing one level below the UI?
  • how much does automated UI testing reduce the risk of us introducing a critical bug into our production environment?
  • what tasks must we forgo to free up time to develop an automated UI testing capability?

Do you see where this is going? These are judgment calls. Would you prefer to go slower now, while you learn automated UI testing, for the chance to go faster in the future? How bad is it if a defect slips past your manual testing? How often does that happen? What’s your risk tolerance? Will your team embrace automated UI testing? How important is your next deadline?

Coming up with an appropriate measure is hard. Drucker recommends we gather several good candidate measures, calculate them all, and see which one works the best using our experience and judgment as a guide.
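To see why calculating a candidate measure is still a judgment call, take the break-even measure from the list above. The arithmetic is trivial; the hard part is the inputs. Here’s a minimal sketch in Python. The function name and every number in it are hypothetical assumptions for illustration, not figures from the chapter:

```python
# Hypothetical break-even estimate for adopting automated UI testing.
# All inputs below are made-up estimates, not measured data.

def breakeven_sprints(setup_cost_hours: float,
                      manual_test_hours_per_sprint: float,
                      automated_test_hours_per_sprint: float) -> float:
    """Sprints until automation's cumulative savings cover its setup cost."""
    savings_per_sprint = manual_test_hours_per_sprint - automated_test_hours_per_sprint
    if savings_per_sprint <= 0:
        return float("inf")  # automation never pays for itself
    return setup_cost_hours / savings_per_sprint

# Example: 120 hours of tooling, setup, and learning; a manual regression
# pass costs 20 hours per sprint; an automated pass costs 5 hours per sprint.
print(breakeven_sprints(120, 20, 5))  # 120 / 15 = 8.0 sprints
```

Notice that every input is an estimate someone has to defend. That’s exactly where opinions-as-hypotheses and disagreement come back in: the formula is easy, but the numbers you feed it are judgment calls.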

Generate conflict

Drucker trained as a lawyer and you can see that coming through in his adversarial approach to decision-making. He writes:

Decisions of the kind the executive has to make are not made well by acclamation. They are made well only if based on the clash of conflicting views, the dialogue between different points of view, the choice between different judgments. The first rule in decision making is that one does not make a decision unless there is disagreement.

Okay, this is probably not a programmer’s natural inclination when faced with a decision but it contains a certain logic.

What if Bob took the “let’s automate UI testing” position and Sally took the “let’s keep using manual UI testing” position, and you sat as the judge in a mock trial? As each person presents their case, they are cross-examined by the other side and by you, the judge. This exercise exposes the strengths and weaknesses of all the arguments. If you do it well, the chance that you’ll miss a major factor in the decision is negligible.

Drucker has three reasons for his insistence on disagreement:

  1. It’s the only safeguard you have against being led astray by office politics, manipulation, and hidden agendas.
  2. Disagreement can provide you with alternatives. Without alternatives, you aren’t really making a decision because there’s nothing to choose between.
  3. Disagreement is needed to stimulate the imagination. In all nontrivial situations, you need creative solutions to new problems. Here’s more from Drucker:

Disagreement, especially if forced to be reasoned, thought through, and documented, is the most effective stimulus we know.

Commit and take action

Finally, Drucker encourages us to commit to a decision and execute it.

I once saw a documentary on how the US Marine Corps trains its leaders to make decisions under life and death circumstances. They call it “The 70% Solution”:

…when faced with the likely prospect of failure amidst a sea of uncertain, vague, and contradictory information, most people are extremely hesitant to make a decision. We tend to forget that the enemy is also facing a similar information shortfall. Understanding the factors that degrade our decision-making ability on the battlefield and realizing that they will never be absent are absolutely vital to relevant decisions in conflict. As leaders, we must guard against waiting for a perfect sight picture, which may never come, leading to inaction.

Drucker also warns us against hedging and half-measures:

…the effective decision-maker either acts or he doesn’t act. He does not take half-action. This is the one thing that is always wrong, and the one sure way not to satisfy the minimum specifications, minimum boundary condition.

Don’t delay the decision under the guise of “more study.” Unless you have good reason to believe that additional information will significantly improve the quality of your decision, make it and move on. Avoid the temptation to delay and hedge, even if the decision will be unpopular or unpleasant. You are paid to make decisions, not to be popular.

Further learning

High quality software engineering studies are few and far between. But the following books have the best research I’ve seen:

  • Code Complete: A Practical Handbook of Software Construction, Second Edition (Steve McConnell)
  • Facts and Fallacies of Software Engineering (Robert Glass)
  • Making Software: What Really Works, and Why We Believe It (Andy Oram and Greg Wilson)

These books might help you make decisions about the problems you’re facing. Or, at the very least, steer you away from doing something completely stupid. I highly recommend these books to all programmers.

Watch this video for a depressing take on the primitive state of computer “science”: https://youtu.be/Q7wyA2lbPaU. There is very little, if any, evidence in computer “science” (hence the sarcastic quotes) that rises to the level of “smoking causes cancer” or “global warming is real.” We’re basically making things up as we go along. And ignoring or misapplying the scant evidence we do have. This is the reality of our young profession.

Wrapping up

Making effective decisions is hard because the real world is messy. Yet most programmers make important decisions without decision-making training. Drucker recommends we do the following things to help increase the soundness of our decisions:

  • gather opinions from knowledgeable people, along with the testable conditions you’d expect to see in the world if their opinions are true
  • treat each opinion as a hypothesis to be tested
  • develop and choose an appropriate measure to evaluate your alternatives
  • generate conflict
  • commit and take action even when it’s hard or distasteful