Is it Ethical to Work on the Tesla Autopilot Software?

The more I learn about Tesla’s self driving car development, the more concerned I become about the ethics of working as a software developer on the Tesla autopilot software. Let me explain.

Tesla’s value proposition

Tesla is selling people a prototype-quality level 1 or 2 self driving car. But it is also collecting money upfront on the promise that its cars will become level three capable via a future software update, and that the level three capability will be at least twice as safe as a human driver. Of course, full self driving cars aren’t actually legal yet. So, that will need to be sorted out by regulators in every jurisdiction (but Elon Musk is confident that they’ll come around once he demonstrates how much safer autopilot is than allowing humans to drive).

That’s a lot of assumptions

I believe Tesla is making representations to the public and its customers that border on deceptive.

Here’s a list:

  1. It’s possible for Tesla to create a safe level three self driving car in the next few years
  2. Autopilot will be safer than human drivers
  3. Tesla will be able to prove that autopilot is safer than human drivers
  4. The current hardware will support full self driving capabilities
  5. Regulators all over North America will allow the autopilot to be used on their roads without modifications
  6. Tesla will not be sued over their product quality
  7. Tesla will not be forced into a company-ending recall or go bankrupt for other reasons

Let’s walk through these points.

1. Is it possible for Tesla to create a safe level three self driving car in the next few years?

There are several parts to this argument.

a) The Tesla autopilot software needs to be perfect

Tesla can’t expect credit for the lives it saves to count against the lives that its self driving cars take. The minute a self driving car makes a serious mistake or is suspected of making a serious mistake, there are going to be lawsuits and bad publicity aplenty. What’s going to happen when a self driving car kills a bunch of kids?

Let’s do some math. By some estimates, self driving cars are going to contain something like 200 million lines of code. And the industry average defect rate is 15-50 defects per thousand lines of code (KLOC). So, if self driving car software follows industry average defect rates, we can expect the software in each car to contain 3 to 10 million errors. Are we as a society okay with that?
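
If you want to check that arithmetic, here it is spelled out (the 200 million lines and the 15-50 defects/KLOC density are the rough estimates cited above, not measured Tesla figures):

    # Back-of-the-envelope defect estimate using the figures cited above.
    lines_of_code = 200_000_000        # estimated size of self driving car software
    defects_per_kloc = (15, 50)        # industry average defect density range

    kloc = lines_of_code / 1_000
    low, high = (kloc * d for d in defects_per_kloc)
    print(f"Estimated latent defects: {low:,.0f} to {high:,.0f}")
    # Estimated latent defects: 3,000,000 to 10,000,000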

b) Achieving the requirements for a fully autonomous car is next to impossible

I can’t see how Tesla is going to get the accuracy required from its machine learning software for a fully autonomous car anytime soon. We have machine learning in many products with tons of data in domains that are far simpler than self driving cars (swipe typing, voice recognition, photo classification, language translation, etc.) and they make mistakes all the time. In self driving car software, you have to chain multiple deep learning systems together. Each part has to come to the right conclusion in some tiny fraction of a second, then the system as a whole must make the right decision and be able to execute it.

All this has to happen with whatever computing power is available in the car. And it has to cope with temperature extremes, hardware faults, bit flips from cosmic rays, mud splashed on the sensors and cameras, software errors, mechanical problems, network issues, cyberattacks, novel road conditions, aggressive and unpredictable humans, wildlife, and emergent behavior when multiple self driving cars interact (more risks here). Plus, it has to work over the life of the car. Given all of these requirements, you expect me to believe these systems will get everything right 99.9999999% of the time?
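
To see why chaining deep learning systems is so punishing, here’s a rough sketch. The stage names and per-stage accuracies are invented for illustration, and it assumes stages fail independently, which is itself a simplification:

    # Illustration: chaining stages multiplies their chances of failure.
    # Stage names and accuracies below are hypothetical.
    stage_accuracy = {
        "perception": 0.9999,
        "prediction": 0.9999,
        "planning": 0.9999,
        "control": 0.9999,
    }

    end_to_end = 1.0
    for accuracy in stage_accuracy.values():
        end_to_end *= accuracy

    print(f"End-to-end success rate per decision: {end_to_end:.6f}")
    # ~0.999600 -- roughly 4 failures per 10,000 decisions, nowhere near
    # the 99.9999999% figure questioned above.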

c) Tesla’s cars are more like prototypes than production quality cars

Building a self driving car without LiDAR is just crazy. There. I’ve said it. I know Elon Musk thinks he can get around it but I can pretty much guarantee you his cars would be safer if they also had LiDAR. For comparison, Waymo’s cars have 3 kinds of LiDAR, 5 radar sensors, and 8 cameras. Which car would you rather trust your children’s lives to?

But there are other problems too. GM and Waymo are building redundant systems into their cars so they can cope with failure conditions without killing you. Where are Tesla’s redundant systems?

The Toyota unintended acceleration case should be required reading for anyone developing or considering buying a self driving car. Here’s an excellent video (slides) of what happened and why extremely high-quality engineering processes and redundancies are required for safety-critical systems.

d) You can’t mess around with safety

When you’re developing a system that could cause deaths if it malfunctions, you’re building a safety-critical system. It’s a whole different ball game from making smart phone apps. You need to follow a process like ISO 26262. As far as I can understand it, deep learning systems cannot be certified under ISO 26262 because we can’t know what they’ll do in every case. They can’t be subjected to formal methods. Nor can they be tested exhaustively. And that’s a big problem.

Comparisons with the aviation industry

If you want to see examples of what taking safety-critical systems seriously looks like, look at the aviation industry. No deep learning allowed. Only unparalleled levels of engineering effort and quality assurance.

Airbus A330

For example, the Airbus A330 has quintuple redundancy for its flight control system!

Highlights:

  • the system runs on five computers simultaneously and only one is needed to fly the aircraft
  • each computer has two processors (known as channels) that perform the same computations and then compare their results
  • different processors are used to reduce the chance that a manufacturing or design fault could introduce an error
  • three computers make up the primary system and two computers are available as fallbacks with reduced functionality
  • the system contains four versions of the flight control software programmed by independent teams using different programming languages
  • all sensor inputs are redundant
  • all actuators are redundant

That’s a serious commitment to safety (video, slides). By the way, the A330 system I described above was introduced in 1992! Seeing what Airbus was doing over 25 years ago should make you reconsider whether Tesla has any business building safety-critical systems.
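
To make the dual-channel “compute and compare” idea concrete, here’s a minimal sketch in Python. It only illustrates the pattern; it’s not Airbus’s implementation, and the function names are made up:

    # Minimal sketch of a dual-channel "compute and compare" pattern.
    # Two independently written implementations compute the same command;
    # if they disagree, the channel declares itself failed so a redundant
    # computer can take over. (Illustrative only.)

    def command_channel_a(sensor_input: float) -> float:
        # hypothetical implementation by team A
        return sensor_input * 0.5

    def command_channel_b(sensor_input: float) -> float:
        # hypothetical implementation by team B, written independently
        return sensor_input / 2.0

    def flight_control_computer(sensor_input: float, tolerance: float = 1e-6) -> float:
        a = command_channel_a(sensor_input)
        b = command_channel_b(sensor_input)
        if abs(a - b) > tolerance:
            # Disagreement: fail safe and hand control to a backup computer.
            raise RuntimeError("channel disagreement: computer offline")
        return a

    print(flight_control_computer(10.0))  # 5.0 when both channels agree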

2. Autopilot will be safer than human drivers

The airline industry discovered ages ago that autopilot technology isn’t all upside:

  • Pilots’ flying skills get rusty when they use the autopilot.
  • And they have trouble regaining situational awareness when the autopilot disconnects unexpectedly.

Tesla is going to face both of these problems.

Air France 447 crash

The autopilot on Air France 447 disconnected unexpectedly after receiving an incorrect airspeed reading from a sensor. And three well-trained pilots, each with thousands of hours of professional experience and extensive training on that particular jet, couldn’t figure out what was happening, ignored the warnings from their computers, and crashed the plane, killing over 200 people.

This doesn’t bode well for your average car owner who hasn’t done a lick of training or studying since they first got their license.

The Air France crash isn’t an isolated incident, by the way. Pilots routinely have trouble regaining situational awareness after their autopilot systems suddenly disconnect.

Consider what would happen to your night driving or highway passing skills a few years after you got a car that routinely handled those tasks for you. It’s the paradox of automation: the more you use it, the worse you’ll perform if it unexpectedly fails.

By the way, the Air France crew couldn’t gain enough situational awareness in the three minutes they had to respond to their autopilot disconnect to save their lives. What car is ever going to figure out it can’t handle a situation three minutes beforehand? None. You may have as little as a couple of seconds in a car. Let that sink in for a minute.

Skipping level 3 autonomy

For this very reason, several car companies have decided to skip level 3 autonomy altogether. I believe Google was the first to publicly abandon the idea of level 3 autonomy and then others followed.

3. Tesla will be able to prove that autopilot is safer than human drivers

Proving that your self driving car is safer than humans in any meaningful way is going to be extremely difficult. I’m not going to drag you through all the statistics. Let me just link you to an article that does a good job of covering the scope of the problem. I’ll only hit a few key points here.

Huge sample size needed

Fatal vehicle accidents happen very infrequently (about 1 in every 94 million miles driven in the US), so you need a huge sample of fatal accidents involving Tesla’s autopilot before you can estimate its fatality rate with any statistical confidence (30 events is a rough rule of thumb). That’s a lot of deaths and a lot of driving before you can claim that your self driving car is statistically safer than human drivers.
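
Here’s the arithmetic behind that claim, under the optimistic assumption that autopilot at least matches the human fatality rate quoted above:

    # How many miles of driving it takes to accumulate ~30 fatal crashes,
    # assuming autopilot merely matches the human rate of 1 per 94 million miles.
    fatality_rate_per_mile = 1 / 94_000_000   # US average cited above
    events_needed = 30                        # rough rule of thumb

    miles_needed = events_needed / fatality_rate_per_mile
    print(f"Autopilot miles needed: {miles_needed:,.0f}")
    # Autopilot miles needed: 2,820,000,000 (about 2.8 billion miles)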

Updates or changes reset the clock

Every time you update the software or the hardware, the clock resets to zero because it’s basically a new product.

You must count the net outcome of autopilot

There are three kinds of indirect deaths that you’ll need to count against self driving cars.

  1. Deaths that occur as a result of people losing their driving skills because they have come to rely on the automation.

  2. Any deaths that result from autopilot hand-off confusion. People are really bad at taking control of cars when the automation fails. But it will be worse when you also consider that self driving cars from different companies may handle the same situation differently or that the same car could behave differently after a software update. I predict these issues will lead to many, many deaths.

  3. Deaths caused by self driving cars doing things that people just wouldn’t expect. There are many scenarios where a self driving car could do something no human would expect and cause a third party to crash.

In other words, you need the all-in outcome, not just the deaths prevented by the autopilot for Tesla owners.

Rigorous study required to determine safety of autopilot

You need to conduct a scientific study to actually figure out whether Tesla’s autopilot is safer than human drivers. You would have to take all the people who want to buy a Tesla with autopilot, sell them the car, randomly assign them to get autopilot or not, and then wait until there are at least 30 crashes in each group before you can make any claims about safety.

I’m grossly oversimplifying the process but that’s the gist of it. You can’t just look at the overall fatality or crash rate for all vehicles and compare it to the rate for Tesla cars; that’s an apples-to-oranges comparison.
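
Here’s a toy simulation of that study design. The fatality rates are invented (the autopilot arm simply assumes Tesla’s “twice as safe” claim holds), and a real study would need far more care with exposure, confounders, and stopping rules:

    # Toy simulation of the randomized study sketched above: drivers are
    # assigned to an arm and we count miles until each arm has 30 fatal crashes.
    # Per-mile fatality rates are invented for illustration.
    import random

    random.seed(42)
    rates = {"manual": 1 / 94_000_000, "autopilot": 1 / 188_000_000}
    events_needed = 30
    miles_per_step = 1_000_000   # simulate driving in 1-million-mile chunks

    def miles_until_n_fatal_crashes(rate_per_mile, n):
        crashes, miles = 0, 0
        while crashes < n:
            miles += miles_per_step
            # Expected crashes per chunk is ~0.01, so a single Bernoulli
            # draw per chunk is a reasonable approximation.
            if random.random() < rate_per_mile * miles_per_step:
                crashes += 1
        return miles

    for arm, rate in rates.items():
        miles = miles_until_n_fatal_crashes(rate, events_needed)
        print(f"{arm}: ~{miles / 1e9:.1f} billion miles to observe {events_needed} fatal crashes")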

So, in conclusion, car makers will somehow have to get approval to run huge life and death experiments on public roads to collect the data to prove that self driving cars are safe enough to use on public roads. And each hardware or software change resets the clock. Am I the only one who sees a problem with that?

4. The current hardware will support full self driving capabilities

Considering we don’t even know what it will take to make a fully autonomous car, I have my doubts.

Let me tell you a little story. I bought voice recognition software in the 1990s because I was writing a lot and the company that made the software promised me that it would work and save me a bunch of time. Guess what? It was so inaccurate that I stopped using it almost immediately. Yet every version of that software promised would-be users that advances in computing power and better algorithms had solved the problems. Lies. In fact, it’s 20 years later, my voice now gets processed in the cloud by an unimaginable amount of computing power compared to what I had in my 1990s desktop, and voice recognition is still hit and miss.

So, what’s the chance the same pattern shows itself with self driving cars? What if it takes 100 or 1,000 or 10,000 times the computational power to go from Tesla’s current autopilot to a safe fully self driving car on any road under most conditions (the level of capability that Elon Musk is hinting at)?

5. Regulators all over North America will allow the autopilot to be used on their roads

That’s a big assumption if you’re already taking deposits for the full autonomy feature. What if approval doesn’t come for a decade? Can Tesla survive that? But there are other problems too.

Different regulators may place different restrictions on the level of autonomy they will allow on their roads and the conditions in which that autonomy may be used. Can you imagine the chaos and confusion for everyone if self driving cars had radically different capabilities depending on the jurisdiction they happen to be in?

Regulators could also mandate that self driving cars meet certain requirements before they can be used. What if one of those requirements is LiDAR or adherence to ISO 26262 or no single points of failure? Tesla cannot claim any of these things.

Finally, I wonder about the possibility of a regulator making autopilot illegal after a particularly terrible incident or series of incidents. What would happen if a car with the autopilot engaged slammed into a group of preschoolers?

6. Tesla will not be sued over their product quality

Of course Tesla will be sued. It’s already being sued.

Toyota has paid out over a billion dollars in connection with the unintended acceleration issues I mentioned previously. And, while I’m no legal expert, I suspect Tesla is even more exposed than Toyota was. I expect plaintiffs’ lawyers in some huge class action lawsuit are going to easily tear Tesla apart for all the reasons I’ve mentioned throughout this post. How many lawsuits can Tesla afford?

7. Tesla will not be forced into a company-ending recall or go bankrupt for other reasons

Massive recalls seem foreseeable. Can the current hardware support full autonomy? Will regulators require LiDAR? Will regulators require Tesla to install more redundant systems? But even if all of that works out in Tesla’s favor, just think about the newness and complexity of self driving cars. Aren’t big recalls likely?

Bankruptcy for other reasons is another possibility, since Tesla is the most shorted stock in the US. Can Tesla get its Model 3 production issues straightened out? What if Tesla cannot achieve full autonomy in the next few years? Will regulators allow fully autonomous cars on the road? What if GM, Ford, Waymo, or another company gets approval for their self driving cars before Tesla? Will people want their deposits back so they can go buy a self driving car immediately? What if monitoring the autopilot is more difficult and tiring than just driving the car yourself?

Putting it all together

I’d like to return to my original question: Is it ethical to work on the Tesla autopilot software? Here are some quotes from the Software Engineering Code of Ethics.

1.03. Approve software only if they have a well-founded belief that it is safe, meets specifications, passes appropriate tests, and does not diminish quality of life, diminish privacy or harm the environment. The ultimate effect of the work should be to the public good.

1.04. Disclose to appropriate persons or authorities any actual or potential danger to the user, the public, or the environment, that they reasonably believe to be associated with software or related documents.

1.06. Be fair and avoid deception in all statements, particularly public ones, concerning software or related documents, methods and tools.

2.01. Provide service in their areas of competence, being honest and forthright about any limitations of their experience and education.

3.10. Ensure adequate testing, debugging, and review of software and related documents on which they work.

6.07. Be accurate in stating the characteristics of software on which they work, avoiding not only false claims but also claims that might reasonably be supposed to be speculative, vacuous, deceptive, misleading, or doubtful.

6.10. Avoid associations with businesses and organizations which are in conflict with this code.

[All emphasis is mine.]

Of course, I have no first hand knowledge that anything unethical is happening at Tesla. But suppose even half of what I’ve presented here is true. Is it ethical to work on the Tesla autopilot software?

Staffing problems

Tesla’s made the news several times over the departures of key members of the autopilot team. See here, here, and here. Are people leaving the autopilot team on ethical grounds?

And then there’s this tweet from Elon Musk:

We are looking for hardcore software engineers. No prior experience with cars required. Please include code sample or link to your work.

If these engineers don’t have experience with cars or aerospace, are they at risk of violating principle 2.01 (Providing service in areas of competence…)?

Final arguments

I have serious concerns about what’s going on at Tesla. Building a safe self driving car is going to be incredibly difficult. And nobody should be allowed to cut corners in the interest of getting to market faster or making more profit. Tesla itself has called its software releases “beta-tests”. I don’t know how you can beta test in public with untrained customers as drivers in good conscience. At a minimum I’d like to see self driving cars tested like this.

The aviation industry would never develop a new technology this way

Do you think Boeing or Airbus could get away with beta-testing a deep learning autopilot system in planes carrying passengers? Not likely. What about if they claimed it would be twice as safe as their current autopilot systems? Irrelevant. Can you imagine the firestorm if a passenger jet crashed and killed everyone on board because the beta-software malfunctioned? So, how are we even talking about doing this with cars?

Is Elon Musk’s approach to self driving cars destined to fail?

Earlier this month Elon Musk tweeted the following in response to the problems with the Model 3 production line:

Yes, excessive automation at Tesla was a mistake. To be precise, my mistake. Humans are underrated.

I bring this up for two reasons. First, Elon Musk has cultivated a reputation for being the smartest person in pretty much any room. But he does make mistakes. And we shouldn’t blindly trust his judgement. Second, I wonder if he’ll be forced to admit the exact same thing about the Tesla autopilot system in the next couple of years. Read that tweet again; it would work perfectly.

The bottom line

I’m all for new technology and making roads safer but Tesla’s approach just seems reckless and unethical. Call me a stick-in-the-mud if you must but I want Tesla autopilot software that’s designed, engineered, built, tested, validated, and supported like modern Airbus jetliner software, not a buggy prototype for a smart phone app built as quickly as possible by whatever software developers Elon could drum up on Twitter.

Agree or disagree, I’d love to hear your thoughts.

Comments

    • Blaine Osepchuk

      Thanks for the link, Keef.

      (For those who don’t want to read the article, it basically says that the NHTSA did not evaluate the claims made by Tesla about the safety of autopilot or the effectiveness of the autopilot itself.)

    • vliscobx

      Having had just some rudimentary exposure to Operations Research programming and AI, it was always clear to me that there never was a path of stepwise refinement here, OTA or otherwise. That is just not how these things work.
