Meltdown: What Plane Crashes, Oil Spills, and Dumb Business Decisions Can Teach Us About How to Succeed
March 5, 2019
“A crash on the Washington, D.C. metro system. An accidental overdose in a state-of-the-art hospital. An overcooked holiday meal.
At first glance, these disasters seem to have little in common. But surprising new research shows that all these events—and the myriad failures that dominate headlines every day—share similar causes.
By understanding what lies behind these failures, we can design better systems, make our teams more productive, and transform how we make decisions at work and at home.”
The new book Meltdown explores how complexity causes failure in all kinds of modern systems, from social media to air travel, and how we can prevent meltdowns in business and life.
Weaving together cutting-edge social science with riveting stories that take us from the frontlines of the Volkswagen scandal to backstage at the Oscars, and from deep beneath the Gulf of Mexico to the top of Mount Everest, Chris Clearfield and András Tilcsik explain how the increasing complexity of our systems creates conditions ripe for failure and why our brains and teams can’t keep up. They highlight the paradox of progress: Though modern systems have given us new capabilities, they’ve become vulnerable to surprising meltdowns–and even to corruption and misconduct.
But Meltdown isn’t just about failure; it’s about solutions—whether you’re managing a team or the chaos of your family’s morning routine. It reveals why ugly designs make us safer, how a five-minute exercise can prevent billion-dollar catastrophes, why teams with fewer experts are better at managing risk, and why diversity is one of our best safeguards against failure. The result is an eye-opening, empowering, and entirely original book–one that will change the way you see our complex world and your own place in it.
“More and more of our systems are in the danger zone, but our ability to manage them hasn’t quite caught up. The result: things fall apart.”
As systems become more complex, we are more vulnerable to unexpected system failures. In Meltdown, the authors examine a fatal D.C. Metro train accident, the Three Mile Island disaster, the collapse of Enron, the 2012 meltdown of Knight Capital, the Flint water crisis, and the 2017 Oscars mix-up, among other meltdowns, and discover that while these failures stem from very different problems, their underlying causes are surprisingly similar. The stories told here are a compelling look behind the scenes of why failures occur in today’s many complex systems.
The authors draw on sociologist Charles Perrow’s theory that as a system’s complexity and “tight coupling” (a lack of slack or margin between its parts) increase, so does the chance of a meltdown. In other words, these failures are driven by “the connections between the different parts, rather than the parts themselves.”
Some systems are linear, and in these systems the source of a breakdown is obvious. But as systems become complex, as at a nuclear power plant, their parts interact in hidden and unexpected ways. Because these systems are more like a web, when they break down it is difficult to figure out exactly what went wrong. Worse still, it is almost impossible to predict where a system will fail or what the consequences of even a small failure might be.
As more and more of our systems become complex and tightly coupled, what can we do? How do we keep up with them?
Oddly enough, safety features are not the answer. They become part of the system and thereby add to the complexity. And when something goes wrong, we like to add even more safety features into the system. “It’s like the old fable: cry wolf every eight minutes, and soon people will tune you out. Worse, when something does happen, constant alerts make it hard to sort out the important from the trivial.”
There are ways to make complex systems more transparent. One is the premortem: imagine that, at some point in the future, your project has failed, and write down all the reasons why you think it happened. A 1989 study showed that this kind of “prospective hindsight” boosts our ability to identify reasons why an outcome might occur, letting us deal with potential problems before they arise.
We also should encourage feedback and sharing of failures and near-misses. “By openly sharing stories of failures and near failures—without blame or revenge—we can create a culture in which people view errors as an opportunity to learn rather than as the impetus for a witch hunt.”
Encourage dissent with a more open leadership style. People in power tend to dismiss others’ opinions, so leaders should speak last. And you have to work on the culture: ironically, the authors note, the very need for anonymous feedback channels highlights how risky people feel it is to speak up openly.
Bring in outsiders and add diversity of thought. Outsiders will see things we don’t and are more willing to ask uncomfortable questions. In a more diverse environment, we also tend to be more vigilant and to question more. When we are around people just like us, we tend to trust their judgment, which can lead to too much conformity. “Diversity is like a speed bump. It’s a nuisance, but it snaps us out of our comfort zone and makes it hard to barrel ahead without thinking. It saves us from ourselves.”
Transparent design matters. We need to see what is going on under the hood. Being able to see the state of a system by simply looking at it can be an important safeguard.
These are just a sampling of the ways we can learn to manage complex systems. This doesn’t mean we should take fewer risks. On the contrary, these solutions—structured decision tools, diverse teams, and norms that encourage healthy skepticism and dissent—“tend to fuel, rather than squelch, innovation and productivity. Adopting these solutions is a win-win.”
We can make our systems more forgiving of our mistakes by thinking critically and clearly about our own systems. How many things have to go right at the same time for this to work? Can we simplify it? How can we add margin?