Chapter 1 | A Routine Operation

▪ Doctors were effectively killing patients for the better part of 1,700 years not because they lacked intelligence or compassion, but because they did not recognize the flaws in their own procedures. If they had conducted a clinical trial (an idea we will return to),* they would have spotted the defects in bloodletting, and this would have set the stage for progress.

▪ Medicine has a long way to go, and suffers from many defects, as we shall see, but a willingness to test ideas and to learn from mistakes has transformed its performance.

▪ [F]or our purposes, a closed loop is one where failure doesn’t lead to progress because information on errors and weaknesses is misinterpreted or ignored; an open loop does lead to progress because the feedback is rationally acted upon.

▪ Doctors do not invent reasons for an accident to pull the wool over the eyes of their patients. Rather, they deploy a series of euphemisms—“technical error,” “complication,” “unanticipated outcome”—each of which contains an element of truth, but none of which provides the whole truth.

Chapter 2 | United Airlines 173

▪ But what is important for our purposes is not the similarity between the two accidents; it is the difference in response. We have already seen that in health care, the culture is one of evasion. Accidents are described as “one-offs” or “one of those things.” Doctors say: “We did the best we could.” This is the most common response to failure in the world today.

▪ In the aftermath of the investigation the report is made available to everyone. Airlines have a legal responsibility to implement the recommendations. Every pilot in the world has free access to the data. This practice enables everyone—rather than just a single crew, or a single airline, or a single nation—to learn from the mistake.

▪ The current ambition is to increase the quantity of real-time data so as to render the black boxes redundant. All the information will already have been transmitted to a central database

▪ In each case the investigators realized that crews were losing their perception of time. Attention, it turns out, is a scarce resource: if you focus on one thing, you will lose awareness of other things.

▪ This is now a well-studied aspect of psychology. Social hierarchies inhibit assertiveness. We talk to those in authority in what is called “mitigated language.” You wouldn’t say to your boss: “It’s imperative we have a meeting on Monday morning.” But you might say: “Don’t worry if you’re busy, but it might be helpful if you could spare half an hour on Monday.” This deference makes sense in many situations, but it can be fatal when a 90-ton airplane is running out of fuel above a major city.

▪ Psychologists often make a distinction between mistakes where we already know the right answer and mistakes where we don’t

▪ even if we practice diligently, we will still endure real-world failure from time to time. And it is often in these circumstances, when failure is most threatening to our ego, that we need to learn most of all. Practice is not a substitute for learning from real-world failure; it is complementary to it.

▪ Most of all his investigations reveal that in order to learn from failure, you have to take into account not merely the data you can see, but also the data you can’t.

▪ His seminal paper for the military was not declassified until July 1980, but can be found today via a simple search on Google. It is entitled: “A Method of Estimating Plane Vulnerability Based on Damage of Survivors.”16
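
Wald’s insight is easier to grasp with a toy simulation: if hits to certain sections tend to bring a plane down, those sections will show few bullet holes on the aircraft that return, even though they are hit just as often. The sketch below is a minimal illustration in Python; the section names, hit probabilities, and lethality values are invented for the example, not Wald’s actual figures or his estimator.

```python
import random

# Hypothetical sections, each hit with equal probability per sortie (illustrative only).
SECTIONS = ["engine", "cockpit", "fuel_tank", "fuselage", "wings"]
HIT_PROB = 0.2
# Assumed chance that a hit to a given section downs the plane.
LETHALITY = {"engine": 0.8, "cockpit": 0.7, "fuel_tank": 0.6, "fuselage": 0.1, "wings": 0.1}

rng = random.Random(42)
hits_on_survivors = {s: 0 for s in SECTIONS}  # the data you can see
hits_overall = {s: 0 for s in SECTIONS}       # the data you can't

for _ in range(100_000):
    hits = [s for s in SECTIONS if rng.random() < HIT_PROB]
    survived = all(rng.random() > LETHALITY[s] for s in hits)
    for s in hits:
        hits_overall[s] += 1
        if survived:
            hits_on_survivors[s] += 1

for s in SECTIONS:
    print(f"{s:10s} holes on returning planes: {hits_on_survivors[s]:6d}   hits in total: {hits_overall[s]:6d}")
```

The engine and cockpit come back almost unmarked precisely because planes hit there rarely come back at all, which is why Wald recommended armouring the areas with the fewest holes.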

Chapter 3 | The Paradox of Success

▪ Most closed loops exist because people deny failure or try to spin it. With pseudosciences the problem is more structural. They have been designed, wittingly or otherwise, to make failure impossible. That is why, to their adherents, they are so mesmerizing. They are compatible with everything that happens. But that also means they cannot learn from anything.

▪ By looking only at the theories that have survived, we don’t notice the failures that made them possible.

▪ The first is a system. Errors can be thought of as the gap between what we hoped would happen and what actually did happen. Cutting-edge organizations are always seeking to close this gap, but in order to do so they have to have a system geared up to take advantage of these learning opportunities. This system may itself change over time: most experts are already trialing methods that they hope will surpass the Toyota Production System. But each system has a basic structure at its heart: mechanisms that guide learning and self-correction.

▪ Even the most beautifully constructed system will not work if professionals do not share the information that enables it to flourish. In the beginning at Virginia Mason, the staff did not file Patient Safety Alerts. They were so fearful of blame and reputational damage that they kept the information to themselves. Mechanisms designed to learn from mistakes are impotent in many contexts if people won’t admit to them. It was only when the mindset of the organization changed that the system started to deliver amazing results.

▪ The culture implies that senior clinicians are infallible. Is it any wonder that errors are stigmatized and that the system is set up to ignore and deny rather than investigate and learn?

▪ But it took another 194 years for the British Royal Navy to enact new dietary guidelines. And it wasn’t until 1865 that the British Board of Trade created similar guidelines for the merchant fleet. That is a glacial adoption rate. “The total time from Lancaster’s definitive demonstration of how to prevent scurvy to adoption across the British Empire was 264 years,” Gillam says

▪ When the probability of error is high, learning from mistakes is more essential, not less. As Professor James Reason, one of the world’s leading experts on system safety, put it: “This is the paradox in a nutshell: health care by its nature is highly error-provoking—yet health workers stigmatize fallibility and have had little or no training in error management or error detection.”

Chapter 4 | Wrongful Convictions

▪ It is noteworthy that when a court of criminal appeal was first proposed in England and Wales in the early nineteenth century, the strongest opponents were judges. The court had a simple rationale: to provide an opportunity for redress. It was an institutional acknowledgment that mistakes were possible. The judges were against it, in large part, because they denied the premise.

▪ When we are confronted with evidence that challenges our deeply held beliefs we are more likely to reframe the evidence than we are to alter our beliefs. We simply invent new reasons, new justifications, new explanations. Sometimes we ignore the evidence altogether.

▪ Festinger’s great achievement was to show that cognitive dissonance is a deeply ingrained human trait. The more we have riding on our judgments, the more we are likely to manipulate any new evidence that calls them into question.

▪ DNA evidence is indeed strong, but not as strong as the desire to protect one’s self-esteem.

Chapter 5 | Intellectual Contortions

▪ It is worth noting here, too, the relationship between the ambiguity of our failures and cognitive dissonance. When a plane has crashed, it’s difficult to pretend the system worked just fine. The failure is too stark, too dramatic. This is what engineers call a red flag: a feature of the physical world that says “you are going wrong.” It is like driving to a friend’s house, taking a wrong turn, and hitting a dead end.

▪ It is precisely in order to live with themselves, and the fact that they have harmed patients, that doctors and nurses reframe their errors in the first place. This protects their sense of professional self-worth and morally justifies the practice of nondisclosure.

▪ This is a classic response predicted by cognitive dissonance: we tend to become more entrenched in our beliefs (like those in the capital punishment experiment, whose views became more extreme after reading evidence that challenged their views, and the members of the cult who became more convinced of the truth of their beliefs after the apocalyptic prophecy failed)

▪ most paradoxical aspect of cognitive dissonance. It is precisely those thinkers who are most renowned, who are famous for their brilliant minds, who have the most to lose from mistakes. And that is why it is often the most influential people, those who ought to be in the best position to help the world learn from new evidence, who have the greatest incentive to reframe it.

▪ “Ironically, the more famous the expert, the less accurate his or her predictions tended to be.”

▪ Ironically enough, the higher people are in the management hierarchy, the more they tend to supplement their perfectionism with blanket excuses, with CEOs usually being the worst of all. For example, in one organization we studied, the CEO spent the entire forty-five-minute interview explaining all the reasons why others were to blame for the calamity that hit his company. Regulators, customers, the government, and even other executives within the firm—all were responsible. No mention was made, however, of personal culpability.

▪ avoiding failure in the short term has an inevitable outcome: we lose bigger in the longer term. This is, in many ways, a perfect metaphor for error-denial in the world today: the external incentives—even when they reward a clear-eyed analysis of failure—are often overwhelmed by the internal urge to protect self-esteem. We spin the evidence even when it costs us.

▪ The pattern is rarely uncovered unless subjects are willing to make mistakes—that is, to test numbers that violate their belief. Instead most people get stuck in a narrow and wrong hypothesis, as often happens in real life, such that their only way out is to make a mistake that turns out not to be a mistake after all. Sometimes, committing errors is not just the fastest way to the correct answer; it’s the only way

▪ They are not dishonest people; they are often unaware of the reframing exercise because it is largely subconscious. If there were independent investigations into adverse events, these mistakes would be picked up during the “black box” analysis and doctors would be challenged on them, and learn from them.

▪ Admitting to error becomes so threatening that in some cases surgeons (decent, honorable people) would rather risk killing a patient than admit they might be wrong.

▪ Intelligence and seniority, when allied to cognitive dissonance and ego, form one of the most formidable barriers to progress in the world today. In one study in twenty-six acute-care hospitals in the United States, nearly half of the errors reported were made by registered nurses.

Chapter 6 | Reforming Criminal Justice

▪ Lysenko had publicly come out in favor of a technique of close planting of crop seeds in order to increase output. The theory was that plants of the same species would not compete with each other for nutrients. This fitted in with Marxist and Maoist ideas about organisms from the same class living in harmony rather than in competition. “With company, they grow easy,” Mao told colleagues. “When they grow together, they will be comfortable.”

▪ It contributed to one of the worst disasters in Chinese history, a tragedy that even now has not been fully revealed. Historians estimate that between 20 and 43 million people died during one of the most devastating famines in human history

▪ they discovered that mistaken eyewitness identification was a contributing factor in an astonishing 75 percent of cases.10 People were testifying in open court that they had seen people at the scene of a crime who in fact were elsewhere at the time.

▪ But Danziger found something quite different: if the case was assessed by a judge just after he had eaten breakfast, the prisoner had a 65 percent chance of getting parole. But as time passed through the morning, and the judges got hungry, the chances of parole gradually diminished to zero. Only after the judges had taken a break to eat did the odds shoot back up to 65 percent, only to decrease back to zero over the course of the afternoon

Chapter 7 | The Nozzle Paradox

▪ Progress had been delivered not through a beautifully constructed master plan (there was no plan), but by rapid interaction with the world. A single, outstanding nozzle was discovered as a consequence of testing, and discarding, 449 failures

▪ What the development of the nozzle reveals, above all, is the power of testing. Even though the biologists knew nothing about the physics of phase transition, they were able to develop an efficient nozzle by trialing lots of different ones, rejecting those that didn’t work and then varying the best nozzle in each generation.
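
The strategy the biologists used (generate many variants, keep whichever performs best, then vary the winner again) is cumulative selection, and it can be sketched in a few lines of code. This is a minimal illustration, not Unilever’s actual procedure: the “nozzle” here is just a list of shape parameters, and the scoring function is a made-up stand-in for the physical spray test, which could never have been written down as a formula.

```python
import random

# A "nozzle" is represented as a list of shape parameters; the score function is a
# stand-in for the real spray test, which could only be run physically.
TARGET = [0.3, 1.7, 0.8, 2.2, 0.5]            # unknown optimum, hidden from the search

def performance(nozzle):
    # Higher is better; in reality this would be a trial, not a formula.
    return -sum((a - b) ** 2 for a, b in zip(nozzle, TARGET))

def vary(nozzle, rng, spread=0.2):
    # Produce a slightly mutated copy of the current best design.
    return [p + rng.gauss(0, spread) for p in nozzle]

rng = random.Random(0)
best = [1.0] * len(TARGET)                     # an arbitrary first design
for generation in range(45):                   # 45 generations of 10 variants ≈ 450 trials
    candidates = [vary(best, rng) for _ in range(10)]
    winner = max(candidates, key=performance)
    if performance(winner) > performance(best):
        best = winner                          # keep the winner, discard the failures

print("final design:", [round(p, 2) for p in best])
```

No knowledge of the underlying physics appears anywhere in the loop; the rejected candidates are not waste but the very information the search runs on.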

▪ The equivalent of natural selection in a market system is bankruptcy. When a company goes bust it is a bit like the failure of a particular nozzle design. It reveals that something (product, price, strategy, advertising, management, process, etc.) wasn’t working compared with the competition.

▪ Now, compare this with centrally planned economies, where there are almost no failures at all. Companies are protected from failure by subsidy. The state is protected from failure by the printing press, which can inflate its way out of trouble. At first, this may look like an enlightened way to go about solving the problems of economic production, distribution, and exchange. Nothing ever fails and, by implication, everything looks successful.

▪ There are problems of monopoly, collusion, inequality, price-fixing, and companies that are too big to fail and therefore protected by a taxpayer guarantee. All these things militate against the adaptive process. But the underlying point remains: markets work not in spite of the many business failures that occur, but because of them

▪ Amateurs and artisans, men of practical wisdom, motivated by practical problems, worked out how to build these machines, by trying, failing, and learning. They didn’t fully understand the theory underpinning their inventions. They couldn’t have talked through the science. But—like the Unilever biologists—they didn’t really need to

▪ [T]here is a profound obstacle to testing, a barrier that prevents many of us from harnessing the upsides of the evolutionary process. It can be summarized simply, although the ramifications are surprisingly deep: we are hardwired to think that the world is simpler than it really is. And if the world is simple, why bother to conduct tests? If we already have the answers, why would we feel inclined to challenge them?

▪ [Unilever biologists.] They didn’t regard the rejected nozzles as failures because they were part and parcel of how they learned. All those rejected designs were regarded as central to their strategy of cumulative selection, not as an indictment of their judgment.

▪ In health care, the assumptions are very different. Failures are seen not as an inevitable consequence of complexity, but as indictments of those who make them, particularly among senior doctors whose self-esteem is bound up with the notion of their infallibility. It is difficult to speak up about concerns, because powerful egos come into play. The consequence is simple: the system doesn’t evolve.

▪ Houston was clever enough to know that his product wasn’t a guaranteed winner. Predicting whether consumers will actually buy a product is often treacherous. But he was quietly confident and wanted to give it a go. However, after a year he wondered if he would ever get a shot.

▪ As Vanier explains, if he can launch ten features in the same time it takes a competitor to launch one, he’ll have ten times the amount of experience to draw from in figuring out what has failed the test of customer acceptance and what has succeeded.

▪ Babineaux and Krumboltz, the two psychologists, have some advice for those who are prone to the curse of perfectionism. It involves stating the following mantras: “If I want to be a great musician, I must first play a lot of bad music.” “If I want to become a great tennis player, I must first lose lots of tennis games.” “If I want to become a top commercial architect known for energy-efficient, minimalist designs, I must first design inefficient, clunky buildings.”

▪ Smokers compensated for the lack of nicotine by smoking more cigarettes and taking longer and deeper drags. The net result was an increase in carcinogens and carbon monoxide. That is what happens in systems populated by human beings: there are unintended consequences. And this is why it is difficult to formulate an effective strategy from on high, via a blueprint

▪ The key is to adjust the flight of the bullet, to integrate this new information into the ongoing trajectory. Success is not just dependent on before-the-event reasoning, it is also about after-the-trigger adaptation. The more you can detect failure (i.e., deviation from the target), the more you can finesse the path of the bullet onto the right track. And this, of course, is the story of aviation, of biological evolution and well-functioning markets.

▪ Clinging to cherished ideas because you are personally associated with them is tantamount to ossification. As the great British economist John Maynard Keynes put it: “When my information changes, I alter my conclusions. What do you do, sir?”

▪ It is for this reason that many of the most influential development campaigners argue that the most important issue when it comes to charitable giving is not just raising more money, but conducting tests, understanding what is working and what isn’t, and learning.

Chapter 8 | Scared Straight?

▪ The results of this unique program are astounding. Participating communities report that 80 to 90 percent of the kids that they send to Rahway go straight after leaving this stage. That is an amazing success story. And it is unequalled by traditional rehabilitation methods

▪ We don’t observe what would have happened if we had not gotten married. Or see what would have happened if we had taken a different job. We can speculate on what would have happened, and we can make decent guesses. But we don’t really know. This may seem like a trivial point, but the implications are profound

▪ Often, failure is clouded in ambiguity. What looks like success may really be failure and vice versa. And this, in turn, represents a serious obstacle to progress. After all, how can you learn from failure if you are not sure you have actually failed? Or, to put it in the language of the last chapter, how can you drive evolution without a clear selection mechanism?

▪ Indeed, it is entirely possible that sales would have gone up even more if you had not changed the website.

▪ But it is also worth noting that such considerations carry little weight when it comes to life-threatening conditions. If you find yourself in the middle of an epidemic of, for example, smallpox or Ebola, you will want the vaccine even if there is a risk of complications in a few decades’ time.10

▪ The majority of our assumptions have never been subject to robust failure tests. Unless we do something about it they never will be.

▪ [T]he results, when they finally arrived, were dramatic. Scared Straight didn’t work. The children who attended Rahway were more likely to commit crimes than those who did not. “The evidence showed that the kids who went on the program were at greater risk of offending than those who didn’t,” Finckenauer said. “The data when you compared the treatment and control group was clear.”

▪ language, the authors effectively damned its entire rationale: “We conclude that programs like Scared Straight are likely to have a harmful effect and increase delinquency . . . Doing nothing would have been better than exposing juveniles to the program.”20

▪ But, as with medieval bloodletting, observational stats do not always provide reliable data. Often, you need to test the counterfactual. Otherwise you may be harming people without even realizing it
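
“Testing the counterfactual” in practice means a randomized comparison: assign people to the program or to a control group by chance, then compare the two outcome rates, rather than looking at the treated group in isolation. The sketch below is a minimal illustration; the reoffending rates are invented numbers, not the Scared Straight data.

```python
import random

def run_trial(n=10_000, base_rate=0.30, treatment_effect=0.05, seed=1):
    """Randomize subjects into treatment/control, simulate outcomes, compare rates.

    The rates are illustrative. A positive treatment_effect means the program makes
    the bad outcome (reoffending) MORE likely, i.e. it causes harm.
    """
    rng = random.Random(seed)
    outcomes = {"treatment": [], "control": []}
    for _ in range(n):
        group = rng.choice(["treatment", "control"])        # the randomization step
        p = base_rate + (treatment_effect if group == "treatment" else 0.0)
        outcomes[group].append(rng.random() < p)            # True = reoffended

    for group, results in outcomes.items():
        rate = sum(results) / len(results)
        print(f"{group:9s} n={len(results):5d}  reoffending rate = {rate:.3f}")

run_trial()
```

Looking at the treatment group on its own, most participants stayed out of trouble, which sounds like success; only the side-by-side control rate reveals that doing nothing would have done better.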

▪ It doesn’t require people to be actively deceitful or negligent for mistakes to be perpetuated. Sometimes it can happen in plain view of the evidence, because people either don’t know how to, or are subconsciously unwilling to, interrogate the data

▪ “Don’t smile at another man in prison, ’cause if you smile at another man in prison, that makes them think that you like them, and for you to like another man in prison, something seriously is wrong with you.”

Chapter 9 | Marginal Gains

▪ His answer was clear: “It is about marginal gains,” he said. “The approach comes from the idea that if you break down a big goal into small parts, and then improve on each of them, you will deliver a huge increase when you put them all together.”
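
A quick piece of arithmetic shows why many small improvements can add up to a big result, under the assumption (mine, for illustration) that the gains are independent and compound multiplicatively:

```python
# Illustrative only: 100 components of a process, each improved by just 1%.
components = 100
gain_per_component = 0.01
combined = (1 + gain_per_component) ** components
print(f"combined improvement factor: {combined:.2f}x")   # roughly 2.7x overall
```

In reality the gains interact and rarely compound this cleanly, but the arithmetic explains why the aggregation of marginal gains can be so powerful.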

▪ The results, when they came in, were both emphatic and surprising. The students in the schools that received free textbooks didn’t perform any better than those who did not. The test results in the two groups of schools were almost identical. This outcome contradicted intuition and the observational data. But then randomized trials often do

▪ As Toto Wolff, the charismatic executive director of the team, put it: “We make sure we know where we are going wrong, so we can get things right.”

▪ The basic proposition of this book is that we have an allergic attitude to failure. We try to avoid it, cover it up, and airbrush it from our lives. We have looked at cognitive dissonance, the careful use of euphemisms, anything to divorce us from the pain we feel when we are confronted with the realization that we have underperformed.

▪ The fact that Divine’s shade lost out in this trial didn’t mean he was a poor designer. Rather, it showed that his considerable knowledge was insufficient to predict how a tiny alteration in shade would impact consumer behavior. But then nobody could have known that for sure. The world is too complex.

▪ The brand, which operates casinos and resorts across America, reportedly has three golden rules for staff: “Don’t harass women, don’t steal, and you’ve got to have a control group.”

▪ And we will see that beneath the inspirational stories told about these shifts, the deepest and most overlooked truth is that innovation cannot happen without failure. Indeed, the aversion to failure is the single largest obstacle to creative change, not just in business but beyond

Chapter 10 | How Failure Drives Innovation

▪ This aspect of the creative process, the fact that it emerges in response to a particular difficulty, has spawned its own terminology. It is called the “problem phase” of innovation. “The damn thing had been bugging me for years,” Dyson says of the conventional vacuum cleaner. “I couldn’t bear the inefficiency of the technology. It wasn’t so much a ‘problem phase’ as a ‘hatred phase.’”

▪ “Creativity should be thought of as a dialogue. You have to have a problem before you can have the game-changing riposte.”

▪ Free-wheeling is welcome; don’t be afraid to say anything that comes to mind. However, in addition, most studies suggest that you should debate and even criticize each other’s ideas [my italics].”

▪ The problem with brainstorming is not its insistence on free-wheeling or quick association. Rather, it is that when these ideas are not checked by the feedback of criticism, they have nothing to respond to. Criticism surfaces problems. It brings difficulties to light. This forces us to think afresh. When our assumptions are violated we are nudged into a new relationship with reality. Removing failure from innovation is like removing oxygen from a fire

▪ Failures feed the imagination. You cannot have the one without the other.”

▪ When a blue slide was shown, the assistant called out “green.” And this is when something odd happened. When Nemeth then asked these volunteers to free-associate on the colors that had been wrongly identified, they suddenly became far more creative. They came up with associations that reached way beyond tired convention. Blue became “jeans” or “lonely” or “Miles Davis.”5

▪ Contradictory information jars, in much the same way that error jars. It encourages us to engage in a new way. We start to reach beyond our usual thought processes (why would you think differently when things are going just as expected?). When someone shouts out the wrong color, our conventional mental operations are disrupted.

▪ [I]nnovation is highly context-dependent. It is a response to a particular problem at a particular time and place. Take away the context, and you remove both the spur to innovation, and its raw material

▪ We noted earlier that we tend to overlook what happens before the moment of epiphany. But, if anything, we are even more neglectful of what happens afterward. This is a serious oversight because it obscures the reason why some people change the world while others are footnotes in the patent catalog.

▪ Dyson puts it: “If insight is about the big picture, development is about the small picture. The trick is to sustain both perspectives at the same time.”

▪ They found that only 9 percent of the pioneers ended up as the final winners. They also found that 64 percent of pioneers failed outright

▪ [T]he companies that may not have come up with an idea first, but who made it work? The answer can be conveyed in one word: discipline. This is not just the discipline to iterate a creative idea into a rigorous solution; it is also the discipline to get the manufacturing process perfect, the supply lines faultless, and delivery seamless.*

▪ It is when we fail that we learn new things, push the boundaries, and become more creative. Nobody had a new insight by regurgitating information, however sophisticated.

▪ My strategy has always been: be wrong as fast as we can . . . which basically means, we’re gonna screw up, let’s just admit that. Let’s not be afraid of that. But let’s do it as fast as we can so we can get to the answer. You can’t get to adulthood before you go through puberty. I won’t get it right the first time, but I will get it wrong really soon, really quickly.

Chapter 11 | Libyan Arab Airlines Flight 114

▪ [I]f our first reaction is to assume that the person closest to a mistake has been negligent or malign, then blame will flow freely and the anticipation of blame will cause people to cover up their mistakes.

▪ In the low-blame teams, on the other hand, this finding was reversed. They were reporting more errors, but were making fewer errors overall.*

▪ It was precisely because the nurses in low-blame teams were reporting so many errors that they were learning from them, and not making the same mistakes again. Nurses in the high-blame teams were not speaking up because they feared the consequences, and so learning was being squandered

▪ In management courses today, a contrast is often offered between a “blame culture” and an “anything goes” culture. In this conception, the cultural challenge is to find a sensible balance between these two, seemingly competing objectives. Blame too much and people will clam up. Blame too little and they will become sloppy

▪ The six phases of a project:

  1. Enthusiasm
  2. Disillusionment
  3. Panic
  4. Search for the guilty
  5. Punishment of the innocent
  6. Rewards for the uninvolved.

▪ But professionals working on the ground have crucial data to share in almost any context. Health care, for example, cannot begin to reform procedures if doctors do not report their failures. And scientific theories cannot evolve if scientists cover up data that reveal the weaknesses in existing hypotheses. That is why openness is not an optional extra, a useful cultural add-on. Rather, it is a prerequisite for any adaptation worthy of the name.

▪ The impetus that drives learning from mistakes is precisely the same as the one that aims at a just culture. Forward-looking accountability is nothing more and nothing less than learning from failure. To generate openness, we must avoid preemptive blaming. All these things interlock in a truly adaptive system. As the philosopher Karl Popper put it: “True ignorance is not the absence of knowledge, but the refusal to acquire it.”

Chapter 13 | The Beckham Effect

▪ He was once called “an evangelist for failure.” “The most important quality I look for in people coming to Dyson is the willingness to try, fail and learn. I love that spirit, all too rare in the world today,” he says

▪ Moser was aware that previous studies had shown that people tend to learn more rapidly when their brains exhibit two responses. First, a larger ERN signal (i.e., a bigger reaction to the mistake), and second, a steady Pe signal (i.e., people are paying attention to the error, focusing on it, so they are more likely to learn from it).

▪ [T]hose in the Growth Mindset had a Pe signal three times larger (an amplitude of 15 compared with only 5). “That is a huge difference,” Moser has said.

▪ [F]or those in Growth Mindset cultures, everything changed. The culture was perceived as more honest and collaborative, and the attitude toward errors was far more robust. They tended to agree with statements like “This company genuinely supports risk-taking and will support me even if I fail” or “When people make mistakes, this company sees the learning that results as ‘value added’” or “People are encouraged to be innovative in this company—creativity is welcomed.”

▪ When someone is given a new challenge, like giving a major presentation to clients, it is inevitable that they will be less than perfect the first time around. It takes time to build expertise, even for exceptional people. But there are huge differences in how individuals respond. Some love the challenge. They elicit feedback, talk to colleagues, and seek out chances to be involved in future presentations. Always—and I mean always—they improve. But others are threatened by the initial “failure.” In fact, they engage in astonishingly sophisticated avoidance strategies to ensure they are never put in that situation ever again.

▪ When it comes to creating a dual-cyclone vacuum cleaner, learning how to take a world-class free kick, or becoming an expert chess player or military leader, success requires long application. It demands a willingness to strive and persevere through difficulties and challenges

Chapter 14 | Redefining Failure

▪ It was only years later, when reading about cognitive dissonance and the Fixed Mindset, that the pieces fell into place: they were so terrified of underperforming, so worried that the exam might reveal that they were not very clever, that they needed an alternative explanation for possible failure. They effectively sabotaged their own chances in order to gain one

▪ It is precisely because the project really matters that failure is so threatening—and why they desperately need an alternative explanation for messing up. As one psychologist put it: “One can admit to a minor flaw [drinking] in order to avoid admitting to a much more threatening one [I am not as bright as I like to think].”13

Coda | The Big Picture

▪ Error, under the Greeks, was no longer catastrophic, or threatening, or worth killing over. On the contrary, if someone had persuasive evidence revealing the flaws in your beliefs, it was an opportunity to learn, to revise your model of the world

▪ We noted in chapter 7 that many of the seminal thinkers of the last two centuries favored free markets and free societies precisely because they resist the human tendency to impose untested answers from above.

▪ In high tech, as we have seen, the world is moving so fast that entrepreneurs have found it necessary to adopt rapid iteration. They may have bold ideas, but they give them a chance to fail early through the minimum viable product (MVP). And if the idea survives the verdict of early adopters, it is iterated into better shape by harnessing the feedback of end users.

▪ This is the notion we need to instil in our children: that failure is a part of life and learning, and that the desire to avoid it leads to stagnation

▪ The pre-mortem is crucially different from considering what might go wrong. With a pre-mortem, the team is told, in effect, that “the patient is dead”: the project has failed; the objectives have not been met; the plans have bombed. Team members are then asked to generate plausible reasons why. By making the failure concrete rather than abstract, it alters the way the mind thinks about the problem.

▪ It has also been backed by a host of leading thinkers, including Daniel Kahneman. “The pre-mortem is a great idea,” he said. “I mentioned it at Davos . . . and the chairman of a large corporation said it was worth coming to Davos for.”14

▪ A pre-mortem typically starts with the leader asking everyone in the team to imagine that the project has gone horribly wrong and to write down the reasons why on a piece of paper. He or she then asks everyone to read a single reason from the list, starting with the project manager, before going around the table again.

[Yeni Yazılar] #medicine #failure #aviation #law #errors #learning #management #hierarchy #free-markets #pre-mortem #Dyson #Beckham #testing