So much can go wrong. Here’s how to make it go right.


This post presents 37 of the hundreds of mental traps, errors, shortcuts, and biases that I catalog in my book, BIG DECISIONS: Why we make them poorly. How we can make them better.

And your head may well be spinning after you read the eight short cases, selected from my book, which show that even the smartest people and best-known organizations can fall prey to these traps and biases and suffer great damage as a result.

Before we get into the traps, biases, and related cases, I need to address the question: why this list? That's simple. These are the serious traps and biases that you can most likely avoid, mitigate, or overcome by getting into a coached group of business leaders, owners, entrepreneurs, and professionals.

Being part of a coached group offers you more minds to tap, more perspectives to consider, more ideas for action or what not to do, exposure to new information and best practices, and more encouragement, all curated by your experienced and qualified business coach.

Simply put, in a coached group, the coach and the other members can help you navigate, avoid, or mitigate the effects of these eight categories of mental traps and errors that lead to bad decision making, inaction, or inappropriate action.

1. Effects based on our lack of information and how we are led to misinterpret information

  • Availability heuristic, in which we base judgments on the information that is available, even if it is not representative. We are programmed to immediately "fit" the information at hand to our experience, magnified by what is most recent and what seems most similar or relevant to the situation, whether or not it is applicable and sufficient. We often assess the probability of an event by the ease with which related instances or occurrences come to mind, rather than by carefully examining the situation and the alternatives.

  • All humans are prone to making the fundamental cognitive error, in which we underestimate the contribution of our beliefs and theories to observation and judgement. We often don't recognize when we've made an interpretation and that there are other ways the information could have been interpreted. The problem is that our past experiences color how we interpret evidence and new experiences.

  • Base rate neglect, which is ignoring the base rate in analysis and decision making. When we assess the probability of an event or a cause, we often make the error of ignoring predictive background information in favor of only the information at hand in the case under consideration. What's at hand may not be representative. (A short worked example follows this list.)

  • Expectation bias, the way in which we are prone to see what we expect to see, when our expectations influence our perceptions.

  • Illusion of validity, which is when we “see” a pattern or a story in data that leads us to overestimate our ability to interpret the data and accurately predict outcomes as a result.

  • Confirmation bias. We tend to subconsciously discount, dismiss, or ignore evidence that threatens our favored beliefs while overweighting evidence supporting our favored beliefs, instead of seeking and coolly evaluating impartial evidence.

  • The narrative fallacy, which arises from our vulnerability to overinterpretation and our preference for simple stories over raw facts. We think in terms of stories and relationships between facts – never mind whether the story is accurate and whether the facts are linked as the story suggests. Often the result is a distorted mental representation of the world; we think we understand when we don’t really understand.
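
To make base rate neglect concrete, here is a small illustrative sketch with hypothetical numbers (a made-up inspection test and defect rate, not data from the book): a "90% accurate" test sounds decisive, but when the underlying base rate is low, most positive results are still false alarms.

```python
# Illustrative only: hypothetical numbers showing why the base rate matters.
# An inspection flags a unit as defective. How likely is the unit actually
# defective if the test is good but only 1% of all units are defective?

base_rate = 0.01        # P(defective): the background (base) rate
true_positive = 0.90    # P(flagged | defective)
false_positive = 0.09   # P(flagged | not defective)

p_flagged = true_positive * base_rate + false_positive * (1 - base_rate)
p_defective_given_flag = (true_positive * base_rate) / p_flagged

print(f"P(defective | flagged) = {p_defective_given_flag:.1%}")  # about 9%
# Judging only from "the test is 90% accurate" ignores the base rate and
# overstates the risk by roughly a factor of ten.
```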


“Consider the underestimation of the risk of a nuclear accident when constructing the Fukushima Daiichi Nuclear Power Plant in Japan.

The plant in Japan's Ōkuma, Fukushima Prefecture sat on a high bluff over the ocean. The seawall around the reactors was 10 meters (33 feet) high. In siting the plant, power company TEPCO and Japanese nuclear regulators relied on a risk assessment that concluded that there was an infinitesimal chance of the region suffering an earthquake greater than magnitude 8.0, the strongest earthquake previously recorded. Further, the 10-meter seawall was deemed sufficient because there was no modern history of a tsunami above that height.

Sadly, the Tōhoku earthquake of March 11, 2011, was magnitude 9.0, releasing roughly 30 times the energy of the magnitude 8.0 maximum that the plant was built to withstand. The tsunami peaked at 13-14 meters (43-46 feet), 3-4 meters above the seawall.

The active reactors automatically shut down when the earthquake occurred. When the electricity failed, emergency diesel generators powered pumps to circulate coolant through the reactors' cores to remove decay heat.

The tsunami that hit 50 minutes after the earthquake flooded the Units 1–4 reactor buildings and knocked out the emergency generators. The loss of coolant caused three nuclear meltdowns, three hydrogen explosions, and the release of radioactive contamination. Radiation in the atmosphere forced the government to evacuate 154,000 residents from a 20-kilometer radius around the plant. Large amounts of radioactive water were discharged into the Pacific Ocean. The health effects of the disaster are still being revealed. Decontaminating the affected areas and decommissioning the plant will take decades.

The nuclear plant had been built to withstand the strongest past earthquake, with TEPCO and the regulators not envisioning anything worse. But the evidence relied on to site the plant was insufficient. Evidence that was missed would have changed the plans for the plant: subsequent research showed that a wave high enough to reach the reactors had hit the coast a millennium ago. Further, a probabilistic approach, asking about the odds of such an event happening rather than whether it had ever happened in recorded history, would have dictated against siting the plant on the bluff.

The Fukushima Daiichi Nuclear Power Plant disaster offers a stark example of what can go wrong when we rely on a small and biased sample of data. In short, a handful of historical observations of earthquakes and tsunamis are not sufficiently predictive of the nature of future earthquakes and tsunamis.

But why did engineers and regulators, supposed experts, fall prey to the trap of relying on insufficient and biased information? One explanation is that they, like all humans, are prone to making the fundamental cognitive error, in which we underestimate the contribution of our beliefs and theories to observation and judgement. We often don't recognize when we've made an interpretation and that there are other ways that the information could have been interpreted.”

BIG DECISIONS: 40 disastrous decisions and thousands of research studies tell us how to make a great decision when it really counts
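
The excerpt contrasts asking whether something has ever happened with asking how likely it is to happen. Here is a minimal sketch of that probabilistic framing, using hypothetical return periods rather than the actual Fukushima risk studies: even a "1-in-1,000-year" event is far from negligible over a plant's operating life.

```python
# Illustrative sketch (hypothetical numbers, not the actual Fukushima studies):
# the chance of at least one "1-in-T-year" event during n years of operation,
# treating years as independent.

def prob_at_least_one(return_period_years: float, operating_years: int) -> float:
    annual_prob = 1.0 / return_period_years
    return 1.0 - (1.0 - annual_prob) ** operating_years

for T in (100, 500, 1000):
    p = prob_at_least_one(T, operating_years=40)
    print(f"1-in-{T}-year event over a 40-year plant life: {p:.1%}")

# Even the 1-in-1,000-year case comes out near 4%, hardly "infinitesimal"
# when the consequence is a meltdown.
```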


2. Effects based on how we address risk

  • We often display an inability to properly assess and respond to risk. Probability neglect, or risk blindness, can be at work. This can lead us to overstate the risks of less harmful activities and therefore make us less likely to pursue them. It also leads us to underrate the risks of more dangerous activities and therefore leads us into trouble.

  • Furthermore, according to prospect theory, we can be both risk averse and risk seeking. Fear of disappointment can lead people to prefer a sure, smaller gain to a high probability of a greater gain. Likewise, fear of a large loss can lead people to accept a sure, smaller loss rather than a low probability of a greater loss. Conversely, the hope of a large gain can lead people to prefer a small chance of a large gain over a sure, smaller gain. And the hope of avoiding a loss entirely can lead people to prefer a gamble that risks a larger loss over a sure, smaller loss. (A short numerical sketch follows this list.)
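
Here is a minimal numerical sketch of the gain/loss asymmetry that prospect theory describes, using the value-function form and the parameter estimates published by Tversky and Kahneman (alpha of about 0.88, loss-aversion coefficient of about 2.25); the dollar amounts are hypothetical.

```python
# Illustrative sketch of prospect theory's value function, using the Tversky &
# Kahneman (1992) parameter estimates. Dollar amounts are hypothetical.

ALPHA = 0.88   # diminishing sensitivity to both gains and losses
LAMBDA = 2.25  # losses loom roughly 2.25x larger than gains

def value(x: float) -> float:
    return x ** ALPHA if x >= 0 else -LAMBDA * (-x) ** ALPHA

# Gains: a sure $500 vs. a 50% chance of $1,000 (same expected value).
print(value(500), 0.5 * value(1000))    # ~237 vs ~218: the sure gain wins (risk averse)

# Losses: a sure -$500 vs. a 50% chance of -$1,000.
print(value(-500), 0.5 * value(-1000))  # ~-534 vs ~-491: the gamble hurts less (risk seeking)
```

The two small-probability cases in the bullet above, lotteries and insurance, additionally involve prospect theory's overweighting of rare events, which this short sketch leaves out.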


“We could make driving much safer. Autonomous, that is, self-driving, cars promise to prevent some portion of the estimated 94% of accidents caused at least in part by human error. Tesla CEO Elon Musk has claimed that driving a Tesla in its Autopilot mode (as close to autonomous driving as any production automobile on the road at this writing) is about twice as safe as a human driver driving without it. Whether this claim is true is hard to evaluate, especially as news reports highlight accidents involving cars in self-driving mode or in which drivers have not met their responsibilities in driver-assist mode. In fact, David Zipper, a Visiting Fellow at Harvard’s Kennedy School, questioned the assumptions behind the benefits of self-driving cars, writing, “Unfortunately, the potential safety benefits of self-driving cars are probably overblown.” Nonetheless, a 2017 RAND Corporation study assessed 500 different what-if scenarios for autonomous driving technology and found that “in most, the cost of waiting for almost-perfect driverless cars, compared with accepting ones that are only slightly safer than humans, was measured in tens of thousands of lives lost.”

However, the public is not convinced that self-driving cars are safe enough to ride in fully autonomous mode. Over 40% of Americans “would never ride” in a fully automated vehicle, according to a study conducted by J.D. Power and Associates and the National Association of Mutual Insurance Companies. AAA's 2019 survey found that 71% of people would be afraid to ride in fully autonomous vehicles and only 19% said they would be comfortable with their children and family members riding in them. A 2019 Reuters/Ipsos survey found that half of respondents thought autonomous vehicles were more dangerous than human-driven vehicles. Two-thirds said “self-driving cars should demonstrate a higher standard of safety than human drivers.”

Here, again, we have evidence of our inability to properly assess and respond to risk. In this case, probability neglect or risk blindness is likely at work. Often we are unable to gauge risk, which can lead us to overstate the risks of less harmful activities (here, riding in a self-driving car that is not totally safe but is safer than a human-driven car) and therefore make us less likely to pursue them. It can also lead us to underrate the risks of more dangerous activities (driving ourselves around) and therefore get us into trouble.”

BIG DECISIONS: 40 disastrous decisions and thousands of research studies tell us how to make a great decision when it really counts


3. Effects based on how we choose

  • Hyperbolic discounting (also called current moment bias or present bias) leads us to prefer immediate payoffs over later ones. We make choices today that our future selves would prefer we had not made. We find it difficult to see ourselves in the future and to alter our current behaviors and expectations to have a better future. We often opt for current pleasure and leave the pain for later. Because of this, our organizations tend to opt for short-term gain rather than long-term sustainability. (A short numerical sketch follows this list.)

  • Anchoring, our tendency to compare and contrast only a limited set of items, to fixate on a value or number that in turn gets compared to everything else. Anchoring produces a skewed perspective that can lead to bad choices.
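
Here is a minimal numerical sketch of hyperbolic discounting with a hypothetical discount rate and payoffs, valuing a future amount at roughly amount / (1 + k × delay). The telltale sign is a preference reversal: we grab the smaller, sooner payoff when it is available today, yet prefer the larger, later payoff when both are pushed into the future.

```python
# Illustrative sketch of hyperbolic discounting (hypothetical k and amounts).
# Perceived value today: V = amount / (1 + k * delay_in_weeks)

K = 0.2  # hypothetical per-week discount rate

def discounted(amount: float, delay_weeks: float) -> float:
    return amount / (1.0 + K * delay_weeks)

# Choice 1: $100 now vs. $110 in one week.
print(discounted(100, 0), discounted(110, 1))    # 100.0 vs ~91.7: take the $100 now

# Choice 2: the same pair, pushed 52 weeks into the future.
print(discounted(100, 52), discounted(110, 53))  # ~8.8 vs ~9.5: now we will wait for $110

# The ranking flips even though only the starting date changed, the signature
# of present bias that steady exponential discounting cannot produce.
```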


“Yahoo CEO and former Google executive Marissa Mayer believed that she could turn around Yahoo when others could not.

Starting in 2012, as Yahoo's seventh CEO in a little more than five years, "Mayer put her resources in some of the wrong places, spent a lot of money, and didn't have a lot to show for it," said analyst Jan Dawson as quoted by CNN Money.

In a fire sale, Verizon agreed in 2016 to acquire Yahoo for $4.8 billion. The price was cut to $4.48 billion after Yahoo disclosed massive security breaches involving over a billion user accounts. When the deal closed, Mayer left the board and management, albeit with a $23 million golden parachute. Yahoo joined AOL and Huffington Post in Verizon’s digital media division.

It's unknown if turning around Yahoo was possible. But mental traps evidently made the task even harder for Mayer. Her story shows us just how difficult it is for us to "unlearn" what has made us successful when new circumstances demand another approach. We are wired to stay on the path we know and adjust our perspective to keep us on that road, even when it leads to a cliff edge.

Yet Mayer was in a sense exempted from having to jump off the cliff, given her personal financial gain: “In March 2017, it was reported that Mayer could receive a $23 million termination package upon the sale of Yahoo to Verizon. Mayer announced her resignation on June 13, 2017. In spite of large losses in ad revenue at Yahoo! and a 50% cut in staff during her five years as CEO, Mayer was paid $239 million, mainly in stock and stock options.”

The Los Angeles Times wrote that in May 2015, Yahoo valued its investment in China's e-commerce giant Alibaba at $29.4 billion and its 35.5% stake in Yahoo Japan at $8.7 billion. Yahoo’s market capitalization that day was $34.7 billion. "…the stock market valued everything other than those two holdings – that is, everything subject to Mayer’s management – at negative $3.4 billion."

The Street website wrote, “For all of the goals for remaking Yahoo! that Mayer and CFO Ken Goldman brought to the early internet company, they ultimately became simple stewards of capital, focused on returning cash to shareholders, monetizing the Asian equity stakes and ultimately setting Yahoo up for the sale to Verizon.” Forbes said of the sale, “Yahoo squandered its massive head start and let each wave of new technology in search, social, and mobile pass it by.”

Yet, later in the year, when Mayer wrote to employees about Yahoo's sale to Verizon, she "put lipstick on the pig" by using reframing, changing how information is presented to elicit a different point of view, especially to accommodate our bias for loss aversion. Rather than recognizing that the strategies that had been pursued under Mayer's leadership had not turned around the company, her letter cited the company's acquisition by Verizon as evidence of "the immense value we’ve created" and stated, "We set out to transform this company – and we’ve made incredible progress."

One way to view Mayer’s perspective is that she was subject to anchoring, also called focalism or the relativity trap, our tendency to compare and contrast only a limited set of items, to fixate on a value or number that gets compared to everything else. If she thought she was saving a company from failure, then the announced sale price of $4.8 billion could be seen as a victory. But contrasting the company’s valuation when she took over with the sale price shows how skewed her perspective was.”

BIG DECISIONS: 40 disastrous decisions and thousands of research studies tell us how to make a great decision when it really counts


4. Effects based on our desire not to deal with bad news and loss:

  • Bad news avoidance, in which we appear to treat bad news as something to be avoided. We don’t want to know.

  • The Ostrich effect, in which we ignore an obvious (negative) situation. We don’t want to have to act.

  • Loss aversion or loss avoidance, which can appear when we directly compare or weigh options against each other and losses loom larger than gains. Evolution appears to have led us to place more urgency on avoiding threats than on maximizing opportunities. The prospect of losses has become a more powerful motivator of our behavior than the promise of gains. (A short numerical sketch follows this list.)

  • Sunk cost fallacy, also called the investment trap or persistence of commitment, our tendency to persist in achieving a goal due to already committed expenditure and investment, including effort and attention, even when the prognosis for success is poor. It’s a case of “self-justification,” in which we persist in a failing action because we are justifying our previous decision. As a result, we are misallocating and wasting resources that are better invested in more promising opportunities. The greater the size of the sunk investment, the more people tend to invest further, even when the return on added investment appears not to be worthwhile. This can be seen as "throwing good money after bad," because the resources and effort are already lost, no matter what you do now.
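
As a rough numerical sketch of "losses loom larger than gains," using hypothetical stakes and the commonly cited prospect-theory estimates (loss-aversion coefficient of about 2.25): an even coin flip to win or lose $100 feels like a bad deal, and the potential win has to grow to roughly $250 before the flip feels worth taking.

```python
# Illustrative sketch of loss aversion (hypothetical stakes; lambda ~2.25 and
# alpha ~0.88 are commonly cited prospect-theory estimates).

ALPHA = 0.88
LAMBDA = 2.25

def value(x: float) -> float:
    return x ** ALPHA if x >= 0 else -LAMBDA * (-x) ** ALPHA

# A 50/50 flip: win $100 or lose $100. Expected value is zero, but...
print(0.5 * value(100) + 0.5 * value(-100))  # negative: most of us decline the flip

# How large must the win be before the flip feels acceptable?
win = 100.0
while 0.5 * value(win) + 0.5 * value(-100) < 0:
    win += 1
print(win)  # roughly $250
```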


“Why would two countries continue to waste huge sums of money over four decades to develop and market an airplane that no one would buy except their own national airlines who were heavily subsidized to do so?

That’s the puzzle presented by the Concorde, the first supersonic transport (SST) commercial airliner. The Concorde was developed and operated over a 42-year program by a consortium of British and French companies backed by their governments.

Concorde development began in 1962 based on a treaty between the two countries. The plane began commercial service in 1976 and flew for 27 years. The development cost was £1.134 billion, funded by the UK and French governments. Building the 16 Concordes produced for commercial service cost £654 million, of which only £278 million was recovered through sales. The two governments funded the shortfall.

Consultant Peter Saxton, a former RAF pilot and British Airways Captain, chief pilot and senior manager, says the Concorde was "a project which cost the British and French tax-payers a staggering amount for development and construction, was not well managed if massive cost overruns are anything to go by, never made anything close to a financial return for its investors (us), and led the British aircraft industry into a cul-de-sac." He calls it "a stupendous example of a project that was kept alive for a whole raft of reasons, none of which seems to have included the serious intention of making a commercial return for investors. Those reasons...included maintaining technological expertise, providing employment, securing Britain’s entry into the European Common Market, and patriotism or prestige." Saxton surmises that the governments kept "throwing more good money after bad" because they "seemed prepared to pay the prestige premium no matter how high it rose."

The Concorde case spawned a new name for the trap previously labeled escalation of commitment or irrational escalation. The Concorde fallacy is a more extreme version of the sunk cost effect where people justify increased investment in a decision, based on the cumulative prior investment, despite new evidence suggesting that the decision was probably wrong and increased investment would not rescue the effort.”

BIG DECISIONS: 40 disastrous decisions and thousands of research studies tell us how to make a great decision when it really counts


5. Effects based on our difficulty in waiting for payoffs

  • Impulsivity, which is choosing an immediately gratifying option at the cost of long-term happiness.

  • Action-oriented bias, in which we are driven to act without considering all the potential ramifications of our actions. This leads us to overestimate the odds of positive outcomes and underestimate the odds of negative ones. It arises because we may put too much faith in our ability to produce desired outcomes, based on our tendency to take too much credit for our past successes.


“PricewaterhouseCoopers' partner and accountant Brian Cullinan believed that he and his associate were well prepared to properly dole out the envelopes with winners' names to presenters at the 2017 Oscars ceremony.

Cullinan made the biggest Oscar award mistake ever by handing presenter Warren Beatty the wrong envelope. Beatty's on-stage partner, Faye Dunaway, announced La La Land was the winner when Moonlight was actually the best picture winner. The upshot was that PwC's plum 83-year assignment as the Academy of Motion Picture Arts and Sciences’ accountant was placed in jeopardy and Cullinan's reputation was trashed. The Academy subsequently banned him from any future participation in the Oscars process.

The accounting firm partner was likely entrapped by mental errors and biases, with these being among the most obvious:

• Cullinan was cocksure that the awards process would come off without a glitch. People Magazine reported that he said before the Oscars broadcast, “We’ve done this a few times, and we prepare a lot.” His overconfidence was surely rooted in his illusion of control, overestimating how much we influence external events, leading us to take actions that we believe will be effective when they won’t be.

• Tweeting a backstage photo of Emma Stone holding her Best Actress Oscar, right before the winning picture announcement, rather than focusing on his duties, Cullinan displayed impulsivity, which is choosing an immediately gratifying option at the cost of long-term happiness. The self-destructive nature of this action is amplified by the allegation that Cullinan had been told by the Motion Picture Academy not to use social media during the ceremony.

• Cullinan's belief that no mistake would mar PwC's handling of the Oscar winners' envelopes was likely due to the overconfidence effect. We tend to overestimate our ability to predict the future and to have excessive confidence in our answers. (Alarmingly, for some types of questions, answers that people rate as "99% certain" turn out to be wrong 40% of the time!)

Working for a big-name firm and having a big title gives no protection from biases and traps. Indeed, it can lead to overlooking evidence and both rash action and stultifying inaction.”

BIG DECISIONS: 40 disastrous decisions and thousands of research studies tell us how to make a great decision when it really counts


6. Effects based on our egos:

  • The Dunning-Kruger effect, in which incompetent people fail to realize they are incompetent because they lack the skill to distinguish between competence and incompetence.

  • Egocentric bias, our tendency to rely too heavily on our own perspective and to have a higher opinion of ourselves than merited.

  • Illusory superiority or superiority bias, when we think too much of ourselves, without good reason, relative to others.

  • Inability to self-assess, in that people in general hold the premise that their performance in domains where they are not experts is at about the 70th percentile, that is, above average. This is in spite of the fact that half of any population must fall below the median. A psychologist has said about the inability to self-assess, "How can I know what I don't know when I don't know what I don't know?"

  • Self-serving bias, in which we tend to view ambiguous information in the way that most benefits us. A payoff can lead us to think we are right and compromise our values and our group’s values.

  • Choice blindness, in which people do not notice big differences between what they intended to do and what they really did, and then give after-the-fact reasons as a defense for their misaligned choice.

  • The planning fallacy, in which we tend to underestimate task-completion times because of our natural optimism, belief in our capabilities, and inexperience. It is easier for us to envision the success of a plan or project than to forecast all the things that can go wrong and the time and effort required to deal with them.


“Lehman Brothers CEO Richard Fuld believed that the investment bank was adequately capitalized when it increased its leverage from 12-to-1 to 40-to-1, became a major player in securitizing subprime mortgages, relied on risky credit default swaps for protection, and engaged in accounting maneuvers that disguised how much debt the firm had taken on. He also believed that the U.S. government would bail out the firm when the policies and actions he enabled put it on the brink of failure in 2008.

Contrary to Fuld's belief, Lehman Brothers was woefully undercapitalized as the financial crisis arose. The federal government walked away from a "too big to fail" tag that Fuld and others thought would be applied to the firm. Lehman Brothers failed. Fuld was disgraced.

Fuld's assuredness that his way was the road to great success for Lehman Brothers and that the U.S. government would backstop the firm suggests his incompetence, that he was captured by the Dunning–Kruger effect. In this trap, incompetent people often overestimate their abilities, competencies and characteristics and consider themselves more competent than others: They can't see their incompetence because they lack the skill to distinguish between competence and incompetence.”

BIG DECISIONS: 40 disastrous decisions and thousands of research studies tell us how to make a great decision when it really counts


7. Effects based on not addressing choice properly

  • The false dilemma fallacy, which is when the choice is presented as being between two options, when in fact one or more additional options exist. This establishes a false construct.

  • The exclusive alternatives trap, in which too few options are presented. We use traditional logic to operate on a limited set of options. Yet many situations should lead us to juggle multiple alternatives.

  • The paradox of choice, choice overload or decision paralysis. Having choices can be powerful and positive. But having too many options can overload and paralyze us. People become worried they’ll regret the choice they make.  Limiting options actually helps us avoid choice overload. 


“Mountain climbing guide Rob Hall believed that his expedition plan would get the Mount Everest climbers in his charge onto and down from the summit. He believed his hard rule that the team would turn around at 2 p.m. if they were not yet on top of the mountain would protect the climbers from disaster. He believed that allowing his climbers to express any dissenting views while the expedition made the final push would hurt their chances of success.

Nearly all the climbers on the summit push that day, including Hall and his team, kept climbing and arrived at the top after two o’clock. As a result, many climbers found themselves descending in darkness, well past midnight, as a ferocious blizzard enveloped the mountain. Five people died in this highly publicized 1996 disaster and many others barely escaped with their lives.

What might have led renowned guide Hall astray, beyond the judgment-skewing effects of high stress, and oxygen and sleep deprivation?

While Hall's climbing and leadership experience is evidence that he should have known about the dangers of scaling Mt. Everest, his planning clearly did not account for the bad luck of leading less experienced climbing clients into a surprise storm of epic proportions.

But Hall’s errors go beyond not planning for bad luck.

Evidence suggests he saw the climb as a "now or never" opportunity for his charges. However, other climbers, including filmmaker David Breashears and his team, who were on the mountain at the same time, got to the top and down safely in the following days. There were other opportunities to make the summit push. Hall’s "all or nothing" thinking suggests that he was trapped by the false dilemma fallacy, which is when a choice is presented as being between two options when in fact one or more additional options exist.

A reasoner who unfairly presents too few choices and then implies that a choice must be made from this short menu is using the false dilemma fallacy, as is the person who accepts this faulty reasoning. Leaders, whether intentionally or not, can deceive others by establishing such false constructs, narrowing the options to exclude valid choices.

Leaders don’t always see that they are misleading by presenting too few options. Sometimes the situation is presented as an “either-or” analysis because the reasoner is pursuing traditional logic, which we are conditioned to do. That’s the exclusive alternatives trap. Yet many situations, as the deadly climb illustrates, should lead us to juggle more than two alternatives.”

BIG DECISIONS: 40 disastrous decisions and thousands of research studies tell us how to make a great decision when it really counts


8. Effects based on our tendency to avoid change

  • The default option, the option that will result if one does nothing, is often what people wind up opting for, whether or not it is good for them.

  • Negativity bias or status quo bias (also called system justification), in which we tend to normalize our current situation as our reference point and to defend and reinforce the status quo. We tend to view deviations - even changes from the status quo that will be in our and our group’s self interest - as riskier, less desirable, or simply too much effort.

  • Disconfirmation bias, also called motivated skepticism, which occurs when a person is more likely to accept information that supports previously held beliefs and more likely to dismiss information that refutes previously held beliefs. This bias can emerge when people subject disagreeable evidence to more scrutiny than agreeable evidence.


“It is startling that the engineer who invented the digital camera worked for the giant of photography, Kodak. It owned the patent for the engineer’s invention. Further, in 1981, Vince Barabba, then Kodak’s head of market intelligence, conducted extensive research that looked at the likely adoption curves for digital photography. His research showed that digital photography was capable of displacing Kodak’s film photography business. But it also offered Kodak a 10-year window to prepare for the supremacy of digital photography.

Yet, Kodak proceeded to bury the technology rather than commercialize it. Had it adopted the digital camera, today it could be the Apple of digital imaging. Kodak’s attitude was that no one needed digital photos. Film and photos printed on paper, using silver halide technology, had ruled for 100 years and Kodak ruled film and paper photography. How wrong was that conclusion! Evidence was ignored as digital photography caught hold.

Kodak’s huge strategic error likely stemmed from many mental traps and biases referenced in this book. First is the ambiguity effect or Ellsberg paradox, our tendency to choose the option with a clear probability of success rather than one whose probability is less clear because of missing information. When offered a choice of risks to take, people tend to "prefer the devil they know" rather than assuming a risk where odds are difficult or impossible to calculate. But the option with an unclear probability may have an even higher probability of success and provide a better outcome. Such proved true for digital photography, the technology without a track record that Kodak management spurned despite Barabba’s forecast, which then displaced “tried and true” film photography.

Here are 15 of the many other biases and traps that likely enmeshed Kodak’s leadership in its decision making about the digital camera:

• Competitor neglect (planning without factoring in competitive responses)

• Confirmation bias (ignoring or dismissing anything that threatens our world view by surrounding ourselves with people and information that confirm what we think)

• Default choice (an option that is selected automatically)

• Familiarity heuristic (believing that our behavior is correct if we have done it before)

• Forever changeless trap (thinking that the current condition will never change)

• Future blindness (we are unable to take into account the properties of the future)

• Group think (when a group makes irrational or non-optimal decisions driven by the urge to conform or the belief that dissent is impossible)

• Hyperbolic discounting (favoring immediate over later payoffs)

• Illusion of control (overestimating our degree of influence over external events)

• Illusion of explanatory depth (relying on social consensus to see what is true)

• Overconfidence effect (having excessive confidence in our answers to questions)

• Selective perception (what we expect influences what we perceive)

• Sunk cost fallacy (persisting in achieving a goal due to committed expenditure and investment, including effort and attention, even when the prognosis for success is poor)

• Unknowledge (our systematic underestimation of what the future has in store)

• Wishful thinking (the formation of beliefs based on what might be pleasing to imagine, rather than on evidence, rationality, or reality)”

BIG DECISIONS: 40 disastrous decisions and thousands of research studies tell us how to make a great decision when it really counts
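
The excerpt above cites the ambiguity effect, or Ellsberg paradox, as the first of Kodak's traps. Here is a brief sketch of the classic urn version of the paradox (a standard textbook illustration, not taken from the book): an urn holds 30 red balls and 60 balls that are black or yellow in an unknown mix. Most people prefer betting on red over black, yet also prefer betting on black-or-yellow over red-or-yellow, and no single belief about the urn's contents can justify both choices.

```python
# Illustrative sketch of the Ellsberg paradox (the standard urn example).
# Urn: 30 red balls plus 60 balls split between black and yellow in an
# unknown proportion.

consistent = []
for black in range(61):          # every possible number of black balls
    yellow = 60 - black
    p_red, p_black, p_yellow = 30 / 90, black / 90, yellow / 90

    bet_red_over_black = p_red > p_black                                  # typical choice 1
    bet_blackyellow_over_redyellow = (p_black + p_yellow) > (p_red + p_yellow)  # typical choice 2

    if bet_red_over_black and bet_blackyellow_over_redyellow:
        consistent.append(black)

print(consistent)  # []: no composition of the urn supports both bets, so the
# usual pattern reflects avoiding the ambiguous option, not expected-value reasoning.
```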


These seven group effects help show why a coach/facilitator is essential for an effective group. The coach/facilitator can help the group avoid:

  • The availability cascade, which is the self-reinforcing process in which a collective belief gains more and more plausibility through its increasing repetition in discourse.

  • The bandwagon effect, in which group members go with the flow and pick a winner or a favorite. It causes behaviors, social norms, and memes to propagate regardless of the evidence or motives supporting them. This bias has to do with our innate desire to fit in and conform.

  • The false consensus effect, which is seeing more consensus in those around us for our beliefs than is actually the case, leading us to not consider judgments different from ours.

  • Group think, in which we substitute the pride of membership in a group for valid reasons to support the group's policy. We may think, “If that's what our group thinks, then that's good enough for me. It's what I think, too.”

  • The group polarization effect, in which the initial biases of group members lead the group to shift either toward risk or toward caution. The group polarization effect is driven by who states the argument and the sequencing and number of arguments pro or con offered in discussion.

  • Illusion of explanatory depth, which is when group members generally think they understand one another even when no one really has a clear understanding of what is being considered. That arises because in areas in which we do not have expertise we rely on social consensus.

  • Shared information bias, which is the tendency for group members to spend more time and energy discussing information with which all members are already familiar and less time and energy discussing information of which only some members are aware.

The message: Being part of a support and accountability group led by a qualified and experienced business coach/facilitator can help you navigate away from the mental traps and biases that can otherwise lead you to bad decision making, inaction, or inappropriate action.

