10 surprising mental traps: Why we make bad decisions



True or false?

  • Trying to change a person's mind about a strongly-held belief by presenting evidence to the contrary may instead strengthen the person's belief.

  • Yellow cars are safer to drive.

  • If you hear it said, you are more likely to believe it, even if it is not true.

  • Two thirds of University of Nebraska faculty rated themselves in the top 25% for teaching ability.

  • People would rather eat an orange now and chocolate next week.

  • Eating saturated fats is a major cause of heart disease.

  • People often demand as much as three times as much money to sell an object as they would be willing to pay to buy it.

  • Eating beef is a cause of death by lightning. 

In research for my upcoming book, Big Decisions: Why we make them badly, how we can make them better, I have discovered more than 280 psychological, perception, memory, logic, physical and social effects, errors, biases, shortcuts, fallacies and traps that lead us into making bad decisions. 

Here are ten of what I find to be among the most surprising and thorny traps.  They go by varied names and have many disguises.  I have grouped these traps in three categories; for each I offer a definition, thoughts about why it poses a problem for decision making, and some examples of how it can lead us astray.  Read on to find the answers to the "true or false" quiz and to learn much more about ways we unknowingly trip ourselves up.

Belief and causality

1.  BACKFIRE EFFECT

Definition: People can react to evidence that disconfirms what they believe by strengthening their beliefs.



Why it's a problem:
 "It seems that unless we are extremely careful, by trying to convince the most hardened cynics of the evidence we may end up doing more harm than good." - Simon Oxenham.1 

Examples:

  • Influenza vaccine. A recent study by researchers at Dartmouth College and the University of Exeter concluded that debunking the myth that the flu vaccine can give you the flu did not lead people concerned with vaccine side effects to get vaccinated. In fact, it actually reduced their intent to get vaccinated.2

  • Tax cuts. Likewise, a decade ago, Brendan Nyhan at the University of Michigan and Jason Reifler at Georgia State University found that when U.S. conservative voters were presented with evidence that President George W. Bush's tax cuts failed to stimulate economic growth, the percentage agreeing with the statement that Bush's tax cuts increased government revenue leapt from 36% to 67%, while the same evidence moved the views of non-conservatives in the other direction (from 31% to 28%).3

  • Vaccination in general. Worse yet, research on the effect of dissemination of vaccination information by the U.S. Centers for Disease Control led Norbert Schwarz of the University of Southern California to conclude, "Countering false information in ways that repeat it may further contribute to its dissemination by associating the information with a credible source."4

2.  FALSE CAUSALITY (also called the "false cause fallacy," the "post-hoc fallacy" or "post hoc, ergo propter hoc" - "after this, therefore because of this")

Definition: A person sees two sequential events as evidence that the first caused the second - even if the first did not cause the second.


Why it's a problem:
 Correlation is not necessarily causation. Theo Clark writes in TheSkepticsFieldGuide.net, "False cause can have very serious consequences. For example, the false cause fallacy during the European dark ages led to the widespread belief that illness, famine and personal misfortune was caused by black magic and sorcery. Such beliefs led to 'witch-hunts' (literally) and unfounded but widely believed accusations of sorcery... The false cause fallacy varies in the magnitude of the problems it causes. From the simple and harmless superstitions of sports people undertaking rituals or wearing a 'lucky charm' in order to perform well, to the harm caused to seriously ill people when diverted from effective treatments to ineffective or harmful treatments by quacks or frauds."5

Examples:

  • After taking cold medication, most people recover from their colds within a few days.  Therefore, cold medication is effective.  In fact, the rate of recovery is the same for those who do not take cold medication.6

  • Yellow cars are involved in fewer accidents.  Therefore, yellow cars are safer to drive.  It's not the color, it's the driver: People who buy yellow cars tend to be safer drivers.7

  • As the rate of use of antiperspirants has increased, so has the incidence of breast cancer. Stop using antiperspirants!  If antiperspirants are the cause, why isn't the rate of breast cancer in men much higher?8

  • People who go outside at night get malaria.  It must be the night air that causes malaria, so let's be sure homes are shut up tight to keep the night air at bay. Nope, scientific experiments finally showed that it is the bite of the pesky Anopheles mosquito, not the outside night air, that spreads malaria.9

  • Hormone replacement therapy reduces coronary heart disease in females. Use HRT! The research that led to this false conclusion did not consider that the patients using HRT tended to come from higher socioeconomic strata, had better diets and exercised more.10

  • Eating breakfast reduces obesity.  I want my Cocoa Puffs!  But the studies that "proved" this did not consider other, potentially even more important, factors relating to weight loss, such as whether the participants worked out more, changed their diets, or ate less during the rest of the day.11,12

  • As U.S. beef consumption has decreased, so have deaths caused by lightning. Eating beef is a cause of death by lightning.  Not!13

  • As the number of pirates has declined over the centuries, average global temperatures have increased. Pirates fight global warming.  Let's all root for more Johnny Depp pirate movies!13 (The short simulation after this list shows how easily two unrelated trends can look correlated.)
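
To see how easily this trap arises, here is a minimal simulation of the pirates-and-temperature example (my illustration with made-up numbers, not data from the cited studies): two series generated completely independently, sharing nothing but opposite time trends, come out strongly correlated.

```python
# Two independently generated series: pirates decline, temperatures rise.
# Neither is computed from the other, yet they correlate strongly.
import numpy as np

rng = np.random.default_rng(seed=42)
years = np.arange(1900, 2000)

pirates = 5000 - 45 * (years - 1900) + rng.normal(0, 300, years.size)
temps = 13.5 + 0.01 * (years - 1900) + rng.normal(0, 0.15, years.size)

r = np.corrcoef(pirates, temps)[0, 1]
print(f"correlation of pirate counts and temperature: r = {r:.2f}")
# Prints a strong negative correlation (roughly -0.9), despite zero causal link.
```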

3.  FUNDAMENTAL ATTRIBUTION ERROR

Definition: While we each individually tend to view our behaviors within the context of our circumstances, we tend to explain others' behaviors by over-emphasizing their character as opposed to the given situation they are in.



Why it's a problem:
 Mediation expert John Ng calls this "an unhealthy win-lose approach."14 John Sterman of MIT observes, "When we attribute behavior to people rather than system structure, the focus of management becomes scapegoating and blame rather than design of organizations in which ordinary people can achieve extraordinary results."15

Examples:

  • As drivers, we have good reasons when we speed - we will be late for work, need to make time, the speed limit is unreasonably low, etc.  We tend to think others who speed past us are reckless and are different than we are.16

  • When someone is late, we tend to think they are inconsiderate.  When we are late, it is usually because of the situation - sick child, bad traffic, conflicting appointment, etc.17

  • In the 2012 U.S. presidential election race, Mitt Romney said 47% of the American population are people who pay no income tax, believe they are "victims," and "are dependent on the government." This assessment ignored the situational aspects: Many people who do not pay income tax are students or are retired.18

  • Earlier this year, Lisa Brown of Spokane and her husband called Comcast to cancel their cable service. Lisa's call was routed to a retention specialist who instead tried to sign them up for a new two-year contract. She politely refused. The couple's next cable bill was for $230 and was addressed to "A**hole Brown."  In fact, what motivated the Browns to cancel their service was that they were having money problems and could no longer afford to pay for it.19

  • Before World War I erupted, the leaders of Britain, France, Germany, Russia and other soon-to-be-at-war countries each perceived their own country to be significantly less hostile than the other countries.  Not understanding the others' situations, or how the others viewed them, helped lead the way to war.20

  • Europeans are different than Americans: They walk, ride bikes and use mass transit more than Americans.  Might that be because older European cities are more walkable and less conducive to automobiles, and that the cost of automobile use is relatively higher in Europe?21

4.  ILLUSION OF TRUTH EFFECT (also called the "illusory truth effect")

Definition: People are more likely to believe statements they have previously heard (even if they can't remember having heard them), whether or not they are actually true.



Why it's a problem:
 Psychologist Jeremy Dean writes in PsyBlog, "Because of the way our minds work, what is familiar is also true. Familiar things require less effort to process and that feeling of ease unconsciously signals truth (this is called cognitive fluency). As every politician knows, there’s not much difference between actual truth and the illusion of truth. Since illusions are often easier to produce, why bother with the truth? The exact opposite is also true. If something is hard to think about then people tend to believe it less. Naturally this is very bad news for people trying to persuade others of complicated ideas in what is a very complicated world." 22

Examples:

  • In research conducted by Jason D. Ozubko and Jonathan Fugelsang of the University of Waterloo, participants read 60 plausible statements, some true and some not, every 14 days, and then rated them based on their validity. Some statements were presented more than once in different sessions. Participants were more likely to rate as true statements that they had heard previously, whether or not they were true.23

  • Eating saturated fats is a major cause of heart disease, right?  Most of us believe it.  We have heard it time and again.  But even several decades ago, studies were questioning this oft-repeated assertion.  Then last year the Annals of Internal Medicine published a meta-analysis in which researchers looked at 72 published studies on fats and heart disease, involving 600,000 people from 18 countries. A finding: Saturated fats had no effect on heart disease risk. Experts are now debating the conclusion - should we be revisiting our belief?24

  • U.S. President Barack Obama has faced continuing claims that he is a Muslim, despite denials and ample evidence to the contrary, including videos of him drinking alcohol, eating pork, swearing on the Bible and attending Christian churches.  In polling by the Pew Research Center, nearly 20% of respondents said they believed President Obama is a Muslim, a belief sustained by hearing the false statements over and over again.25

  • Research by Nielsen shows that 68% of consumers trust consumer opinions posted online and 78% say online customer reviews influence their purchase decisions.26 According to PBS correspondent Jackie Judd, review site Yelp "labels about 25 percent of submitted reviews as suspicious or not recommended."  That's a lot of potentially false claims, given that 67 million reviews a year are posted on Yelp.  No doubt the illusion of truth effect leads many readers of multiple positive but false online reviews to believe and even act on these claims. Vince Sollitto of Yelp told PBS, "If a business misleads consumers by writing fake reviews and you go out and you have a bad meal as a result, so what? But what if you’re looking for a pediatrician? What if you’re looking for an urgent care clinic?"27

Self knowledge and judgment

5.  INABILITY TO SELF ASSESS (also called "illusory superiority," "above average effect," "superiority bias," "leniency error," "sense of relative superiority," "primus inter pares effect," "Dunning–Kruger effect" and "Lake Wobegon effect")


Definition:
 Most people demonstrate flawed self-assessment skills, with a tendency to overestimate their own abilities, competencies and characteristics, especially as compared to how others assess them.

Why it's a problem: A leading expert on this effect, psychologist David Dunning of Cornell University, offers reasons why we should care about the inability to self assess: "Consider...patients being treated for high blood pressure. Medical experts agree that there are no overt symptoms that reliably alert people to episodes of high blood pressure. But one study showed that 92 per cent of patients claimed that they could tell when their blood pressure was up (with over 60 per cent confidentially asking the interviewer not to reveal this opinion to their doctor). These misplaced beliefs about self-diagnoses matter. Patients tended to take their prescribed medication only when they thought it treated the ‘symptoms’ they were paying attention to, instead of taking their medication as their doctor prescribed.  Vaulted self-views also underlie risk-taking behaviors that have an impact on health. Teenage girls who rate their knowledge of birth control methods highly, independent of actual knowledge, are more likely to become pregnant than those who rate their knowledge more negatively."28

Examples:

  • Drivers rate themselves "above average": Ola Svenson at the University of Stockholm asked students in Sweden and the United States to compare their driving safety and skill to the other people in the experiment. For driving skill, 93% of US students and 69% of Swedish students put themselves in the top 50%. For safety, 88% of US students and 77% of Swedish students rated themselves in the top 50%.29  Likewise, researchers from the University of Wellington had 178 participants evaluate themselves on eight dimensions of driving skill. Only a small minority rated themselves as below average at any point, and for all eight dimensions together nearly 80% of participants evaluated themselves as above the average driver.30  Research at The Glennan Center for Geriatrics and Gerontology, Norfolk, VA, showed that elderly drivers referred for a driving evaluation who rated themselves as "above average" drivers were four times more likely to be classified as "unsafe" after a 30-minute driving simulation than those rating their driving ability lower.31

  • Students and teachers claim to be "above the median": Student "self assessments are typically higher than teacher assessments," according to research conducted by John A. Ross at the University of Toronto.32 Research at Stanford University showed that 87% of Stanford MBA students rated their academic performance as above the median - a statistical impossibility for more than half of any group, as the short check after this list illustrates.33 In a survey of faculty at the University of Nebraska by researcher K. Patricia Cross of the University of California Berkeley, 68% rated themselves in the top 25% for teaching ability.34

  • Voters think they are "in the know": Economist Tyler Cowen of George Mason University writes, "That so many people vote is itself a sign of self-deception. The voters for one candidate all know that many other people are voting for the other candidate. ...An individual should infer that perhaps the others know something that she does not. In reality, most people think their view remains correct, even when they encounter others that disagree. Why, then, should a voter think that her choice for a candidate is wiser or better informed than the aggregated choices of other people? The very act of voting therefore implies considerable self-deception on their part."35

  • "Experts" don't score better: Psychologist David Dunning of Cornell University writes, "Medical residents’ impressions of their communication skills show little relationship with impressions held by patients or supervisors, although the impressions of patients and supervisors correlate rather highly... Peer ratings amongst junior doctors strongly predict who will do well on a surgical skills exam; self-ratings do not... In predictions of who will receive early promotion among Naval officers, peer ratings of leadership ability prove to be a more accurate indicator than self-ratings." 28

  • We think highly of ourselves. Daniel R. Ames and Lara K. Kammrath of Columbia University have found that "people are poor judges of their own interpersonal sensitivity and mind-reading. Across multiple tasks, featuring controlled video stimuli as well as face-to-face interactions, including measures of sensitivity to lies, relationships, status, motives, and emotions, we found only weak or non-significant correlations between self-estimates of performance and actual performance. ... We found that those in the lowest quartile in interpersonal sensitivity greatly overestimated their relative ability, often by as much as 40 or more percentile points."36
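
Since several of the examples above turn on "above average" and "above the median" claims, here is a quick check (my illustration, not data from the cited studies) of why such self-ratings must reflect bias: at most half of any group can exceed its own median, although a majority genuinely can exceed the mean when a few very low scores drag it down.

```python
# Hypothetical skill scores: most people cluster high, a few score very low.
import numpy as np

rng = np.random.default_rng(seed=1)
skill = np.concatenate([rng.normal(70, 5, 900), rng.normal(20, 5, 100)])

print(f"above the median: {np.mean(skill > np.median(skill)):.0%}")  # never exceeds 50%
print(f"above the mean:   {np.mean(skill > skill.mean()):.0%}")      # well over 50% here
```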

6.  CHOICE-SUPPORTIVE BIAS (also called "post purchase rationalization," "rosy retrospection", and "Buyer's Stockholm syndrome")

Definition: The tendency to remember one's choices as better than they actually were. We tend to recall more positive attributes for the option we chose and more negative attributes for options we rejected.  Our attitudes shift and our memories become distorted so we feel better about our choices and have less regret for what in fact may be bad decisions.



Why it's a problem:
 Sometimes it is helpful for us to misremember some details of an event or to forget others altogether. Princeton researchers Mara Mather, Eldar Shafir, and Marcia K. Johnson observe, "People’s conception of who they are is shaped by the memories of the choices they make: the college favored over the one renounced, the job chosen over the one rejected, the candidate elected instead of another one not selected. Memories of chosen as well as forgone alternatives can affect one’s sense of well-being. Regret for options not taken can cast a shadow, whereas satisfaction at having made the right choice can make a good outcome seem even better."37 However, choice-supportive bias works against changing one's mind. Reinforcing a bad decision can lead to more bad decisions, especially in a changing environment when unbiased fact-finding and assessment are needed to make better decisions.

Examples:

  • Hiring the best candidate.  Princeton University researchers had people choose between two job candidates.  Each candidate was described using four positive and four negative attributes. After the fact, the people remembered more positive attributes for the candidate they chose and more negative attributes for the candidate they rejected.37

  • Grade inflation. Researchers asked 276 Ohio Wesleyan University alumni to recall their college grades, one to 54 years after graduating.  Overall, the alumni correctly recalled 3,025 of 3,967 grades, but errors began to occur soon after graduation and increased with the passage of time.  81% of errors inflated the actual grade.  While better students made fewer errors, their errors were greater.38 Ohio Wesleyan researchers also showed the relation between accuracy and distorted memory content by verifying 3,220 high school grades recalled by 99 college students.39 Accuracy of recall declined from 89% for grades of A to 29% for grades of D.  Most errors inflated the grade. Distortions were attributed to memory reconstruction in a positive, emotionally gratifying direction.

  • Rosy travel memories. Studies by researchers from the University of Washington and Northwestern University measured people's anticipation of, actual experiences in, and subsequent recollection of a 12-day trip to Europe, a five-day Thanksgiving vacation and a three-week California bike tour.40  Results showed that travelers' pre-event expectations and post-event recollections were much more positive than their actual experiences during their trips. Indeed, while just 5% of the bike tour participants expected to be disappointed, 61% expressed disappointment during the trip. That changed quickly after the trip: As early as a week later, only 11% remembered their disappointment on the trip.  Likewise, researchers at the University of Illinois at Urbana-Champaign gauged the expected, actual and remembered enjoyment of students' spring break trips.41 The students anticipated and recollected experiencing more positive emotions than they actually reported during spring break.

  • Forgetting the pain of running a marathon.  Researchers compared the expectations of runners in the Bellingham (WA) Bay Marathon before the race and their recollections afterward with the physical and mental experience they reported while actually running the marathon.42 Just after the race they recalled feeling significantly better than they reported they were actually feeling late in the race - and this inflated recollection of how well they felt during the race only grew stronger when they recalled the experience four weeks later.

Value and risk - asymmetrical assessment

7.  ENDOWMENT EFFECT           

Definition: The fact that people often demand much more to give up an object than they would be willing to pay to acquire it. The endowment effect is a factor in the exchange of goods, and it mostly affects the seller. But not all sellers feel this effect. For example, there is no endowment effect when the good is being held only as a money substitute - such as a commodity. On the other hand, if an item is purchased for use, the owner is very likely to be influenced by the endowment effect, especially if the item is rare or hard to replace. Loss aversion may be a cause: When people compare the known and familiar to the unknown and unfamiliar, they choose the known and familiar. The endowment effect occurs not only for individuals, but for organizations as well.



Why it's a problem: 
The endowment effect produces behavior that conflicts with economic theory, which says the value of something should not depend on who owns it.43 This effect permeates business affairs.  For example, in contract negotiations people demand more to give up standard contract provisions than they would have paid for those same provisions had they bargained starting with a clean sheet of paper.44 Legal scholars worry that, because of the endowment effect, the sale of public goods and rights such as pollution permits and radio spectrum may not result in the fairest and most efficient allocation.45

Examples:

  • Precious basketball tickets. Researchers Ziv Carmon and Dan Ariely conducted an experiment with college students at Duke University. Students who had won prized basketball tickets would sell them at around $2,400 each.46 Those who had not won would pay about $170 per ticket.

  • Mugs and pens, students and experts. Daniel Kahneman of the University of California, Berkeley and other researchers had students trade mugs and pens. In this experiment, average selling prices were more than double buying prices.43  Likewise, Sean Tamm of Stetson University approached 30 car salespeople and 46 real estate agents, people experienced in negotiating the lowest prices they are willing to accept when selling and the highest prices they are willing to pay when buying.47 Half of the participants were given mugs and asked what price they would accept to sell them; the other half were asked what price they would offer to buy them. The results among these expert negotiators?  The price that sellers were willing to accept was about three times higher than the price that buyers were willing to pay.

  • Wine, anyone?  Eric van Dijk at Leiden University in The Netherlands gave people bottles of wine as payment for participation in a research study.48 Then, just before they left, he offered them the option to trade their bottles among other participants. At first, the more participants knew about wine, the more reluctant they were to trade, no matter the wine.  A second experiment added product knowledge to make it easier to value the wine. The result? When wine values were roughly equal, participants were more willing to trade. The conclusion was that the harder people find it to compute the net gains of a trade, the more likely they will feel fearful and hang on to what they have.

8.  ZERO-RISK BIAS  (also called "certainty bias", "certainty effect" and "100% effect")

Definition: A preference for reducing a small risk to zero over achieving a greater reduction in a larger risk. People prefer completely eliminating one risk even if other options would produce greater overall risk reduction.


Why it's a problem:
 This risk aversion bias and discomfort with uncertainty lead us to conserve our resources and sometimes sway our decisions to our detriment. Take a drug that may offer some good for the greater number of patients but for a small minority results in unwanted side effects.49 Zero-risk bias can result in the removal of the drug because the risk that the few might be harmed takes precedence over the benefit that many might gain. Our discomfort with uncertainty can be manipulated because we find a 100% reduction in an element of risk so compelling. An advertising expert counsels advertising agencies making client pitches to "try and make anything you pitch risk free. I know that sounds near on impossible, but the fact is that no risk is more palatable to humans than some risk. Use risk to your advantage when working against competitors." 50

Examples:

  • That's OK: Let the others die. Researchers from the University of Pennsylvania and the University of Oklahoma asked people to pick between three different clean-up plans for two hazardous waste sites.51 One site was causing eight cases of cancer per year and the other was causing only four per year. Two different plans involved different clean-up efforts that reduced the total cancer diagnoses over the two sites by six cases a year. A third plan only reduced the number of cases by five per year but eliminated the cancer risk entirely at one site. While choosing either of the first two plans would result in greater reduction of cancer diagnoses, 42% of the people chose the least effective plan, number three, because it totally eliminated cancer risk from one area.

  • It's just poison. Researchers from Northwestern University and Duke University showed 785 people at a shopping mall and a hardware store a can of insecticide with a current price of $10. For this insecticide, the rate of poisoning and other injuries was 15 injuries for every 10,000 bottles sold.52  When asked what they would be willing to pay to reduce or eliminate the risk of injury, people said they were willing to pay $1.04 extra to reduce the risk of injury from 15 to 10 per 10,000 bottles, but they would pay more, $2.41 extra, to reduce the risk from five injuries per 10,000 bottles to zero injuries.  For the same reduction in the number of injuries, people were willing to pay more than twice as much additional when the end result was zero injuries.  The researchers called this "striking evidence for the existence of a certainty premium that more than compensates for the decline in willingness to pay that the rational economic model predicts should occur." (The per-injury arithmetic is worked through in the short sketch after this list.)

  • More safety trumps less disease. After the Boston Marathon bombings in 2013, Kevin Bonham of Harvard Medical School mused about the zero-risk bias as applied to terrorism.53 He noted that in 2012 the Transportation Security Administration (TSA) budget was $7.8 billion while the Centers for Disease Control and Prevention (CDC) budget was $6.1 billion. He posed the idea that doubling the CDC’s budget - by reducing the TSA's budget by $6.1 billion to $1.7 billion - might reduce the incidence of infectious and non-communicable diseases by 2% and thereby reduce deaths in the U.S. by 30,000 a year.  But, he asked, if that transfer meant the TSA would not have the resources to prevent the annual equivalent of a 9/11 scale terrorist attack - and thereby result in 2,996 deaths  - "Would you take that deal? I certainly wouldn’t and I’d venture to guess that any politician proposing such a scheme would be run out of office."

  • So what if my blood pressure is high? University of Oxford researcher Joao Fabiano studied cognitive enhancers, such as stimulants including caffeine.54  He notes that in discussions about his research, people compare a cognitive enhancer's risks with the absence of risk. If the cognitive enhancer's risk is greater than zero, people mentally classify it as risky. However, he comments, not taking a cognitive enhancer can lead to deaths, accidents and injuries caused by drivers and others who are "cognitively deprived," and "most people committing this bias were already on a cognitive enhancer, which was known to be pretty risky, namely, caffeine." In a specific case, people would be alarmed that one study showed that users of modafinil, a wakefulness-promoting agent, reported a slight rise in blood pressure, but they would not recognize that "they were already taking caffeine, a drug several studies already found [had] a bigger effect on blood pressure." Fabiano concludes that the cause is zero-risk bias: "One should compare modafinil’s rise in blood pressure with caffeine’s and not with no effect on blood pressure whatsoever."
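
The insecticide result rewards a second look. Here is the per-injury arithmetic as a small sketch (the dollar figures come from the study as quoted above; the division is mine):

```python
# Both options below remove the same five injuries per 10,000 bottles sold,
# yet ending at zero risk commanded more than twice the price.
wtp_partial = 1.04  # extra $ to cut injuries from 15 to 10 per 10,000 bottles
wtp_to_zero = 2.41  # extra $ to cut injuries from 5 to 0 per 10,000 bottles

print(f"partial cut:  ${wtp_partial / 5:.3f} per injury avoided")
print(f"cut to zero:  ${wtp_to_zero / 5:.3f} per injury avoided")
print(f"certainty premium: {wtp_to_zero / wtp_partial:.1f}x")  # about 2.3x
```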

9.  PSEUDO CERTAINTY EFFECT (also called the "illusion of certainty")

Definition: The tendency for people to make choices that limit risk if the expected outcome is positive, but to make riskier choices if the expected outcome is negative.


Why it's a problem:
 In the world of investing, it means that investors will limit their exposure to risk if they think their returns will be positive, to protect gains, but they will seek more risk if it looks like their returns will be negative, in a wrongheaded attempt to rescue themselves.55 Investors should be doing the reverse: taking on risk when returns are positive and limiting risk when returns are negative.  Researchers observe that the pseudo certainty effect can apply in conflict situations, as well.20  It lowers the probability that a party will offer concessions if it believes those concessions might give an opponent an advantage in a possible future conflict. At the same time, it raises the probability of conflict by offering a worst-case scenario of the other’s intentions.

Examples:

  • It's 1,000 lives either way.  Nobel Prize winner Daniel Kahneman and his associate Amos Tversky had people choose between two disease control programs addressing two different diseases, each of which kills 1,000 people per year.56 The first program would eradicate the first disease. The second program would reduce deaths from both diseases by half, with no way to identify who otherwise would have died. The researchers found in this and similar experiments that people are more prone to select the option that evokes "the illusion of certainty," in this case by choosing the option that eradicates one disease - even though the reduction in deaths from both programs is identical.

  • It's a 10% reduction no matter what. Researchers in Oregon first conducted a study that showed that people were more attracted to a vaccine that was described as eliminating a 10% risk for one of two equally probable diseases than a vaccine that was described as reducing the risk for one disease from 20% to 10%.57  Their second study presented people with vaccine choices: 40% of people chose a vaccine for a disease that afflicts 20% of the population that would protect recipients with 50% probability, while 57% of people chose a vaccine for two strains of a disease, each afflicting 10% of the population, which was completely effective against one strain but offered no protection for the other. Of course, in all cases the vaccine reduced risk from 20% to 10% but the complete elimination of one risk was perceived as better.

  • Jumping to a pseudo certain conclusion. In another study by Kahneman and Tversky, people preferred a 20% chance to win $45 over a 25% chance to win $30.56  But people's preferences changed when the same choices were framed as a two-step process: The person had a 25% chance to win in the first phase and, if he or she did win, would then get their pre-made choice between an 80% chance of winning $45 or a 100% chance of winning $30.  Making people choose what they would do in step two before the outcome of step one was known revealed that people ignored the obvious uncertainty in the first step.  Most people chose the 100% certain result. Simple mathematics shows that in the two-step situation the resulting odds of winning $45 were 20% (25% x 80%) and the odds of winning $30 were 25% (25% x 100%).  The ultimate odds didn't change when the choices were offered in two steps, but people jumped to the pseudo 100% option in the second step and ignored the uncertainty in the first step, as the short calculation below confirms.
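
Here is the two-step gamble worked through (a sketch of the arithmetic described above, not Kahneman and Tversky's materials): the compound odds match the one-step gambles exactly, so only the framing differs.

```python
# Chance of reaching step two at all.
p_step1 = 0.25

p_win_45 = p_step1 * 0.80  # pre-chose the "80% chance of $45" branch
p_win_30 = p_step1 * 1.00  # pre-chose the "100% chance of $30" branch

print(f"P(win $45) = {p_win_45:.0%}")              # 20%, same as the one-step offer
print(f"P(win $30) = {p_win_30:.0%}")              # 25%, same as the one-step offer
print(f"EV of $45 branch = ${45 * p_win_45:.2f}")  # $9.00
print(f"EV of $30 branch = ${30 * p_win_30:.2f}")  # $7.50
```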

10.  CURRENT MOMENT BIAS (also called "present bias" and "hyperbolic discounting")

Definition: The tendency for people to have a stronger preference for more immediate payoffs relative to later payoffs. Humans find it difficult to modify current behavior for future benefit.  In fact, our brains have separate systems for assessing the value of immediate and delayed rewards. 
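
The "hyperbolic discounting" name comes from the shape of the value curve. Here is a minimal sketch, assuming the standard one-parameter form V = A / (1 + kD); the reward amounts and the k value are illustrative assumptions, not figures from the studies below.

```python
def hyperbolic_value(amount: float, delay_days: float, k: float = 0.1) -> float:
    """Perceived present value of `amount` received after `delay_days`."""
    return amount / (1 + k * delay_days)

# An immediate small reward beats a larger delayed one...
print(hyperbolic_value(10, 0), hyperbolic_value(15, 7))    # 10.0 vs ~8.8: take $10 now

# ...but shift BOTH options a month into the future and the preference flips,
# the time inconsistency that a constant (exponential) discount rate never produces.
print(hyperbolic_value(10, 30), hyperbolic_value(15, 37))  # 2.5 vs ~3.2: wait for $15
```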



Why it's a problem:
 Because of the current moment bias, what we choose today is often what our future self would prefer we had not chosen.  For consumers, it's why people don't exercise, eat right, save more or keep New Year's resolutions. For corporations, it leads senior management to maximize quarterly profit rather than invest in the business for greater gains in the long term.

Examples:

  • Healthy later, chocolate now. Behavioral economist Daniel Read and his associate asked people to choose a snack to eat, an orange or a chocolate bar.58  When given the choice for what to eat next week, 74% chose the orange.  When given the choice for what to eat today, 70% chose the chocolate bar.

  • I'll get smarter, but not tonight. Several economists and psychologists had people decide what three movies they would rent for viewing over a three-day period.59 The 24 choices were "low brow" light and fun comedies and action movies, and "high brow" more culturally enriching and educational movies. Those who rented all three movies ahead of the viewing period tended to choose more "high brow" movies: If the rentals were for the next week, 63% chose high brow; if they were for the week after next, 71% chose high brow.  However, 66% of those who rented each movie right before viewing chose "low brow."

  • More later does not matter.  Hunger, strong emotions and other "visceral states" can affect people's decision making in ways that are not in their long-term interest, according to Carnegie Mellon behavioral economist George Loewenstein.60  To demonstrate the point, he and other scientists studied a group of very thirsty people.61  When given the choice of drinking a smaller amount of juice now versus twice as much juice in five minutes, 60% chose to drink half as much now.  When the choice was between a smaller amount of juice in 20 minutes and twice as much juice in 25 minutes, only 30% chose the smaller, sooner amount.

  • Making the future real. Hal Hershfield, a professor of marketing at UCLA’s Anderson School of Management, and other researchers asked people to hypothetically allocate money among three options: their retirement account, a fun occasion, or buying "something nice for someone special."62 People who viewed photos of themselves digitally altered to show what they would look like in several decades consistently saved significantly more, in some cases as much as doubling the amount they allocated to their retirement account.

The list above is just a start in understanding the hundreds of psychological, perception, memory, logic, physical and social factors that lead us to make bad decisions and get poor outcomes in our personal, professional and business lives.  For a broader view of the road I am on to find and deliver tools and methods that will help us assure better decisions and better results, please visit my Indiegogo site, Make Big Decisions Better.

References

  1. Simon Oxenham. "When Evidence Backfires." Neurobonkers. 2014.  http://bigthink.com/neurobonkers/when-evidence-backfires

  2. Brendan Nyhan, Jason Reifler. "Does correcting myths about the flu vaccine work? An experimental evaluation of the effects of corrective information." Vaccine, 2014; DOI: 10.1016/j.vaccine.2014.11.017

  3. Brendan Nyhan, Jason Reifler. "When Corrections Fail: The Persistence of Political Misperceptions." Political Behavior, 2010; 32 (2): 303 DOI: 10.1007/s11109-010-9112-2

  4. Skurnik I, Yoon C, Schwarz N.  “'Myths & Facts' about the flu: Health education campaigns can reduce vaccination intentions." 2007.  http://www.granthalliburton.org/images/Skurnik.pdf

  5. Clark, Theo.  "False Cause: Correlation Error."  The Skeptic's Field Guide to Spotting Fallacies and Deceptive Arguments. http://www.skepticsfieldguide.net/2012/10/false-cause-correlation-error.html

  6. Drake, Tom. "Drake's List of The Most Common Logical Fallacies." University of Idaho. http://www.webpages.uidaho.edu/eng207-td/Logic%20and%20Analysis/most_common_logical_fallacies.htm

  7. Furness S, Connor J, Robinson E, Norton R, Ameratunga S, Jackson R et al. "Car colour and risk of car crash injury: population based case control study." BMJ 2003; 327:1455

  8. Jones J. "Can rumors cause cancer?" Journal of the National Cancer Institute 2000; 92(18):1469–1471. http://jnci.oxfordjournals.org/content/92/18/1469.long

  9. Chase, Stuart. Guides to Straight Thinking, With 13 Common Fallacies. New York: Harper, 1956. OCLC 307334

  10. Debbie A Lawlor, George Davey Smith and Shah Ebrahim. "Commentary: The hormone replacement–coronary heart disease conundrum: is this the death of observational epidemiology?" Int. J. Epidemiol. (2004) 33 (3): 464-467. doi: 10.1093/ije/dyh124 http://ije.oxfordjournals.org/content/33/3/464.full

  11. Andrew W Brown, Michelle M Bohan Brown, and David B Allison. "Belief beyond the evidence: using the proposed effect of breakfast on obesity to show 2 practices that distort scientific evidence." Am J Clin Nutr. 2013 Nov;98(5):1298-308. doi: 10.3945/ajcn.113.064410. Epub 2013 Sep 4.

  12. David G Schlundt, James O Hill, Tracy Sbrocco, Jamie Pope-Cordle, and Teresa Sharp. "The role of breakfast in the treatment of obesity: a randomized clinical trial." Am J Clin Nutr. 1992 Mar; 55(3):645-51.

  13. Vigen, Tyler. "Spurious Correlations." http://www.tylervigen.com/

  14. John Ng and Sophia Ang, "Attribution Bias: Challenges, Issues, and Strategies for Mediation," Mediation Quarterly, vol. 16, no. 4 (1999).

  15. Sterman, J. (1994). “Learning In and About Complex Systems,” System Dynamics Review 10, 291–330

  16. Palmer, Nathan. "Teaching The Fundamental Attribution Error." Sociology Source.  March 14, 2011 http://thesocietypages.org/sociologysource/2011/03/14/teaching-the-fundamental-attribution-error/

  17. Blackburne, Liva. "Using the Fundamental Attribution Error to Control Character Sympathy." A Brain Scientist's Take on Writing. October 12, 2009.  http://blog.liviablackburne.com/2009/10/using-fundamental-attribution-error-to.html

  18. Sanchez, Julian. "The ‘47 Percent’ and the Fundamental Attribution Error."  September 18, 2012. http://www.cato.org/blog/47-percent-fundamental-attribution-error

  19. Elliot, Christopher. "Comcast thinks my husband is an a**hole – and they put it in writing." January 28, 2015  http://elliott.org/is-this-enough-compensation/comcast-thinks-husband-ahole-put-writing/

  20. Daniel Kahneman and Jonathan Renshon. "Hawkish Biases", in Trevor Thrall and Jane Cramer (eds.), American Foreign Policy and the Politics of Fear: Threat Inflation Since 9/11. (New York: Routledge Press, 2009), 79-96. http://www.princeton.edu/~kahneman/docs/Publications/Hawkish%20Biases.pdf

  21. Druker, Michael. "The fundamental attribution error in transportation choice." March 15, 2010 . Psystenance: Sustainability through the mind's eye. http://psystenance.com/2010/03/15/the-fundamental-attribution-error-in-transportation-choice/

  22. Dean, Jeremy. "The Illusion of Truth." PSYBLOG: Understand your mind. December 8, 2010. http://www.spring.org.uk/2010/12/the-illusion-of-truth.php

  23. Ozubko, J. D., and Fugelsang, J. (2010, November 8). "Remembering Makes Evidence Compelling: Retrieval From Memory Can Give Rise to the Illusion of Truth." Journal of Experimental Psychology: Learning, Memory, and Cognition. doi:10.1037/a0021323

  24. Chowdhury R, Warnakula S, Kunutsor S, et al. "Association of Dietary, Circulating, and Supplement Fatty Acids With Coronary Risk: A Systematic Review and Meta-analysis." Annals of Internal Medicine. Published online March 18, 2014

  25. Pew Forum on Religion & Public Life. "Growing Number of Americans Say Obama is a Muslim." August 18, 2010. http://www.pewforum.org/2010/08/18/growing-number-of-americans-say-obama-is-a-muslim/

  26. Nielsen Global Survey of Trust in Advertising, "Global Trust in Advertising and Brand Messages." September 2013. http://www.pnrc.net/wp-content/uploads/2014/01/Nielsen-Global-Trust-in-Advertising-Report-September-2013.pdf

  27. Judd, Jackie. "Spotting the fakes among the five-star reviews." PBS Newshour. January 19, 2015. http://www.pbs.org/newshour/bb/spotting-fakes-among-five-star-reviews/

  28. Dunning, David. "Strangers to ourselves?" The Psychologist. The British Psychological Society. October 2006, Volume 19 (pp 600-603) https://thepsychologist.bps.org.uk/volume-19/edition-10/strangers-ourselves

  29. Svenson, Ola. "Are we all less risky and more skillful than our fellow drivers?"  Acta Psychologica, Volume 47, Issue 2, February 1981, Pages 143-148

  30. Gordon, Mark Adam. "Evaluating the Balloon Analogue Risk Task (BART) as a Predictor of Risk Taking in Adolescent and Adult Male Drivers."  The University of Waikato, 2007. http://researchcommons.waikato.ac.nz/bitstream/handle/10289/2455/thesis.pdf?sequence=1

  31. Barbara Freund and Maximiliane Szinovacz. "Effects of Cognition on Driving Involvement Among the Oldest Old: Variations by Gender and Alternative Transportation Opportunities." The Gerontologist (2002) 42 (5): 621-633 doi:10.1093/geront/42.5.621

  32. Ross, John A. "The Reliability, Validity, and Utility of Self-Assessment." Practical Assessment, Research & Evaluation.  Volume 11 Number 10, November 2006 ISSN 1531-7714 . http://pareonline.net/getvn.asp?v=11&n=10

  33. "It's Academic." 2000. Stanford GSB Reporter, April 24, pp.14–5. via Zuckerman, Ezra W.; John T. Jost (2001). "What Makes You Think You're So Popular? Self Evaluation Maintenance and the Subjective Side of the 'Friendship Paradox.'" Social Psychology Quarterly (American Sociological Association) 64 (3): 207–223. doi:10.2307/3090112. JSTOR 3090112

  34. Cross, P. (1977). "Not can but will college teachers be improved?" New Directions for Higher Education 17: 1–15. doi:10.1002/he.36919771703.

  35. Cowen, Tyler. "Self-Deception as the Root of Political Failure." April 11, 2003. George Mason University. https://www.gmu.edu/centers/publicchoice/faculty%20pages/Tyler/PrideandSelf.pdf

  36. Daniel R. Ames, Lara K. Kammrath. "Mind-Reading and Metacognition: Narcissism, not Actual Competence, Predicts Self-Estimated Ability." Journal of Nonverbal Behavior, 2004, Volume 28, Number 3, Page 187 http://www.columbia.edu/~da358/publications/ames_kammrath_mindreading.pdf

  37. Mara Mather, Eldar Shafir & Marcia K. Johnson. "Misremembrance of Options Past: Source Monitoring and Choice." 11 PSYCHOL. SCI. 132, 137 (2000) https://psych.princeton.edu/psychology/research/shafir/pubs/Misremembrance%202000.pdf

  38. Bahrick HP, Hall LK, Da Costa LA. "Fifty years of memory of college grades: accuracy and distortions."Emotion. 2008 Feb;8(1):13-22. doi: 10.1037/1528-3542.8.1.13

  39. Harry P. Bahrick, Lynda K. Hall, and Stephanie A. Berger. "Accuracy and Distortion in memory for High School Grades." Psychological Science. Vol. 7, No. 5, September 1996, pp 265-271 http://www-afs.secure-endpoints.com/afs/umich.edu/user/d/g/dgrey/SM632%20Homework/Accuracy%20for%20remembering%20High%20School%20Grades.pdf

  40. Mitchell, T., & Thompson, L. (1994). "A theory of temporal adjustments of the evaluation of events: Rosy Prospection & Rosy Retrospection." In C. Stubbart, J. Porac & J. Meindl (Eds.), Advances in managerial cognition and organizational information-processing (Vol. 5, pp. 85-114). Greenwich, CT: JAI Press

  41. Wirtz, D., Kruger, J., Scollon, C.N. and Diener, E. (2003). "What to do on Spring Break? The role of predicted, on-line, and remembered experience in future choice." Psychological Science, 14, 520–524

  42. Lemm, Kristi M., and Derrick Wirtz. "Exploring 'rosy' bias and goal achievement in marathon runners." Journal of Sport Behavior 36.1 (2013): 66+. http://go.galegroup.com/ps/i.do?id=GALE%7CA321334820&v=2.1&u=mnastcat&it=r&p=EAIM&sw=w&asid=835814f256ce230259d80f3c02ae0c3b

  43. Daniel Kahneman, Jack L. Knetsch and Richard H. Thaler. "Experimental Tests of the Endowment Effect and the Coase Theorem." Journal of Political Economy. Vol. 98, No. 6 (Dec., 1990), pp. 1325-1348

  44. Jones, Owen D. and Brosnan, Sarah F. "Law, Biology, and Property: A New Theory of the Endowment Effect" (2008). William & Mary Law Review, Vol. 49, 2008; Vanderbilt Public Law Research Paper No. 08-06; Vanderbilt Law and Economics Research Paper No. 08-14

  45. The Economist. "The endowment effect: It’s mine, I tell you. Mankind’s inner chimpanzee refuses to let go. This matters to everything from economics to law."  Jun 19th 2008 http://www.economist.com/node/11579107

  46. Carmon, Ziv, and Dan Ariely. "Focusing on the forgone: How value can appear so different to buyers and sellers." Journal of Consumer Research 27.3 (2000): 360-370.

  47. Sean Tamm. "Can Real Market Experience Eliminate the Endowment Effect?" Stetson University.

  48. van Dijk, Eric and Daan van Knipenberg (1998) "Trading Wine: On the Endowment Effect, Loss Aversion, and the Comparability of Consumer Goods." Journal of Economic Psychology, 19, 485-495.

  49. Paul Petillo. "Embracing Zero Risk Bias." Financial Sense Editorials.  February 23, 2009.  http://www.financialsensearchive.com/editorials/petillo/2009/0223.html

  50. John Jackson. "Marketing – It’s all about the Heuristics." Psych Matters.  April 9, 2011 http://psychmatters.co/2014/04/marketing-heuristics-john-jackson/

  51. Baron, J., Gowda, R., & Kunreuther, H. (1993). "Attitudes toward managing hazardous waste: What should be cleaned up and who should pay for it?" Risk Analysis. 13, 183-192

  52. W. Kip Viscusi; Wesley A. Magat;  Joel Huber. "An Investigation of the Rationality of Consumer Valuations of Multiple Health  Risks." The RAND Journal of Economics,  Vol. 18, No. 4. (Winter, 1987), pp. 465-479

  53. Kevin Bonham. "Boston Lockdown--Fear, Uncertainty and Bias."  Scientific American.  April 29, 2013. http://blogs.scientificamerican.com/guest-blog/2013/04/29/boston-lockdown-fear-uncertainty-and-bias/

  54. Joao Fabiano. "How people are wrong about cognitive enhancement and how to fix it." Practical Ethics. University of Oxford. February 18, 2015. http://blog.practicalethics.ox.ac.uk/2015/02/how-people-are-wrong-about-cognitive-enhancement-and-how-to-fix-it/

  55. Andrew Beattie. "4 Psychological Traps That Are Killing Your Portfolio." Investopedia. http://www.investopedia.com/articles/financial-theory/11/psychological-factors-that-hurt-your-portfolio.asp

  56. Tversky, Amos; Kahneman, Daniel (1981). "The framing of decisions and the psychology of choice". Science 211 (4481): 453–458. doi:10.1126/science.7455683 http://psych.hanover.edu/classes/Cognition/papers/tversky81.pdf

  57. Paul Slovic, Baruch Fischhoff, and Sarah Lichtenstein (1981) ,"Facts and Fears: Societal Perception of Risk." Advances in Consumer Research. Volume 08, eds. Kent B. Monroe, Ann Abor, MI: Association for Consumer Research, Pages: 497-502

  58. Read, D., and van Leeuwen, B. (1998). "Predicting Hunger: The Effects of Appetite and Delay on Choice." Organizational Behavior and Human Decision Processes, 76(2), 189-205

  59. Read, D., Loewenstein, G., & Kalyanaraman, S. (1999, December). "Highbrow films gather dust: Time-inconsistent preferences and online DVD rentals." Management Science. 55(6), 1047–1059

  60. Loewenstein, G. (2000). "Emotions in economic theory and economic behavior." The American Economic Review, 90(2), 426-432.  http://www.cmu.edu/dietrich/sds/docs/loewenstein/emotionsEconTheory.pdf

  61. McClure, Laibson, Loewenstein and Cohen. "Separate Neural Systems Value Immediate and Delayed Monetary Rewards." Science. 2004 Oct 15;306(5695):503-7

  62. Hershfield HE, Goldstein DG, Sharpe WF, et al. "Increasing Saving Behavior Through Age-Progressed Renderings of the Future Self." JMR, Journal of marketing research. 2011;48:S23-S37. doi:10.1509/jmkr.48.SPL.S23.

