Misuse of evidence can zap your strategy

Bad strategic decisions can result from insufficient or misused evidence. We are programmed to immediately "fit" the information at hand to our experience, a tendency magnified by whatever is most recent and whatever seems most "like" the situation, whether or not the information is applicable or sufficient.

Here's an example of a seemingly very smart organization that stumbled because of insufficient and misused evidence.

WHEN FEDEX STUMBLED

In the mid-1980s, FedEx read the data on rising fax usage as a sign that business faxing was becoming the "hot" disruptive business communications technology. The company decided that, just as it delivered print documents and packages through its physical network, it could use that same network to let customers send and receive fax communications.

FedEx got on what it saw as the fax bandwagon by investing in what was then the most advanced fax equipment. It launched Zapmail, a service that let customers send and receive very high-quality faxes through FedEx offices or, for large organizations, through machines installed in their own offices. (I am so old I remember this service and even used it once!)

The service had two versions:

  • In the first, FedEx would pick up the printed document from the sender, fax it from its local office to the FedEx office nearest the recipient, and then print the document and deliver it to the recipient.

  • In the second, FedEx would install and maintain a Zapmailer fax machine in the client’s offices.

FedEx established a separate packet-switched data network for the service, rather than using the public telephone network that other fax machines used.

FedEx believed customers would pay a premium to send and receive high-quality faxes and have their documents either arrive on site or delivered in a few hours. It also expected to see cost savings by moving some document flow from air and ground transportation to electronic delivery.

But almost nobody used Zapmail, or at least not enough people to begin covering the cost of equipping FedEx's offices with expensive high-quality fax equipment and of staffing and marketing the service. Zapmail's failure cost FedEx $320 million.

What went wrong? 

  • The Zapmailer used a communications protocol that could not interoperate with the growing number of Group 3 fax machines that customers were using.

  • The service experienced quality problems, which the company planned to address with upgraded technology that used the Group 3 protocol and satellite transmission (a plan that the Challenger space shuttle disaster killed).

  • The decision makers at FedEx overestimated customers' willingness to pay as much as $3.50 a page to send and receive faxes (a price that was later cut to $1.00 a page).

  • FedEx's leaders did not look further to spot the plummeting cost and rising quality of commercially available fax machines: it was becoming easier and cheaper, especially for larger companies, to buy their own equipment and send and receive faxes from their own offices.

  • The company's strategists failed to recognize the emergence of email as the real disruptive one-to-one business communications technology.

HOW EVIDENCE (OR LACK OF IT) CAN TRIP US UP

The mental traps and biases that can affect how decision makers and groups evaluate and use evidence include:

Availability heuristic. We often base judgments on the information most readily available to us, even when it is insufficient or unrepresentative.

Biased generalizing. We can misassess the situation by drawing a conclusion from a biased or insufficient sample.

Representativeness heuristic. We often judge the likelihood of an occurrence by matching it with a category or past circumstance, which causes errors when the category or circumstance does not actually fit. Representativeness leads us to overestimate likelihood: the fact that something resembles a familiar pattern does not make it more probable, because the underlying base rate still governs how often it occurs.
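To see the trap in numbers, here is a minimal sketch in Python with invented figures (none of them come from the survey or the FedEx case): a market signal that strongly resembles the run-up to a boom still implies a low probability of a boom once the base rate is factored in.

```python
# Toy Bayes' theorem calculation; all probabilities are invented for illustration.
p_boom = 0.05                  # base rate: booms occur in 5% of periods
p_signal_given_boom = 0.90     # 90% of booms are preceded by the signal
p_signal_given_no_boom = 0.30  # but 30% of non-boom periods show it too

# Total probability of observing the signal
p_signal = (p_signal_given_boom * p_boom
            + p_signal_given_no_boom * (1 - p_boom))

# Bayes' theorem: P(boom | signal)
p_boom_given_signal = p_signal_given_boom * p_boom / p_signal
print(f"P(boom | signal) = {p_boom_given_signal:.2f}")  # ~0.14
```

The signal is highly "representative" of a boom, yet the probability of a boom given the signal is only about 14%, because booms themselves are rare.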

Ambiguity effect. We tend to choose the option with a clear probability of success rather than one whose probability is less clear because of missing information. "Better the devil we know than the one we don't" goes the thinking. But the option with an unclear probability may have an even higher probability of success and provide a better outcome.
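To make the point concrete, here is a toy comparison with invented numbers, assuming (purely for illustration) that the ambiguous option's unknown success probability is equally likely to fall anywhere in a plausible range:

```python
# A "known" option with a clear probability of success
p_known = 0.40

# An "ambiguous" option: its probability is unknown, but plausibly
# anywhere between 20% and 80%; with no further information, treat it
# as uniform over that range, so its expected value is the midpoint.
low, high = 0.20, 0.80
p_ambiguous_expected = (low + high) / 2  # 0.50

print(f"Known option:     {p_known:.0%} chance of success")
print(f"Ambiguous option: {p_ambiguous_expected:.0%} expected chance of success")
# The ambiguity effect pulls decision makers toward the 40% option,
# even though the ambiguous option is better in expectation.
```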

Ignoring parsimony. Parsimony means favoring the simplest assumption when forming a theory or interpreting data. In assessing opportunities, threats, and options, parsimony says to start with the explanation that is simplest and appears most likely.

Not considering degrees of freedom. We can "overfit" the available data, modeling a predicted outcome that the evidence cannot support. For developing the best strategies, complex interpretations built on limited information (that is, non-parsimonious models that leave few degrees of freedom in limited data) are the ones most likely to be exposed as wrong as more information is introduced.
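Here is a minimal sketch of that overfitting problem, using made-up data: a quartic with zero remaining degrees of freedom fits five noisy observations exactly, while a simple two-parameter line generalizes far better when new evidence arrives.

```python
import numpy as np

rng = np.random.default_rng(0)

# Five noisy observations of a simple underlying trend (y = 2x + noise)
x = np.linspace(0, 4, 5)
y = 2 * x + rng.normal(0, 1, size=5)

# Parsimonious model: a straight line (2 parameters, 3 degrees of freedom left)
line = np.polyfit(x, y, deg=1)

# Non-parsimonious model: a quartic (5 parameters, 0 degrees of freedom)
# that passes through every observation, noise included
quartic = np.polyfit(x, y, deg=4)

# New evidence arrives: a point just beyond the observed range
x_new = 5.0
print(f"True trend value: {2 * x_new:.1f}")
print(f"Line predicts:    {np.polyval(line, x_new):.1f}")
print(f"Quartic predicts: {np.polyval(quartic, x_new):.1f}")
# The quartic reproduces the sample perfectly but has modeled the noise,
# so its extrapolation typically lands far from the underlying trend.
```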

Priming. When information temporarily brought into memory by exposure or events stimulates our mental representations, we are being primed, and that information can then shape our evaluations and actions. We are often unaware of being primed, or we don't believe it will influence us. For example, people were more likely to believe in global warming when asked about it on hotter days and when they had been primed with words relating to heat. Chance exposures to irrelevant or inaccurate information can unconsciously prime our decisions.

EVIDENCE OF WHAT ORGANIZATIONS REALLY DO

Our 2016 Strategic Leader Survey looked at the extent to which leaders and their organizations are prone to decision errors from using insufficient evidence, or misusing evidence, when assessing strategic threats and opportunities.

The survey results show varied use of "best practices" for understanding evidence of opportunities and threats.

  • A majority of organizations (69%) continue to seek more evidence and nearly half (44%) look for alternative explanations for what the evidence might suggest. But that means that a significant share of organizations just take evidence at face value rather than seeking to validate it or otherwise explain it.

  • Even fewer organizations (36%) consider the possibility that the evidence reflects an exception that will in the future revert to the mean ("go back to normal"). Just 9% consider whether the evidence is the result of luck, which is outside our control and likely non-repeatable.

  • A rational way to assess evidence is to attempt to compute probabilities for what it seemingly shows. Yet, only 20% of organizations use a process to compute the probabilities of various outcomes suggested by the evidence. And only 25% assess the evidence by looking for the base rate for the seeming opportunity or threat (the best available information on its likelihood).

  • We are prone to seeing patterns in evidence even when nothing is there, or to overemphasizing what is there. One cause is priming, when current events or adjacent activities warp perceptions (e.g., causing us to see threats as larger than they really are or to chase illusory opportunities). In 75% of organizations there is no consideration of whether decision makers have been primed, leaving them open to making bad decisions by overweighting evidence or relying on spurious evidence.

  • An even more dangerous practice is to impose a story on limited evidence, to explain the situation the evidence suggests. Imposing a story encourages decision makers to ignore or discount future evidence that does not fit the story. More than a quarter of organizations (28%) pursue the dubious practice of developing explanatory stories.

The lack of these best practices in assessing evidence helps explain why so many organizations adopt strategies that do not work out. Misused evidence is a root cause of why they may experience their own version of FedEx's Zapmail fiasco.
