Can “Big Data” deliver “the right decision”?

Note: This post on the limitations of using algorithms to make big decisions is part of a chapter, "The madness of not knowing," in my upcoming book, Big Decisions: Why we make decisions that matter so poorly. How we can make them better.

One idea for maximizing the gain we get from decisions is to use machines to help us make them or even have machines make the big decisions for us.

But can an algorithm be perfected to always yield “the right decision”?

An algorithm is a process or set of rules used in calculations or problem-solving.[i] “Artificial intelligence” (AI) algorithms that process “Big Data” use logic rules and mathematics to solve problems and produce answers.

These algorithms engage in “machine learning” or “deep learning.” Instead of a programmer writing the commands to solve a problem, the program generates its own algorithm based on example or training data and a desired output.[ii]
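To make this concrete, here is a minimal sketch in Python; the training data and learning rate are illustrative assumptions, not drawn from any particular system. Rather than a programmer hand-coding the rule y = 2x, the program infers it from examples:

    # Training examples: inputs paired with the desired outputs.
    examples = [(1, 2), (2, 4), (3, 6), (4, 8)]

    w = 0.0              # the single parameter the program will "learn"
    learning_rate = 0.01

    for _ in range(1000):                    # repeatedly reduce the error
        for x, desired in examples:
            error = w * x - desired
            w -= learning_rate * error * x   # nudge w toward a better fit

    print(round(w, 3))   # ~2.0 -- the rule was inferred, not written by hand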

Algorithmic limitations

In 2016, Pew Research Center and Elon University surveyed 1,302 technology experts, scholars, corporate practitioners and government leaders for their views on the impact of algorithms over the next decade. On net, the respondents were about equally divided on whether the effect of algorithms, big data and artificial intelligence would be positive or negative.

What’s most relevant for the potential of using algorithms for making or helping us make our big decisions is that, at least on the level of current application, the output of machine learning algorithms is shaped by:

  • Human-made design decisions.

  • Rules about what to optimize.

  • Choices about what training data to use.[iii]

Such “deep learning” AI algorithms only work well where the problem domain is well-understood and training data is available. They require a stable environment where future patterns are similar to past ones. Their decisions are only as unbiased as the data with which they were trained, and, because of the grounding in past data, “this supposedly disruptive technology cannot cope well with disruptive change.”[iv]
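A toy Python sketch, with hypothetical numbers, shows how bias in training data flows straight through: a model “trained” on skewed historical decisions simply learns the skew.

    # Hypothetical historical hiring data: group -> (applicants, hired).
    historical = {
        "group_a": (100, 60),
        "group_b": (100, 20),   # historically under-selected
    }

    # "Training" here is just estimating P(hired | group) from past data.
    learned_rate = {group: hired / total
                    for group, (total, hired) in historical.items()}

    # The model recommends candidates at the historical rates,
    # so past bias becomes future policy.
    for group, rate in learned_rate.items():
        print(f"{group}: recommend with probability {rate:.2f}")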

To illustrate, Justin Reich, executive director at the MIT Teaching Systems Lab, responded in the Pew survey that algorithms will inevitably benefit the people who design them — namely, educated white and Asian men.

Bart Knijnenburg, assistant professor in human-centered computing at Clemson University, replied to the survey, “The goal of algorithms is to fit some of our preferences, but not necessarily all of them: They essentially present a caricature of our tastes and preferences. My biggest fear is that, unless we tune our algorithms for self-actualization, it will be simply too convenient for people to follow the advice of an algorithm (or, too difficult to go beyond such advice), turning these algorithms into self-fulfilling prophecies, and users into zombies who exclusively consume easy-to-consume items.”

“Facebook’s struggle with fake news demonstrates that algorithms don’t always have the discernment a human would,” one observer notes.

By their very nature, machine-learning algorithms are effectively programming themselves. They learn through approximation and through what they get wrong – which means that using these algorithms requires us to accept the possibility of errors.[v]

And, at this juncture, algorithms have made some whopper errors.[vi] [vii]

  • YouTube’s algorithm placed advertisements from some of the biggest global brands on videos with hate speech.

  • Facebook’s algorithm posted violent videos in its users’ feeds.

  • Google’s algorithm directed people looking for information about the Holocaust to neo-Nazi websites.

  • Google’s photo app algorithm classified images of black people as gorillas.

  • Nikon’s smart camera algorithm thought Asian people were blinking.

  • Microsoft’s Tay algorithm was designed to learn to speak like millennials by interacting with people on Twitter and messaging apps — in less than a day it sent out such abhorrent misogynistic and racist messages that it had to be taken down.

  • Amazon’s algorithm for determining where it would roll out same-day delivery excluded poor urban ZIP codes.[viii]

  • COMPAS, a proprietary risk-assessment algorithm used to decide on the freedom or incarceration of defendants in the US criminal justice system, was alleged by ProPublica to be systematically biased against African Americans as compared to whites.[ix]

  • For a United Airlines flight from Chicago that was not overbooked, a corporate scheduling algorithm gave a deadheading flight crew priority over passengers. A corporate financial algorithm authorized gate employees to offer passengers up to $800 to take a later flight, and there were no volunteers. A customer-value algorithm then calculated the value of each passenger to United and flagged the lowest-value customers for removal from the flight. One of the passengers asked to leave the plane had to be forcibly removed, creating a nationally covered incident and bad press for United.[x]

But let’s not just dismiss the current level of effectiveness of algorithms. Behavioral scientist Jason Collins observes that algorithms are showing their value beyond what humans can quickly and fairly achieve in “domains that involve regular decisions in a largely constant environment about which we are able to gather data.” These domains include – or will soon include – routine medical diagnosis, predictive policing, games (e.g. chess and Go), risk score analysis and even self-driving cars. It’s “in complex, dynamic, and uncertain domains” where algorithms may not be trustworthy.[xi]

Yet, this is the state of AI at the present time. Given more data, more computing power and more machine learning, can we expect that at some point we will be able to safely outsource our high-level decision-making to algorithms? Can a future Siri or Alexa or Watson decide for us?

Two great 20th-century mathematicians show us that “Big Data” and AI will never provide the full answer to our need for the perfect decision.

Complete, consistent and decidable?

In 1900, German mathematician David Hilbert set out a series of problems for mathematicians to solve. Among his 23 problems, he asked whether there was a set of “basic truths” (axioms) from which all the statements in mathematics could be proven, without giving any contradictory answers (such as 2+2=5). He also asked if there was an algorithm that could determine if a statement was true or false, even if no proof or disproof was known. In other words, Hilbert was asking whether mathematics was “complete,” “consistent” and “decidable.”[xii]

In 1931, Austrian mathematician and logician Kurt Gödel proved that within any sufficiently powerful formal system there exist questions that are neither provable nor disprovable on the basis of the axioms that define the system. That is, there are true statements that are unprovable within the system; more is true in mathematics than can be proven.[xiii] [xiv] This is known as Gödel's First Incompleteness Theorem.[xv]

He also showed that if a sufficiently rich formal system is required to decide every question, it will contain contradictory statements; in particular, the system's consistency cannot be proven within the system itself.[xvi] Paradoxically, the only way to rid such a system of incompleteness appeared to be to adopt rules that contradict one another.[xvii] This is known as Gödel's Second Incompleteness Theorem.[xviii] In essence, “the theorem proved, using mathematics, that mathematics could not prove all of mathematics.”[xix]
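The core of the first theorem can be captured in one line. Informally, Gödel constructed a sentence G that asserts its own unprovability; the rendering below is a schematic sketch, not his full construction:

    % Schematic form of the Gödel sentence for a formal system F
    G \;\leftrightarrow\; \neg\,\mathrm{Prov}_F(\ulcorner G \urcorner)

If the system F is consistent, it can prove neither G nor its negation; yet G is true, because it says precisely that it is unprovable.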

Cognitive scientist Douglas Hofstadter gives a non-mathematical example to help us see what Gödel's incompleteness theorems show. He asks us to ponder how we can figure out whether we are sane. “Once you begin to question your own sanity, you get trapped in an ever-tighter vortex of self-fulfilling prophecies, though the process is by no means inevitable. Everyone knows that the insane interpret the world via their own peculiarly consistent logic; how can you tell if your own logic is ‘peculiar’ or not, given that you have only your own logic to judge itself? I don’t see any answer.”[xx]

Are algorithms the answer?

The essence of Hilbert’s “decidable” question (the Entscheidungsproblem in German) was whether an algorithm could be created to decide in a finite number of steps if any given mathematical statement was true or not.

Brilliant young British mathematician Alan Turing took on the “decidable” question.

You may know about Turing from the movie “The Imitation Game.” He left a stunning list of achievements in mathematics, computing, cryptology and even biology. During World War II, he was instrumental in cracking German messages encrypted by the Enigma machine, enabling the British to anticipate Germany's actions, ultimately helping the Allies win the war.[xxi] He also developed ideas in non-linear biological theory, which paved the way for chaos and complexity theories.[xxii]

In 1936, Turing developed the idea of an idealized computer, a hypothetical machine now called a “Turing machine.” He imagined it reading an endless tape imprinted with symbols, one at a time, then either rewriting or erasing the symbol and shifting the tape to the left or right, based on a pre-determined set of rules.[xxiii]

A Turing machine is essentially an algorithm. If it solves the problem, it stops and gives the answer. If it doesn’t solve the problem, it keeps trying forever.[xxiv]
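Here is a minimal sketch of such a machine in Python; the rule table is an illustrative assumption (a machine that flips every bit and halts at the first blank), not Turing's original formulation.

    # (state, symbol read) -> (symbol to write, head move, next state)
    rules = {
        ("start", "0"): ("1", "R", "start"),
        ("start", "1"): ("0", "R", "start"),
        ("start", "_"): ("_", "R", "halt"),   # "_" marks a blank cell
    }

    def run_turing_machine(rules, tape, state="start", budget=1000):
        """Run until the machine halts or the step budget runs out."""
        cells = dict(enumerate(tape))   # tape cells indexed by position
        head = 0
        for _ in range(budget):
            if state == "halt":
                return "".join(cells[i] for i in sorted(cells))
            symbol = cells.get(head, "_")
            write, move, state = rules[(state, symbol)]
            cells[head] = write
            head += 1 if move == "R" else -1
        return None   # budget exhausted: the machine may never halt

    print(run_turing_machine(rules, "1011"))   # -> "0100_"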

Will it halt?

Turing used his hypothetical computing machine to attack the so-called “halting problem”: is there a way to know in advance whether a given algorithm will eventually halt or run forever?[xxv] He proved the impossibility of devising a Turing machine program that can determine infallibly (and within a finite time) whether or not a given Turing machine will eventually halt on some arbitrary input.[xxvi] A general algorithm to solve this problem, for all possible program and input pairs, cannot exist – thereby answering the “decidable” question in the negative.[xxvii]
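Turing's argument can be sketched in modern programming terms. Suppose, for contradiction, that a perfect decider existed; the halts function below is hypothetical and deliberately left unimplemented.

    def halts(program, data):
        """Hypothetically returns True iff program(data) eventually stops."""
        raise NotImplementedError("Turing proved no such function can exist")

    def paradox(program):
        # Do the opposite of whatever the decider predicts.
        if halts(program, program):
            while True:    # predicted to halt, so loop forever
                pass
        return             # predicted to run forever, so halt at once

    # Does paradox(paradox) halt? If halts() answers yes, it loops forever;
    # if halts() answers no, it halts at once. Either answer is wrong, so
    # no infallible halts() can exist.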

Turing showed that once you stray beyond the most elementary areas of mathematics, it’s simply not possible to design a finite computing machine capable of deciding whether formulae are provable. It is definitely not the case that all well-defined mathematical tasks can be done by computer – not even in principle: “there exist problems that no decision process could answer.” Some tasks just can’t be performed by computing machines, no matter how good the programmers or how powerful the hardware.[xxviii] [xxix]

Turing proved “that there were questions that were beyond the power of algorithms to answer.” His triumph was spectacular and devastating to those who believed (as Hilbert did) that all problems could be solved.[xxx]

Intelligence, maybe, but never certainty

So, to answer the question we started with: no, we cannot reliably leave, or ever expect to leave, our big decisions to “Big Data” and artificial intelligence. Certainly, we can use this technology to aid our decision-making. But, to summarize:

  1. The results from “deep learning” algorithms can be wrong or biased because they are produced using fallible human designs, rules and data choices.

  2. The effectiveness of AI algorithms depends on understanding the problem, data availability and unbiased data.

  3. AI algorithms give more reliable answers in an environment where future patterns are similar to past ones. They don’t cope well with disruptive change.

  4. Algorithms fit some of our preferences, but not necessarily all of them.

  5. AI algorithms don’t necessarily have the discernment that a human would have.

  6. Because “deep learning” AI algorithms learn through approximation and their mistakes, we need to accept the possibility of errors.

  7. Gödel's First Incompleteness Theorem shows us that questions exist that an AI decision-making system cannot answer.

  8. Gödel's Second Incompleteness Theorem shows us that an AI decision-making system required to answer every question can produce contradictory answers.

  9. Turing demonstrated that, beyond elementary questions, it is not possible to know whether an AI algorithm will be able to answer a given question.

“Big Data” and artificial intelligence will not let us off the hook. We humans will continue to need to make the big decisions, with whatever outside aid we can muster and trust.

NOTES

[i] https://en.oxforddictionaries.com/definition/algorithm

[ii] https://www.technologyreview.com/s/604087/the-dark-secret-at-the-heart-of-ai/

[iii] https://www.technologyreview.com/s/602933/how-to-hold-algorithms-accountable/

[iv] http://www.bossemergingleaders.com.au/2017/05/06/do-algorithms-make-better-decisions/

[v] http://www.slate.com/articles/technology/future_tense/2016/02/what_is_an_algorithm_an_explainer.html

[vi] https://www.nesta.org.uk/blog/err-algorithm-algorithmic-fallibility-and-economic-organisation

[vii] https://www.nbcnews.com/mach/technology/ai-learns-us-we-re-becoming-better-teachers-n731861

[viii] http://theconversation.com/algorithms-can-be-more-fair-than-humans-64047

[ix] https://www.zdnet.com/article/inside-the-black-box-understanding-ai-decision-making/

[x] https://shift.newco.co/how-algorithms-and-authoritarianism-created-a-corporate-nightmare-at-united-52346f264a56

[xi] http://behavioralscientist.org/what-to-do-when-algorithms-rule/

[xii] https://nrich.maths.org/8050

[xiii] https://www.newyorker.com/tech/elements/waiting-for-godel

[xiv] https://www.bigquestionsonline.com/2013/04/30/what-did-turing-establish-about-limits-computers-nature-mathematics/

[xv] http://www.exploratorium.edu/complexity/CompLexicon/godel.html

[xvi] https://www.newyorker.com/tech/elements/waiting-for-godel

[xvii] https://www.bigquestionsonline.com/2013/04/30/what-did-turing-establish-about-limits-computers-nature-mathematics/

[xviii] http://www.exploratorium.edu/complexity/CompLexicon/godel.html

[xix] https://www.newyorker.com/tech/elements/waiting-for-godel

[xx] Douglas R. Hofstadter, Gödel, Escher, Bach: An Eternal Golden Braid https://www.amazon.com/G%C3%B6del-Escher-Bach-Eternal-Golden/dp/0465026567

[xxi] Turing and colleagues, in essence, guessed the meaning of a stretch of letters in an Enigma message, used Bayesian inference to measure their belief in the validity of their guess, and then updated the probabilities that their guesses were correct as clues in more messages emerged. The importance of Bayesian methods as a tool in decision making will emerge in later chapters of this book.
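A minimal Python sketch of that updating, with assumed numbers: the codebreaker starts 50/50 on a guessed crib and treats each supporting clue as three times likelier if the guess is right.

    belief = 0.5             # initial probability the guess is correct
    likelihood_ratio = 3.0   # assumed strength of each supporting clue

    for clue in range(1, 5):                 # four supporting clues arrive
        odds = belief / (1 - belief)
        odds *= likelihood_ratio             # Bayes' rule in odds form
        belief = odds / (1 + odds)
        print(f"after clue {clue}: belief = {belief:.3f}")
    # belief climbs from 0.5 toward certainty: 0.75, 0.9, 0.964, 0.988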

[xxii] https://www.networkworld.com/article/2189575/data-center/how-alan-turing-set-the-rules-for-computing.html

[xxiii] http://www.wikiwand.com/en/Turing_machine

[xxiv] https://en.wikipedia.org/wiki/Halting_problem

[xxv] http://www.coopertoons.com/education/haltingproblem/haltingproblem.html

[xxvi] http://www.philocomp.net/home/hilbert.htm

[xxvii] https://nrich.maths.org/8050

[xxviii] https://www.bigquestionsonline.com/2013/04/30/what-did-turing-establish-about-limits-computers-nature-mathematics/

[xxix] https://www.networkworld.com/article/2189575/data-center/how-alan-turing-set-the-rules-for-computing.html

[xxx] https://www.newscientist.com/article/mg23130803-200-how-alan-turing-found-machine-thinking-in-the-human-mind/
