AI revisited: Will it ever make perfect decisions?

“Is artificial intelligence less than our intelligence?” —Spike Jonze

In our book, BIG DECISIONS, published in 2022, we took up the question of whether artificial intelligence would soon (or ever) become the answer when we are looking for the right decision.

The idea is that we can maximize our gain from decisions by using machines to help us make them, or even by having machines make them for us. But the question then, and even more so now with the sudden emergence of OpenAI’s ChatGPT, Meta’s Llama 2, Anthropic’s Claude 2, Google’s Bard, and other groundbreaking AI large language models (LLMs), is whether an algorithm can be perfected to always yield “the right decision.”

An algorithm is a process or set of rules used in calculations or problem-solving.[1] Artificial intelligence (AI) algorithms that process Big Data use logic and mathematics to solve problems and produce answers. These algorithms engage in machine learning or deep learning: instead of a programmer writing the commands to solve a problem, the program generates its own algorithm using training data and a desired output.[2] When such systems go on to produce new text, images, or other content, they are called “generative AI.”
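To make the distinction concrete, here is a minimal sketch in Python (an illustrative toy of our own, not code from any real system). The first function is a hand-written algorithm whose rule a human chose; the second derives its own rule, a decision threshold, from training data and the desired outputs.

```python
# A hand-written algorithm: the programmer states the rule explicitly.
def approve_loan(income, debt):
    """Approve when debt is under 40% of income -- a rule a human chose."""
    return debt < 0.4 * income

# A "learned" algorithm: the program derives its own rule (a threshold)
# from examples (training data) paired with the desired outputs.
def learn_threshold(examples):
    """examples: list of (debt_to_income_ratio, approved) pairs."""
    approved = [r for r, ok in examples if ok]
    denied = [r for r, ok in examples if not ok]
    # Place the decision boundary midway between the two groups
    # (assumes the toy data are cleanly separable).
    return (max(approved) + min(denied)) / 2

training_data = [(0.10, True), (0.25, True), (0.55, False), (0.70, False)]
threshold = learn_threshold(training_data)   # the machine's own "rule": 0.40
print(approve_loan(50_000, 15_000))          # True -- the human's rule
print(0.30 < threshold)                      # True -- the learned rule on a new case
```

Note the asymmetry: the human rule can be read and audited directly, while the learned rule exists only as a number distilled from whatever data it was shown.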

Even before LLMs burst onto the scene earlier this year, AI was “responsible for making decisions about pretty much every aspect of our lives,” said Chris Gilliard, visiting research fellow at Harvard’s Kennedy School.[3]

Algorithmic limitations

In 2016, Pew Research Center and Elon University gathered the views of 1,302 technology experts, scholars, corporate practitioners, and government leaders in a survey about the impact of algorithms over the next decade. On net, the respondents were equally divided on whether the effect of algorithms, big data, and artificial intelligence would be positive or negative. Here is a summary of the impact the respondents foresaw, which the advent of ChatGPT, Bard, and other LLMs has dramatically amplified:

Theme 1: Algorithms will continue to spread everywhere.

    • Algorithms will remain generally invisible to the public.

    • There will be an exponential rise in their influence.

Theme 2: Good things lie ahead.

    • Algorithms will help make sense of massive data.

    • This will spark breakthroughs in science, new conveniences, and human capacities in everyday life.

    • There will be an ever-better capacity to link people to the information that will help them.

Theme 3: Humanity and human judgment are lost when data and predictive modeling become paramount.

    • Advances in algorithms are allowing technology corporations and governments to gather, store, sort, and analyze massive data sets.

    • These algorithms are primarily written to optimize efficiency and profitability.

    • There is not much thought about the possible societal impacts of data modeling and analysis.

Theme 4: Biases exist in algorithmically organized systems.

    • The algorithm creators build into their creations their own perspectives and values.

    • Datasets to which algorithms are applied have their own limits and deficiencies.

    • They do not capture the fullness of people’s lives and the diversity of their experiences.

    • The datasets themselves are imperfect because they do not contain inputs from everyone or a representative sample of everyone.

Theme 5: Algorithmic categorizations deepen divides.

    • An algorithm-assisted future will widen the gap between the digitally savvy (predominantly the most well-off) and those who are not nearly as connected or able to participate.

    • Social and political divisions will be abetted by algorithms, as algorithm-driven categorizations and classifications steer people into echo chambers of repeated and reinforced media and political content.

Theme 6: Unemployment will rise.

    • The spread of artificial intelligence has the potential to create major unemployment and all the fallout from that.

Theme 7: The need grows for algorithmic literacy, transparency, and oversight.

    • Public education is needed to instill literacy in the general public about how algorithms function.

    • Some method is needed to hold those who create and evolve algorithms accountable to society.

What matters for the idea of using algorithms to make, or help us make, big decisions is that, at least as they are now used, the output of machine learning algorithms is shaped by:[4]

  • Human-made design decisions.

  • Rules about what to optimize.

  • Choices about what training data to use.

“Deep learning” AI algorithms only work well where the problem domain is well understood and training data are available. They require a stable environment where future patterns resemble past ones, and their decisions are only as unbiased as their training data. Because they are grounded in past data, “this supposedly disruptive technology cannot cope well with disruptive change.”[5]
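A toy sketch of that fragility, using hypothetical numbers rather than any real dataset: a simple least-squares trend line fit to yesterday’s pattern keeps confidently extrapolating that pattern after the world changes.

```python
# Fit a least-squares line to "past" data that grew steadily (+2 per step).
past_x = [1, 2, 3, 4, 5]
past_y = [10, 12, 14, 16, 18]

n = len(past_x)
mean_x = sum(past_x) / n
mean_y = sum(past_y) / n
slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(past_x, past_y))
         / sum((x - mean_x) ** 2 for x in past_x))
intercept = mean_y - slope * mean_x

predict = lambda x: intercept + slope * x
print(predict(6))                      # 20.0 -- fine while the future resembles the past

# A disruption (crash, pandemic, new regulation) breaks the old pattern:
actual_at_6 = 7                        # hypothetical post-disruption value
print(abs(predict(6) - actual_at_6))   # 13.0 -- the model is confidently wrong
```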

To illustrate, Justin Reich, executive director at the MIT Teaching Systems Lab, said in the Pew survey that algorithms will inevitably benefit those who design them — namely, educated white and Asian men.

Bart Knijnenburg, assistant professor in human-centered computing at Clemson University, replied to the survey, “The goal of algorithms is to fit some of our preferences, but not necessarily all of them: They essentially present a caricature of our tastes and preferences. My biggest fear is that, unless we tune our algorithms for self-actualization, it will be simply too convenient for people to follow the advice of an algorithm (or, too difficult to go beyond such advice), turning these algorithms into self-fulfilling prophecies, and users into zombies who exclusively consume easy-to-consume items.”

“Facebook’s struggle with fake news demonstrates that algorithms don’t always have the discernment a human would,” one observer noted in the Pew survey.

Since the survey, Facebook and its peers have come under growing scrutiny for the effects of their algorithms and the policies governing their use. In 2021, on 60 Minutes, Frances Haugen, a former product manager at Facebook, said she was speaking out because she believed that “Facebook’s products harm children, stoke division, and weaken our democracy.” Testifying before the U.S. Senate, Haugen “particularly blamed Facebook’s algorithm and platform design decisions for many of its issues.”[6]

By their very nature, machine-learning algorithms are effectively programming themselves. They learn through approximation and through what they get wrong. Using these algorithms requires us to accept the possibility of errors.[7]
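The perceptron update rule, one of the oldest learning algorithms, is a simple instance of this error-driven self-programming: the program adjusts its own parameters only when it gets an example wrong. A minimal sketch with made-up data:

```python
# Perceptron-style learning: parameters change only on mistakes.
def train(examples, epochs=20, lr=0.1):
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, label in examples:            # label is +1 or -1
            prediction = 1 if w * x + b > 0 else -1
            if prediction != label:          # an error triggers an update
                w += lr * label * x
                b += lr * label
    return w, b

# Made-up, linearly separable data: negatives below 0.5, positives above.
data = [(0.1, -1), (0.3, -1), (0.7, 1), (0.9, 1)]
w, b = train(data)
print([(x, 1 if w * x + b > 0 else -1) for x, _ in data])  # all four now correct
```

The finished classifier is nothing more than the accumulated residue of its mistakes, which is why accepting such systems means accepting the errors baked into how they learned.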

But let’s not summarily dismiss the effectiveness of algorithms. Behavioral scientist Jason Collins observes that they are showing their value beyond what humans can quickly and fairly achieve in “domains that involve regular decisions in a largely constant environment about which we are able to gather data,” including medical diagnosis, predictive policing, games (e.g., chess and Go), risk score analysis, and self-driving cars. It is “in complex, dynamic, and uncertain domains” where algorithms may not be trustworthy.[8]

Yet this is the state of AI now. Given more data, more computing power, and more machine learning, can we expect that at some point we can safely outsource our high-level decision making to algorithms? Can a future ChatGPT, Siri, Bard, Alexa, or Watson decide for us?

Two great 20th-century mathematicians show us that “Big Data” and AI will never provide the full answer to our need for the perfect decision.

Complete, consistent, and decidable?

In 1900, German mathematician David Hilbert set out a series of problems for mathematicians to solve. Among his 23 problems, he asked whether there was a set of “basic truths” (axioms) from which all the statements in mathematics could be proven, without giving any contradictory answers (such as 2+2=5). He also asked if there was an algorithm that could determine whether any statement was true or false, even if no proof or disproof was known. In other words, Hilbert was asking whether mathematics was “complete,” “consistent,” and “decidable.”[9]

In 1931, Austrian mathematician and logician Kurt Gödel proved that within a formal system there exist questions that are neither provable nor disprovable on the basis of the axioms that define the system. That is, there are true statements that are unprovable within the system; more is true in mathematics than can be proven.[10][11] This is Gödel’s Undecidability Theorem.[12]

He also showed that in a sufficiently rich formal system in which decidability of all questions is required, there will be contradictory statements; the system’s consistency cannot be proven within the system.[13] Paradoxically, the only way to rid the system of incompleteness appeared to be to select rules that contradict one another.[14] This is known as Gödel’s Incompleteness Theorem.[15] In essence, “the theorem proved, using mathematics, that mathematics could not prove all of mathematics.”[16]
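In modern textbook terms these two results are usually called the first and second incompleteness theorems. Stated compactly, in our paraphrase of the standard formulation, for a theory T that is consistent, effectively axiomatized, and strong enough to express elementary arithmetic:

```latex
% Textbook paraphrase of Goedel's theorems (requires amssymb for \nvdash).

% First incompleteness theorem: there is a sentence G_T that the
% theory T can neither prove nor refute.
\[
T \nvdash G_T \qquad \text{and} \qquad T \nvdash \neg G_T
\]

% Second incompleteness theorem: T cannot prove its own consistency.
\[
T \nvdash \mathrm{Con}(T)
\]
```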

Cognitive scientist Douglas Hofstadter gives an example of what Gödel’s Incompleteness Theorem reveals. He asks us to ponder how we can figure out whether we are sane: “Once you begin to question your own sanity, you get trapped in an ever-tighter vortex of self-fulfilling prophecies, though the process is by no means inevitable. Everyone knows that the insane interpret the world via their own peculiarly consistent logic; how can you tell if your own logic is ‘peculiar’ or not, given that you have only your own logic to judge itself?”[17]

Are algorithms the answer?

The essence of Hilbert’s “decidable” question (the Entscheidungsproblem in German) was whether an algorithm could be created to decide in a finite number of steps if any given mathematical statement was true or not.

Brilliant young British mathematician Alan Turing took on the “decidable” question.

You may know about Turing from the movie “The Imitation Game.” He left a stunning list of achievements in mathematics, computing, cryptology, and even biology. During World War II, he was instrumental in cracking German military messages encrypted by the Enigma machine, enabling the British to anticipate Germany’s actions, ultimately helping the Allies win the war.[18] His ideas in non-linear biological theory paved the way for chaos and complexity theories.[19]

In 1936, Turing developed the idea of an idealized computer, a hypothetical machine now called a “Turing machine.” He imagined it reading an endless tape imprinted with symbols, one at a time, then either rewriting or erasing the symbol and shifting the tape to the left or right, based on a pre-determined set of rules.[20]

A Turing machine is essentially an algorithm. If it solves the problem, it stops and gives the answer. If it doesn’t solve the problem, it keeps trying forever.[21]
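A Turing machine is simple enough to simulate in a few lines. Here is a minimal sketch in Python of the standard textbook construction, with a made-up rule table that overwrites a run of 1s with 0s and then halts:

```python
# Minimal Turing machine simulator. The rule table maps
# (state, symbol) -> (symbol to write, head move, next state).
def run(tape, rules, state="start", max_steps=1000):
    cells = dict(enumerate(tape))            # sparse tape; blank = "_"
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            return "".join(cells[i] for i in sorted(cells))
        symbol = cells.get(head, "_")
        write, move, state = rules[(state, symbol)]
        cells[head] = write
        head += 1 if move == "R" else -1
    return None   # gave up -- and, as the next section shows, there is
                  # no general way to know whether it ever would halt

# Made-up rules: replace each 1 with 0; halt on the first blank.
rules = {
    ("start", "1"): ("0", "R", "start"),
    ("start", "_"): ("_", "R", "halt"),
}
print(run("111", rules))   # -> "000_"
```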

Will it halt?

Turing used his hypothetical computing machine to attack the so-called “halting problem”: is there a way to determine, in advance, whether a given algorithm will eventually halt or run forever?[22] He proved the impossibility of devising a Turing machine program that can determine infallibly (and within a finite time) whether a given Turing machine will eventually halt on some arbitrary input.[23] A general algorithm to solve this problem, for all possible program-and-input pairs, cannot exist, thereby answering the “decidable” question in the negative.[24]
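The heart of Turing’s proof is a diagonal argument that can be sketched in a few lines of Python. Suppose, for contradiction, that a perfect halting oracle halts(program, data) could be written (it cannot; the name and function here are purely hypothetical):

```python
def halts(program, data):
    """HYPOTHETICAL oracle: returns True iff program(data) eventually
    halts. Turing's theorem is that no such function can exist."""
    ...

def paradox(program):
    # Do the opposite of whatever the oracle predicts about running
    # `program` on its own source.
    if halts(program, program):
        while True:       # predicted to halt -> loop forever
            pass
    else:
        return            # predicted to loop -> halt immediately

# Now ask: does paradox(paradox) halt?
#  - If halts() answers True, paradox(paradox) loops forever: oracle wrong.
#  - If halts() answers False, paradox(paradox) halts at once: wrong again.
# Either answer refutes the oracle, so no infallible halts() can exist.
```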

Turing showed that beyond the most elementary areas of mathematics, it is not possible to design a finite computing machine capable of deciding whether formulae are provable. It is definitely not the case that all well-defined mathematical tasks can be done by computer, not even in principle: “there exist problems that no decision process could answer.” Some tasks simply cannot be performed by computing machines, no matter how good the programmers or how powerful the hardware.[25][26]

Turing proved “that there were questions that were beyond the power of algorithms to answer.” His triumph was spectacular and devastating to those who believed (as Hilbert did) that all problems could be solved.[27]

Intelligence, maybe, but never certainty

To answer the question we started with, no, we cannot reliably leave or ever expect to leave our big decisions to “Big Data,” artificial intelligence, or LLMs. We can use technology to aid our decision making. In some applications it can produce superior results, but, summarizing:

  1. The results from “deep learning” algorithms can be wrong or biased because they are produced using fallible human designs, rules, and data choices.

  2. The effectiveness of AI algorithms depends on understanding the problem, data availability, and unbiased data.

  3. AI algorithms give more reliable answers in an environment where future patterns are similar to past ones. They don’t cope well with disruptive change.

  4. Algorithms fit some of our preferences, but not necessarily all of them.

  5. AI algorithms don’t necessarily have the discernment that a human would have.

  6. Because “deep learning” AI algorithms learn through their mistakes, we need to accept the possibility of errors.

  7. Gödel’s Undecidability Theorem shows us that questions exist that an AI decision-making system cannot answer.

  8. Gödel’s Incompleteness Theorem shows us that AI decision-making systems can produce contradictory answers.

  9. Turing demonstrated that beyond elementary questions, it is not possible to know whether an AI algorithm will be able to answer the question.

“Big Data” and artificial intelligence will not let us off the hook. We humans will continue to need to make the big decisions, with whatever outside aid we can muster and trust.

[1] https://en.oxforddictionaries.com/definition/algorithm

[2] https://www.technologyreview.com/s/604087/the-dark-secret-at-the-heart-of-ai/

[3] https://www.cnn.com/2021/11/19/tech/algorithm-explainer/index.html

[4] https://www.technologyreview.com/s/602933/how-to-hold-algorithms-accountable/

[5] http://www.bossemergingleaders.com.au/2017/05/06/do-algorithms-make-better-decisions/

[6] https://www.technologyreview.com/2021/10/05/1036519/facebook-whistleblower-frances-haugen-algorithms/

[7] http://www.slate.com/articles/technology/future_tense/2016/02/what_is_an_algorithm_an_explainer.html

[8] http://behavioralscientist.org/what-to-do-when-algorithms-rule/

[9] https://nrich.maths.org/8050

[10] https://www.newyorker.com/tech/elements/waiting-for-godel

[11] https://www.bigquestionsonline.com/2013/04/30/what-did-turing-establish-about-limits-computers-nature-mathematics/

[12] http://www.exploratorium.edu/complexity/CompLexicon/godel.html

[13] https://www.newyorker.com/tech/elements/waiting-for-godel

[14] https://www.bigquestionsonline.com/2013/04/30/what-did-turing-establish-about-limits-computers-nature-mathematics/

[15] http://www.exploratorium.edu/complexity/CompLexicon/godel.html

[16] https://www.newyorker.com/tech/elements/waiting-for-godel

[17] Hofstadter, D. R. (2000). Gödel, Escher, Bach: An eternal golden braid. London: Penguin Books.

[18] Turing and colleagues, in essence, guessed the meaning of a stretch of letters in an Enigma message, used Bayesian inference to measure their belief in the validity of their guess, and then updated the probabilities that their guesses were correct as clues in more messages emerged. The importance of Bayesian methods as a tool in decision making will emerge in later chapters of this book.

[19] https://www.networkworld.com/article/2189575/data-center/how-alan-turing-set-the-rules-for-computing.html

[20] http://www.wikiwand.com/en/Turing_machine

[21] https://en.wikipedia.org/wiki/Halting_problem

[22] http://www.coopertoons.com/education/haltingproblem/haltingproblem.html

[23] http://www.philocomp.net/home/hilbert.htm

[24] https://nrich.maths.org/8050

[25] https://www.bigquestionsonline.com/2013/04/30/what-did-turing-establish-about-limits-computers-nature-mathematics/

[26] https://www.networkworld.com/article/2189575/data-center/how-alan-turing-set-the-rules-for-computing.html

[27] https://www.newscientist.com/article/mg23130803-200-how-alan-turing-found-machine-thinking-in-the-human-mind/

 
