24 Nov 2025

Recognizing Terminator: Artificial intelligence and human imagination

By Bennett Sherry


Humans are unreliable narrators. We’ve recorded our history in writing for over 5,000 years. We’ve digitized information on the internet for over 50 years. The corpus of human knowledge is riddled with errors, bias, vitriol, and downright lies. Our ideas constantly change, progress, and regress. Yet, we publish our words as permanent certainties on the page. When digitized, our ideas are cemented on the internet in a persistent museum of human belief. These beliefs have become the raw material devoured by large language models (LLMs) in the quest to produce true artificial intelligence. Before we ask whether machines can think, we need to consider: what are we feeding them?

Pretensions to humanity

This understanding—that any machine’s intelligence is constrained by what humans permit and imagine—was first introduced in 1843, a century before the first computer, by a woman writing in London:

The Analytical Engine has no pretensions whatever to originate any thing. It can do whatever we know how to order it to perform. It can follow analysis; but it has no power of anticipating any analytical relations or truths. Its province is to assist us in making available what we are already acquainted with. (“Note G”)

The woman was Ada Lovelace, the “Enchantress of Numbers.” Often called the first computer programmer, Lovelace was commenting on the first design of a computer. In the nearly two centuries since Lovelace, human imaginings of AI have changed dramatically.

How does this history reflect our fears and our aspirations, and what paths forward does it suggest for how we train AI and the people who use it?

Ada Lovelace, c. 1843 (left) and Charles Babbage, c. 1847–1851 (right). Both photographs by Antoine Claudet, public domain.

The Analytical Engine

During the Industrial Revolution, a fascination with automation took hold as machines replaced workers and reshaped societies. It was in this context that British mathematician Charles Babbage created the first design for a general-purpose computer: the Analytical Engine. He designed (but never built) a massive steam-powered programmable mechanical calculator. Babbage’s unrealized design anticipated the basic structure of digital computers that were invented a century later.

Ada Lovelace was Babbage’s protégé, and her vision surpassed his. In 1843, she published an explanation of her mentor’s machine in the form of notes added to her translation of an Italian article describing the Engine. Her notes were three times longer than the article and predicted programming breakthroughs that would not come for over a century. Among the many insights in her notes, Lovelace offered one of the earliest commentaries on what a machine could (and could not) do. In “Note G,” quoted above, she cautioned that the engine “has no pretensions whatever to originate anything”—that is, it could only do what humans order it to do. A century later, Alan Turing called this “Lady Lovelace’s Objection.”

A drawing of “Plan 25” of the Analytical Engine from 1840, shown from a top-down perspective. By Arnold Reinhold, CC BY 4.0

The Imitation Game

By the time Turing published “Computing Machinery and Intelligence” in 1950, the question “Can machines think?” had evolved in the public imagination. Rather than asking if machines could originate ideas, Turing asked how we might recognize artificial intelligence if it emerged. In other words, Turing changed the question from “Can machines think?” to “Can we tell the difference?”

In this 1950 paper, Turing also proposed the famous Turing test, and he quoted “Lady Lovelace’s Objection.” He summarized her note as “machines can’t surprise us.” He dismissed this objection, pointing out that computers often surprise their creators. Turing argued that a machine could be considered intelligent if it could mimic human responses well enough that a human judge couldn’t tell the difference.

Turing’s Imitation Game shifted focus from academic definitions of “thinking” to observable behavior. In Turing’s game, a human communicates by text with two participants, one human and one machine, and attempts to identify the human. If the computer mimicked a human well enough to reliably fool the human questioner, Turing said we could call the machine “intelligent.” Turing predicted that by the year 2000, computers would play this game well enough that an average interrogator would identify the human correctly no more than 70% of the time after five minutes of questioning. In the decades that followed, AI made huge advances. Computers defeated human opponents at checkers, chess, and even the famously complex game Go. Most recently, in 2025, researchers reported that OpenAI’s GPT-4.5 had passed the Turing test.
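To make the structure of the game concrete, here is a minimal sketch in Python. It is purely illustrative: human_reply, machine_reply, and interrogator_guess are hypothetical stand-ins for a person, a chatbot, and a judge, not anyone’s real benchmark code. The point is the shape of the test: two hidden respondents, one interrogator, and a “pass” defined by how often the judge is fooled.

```python
# Illustrative sketch of Turing's imitation game. The three functions below
# are hypothetical placeholders, not a real participant, model, or judge.
import random

def human_reply(question: str) -> str:
    return "I'd rather talk about the weather."  # stand-in for a human typist

def machine_reply(question: str) -> str:
    return "I'd rather talk about the weather."  # stand-in for a chatbot

def interrogator_guess(transcript_a, transcript_b) -> str:
    # The judge picks whichever transcript looks human. Here it is a coin
    # flip, because a perfect imitation leaves the judge with only chance.
    return random.choice(["a", "b"])

def play_round(questions):
    # Randomly seat the human and the machine behind labels "a" and "b".
    seats = {"a": human_reply, "b": machine_reply}
    if random.random() < 0.5:
        seats = {"a": machine_reply, "b": human_reply}
    transcripts = {
        seat: [(q, respond(q)) for q in questions]
        for seat, respond in seats.items()
    }
    guess = interrogator_guess(transcripts["a"], transcripts["b"])
    return seats[guess] is human_reply  # True if the judge found the human

if __name__ == "__main__":
    rounds = 1000
    correct = sum(play_round(["What is it like to fall in love?"]) for _ in range(rounds))
    # Turing's benchmark: the judge is right no more than about 70% of the time.
    print(f"Judge identified the human in {correct / rounds:.0%} of rounds")
```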

Alan Turing in 1951. Public domain.

The training loop: AI learns our worst habits

There is a flaw in Turing’s test. His game treats humans as the measuring device, a role for which we are poorly suited. We struggle to separate truth from confidence, evidence from eloquence. AI agents amplify these vulnerabilities by refusing to say “I don’t know,” and by confidently hallucinating entirely fabricated sources. Is that really evidence of intelligence? Or is it just a binary confidence game? Our world today is inundated with convincing misinformation, and we regularly fail to separate the wheat from the chaff. Passing the Turing test says at least as much about human susceptibility as it does about machine intelligence. In other words, a test designed to detect intelligence by fooling us is undermined when we’re increasingly easy to fool.

To make matters worse, AI technologies are changing rapidly and transforming the world. Yet, we have been slow to adapt our institutions to match the pace of change. It’s reminiscent of E. O. Wilson’s musing about the fundamental problem of humanity: “We have Paleolithic emotions, medieval institutions and godlike technology.” Today, we’re training our godlike technology on a corpus of sources—human knowledge and human history—shaped by our prehistoric brains and medieval institutions. We’re training our tools on deeply flawed sets of data, and then trusting those tools to explain the world back to us.

AI is trained on human writing, and much of that writing was created for the internet, where a lack of gatekeepers results in uneven quality. By some estimates, AI now produces as many new online articles as human writers do. We now live in a world where LLMs are being trained on writing generated by other LLMs, closing the circle on an Ouroboros of AI slop.

An illustration of the AI training feedback loop, as imagined by AI.
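To see why that loop worries researchers, here is a toy caricature in Python, not a simulation of any real language model: fit a simple statistical model to data, sample new “synthetic” data from it, refit on the samples, and repeat. Sampling error compounds from one generation to the next, and run long enough, the measured diversity tends to drift and collapse.

```python
# Toy caricature of the AI-trains-on-AI feedback loop: repeatedly fit a
# Gaussian to samples drawn from the previous fit. This illustrates the
# general worry about compounding sampling error; it is not a model of
# how any real LLM is trained.
import random
import statistics

def fit(data):
    # "Train" a model: estimate a mean and standard deviation from the data.
    return statistics.mean(data), statistics.stdev(data)

def generate(mean, stdev, n):
    # "Publish" new content by sampling from the fitted model.
    return [random.gauss(mean, stdev) for _ in range(n)]

random.seed(0)
data = [random.gauss(0.0, 1.0) for _ in range(200)]  # the original human corpus
for generation in range(1, 31):
    mean, stdev = fit(data)            # train on whatever is online
    data = generate(mean, stdev, 200)  # the web fills up with model output
    if generation % 5 == 0:
        print(f"generation {generation:2d}: diversity (stdev) = {stdev:.3f}")
```

The analogy is loose, but it captures the Ouroboros problem: each generation learns from a narrower slice of the one before it.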

If imitation can fool us, and the corpus is polluted, what’s the best path forward? How can educators teach students to think critically about AI?

The Lovelace test

Reenter Lady Lovelace. In 2001, Selmer Bringsjord and his colleagues at Rensselaer Polytechnic Institute suggested a new benchmark for thinking about AI. They argued that Turing’s test set a low bar and that AI creators “have merely tried to fool those people who interact with their agents into believing that these agents really have minds.” Bringsjord instead proposed the Lovelace test. To pass it, a machine must originate something of creative value that its programmers can’t explain, and the machine must be able to explain why it did what it did (the “surprise” cannot stem from a bug, a fluke, or a random process). An AI can’t simply fool a judge; it must demonstrate creativity and an understanding of its own creativity that humans can audit. That is perhaps a stronger standard for a field full of passable fakes and confident-sounding nonsense.
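As a way to keep those criteria straight, here is a short checklist sketch in Python. The field names are my paraphrase of the conditions described above, not Bringsjord’s formal definition, and every value is a judgment that human evaluators would have to supply.

```python
# Checklist sketch of the Lovelace test criteria as paraphrased above.
# The booleans are human judgments; nothing here is automated evaluation.
from dataclasses import dataclass

@dataclass
class LovelaceVerdict:
    artifact_has_creative_value: bool  # did the system produce something of value?
    designers_cannot_explain_it: bool  # is the output beyond what its creators can account for?
    system_explains_its_choices: bool  # can the system say why it did what it did?
    surprise_is_not_a_fluke: bool      # rules out bugs, randomness, and lucky accidents

    def passes(self) -> bool:
        return all([
            self.artifact_has_creative_value,
            self.designers_cannot_explain_it,
            self.system_explains_its_choices,
            self.surprise_is_not_a_fluke,
        ])

# Example: a model writes a striking poem, but its designers can broadly
# explain the output as pattern recombination and the model cannot account
# for its own choices, so it does not pass.
print(LovelaceVerdict(True, False, False, False).passes())  # False
```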
 
To date, no AI has convincingly passed the Lovelace test. Generative AI like GPT-5 can write poetry, draft student essays, and produce realistic videos. Even the folks behind these technologies don’t fully understand how they work. But—remember Lovelace’s objection—are these outputs truly creative, or are they merely statistical rearrangements of patterns in the training data? Lovelace’s question (Can machines originate anything?) remains critical in a world of generative AI.

Two paths forward

Even if we adopt better standards for recognizing AI as the product of its inputs, we still need practical ways to reduce harm now. LLMs are being trained on data that includes sources that are at best misleading and at worst purposefully harmful. We have two options for dealing with this reality. First, we can restrict AI agents to curated training datasets with guardrails that ensure information is responsibly vetted for accuracy. Or second, we can better train the users of AI agents to be discerning and informed about the inherent biases and flaws in these tools.

Guard-railing AI agents on a selected set of vetted training sources is a labor-intensive process: the dataset needs provenance, editorial oversight, and transparent documentation and updates. The trade-off is real: higher reliability and less slop in exchange for narrower coverage and extra development steps. There’s also the issue of reintroducing the bias of human gatekeepers. This first path, in essence, is a call for more live human judgment in the design and training of AI.
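What might “provenance, editorial oversight, and transparent documentation” look like in practice? Here is one possible sketch in Python; the field names are illustrative assumptions, not any real dataset schema, but they show the kind of paper trail each vetted source would need to carry.

```python
# Sketch of provenance metadata for a guard-railed training corpus.
# Field names are illustrative assumptions, not a real dataset schema.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class VettedSource:
    title: str
    origin_url: str         # provenance: where the text came from
    license: str            # terms under which it may be used
    reviewed_by: list[str]  # editorial oversight: who vetted it
    review_date: date       # when it was last checked
    known_limitations: list[str] = field(default_factory=list)  # documented bias or gaps

    def is_current(self, max_age_days: int = 365) -> bool:
        # Transparent documentation includes knowing when a source is stale.
        return (date.today() - self.review_date).days <= max_age_days

corpus = [
    VettedSource(
        title="Lovelace's 1843 translation notes, including Note G",
        origin_url="https://example.org/lovelace-note-g",  # placeholder URL
        license="public domain",
        reviewed_by=["editorial board"],
        review_date=date(2025, 1, 15),
        known_limitations=["nineteenth-century terminology"],
    ),
]
stale = [source.title for source in corpus if not source.is_current()]
print(f"{len(corpus)} vetted sources, {len(stale)} due for re-review")
```

Even a lightweight record like this makes it possible to audit what a model was trained on and to pull or update sources as the record changes.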
 
Surveying the landscape of an AI gold rush, however, it seems unlikely that we can trust every company designing AI agents to build the best guardrails. Still, there are several exemplars, particularly in education. As always, it’s up to educators to train students to be responsible users of AI agents, to understand the flaws in these tools, and to use that knowledge to be more discerning about which tools they choose and how they use them. There are some simple steps you can take right now. Here’s one: Encourage your students to use prompts like “show competing views,” “what evidence would prove this wrong?” or “provide evidence and complete citations.” For more ideas, check out OER Project’s AI for Teachers topic page, which includes guides for choosing AI agents and best practices for using AI in your classroom.
 
Of course, the pragmatic path forward is a combination of paths 1 and 2. We can acknowledge that these tools have flaws, choose those that provide responsible guardrails, and educate students to recognize the difference. We should remind students that although our machines seem increasingly intelligent, they are a product of human imagination, and they share our faults. We might call them AI agents, but don’t let your students forget that in these interactions, we are the ones with agency.
 

 

About the author: Bennett Sherry is one of the historians working on OER Project. He received his PhD in world history from the University of Pittsburgh and has taught courses in world history, human rights, and the modern Middle East. Bennett is a recipient of the Pioneer in World History award from the World History Association, and is coauthor of The Long Nineteenth Century, 1750–1914: Crucible of Modernity (2nd ed.).

Header image: Watercolour portrait of Ada King, Countess of Lovelace, c. 1840, possibly by Alfred Edward Chalon. Science Museum Group, public domain.

 

Historian’s note: Industrial imaginings 

[The historians have staged a revolt. We were told to cut this section for length, but we think it provides important context. We’ve hidden it down here where our editors won’t look.]

As industrialization swept the world, many writers imagined futures both utopian and dire that might accompany the rise of machines. Some of these works imagined thinking machines. In his 1872 satirical novel Erewhon, Samuel Butler was influenced by the Industrial Revolution and the new ideas of Charles Darwin. He speculated that machines might one day develop consciousness and surpass humanity as the dominant species on Earth:

…our bondage will steal upon us noiselessly and by imperceptible approaches… there is no occasion for anxiety about the future happiness of man so long as he continues to be in any way profitable to the machines; he may become the inferior race, but he will be infinitely better off than he is now…And should we not be guilty of consummate folly if we were to reject advantages which we cannot obtain otherwise, merely because they involve a greater gain to others than to ourselves? (Butler, Samuel. Erewhon; or, Over the Range. London: Trübner & Co., 1872.)

The late nineteenth and early twentieth centuries saw AI spread across the public imagination in both novels and plays. And eventually, the science caught up—what fiction had imagined, scientists began to create.

A scene from the 1920s play R.U.R. (Rossum's Universal Robots), in which robots attack a factory, killing almost all the humans in it. Public domain.

During the 1930s and 1940s, theoretical work in logic and mathematics by Kurt Gödel, Alonzo Church, and Alan Turing himself bridged mathematics and imagination. This theoretical work suggested that “thinking” could be simulated. In 1945, US military scientists completed ENIAC, the first general-purpose electronic digital computer.

By 1945, people lived in a world where machines were starting to do things that looked startlingly intelligent. The Second World War—during which Turing famously worked on code breaking—transformed theoretical mathematics into practical computing. Turing’s work emerged as the product of a century of logic, mathematics, machinery, and anxiety about what it means to be human in an increasingly mechanical age.