Recognizing Terminator: Artificial intelligence and human imagination
Humans are unreliable narrators. We’ve recorded our history in writing for over 5,000 years. We’ve digitized information on the internet for over 50 years. The corpus of human knowledge is riddled with errors, bias, vitriol, and downright lies. Our ideas constantly change, progress, and regress. Yet, we publish our words as permanent certainties on the page. When digitized, our ideas are cemented on the internet in a persistent museum of human belief. These beliefs have become the raw material devoured by large language models (LLMs) in the quest to produce true artificial intelligence. Before we ask whether machines can think, we need to consider: what are we feeding them?
Pretensions to humanity
This understanding—that any machine’s intelligence is constrained by what humans permit and imagine—was first introduced in 1843, a century before the first computer, by a woman writing in London:
The Analytical Engine has no pretensions whatever to originate any thing. It can do whatever we know how to order it to perform. It can follow analysis; but it has no power of anticipating any analytical relations or truths. Its province is to assist us in making available what we are already acquainted with. (“Note G”)
The woman was Ada Lovelace, the “Enchantress of Numbers.” Often called the first computer programmer, Lovelace was commenting on the first design for a computer. In the nearly two centuries since she wrote, human imaginings of AI have changed dramatically.
How does this history reflect our fears and our aspirations, and what paths forward does it suggest for how we train AI and the people who use it?
The Analytical Engine
During the Industrial Revolution, a fascination with automation took hold as machines replaced workers and reshaped societies. It was in this context that British mathematician Charles Babbage created the first design for a general-purpose computer: the Analytical Engine. He designed (but never built) a massive steam-powered programmable mechanical calculator. Babbage’s unrealized design anticipated the basic structure of digital computers that were invented a century later.
Ada Lovelace was Babbage’s protégé, and her vision surpassed his. In 1843, she published an explanation of her mentor’s machine in the form of notes added to her translation of an Italian article describing the Engine. Her notes were three times longer than the article and predicted programming breakthroughs that would not come for over a century. Among the many insights in her notes, Lovelace offered one of the earliest commentaries on what a machine could (and could not) do. In “Note G,” quoted above, she cautioned that the engine “has no pretensions whatever to originate anything”—that is, it could only do what humans order it to do. A century later, Alan Turing called this “Lady Lovelace’s Objection.”
By the time Turing published “Computing Machinery and Intelligence” in 1950, the question “Can machines think?” had evolved in the public imagination. Rather than asking if machines could originate ideas, Turing asked how we might recognize artificial intelligence if it emerged. In other words, Turing changed the question from “Can machines think?” to “Can we tell the difference?”
In the same 1950 paper, Turing proposed the famous Turing test, which he called the “imitation game,” and he quoted “Lady Lovelace’s Objection.” He summarized her note as “machines can’t surprise us” and dismissed the objection, pointing out that computers often surprise their creators.
Turing’s imitation game shifted focus from academic definitions of “thinking” to observable behavior. In the game, a human judge communicates by text with two participants, one human and one machine, and attempts to identify the human. If the computer mimics a human well enough to reliably fool the questioner, Turing argued, we can call the machine “intelligent.” He predicted that by the year 2000, computers would play the game well enough that an average interrogator would have no better than a 70% chance of identifying the human after five minutes of questioning. Over the following decades, AI made huge advances: computers defeated human opponents at checkers, chess, and even the famously complex game Go. Most recently, in 2025, researchers claimed that GPT-4.5 had passed the Turing test.
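To make the structure of the game concrete, here is a minimal sketch in Python. Everything in it is invented for illustration: the human_respondent, machine_respondent, and naive_judge functions are toy stand-ins, not Turing’s procedure or any real benchmark. The loop simply tallies how often a judge picks out the human and compares that rate with Turing’s 70% line.

# A toy sketch of Turing's imitation game: a judge questions two hidden
# respondents and must guess which one is human. The respondents and the
# judging heuristic are invented stand-ins, not real systems.
import random

def human_respondent(question: str) -> str:
    # Stand-in for a person: hedged, slightly rambling answers.
    return f"Hmm, about '{question}' ... honestly, I'm not sure."

def machine_respondent(question: str) -> str:
    # Stand-in for a chatbot: usually confident, sometimes imitating hesitation.
    if random.random() < 0.4:
        return f"I'm not sure, but '{question}' is worth thinking about."
    return f"That is an excellent question. Regarding '{question}', the answer is clear."

def naive_judge(answer_a: str, answer_b: str) -> str:
    # Toy heuristic: guess that the more hesitant answer came from the human.
    return "A" if "not sure" in answer_a else "B"

def play_round(question: str) -> bool:
    """Run one round; return True if the judge correctly identifies the human."""
    respondents = [("human", human_respondent), ("machine", machine_respondent)]
    random.shuffle(respondents)  # hide which respondent is A and which is B
    answer_a = respondents[0][1](question)
    answer_b = respondents[1][1](question)
    guess = naive_judge(answer_a, answer_b)
    human_label = "A" if respondents[0][0] == "human" else "B"
    return guess == human_label

if __name__ == "__main__":
    rounds = 1_000
    correct = sum(play_round("What did you dream about last night?") for _ in range(rounds))
    # Turing's 1950 prediction: by 2000, an average interrogator would make the
    # right identification no more than 70% of the time after five minutes.
    print(f"Judge picked the human in {correct / rounds:.0%} of rounds (Turing's line: 70%)")

Even in this toy version, the judge’s accuracy depends entirely on how well the machine imitates the cues the judge happens to rely on, which is exactly the weak point the next section examines.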
The training loop: AI learns our worst habits
There is a flaw in Turing’s test. His game treats humans as the measuring device, a role for which we are poorly suited. We struggle to separate truth from confidence, evidence from eloquence. AI agents amplify these vulnerabilities, rarely admitting “I don’t know” and confidently hallucinating entirely fabricated sources. Is that really evidence of intelligence? Or is it just a binary confidence game? Our world today is inundated with convincing misinformation, and we regularly fail to separate the wheat from the chaff. Passing the Turing test says at least as much about human susceptibility as it does about machine intelligence. In other words, a test designed to detect intelligence by fooling us is undermined when we’re increasingly easy to fool.
To make matters worse, AI technologies are changing rapidly and transforming the world. Yet, we have been slow to adapt our institutions to match the pace of change. It’s reminiscent of E. O. Wilson’s musing about the fundamental problem of humanity: “We have Paleolithic emotions, medieval institutions and godlike technology.” Today, we’re training our godlike technology on a corpus of sources—human knowledge and human history—shaped by our prehistoric brains and medieval institutions. We’re training our tools on deeply flawed sets of data, and then trusting those tools to explain the world back to us.
AI is trained on human writing, and a lot of human writing has been created for the internet, where a lack of gatekeepers results in uneven quality. By some estimates, AI now writes as many of the new articles published online as humans do. We now live in a world where LLMs are trained on writing generated by other AIs, closing the circle on an Ouroboros of AI slop.
If imitation can fool us, and the corpus is polluted, what’s the best path forward? How can educators teach students to think critically about AI?
The Lovelace test
Two paths forward
About the author: Bennett Sherry is one of the historians working on OER Project. He received his PhD in world history from the University of Pittsburgh and has taught courses in world history, human rights, and the modern Middle East. Bennett is a recipient of the Pioneer in World History award from the World History Association, and is coauthor of The Long Nineteenth Century, 1750–1914: Crucible of Modernity (2nd ed.).
Header image: Watercolour portrait of Ada King, Countess of Lovelace, c. 1840, possibly by Alfred Edward Chalon. Science Museum Group, public domain.
Historian’s note: Industrial imaginings
[The historians have staged a revolt. We were told to cut this section for length, but we think it provides important context. We’ve hidden it down here where our editors won’t look.]
As industrialization swept the world, many writers imagined futures, both utopian and dire, that might accompany the rise of machines. Some of these works imagined thinking machines. Samuel Butler’s 1872 satirical novel Erewhon, shaped by the Industrial Revolution and the new ideas of Charles Darwin, speculated that machines might one day develop consciousness and surpass humanity as the dominant species on Earth:
…our bondage will steal upon us noiselessly and by imperceptible approaches… there is no occasion for anxiety about the future happiness of man so long as he continues to be in any way profitable to the machines; he may become the inferior race, but he will be infinitely better off than he is now… And should we not be guilty of consummate folly if we were to reject advantages which we cannot obtain otherwise, merely because they involve a greater gain to others than to ourselves? (Butler, Samuel. Erewhon; or, Over the Range. London: Trübner & Co., 1872.)
The late nineteenth and early twentieth centuries saw AI spread across the public imagination in both novels and plays. And eventually, the science caught up—what fiction had imagined, scientists began to create.
During the 1930s and 1940s, theoretical work in logic and mathematics by Kurt Gödel, Alonzo Church, and Alan Turing himself bridged mathematics and imagination. This theoretical work suggested that “thinking” could be simulated. In 1945, US military scientists completed ENIAC, the first general-purpose electronic digital computer.
By 1945, people lived in a world where machines were starting to do things that looked startlingly intelligent. The Second World War—during which Turing famously worked on code breaking—transformed theoretical mathematics into practical computing. Turing’s work emerged as the product of a century of logic, mathematics, machinery, and anxiety about what it means to be human in an increasingly mechanical age.