Latin, it’s said, is dead. By which I mean a dead language.
Technically, all that means is that Latin no longer has native speakers. The language began to die out between the sixth and ninth centuries AD, after the fall of the Roman Empire, but stayed on as the de facto language of academia and the Christian church for many more centuries.[1] Ah, but that little phrase—de facto—loosely translated as “as a matter of fact” or “in reality,” suggests that Latin is not as dead as it seems. Its influence on English, both directly and indirectly through French, Spanish and Italian, is profound—which is why I chose it for the title of this post.
When my father was a high school student 75 years ago in the Netherlands, Latin (along with Greek) was still a required part of the curriculum. A few decades later, when I was a schoolboy, we barely learned the basics of Roman numerals, and even this was virtually eliminated by the time my own kids went to school. I guess it’s no great loss if eventually we can’t make out the dates inscribed on a few building cornerstones here and there, and I’m not advocating for a return to full Latin instruction. But I can’t help but think that maybe we’re losing something significant if we no longer know the etymology of many of the words we take for granted in English.
Programmed obsolescence
As Latin illustrates, part of the human experience is forgetting. Arthur Conan Doyle’s Sherlock Holmes famously said of the brain, “It is a mistake to think that that little room has elastic walls and can distend to any extent. Depend upon it there comes a time when for every addition of knowledge you forget something that you knew before. It is of the highest importance, therefore, not to have useless facts elbowing out the useful ones.”[2] I think Holmes was partly right—our brainpower has more capacity to grow than we might think, but at the same time old knowledge and skills can be lost or become redundant as new ones develop. And we should absolutely be mindful of the utility of the facts we choose to remember.
This is particularly true when we consider technological advancements. I used to have the phone numbers of family and friends memorized; now I just dial them directly from my contact list or call history. GPS-enabled step-by-step directions on our phones or car dashboards have rendered paper road maps all but obsolete. Cursive handwriting was never my strong suit, but I barely miss it because virtually all my written communication is via text messages, e-mail or Word documents.
Bear in mind, though: however useful forgetting might be, it also comes at a cost. If we lose Latin entirely, we lose a direct connection to over two millennia of Western history and philosophy, not to mention much of English grammar and vocabulary. Some day my phone might die, and I’ll have to find my own way home or actually remember my wife’s and kids’ numbers. We all have to decide for ourselves, in each case, whether the convenience of forgetting is worth the cost.
And so I can’t help but wonder, as AI continues its relentless advance, what else are we going to lose, and will it matter? Are we about to enter an era of reduced knowledge, reduced intelligence or even reduced wisdom? I think we need to examine these questions from two perspectives. First, if generative AI is our trusted source of information, what will become of our knowledge and ability to distinguish fact from fiction? And second, what effect does using AI tools have on our own cognitive behaviours and abilities?
Trust but verify
We’ve all seen the many stories of AI errors and hallucinations. They mostly pertain to large language models (LLMs) and generative AI, and seem to be an intrinsic effect of how these systems operate. More importantly, the consequences can range from just annoying to potentially dangerous. I’ll give a couple of quick examples from personal experience.
In prior posts I’ve mentioned that I’m an avid runner, and like pretty much every amateur athlete I know, I track my activities with the Strava app on my phone, which reads data from my Garmin smart watch. A few months ago, Strava introduced a feature called “Athlete Intelligence”—a generative AI tool that produces a summary of my runs and workouts. Usually it just tells me stuff I already know about intensity, heart rate, etc., but recently it did something new—it started making obvious mistakes. To make a long story short, I was running a workout session where I had to complete eight intervals of two minutes each with gradually increasing speed, but Strava’s Athlete Intelligence summarized the run as seven intervals with decreasing speed. This wasn’t particularly consequential, but it still showed a casual disregard for the truth on the part of the AI system.
In a more serious example, a physiotherapist acquaintance of mine posted on Facebook screenshots he had taken from the orthopaedic blog section of the American Physical Therapy Association (APTA) website. Apparently, an entry had provided incorrect information about a condition called gluteal amnesia[3] and possible remedies. It looked believable and referenced very specific citations—but when my acquaintance researched each of the citations, none of them existed. When I went to the APTA blog myself, the entry had been taken down, with an apologetic comment from the moderator promising to put in place more stringent fact-checking and controls over the use of generative AI by their writers.
An incorrect workout summary is just annoying, but a made-up article with fabricated citations on a medical topic can be dangerous. Both examples illustrate that you still can’t take what generative AI gives you at face value. I will admit, when I’m searching for information, I do from time to time click on the “AI Overview” that Google now puts at the top of the results page. But once I’ve skimmed through that, I always scroll down to the search results I was looking for and do a deeper dive into sources I know to be reputable.[4] At a minimum, it’s good advice to always double-check output from generative AI. Use your own judgement and common sense—if something feels off, check it out and consult other sources.
It’s not a bug, it’s a feature
In January 2025, Bent Flyvbjerg, a Danish professor of economic geography who holds appointments at both Oxford University and the IT University of Copenhagen, published a short and easily readable paper entitled “AI as Artificial Ignorance.” He lists several experiences similar to my own and concludes by calling generative AI a “bullshit generator.” What he means by this is that both generative AI and a good bullshit artist “mix true, false and ambiguous statements in ways that make it difficult or impossible to distinguish which is which.”
The problem, as Flyvbjerg points out, is that this is exactly how generative AI was designed. It’s built on the LLM, a technology originally intended for language translation. Give an LLM a large passage of text in one language and it will do a pretty good job translating it into any other language of your choice. As I’ve mentioned before, it breaks its input text down into tokens—words, parts of words, sometimes even single characters—and then uses probabilistic algorithms to pick out the most likely tokens to put together in a response. For like-to-like language translation, no big deal. But for a general-purpose response built from massive amounts of data scraped from the internet, well, all bets are off. Sometimes it’s right, sometimes it’s close, sometimes it’s wrong—but the design point is to always sound right.
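To make that mechanic concrete, here is a toy sketch in Python. It is emphatically not how a real LLM is built: the probability table below is invented purely for illustration, whereas a real model learns probabilities over tens of thousands of tokens from its training data. But the core idea is the same: choose the next token by likelihood, not by checking facts.

import random

# Invented next-word probabilities standing in for a trained model:
# given the last two words, how likely is each candidate continuation?
next_word_probs = {
    ("the", "patient"): {"recovered": 0.6, "improved": 0.3, "levitated": 0.1},
    ("patient", "recovered"): {"quickly": 0.5, "fully": 0.4, "backwards": 0.1},
}

def next_token(context):
    # Sample the next word in proportion to its probability.
    options = next_word_probs.get(tuple(context[-2:]), {".": 1.0})
    words = list(options.keys())
    return random.choices(words, weights=list(options.values()))[0]

text = ["the", "patient"]
for _ in range(2):
    text.append(next_token(text))
print(" ".join(text))  # usually plausible, occasionally "levitated"; fluent either way

The output always reads smoothly because it is assembled from likely continuations; nothing in the process verifies whether the resulting sentence is true.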
Flyvbjerg makes several other good points, and I highly recommend his paper. He takes care to distinguish between generative AI and AGI—artificial general intelligence—which does not, and may never, exist. He allows that generative AI can be useful at tasks like writing computer code or producing standard form letters and other such productivity aids, but let’s take heed of his warning:
… the real risk in using ChatGPT and similar AI is not that the AI will prove better than human intelligence and make humans redundant. The real risk is that humans begin to trust an AI that is in fact ignorant and faulty, which could prove disastrous. Current AIs are well-formulated and persuasive, even when they are wrong, because they were designed that way. That makes it all-too easy to trust an AI, especially in areas where as user you do not know the subject well. Our biggest risk is, as usual, ourselves.
Or, returning to Latin, caveat emptor.
Dumbing down
Blindly trusting generative AI’s output can be risky, but what about the process of using AI itself? AI is a tool, and I would suggest that all tools ever invented by humanity either augment or replace some function we used to do for ourselves. By adopting a tool, therefore, we necessarily diminish or lose some human capability—and most of the time, this is a good thing. Would you rather dig a hole by hand or use a shovel? Would you rather walk to the next town or drive a car? I opened this article with several other examples: from word processors and e-mail to cell phones and GPS, tools and technology often improve our productivity and expand our reach in many ways.
Tools replacing physical skills are one thing, but tools replacing our mental faculties are another. I went to school at a time when putting a television in a classroom or a calculator on a student’s desk was considered extremely controversial. How, it was asked, would the kids ever learn to think for themselves? It turned out that video could enhance the learning experience, and as I always remind my friends and family, my degree was in mathematics, not arithmetic—meaning that I could focus on advanced concepts and theory while happily delegating rote calculation to the machine.
So, I do embrace the ability of tools and technology to make us better at our work, both physical and intellectual; this is one of the basic premises of human progress. But at the same time, I want to be clear that these tools make our job easier only if we already know the fundamentals. I use a calculator because I memorized the “times tables” as a child, and I understand the basics of number theory that inform how arithmetic works. A programmer can use ChatGPT or Claude to generate code, but only if they are already familiar with good programming techniques and know the programming language being used—so that they can understand and debug what the machine produces and integrate it into larger systems.
The critical question for me is how we manage what we turn over to our tools, and whether there comes a point where we give up too much. I think there is such a point, although it may be difficult to pinpoint. I’ve written about my favourite philosopher, Pierre Teilhard de Chardin, and his theory that human evolution has moved from the physical to the mental. If we’re now operating and developing in the noosphere as opposed to the biosphere, then it’s our cognitive and intellectual capabilities that distinguish us as humans, and we need to guard and nurture them very carefully. Giving up our ability to reason and think for ourselves is where the risk gets too high—and I’m afraid it might already be starting to happen.
Forgotten knowledge
Microsoft, which on the one hand is working very hard to embed its Copilot generative AI into all of its products, has on the other hand published a very interesting piece of research that gets to the heart of the matter. The paper carries the rather cumbersome but worrying title, “The Impact of Generative AI on Critical Thinking: Self-Reported Reductions in Cognitive Effort and Confidence Effects from a Survey of Knowledge Workers.” In the introduction, the authors claim that generative AI tools are “the latest in a long line of technologies that raise questions about their impact on the quality of human thought” and note further: “a key irony of automation is that by mechanising routine tasks and leaving exception-handling to the human user, you deprive the user of the routine opportunities to practice their judgement and strengthen their cognitive musculature, leaving them atrophied and unprepared when the exceptions do arise.”
In other words, a programmer who relies on Claude to generate code may lose their ability to read, understand and debug what the AI system produced. A knowledge worker using ChatGPT to summarize a document may miss key points and may not be able to build on the work or understand it in context.
The Microsoft researchers surveyed 319 professionally diverse knowledge workers executing 936 tasks with generative AI and asked them to report on when they felt that critical thinking[5] was necessary, as well as the level of effort involved in executing such critical thinking. The participants reported using critical thinking less frequently, and with less effort—anywhere from 55 per cent to 79 per cent of the time—when using generative AI as opposed to not using it. This was correlated strongly with their perception of trust in generative AI as well as confidence in its ability to perform the task. Notably, the researchers pointed out a shift in the use of critical thinking from task execution to task oversight or stewardship.
That trust or confidence in AI can be looked at in two ways. Per Bent Flyvbjerg’s observations, blind trust in AI’s output can be very dangerous. But the Microsoft researchers note that the knowledge workers they surveyed reported an increased use of critical thinking when they knew they could not trust AI: a lawyer looking for relevant case law to support an argument noted that “AI tends to make up information to agree with whatever points you are trying to make, so it takes valuable time to manually verify.” This is critical thinking as verification rather than execution; a seasoned professional can do it, but a novice would find it much harder. If AI can do the work of an articling student, how does the student gain the skills needed to eventually become the partner who can review and act on that work?
The upshot, as Microsoft notes, is that there is an inverse correlation between a user’s confidence in generative AI and their exercise of critical thinking. A naïve user might have high confidence in AI and therefore lose critical thinking skills, while an experienced user might limit that confidence to easy, routine tasks and save their critical thinking for synthesis of the results. The paper suggests that a design point for AI systems should be to incorporate feedback mechanisms that enable users to gauge the reliability of the AI outputs and therefore achieve a better-balanced “relationship” with AI. To that end, IBM has already been experimenting with “uncertainty quantification” in its Watsonx AI products. If AI can express its own confidence in its correctness on a scale from zero to 100 per cent, it will help users in their exercise of critical thinking.
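As a rough illustration of what such a feedback signal could look like, here is a minimal sketch in Python. It assumes the model exposes per-token log-probabilities, as some APIs do; it is not IBM’s actual Watsonx mechanism, and the numbers below are made up for the example.

import math

def answer_confidence(token_logprobs):
    # Average per-token probability, reported on a 0-100 per cent scale.
    avg_logprob = sum(token_logprobs) / len(token_logprobs)
    return round(100 * math.exp(avg_logprob), 1)

# Hypothetical log-probabilities for the tokens of two generated answers.
confident_answer = [-0.05, -0.02, -0.10, -0.01]
shaky_answer = [-0.9, -1.4, -0.6, -2.1]

print(answer_confidence(confident_answer))  # about 95.6: probably fine to skim
print(answer_confidence(shaky_answer))      # about 28.7: flag for human verification

Even a crude signal like this gives users a cue about where to spend their critical thinking, which is the kind of feedback mechanism the Microsoft paper recommends.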
Another paper from the University of Pennsylvania, published in July 2024, ominously titled “Generative AI Can Harm Learning,” notes that “skill learning is critical to productivity gains, especially in domains where generative AI is fallible.” The researchers tested the effects of generative AI on a sample of 1,000 high school math students. The students were divided into three groups: one had access to a basic ChatGPT interface, another had access to ChatGPT customized via system prompts to act as a tutor, and the third had no access to generative AI. The “tutor mode” version of ChatGPT would not simply provide answers but would give feedback and guide students through the steps to solve a problem.
Not surprisingly, the students with access to ChatGPT outperformed students with no access. But when ChatGPT was subsequently taken away, students who had used the base ChatGPT underperformed the other two groups, while those who had used ChatGPT in tutor mode ended up roughly equal to the control group without ChatGPT access. The authors conclude that “while access to generative AI can improve performance, it can substantially inhibit learning”; therefore, it “must be deployed with appropriate guardrails where learning is important.”
It’s in the way that you use it
Almost four decades ago, Eric Clapton wrote the lyrics “It’s in the way that you use it / It comes and it goes … So don’t you ever abuse it / Don’t let it go.”[6] So it is with generative AI. How we use it (or abuse it) will determine whether its effect on our cognitive abilities is negative, neutral or positive. We’re long past the point of being able to let go of generative AI, so let’s learn to use it properly. And the technology itself must adapt: if it can convey nuances of confidence in its answers and can be prompted to assist rather than simply regurgitate data, we may be able to achieve a reasonable balance of productivity improvements without losing our greatest human asset—our ability to think for ourselves.
Today’s last word, naturally, is in Latin: “ipsa scientia potestas est” is attributed to either Sir Francis Bacon in 1597, or Thomas Hobbes in 1668.[7] Either way, the two must have had some prescience about the advent of AI. “Knowledge itself is power”—so let’s try, at least for a little while longer, not to hand over all our power to the machines.
[1] Martin Luther wrote his 95 theses in Latin in 1517, and mediaeval universities taught the trivium and quadrivium.
[2] A Study in Scarlet
[3] Also known as “Dead Butt Syndrome,” gluteal amnesia is a condition in which the gluteal muscles atrophy due to prolonged sitting and inactivity. It can affect your ability to walk, run and play sports, and can be treated with exercise and physiotherapy targeting the muscles in question.
[4] Google’s AI Overview is particularly bad at math. In a few of my recent searches, it consistently mistook 10²⁴ to be 1,024.
[5] Using a standard definition of critical thinking comprising six types: Knowledge, Comprehension, Application, Analysis, Synthesis and Evaluation.
[6] “It’s in the Way That You Use It,” from the soundtrack of the movie “The Color of Money,” 1986.
[7] Hobbes, as a young man, was Bacon’s secretary. The phrase appears in both Bacon’s Meditationes Sacrae and Hobbes’ Leviathan but was probably borrowed from the book of Proverbs and other pre-existent sacred literature.