Every morning I enjoy going over my news feed, helpfully curated by Google in a way that provides me with technical articles on topics of special interest, such as quantum computing and artificial intelligence, interspersed with the usual current events and the reliably dismal Toronto sports scores. Just a few days ago, I was delighted to come across a piece by John Nosta in Psychology Today with the intriguing title “LLMs and the Curious Notion of Panprotopsychism.”
Here, I thought, would be something new and compelling. Surely, with a title like that, there would be discussion of my favourite 20th-century philosopher, Pierre Teilhard de Chardin, including possibly a connection of his views on evolution to the rise of generative AI. Unfortunately, I was disappointed. Not only was there no mention of Teilhard de Chardin, but the article seems to have the notions of proto-consciousness and AI backward. On the other hand, it did raise some provocative questions in my mind about AI, sentience and evolution.
To wit: Turing Test aside, how will we really know if AI has become conscious or sentient in any meaningful way? And is sentient AI a logical, even inevitable, next step in the evolutionary process? I quickly went to my basement storage room and dug up an undergraduate paper from a 1989 philosophy elective—which, I remember, scored a solid A- grade—and started to collect my thoughts.
Synthesis
Panpsychism is the idea that mind or consciousness pervades all of material reality and is not necessarily confined to the higher orders of biological life. Inserting the additional prefix ‘proto,’ as Nosta did, suggests an ordering or direction of development of this universal consciousness. Nosta’s article doesn’t give this enough weight, though, simply indicating that panprotopsychism might be manifested in the complexity of hardware and software that comprises today’s generative AI systems, and concluding with the (correct) disclaimer that there is to date no empirical evidence to support the idea.
Pierre Teilhard de Chardin, equal parts palaeontologist and Jesuit priest, devoted his career to reconciling Darwinism with his faith. By order of Church authorities, his writing was suppressed until after his death in 1955, although Pope Francis and his predecessor, Pope Benedict XVI, have subsequently viewed his work more favourably.
Teilhard de Chardin’s insight, which I summarized in my fourth-year paper, was to consider evolution as the general precondition or framework within which to make sense of the entire cosmos, while still placing humanity at the centre of the process. He suggested that evolution proceeds in three phases, each succeeding and complementing the last, and all progressing toward ever-greater complexity and consciousness.
He defined the first phase as physicochemical, the appearance and development of matter from quantum particles to atoms and molecules. The beginning of life marks the transition to biological evolution and the emergence of ever-more complex living organisms. Biological evolution culminates with humanity and the transition to psychosocial evolution—where physical improvements are made redundant by the rapid development of self-consciousness, mental faculties and social interactions. Each phase has its roots in the previous, so that proto-life and proto-consciousness can be traced back through biological and physicochemical evolution to the basics of matter itself. This is where I thought Nosta might have gone with panprotopsychism and AI. But he didn’t, so let me try to build on the idea myself.
Teilhard de Chardin coined the term noosphere to define where this psychosocial evolution would take place, as distinct from the biosphere. I have wondered for a couple of decades what he would have made of the internet and the world wide web. I think he would have viewed them as an unanticipated technological locus for the noosphere, even as he would have been dismayed that inane chatter on social media is arguably the most pervasive use of this marvellous technology.
And let’s not for a moment mistake the internet, let alone artificial intelligence, for anything near Teilhard de Chardin’s “Omega point,” the culmination of psychosocial evolution into a unity of the human and divine. But if we admit that proto-consciousness is a characteristic of the cosmos itself, then it is logical to conclude that at some point our technological achievements may also exhibit signs of advanced consciousness, although, as Nosta points out, if it happens it could be so different from the human consciousness we’re familiar with as to be almost unrecognizable.
Antithesis
But where do we stand right now? How close is current technology to Teilhard de Chardin’s noosphere? One of the most common fears about artificial intelligence is that it may already have achieved sentience, or that it inevitably will, and that, like Mary Shelley’s Victor Frankenstein, we will lose control over our creation. The so-called ‘Godfather of AI,’ Geoffrey Hinton, in a recent profile in the Globe and Mail, advances this point of view and argues that AI chatbots even have emotions. With all due respect, I still disagree, although I admit it may become possible in a more distant future.
I think of AI as a kind of broken mirror. It reflects back at us, in a fractured sense, certain aspects of our way of communicating, and maybe even our way of thinking—but it does not give us anything close to a complete replica of our fully conscious, powerful yet flawed, semi-divine selves.
Generative AI, when its data and rules are confined to a limited and specific domain, can be very useful in replicating and automating certain human tasks. With more computing power, it will get better at these tasks, but we would never mistake, say, an AI customer-service chatbot for a real human being. And when I consider the large commercial generative AI platforms like ChatGPT or Gemini, any appearance of emotion or intelligent behaviour seems more like a fleeting illusion, quickly dispelled by the next error or hallucination in the output.
Maybe we need to adjust our language. By focusing on the words artificial intelligence, we inherently anthropomorphize the machine and subtly change our relationship with it. We call it intelligence, so we assume that it is intelligent, and we subconsciously interpret its responses as being intelligent—even when they’re not.
Perhaps we should go back to the term LLM, Large Language Model. The word model implies something different—a copy of some aspects of language but not necessarily intelligence or creativity. The implication is that natural language is how we interact with the machine, but without the attribution of intelligence to the machine. ChatGPT can ‘fake it’ by communicating with us in English or any other language, but it cannot yet ‘make it’—exhibit creative, original thought that could come across as human.
There may be more to be said on this subject. My brother, the philosopher, drew my attention to some very recent work by Eric Schwitzgebel that is directly relevant. Schwitzgebel notes that mimicry is a very common natural phenomenon. A monarch butterfly, for example, is toxic to many potential predators and displays distinctive orange colours on its wings. The viceroy butterfly has adapted to mimic the same colours in order to deter predators, even though it is not necessarily toxic. Humans use speech to express their inner thoughts, feelings and desires. Parrots may mimic human speech in order to attract care and attention from humans, but that mimicry of speech does not imply that parrots have human thoughts or feelings, or even actually ‘want a cracker.’
Schwitzgebel argues that LLMs like ChatGPT do a pretty good job of mimicking human conversation but, like parrots, this mimicry does not necessarily imply human consciousness. This argument naturally led me back to the Turing Test, which essentially is an exercise in mimicry.
What if a ‘good enough’ mimicry is indistinguishable from the real thing? Schwitzgebel is quick to point out that his argument does not eliminate the possibility of machine sentience, only that the machine’s linguistic ability is better explained by mimicry than by underlying sentience. And, in agreement with Nosta, there is still no empirical evidence that can prove machine intelligence: “One cannot infer the consciousness of an AI system that is built on principles of mimicry from the fact that it possesses features that normally indicate consciousness in humans,” Schwitzgebel says. “Some extra argument is required.”
What that extra argument entails, and what it might imply for the Turing Test, will have to remain an open question for now.
Thesis
And what about Pierre Teilhard de Chardin? As you may have guessed, I’m still a fan of his thinking 35 years after my first encounter with him. I like the notion of psychosocial evolution taking over, although it may be too early to tell: a few thousand years of human civilization is a brief moment compared with billions of years of pre-human life, and a cosmos that existed for billions of years more before the advent of life itself.
But the progression toward ever-growing complexity in our thoughts and social interactions is accelerating inexorably and the noosphere, the realm of the mind, is clearly where our advances are taking place. I think the noosphere is, or will be, hybrid—human thought and creativity augmented and supported, but not replaced, by artificial intelligence and other technologies yet to be invented.
Will the machines ever ‘make it’ and become sentient? Maybe they will, and maybe they won’t. If they do, as mentioned, they might be so different from us that we won’t even be able to tell—in which case, they may also not be aware of us. On the other hand, Schwitzgebel’s argument from mimicry suggests they could become quite similar to us. Only time will tell—and I do feel that the technology will need some major advances over the next few decades before it gets there.
Meanwhile, we can let AI continue to ‘fake it’ in useful ways, helping us along the ongoing project of human evolution.