A few months ago, I went for my annual appointment at the optometrist and, as usual, this resulted in a new prescription for my glasses. Some days later, when the prescription was ready, I noticed that the package I picked up from the lab included a card proudly telling me that my new lenses had been made with the help of AI. No further detail was provided, and I was left wondering why AI is now deemed necessary to the process of grinding lenses, and how I would see better as a result.
Recently, I noticed that my Spotify playlist was playing a small number of the same songs by the same few artists over and over again, and although I don’t have anything against Dire Straits or Robert Plant, I found the repetition tiresome. It turned out that Spotify was defaulting to its “Smart Shuffle” mode, an AI algorithm designed to insert new songs into my playlist based on its observation of my tastes. Aside from the fact that Spotify should have allowed me to opt in to this “feature”, it turns out that there is a glitch: Smart Shuffle pre-loads a small number of songs, and in a large playlist like mine a feedback loop can ensue, causing the same songs to come up repeatedly. By turning off Smart Shuffle I was able to get back to what I wanted: a reasonably pseudo-random selection of music from my playlist.
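For the curious, a toy simulation shows why a small pre-loaded pool produces that kind of repetition. This is purely illustrative, not Spotify’s actual algorithm: the function name, pool size and refresh interval are all my own assumptions.

```python
import random

def smart_shuffle_sim(playlist, pool_size=5, plays=100, seed=42):
    """Toy model of a shuffle that draws from a small pre-loaded pool.

    Hypothetical stand-in for the glitch described above: instead of
    sampling from the whole playlist, it repeatedly picks from a small
    pre-loaded pool and refreshes that pool only occasionally.
    """
    rng = random.Random(seed)
    pool = rng.sample(playlist, pool_size)  # the pre-loaded songs
    history = []
    for i in range(plays):
        history.append(rng.choice(pool))
        if i % 50 == 49:  # the pool is refreshed only rarely
            pool = rng.sample(playlist, pool_size)
    return history

playlist = [f"song_{n}" for n in range(500)]
plays = smart_shuffle_sim(playlist)
print(f"distinct songs heard in {len(plays)} plays: {len(set(plays))}")
```

With a 500-song playlist and 100 plays, the simulated listener hears at most ten distinct songs, whereas a true random draw from the full playlist would surface dozens. The feedback loop is just the small pool doing all the work.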
Just ask me what I’m thinking about
It seems that no matter where I turn, I can’t escape AI, or at least messaging about AI, being thrust upon me whether I like it or not.
Google embeds Gemini in its search engine, and Microsoft does the same with Copilot in Bing. Microsoft also offers Copilot as an extension to its Office suite, although some users have derided this as not much more than “Clippy 2.0.” Every time I open a new PDF file, Adobe’s Acrobat viewer pops up a message offering me the services of its AI assistant to present a summary of the document. Well, no thanks, most of the time I would rather read the document in full for myself.
Corporate software vendors from SAP to Salesforce and ServiceNow to Workday all include some version of “Powered by AI” or “AI at the core” in their marketing messages. Many companies are rolling out generative AI internally as a productivity assistant for their employees. And when I attended IBM’s Think conference in Toronto this year, pretty much every single session, breakout and booth was dedicated to AI and Watsonx. (The only exception was one booth where I found my IBM Quantum friends hanging out.)
All this had me thinking of the hit Apple TV show Ted Lasso and its curmudgeonly, lovable character Roy Kent, the aging but popular captain of the fictional AFC Richmond soccer team. If you’ve seen the show, you’ll remember the fans’ chant: “He’s here! He’s there! He’s every-f***ing-where!” For better or worse, AI is the technology industry's Roy Kent—and his career path might shed some light on where AI will go.
AI’s pervasiveness is not necessarily a bad thing—there are lots of use cases where one form or another of AI has improved productivity, reduced costs or accelerated business processes. Chatbots help automate routine call centre and customer service tasks. Machine learning provides many improvements in applications from image analysis to fraud detection. AI algorithms can deliver a more personalized, relevant shopping experience with meaningful tips and recommendations.
Roy Kent retired from playing soccer when the limitations of age and a knee injury caught up with him. I’m not suggesting that AI is going to retire any time soon (quite the opposite), but in a way it may also be showing its age: it has been around long enough now that its flaws and limitations, like Roy Kent’s, can no longer be ignored. Some uses of AI, as I’ve noted above, can range from irrelevant to annoying, and may even become dangerous. As good as AI can be, it can also be misused, misrepresented or pushed beyond its limits.
Systems like Gemini or ChatGPT occasionally hallucinate—and users who don’t understand the difference between generative AI and a search engine may end up relying on the fictitious results. Inherent bias in the data, or worse yet malicious tampering, can also produce erroneous outputs that may be hard to detect.
Earlier this year, in a case combining inappropriate anthropomorphism of AI with a hallucination, Air Canada tried to absolve itself of responsibility for incorrect advice given by its customer service chatbot—by claiming that the AI agent was responsible for its own actions. It took a lawsuit for this matter to be settled, as it obviously should have been, in favour of the misled customer. The Globe and Mail has reported on the unfulfilled promise of AI in pharmaceutical research, noting that many of the problems are still just too hard for the technology. And in the most extravagant claim yet of something that is forever far beyond AI’s capabilities, some have suggested that it may be a new god and start a new religion.
Don’t you dare settle for fine
When Roy Kent retired from active playing, he took some time to try other roles as he explored his new career path. He took a season coaching his niece’s school soccer team where his rough-and-ready style didn’t go over nearly as well with the parents and teachers as it did with the nine-year-olds. Yielding to pressure from his girlfriend and peers, he spent a short time as a television commentator—and again, his abrasive honesty, although popular with his fans, proved too much for his fellow pundits. He quickly left the show when it became clear that he would never fit in with their staid, inoffensive style.
I wonder if it’s similar with AI. Maybe, given AI’s pervasiveness, there’s a mismatch of expectations between what people want it to do and what it’s actually capable of. A recent article in MIT’s Technology Review makes exactly this point. The article quotes Sasha Luccioni of Hugging Face: “Most of the time, you don’t need a model that does everything. You need a model that does a specific task that you want it to do.” She goes on to point out that we should change how we measure AI performance, worrying less about the number of tokens or parameters, or the size of the training data, and focusing instead on things that “actually matter”—such as accuracy, privacy, and trustworthiness of the data.
I wholeheartedly agree with Luccioni. I’ve also noticed a group on Substack called “AI Supremacy” and I must admit that the name gives me more than a little discomfort. Maybe the authors are borrowing from the well-known term “Quantum Supremacy”—but I am beginning to develop similar misgivings about that one, which I will delve into in a later post. I do not believe that AI Supremacy is technically feasible, and even if it were, I don’t think it would be desirable. The term implies a certain otherness to AI, along with an unjustified sense of superiority. AI is another in a long history of technological tools that we humans have invented and integrated into our lives, not some completely foreign entity.
Having said all that, I do think that AI has a bright future. It’s doing fine right now, especially in limited domains, but as Roy Kent points out, settling for just fine is a cop-out. AI in all its forms—machine learning, neural networks, LLMs and generative models—has the potential for greatness. As it gets more multimodal, adding audio and visual interfaces, it will become more user-friendly. Governance models are beginning to mature, data curation is being taken seriously, and companies will begin to think critically about measuring and realizing ROI. When all this comes to fruition, we will see AI do new things and help us in new ways that we can’t even imagine yet.
Whistle! Whistle!
Roy Kent finally settles into a successful role as assistant coach of AFC Richmond and (spoiler alert!) is promoted to head coach in the final episode of the series, when Ted Lasso returns to Kansas. His experience as a player gives him credibility and rapport with the team, and they understand that beneath his crusty exterior lies a genuine love and respect for them and for the game. AFC Richmond becomes a force to be reckoned with in the Premier League.
I think this might be a good metaphor for the role AI will play in our lives. Just as Roy Kent had to find his limitations and figure out where he could best fit in and make a difference, AI will also find its appropriate role in human society. AI, properly deployed, can assist us—perhaps, in some ways, guide us—and make us better, faster and more effective at our daily tasks. It won’t supersede us or replace us, but it will eventually earn our trust as we learn how to work with it.
Roy Kent’s career finale is what AI should aspire to be: A coach, not a captain.