Mathematicians don’t get Nobel prizes.
The reason, according to a story circulating while I was a student, is that Alfred Nobel’s wife had had an affair with a mathematician. Nobel, therefore, jealously excluded mathematics from his prize categories. However, while on vacation in Scandinavia this past summer, my wife and I toured the Nobel Prize Museum in Stockholm, where I learned that he never married. So, either there is more to this story than meets the eye, or, more likely, a lot less: it’s probably a myth concocted by mathematicians to make themselves feel better about this perceived slight. Naturally, they created their own honour, the Fields Medal, and, in the subset of mathematics known as computer science, there is also the Turing Award.
Geoffrey Hinton is not a mathematician, and I am certain that he would never suggest otherwise. He earned an undergraduate degree in psychology before completing a Ph.D. in artificial intelligence. But, as arguably the pre-eminent computer scientist of this generation—among those he trained at the University of Toronto were Yann LeCun and Ilya Sutskever—I think mathematicians can at least lay spiritual claim to some of Dr. Hinton’s talents. One of his many accomplishments is winning the 2018 Turing Award, computer science’s highest honour. So, it gives me a great deal of Canadian and mathematical pride to note that Dr. Hinton is 2024’s co-recipient of the Nobel Prize for Physics, along with John Hopfield of Princeton University. The citation noted their “foundational discoveries and inventions that enable machine learning with artificial neural networks.”
Just days later, the Nobel Prize for Chemistry was announced—and it was awarded half to University of Washington biochemist David Baker, with the other half divided between computer scientists Demis Hassabis and John Jumper, both of Google DeepMind in London. The trio was cited for “computational protein design” and “protein structure prediction,” with specific mention of the development and deployment of DeepMind’s AlphaFold machine learning system.
What’s going on here, when two of the most historically significant prizes for original scientific research are being awarded to work in artificial intelligence? There’s already a bit of blowback in the press about this and, given that Dr. Hinton also worked at Google until 2023, some concern about possible commercialization of the Nobel prize. I think that, although this year’s Nobel committees clearly got caught up in the current wave of AI excitement, the specific citations indicate that they do understand where AI is most helpful and where it’s mostly hype—and there are legitimate connections to applied physics and biochemistry. Let’s take a deeper look at the work behind both awards.
Machine school
Neural networks are a particular type of machine learning, a branch of AI that has existed for decades. There’s a great article from IBM that explains how all the different branches of AI differ and how they relate to each other. Neural networks, as you might guess from the name, extend machine learning to simulate how neurons in the brain signal each other. Think of a neural network as multiple layers of data structures known as nodes, where each node has weighted connections to nodes in the next layer. Each node combines the signals it receives from the previous layer, and when that combined value reaches the node’s threshold, the node sends a signal of its own to its connected nodes in the next layer. The network’s performance is measured by what’s called a cost function, a mathematical measurement of how far the network’s output is from its target values.
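To make the layered picture concrete, here is a minimal sketch in Python. All the numbers, shapes and thresholds are toy values of my own choosing, not any production system: each node takes a weighted sum of the signals arriving from the previous layer and fires only when that sum crosses its threshold, and a cost function measures how far the result is from a target.

```python
import numpy as np

# One layer of a toy neural network with threshold activations.
def layer_forward(inputs, weights, thresholds):
    sums = weights @ inputs                    # weighted sum arriving at each node
    return (sums >= thresholds).astype(float)  # 1.0 = node fires, 0.0 = quiet

# The cost function: how far the layer's output is from its target.
def cost(output, target):
    return np.mean((output - target) ** 2)     # mean squared error

x = np.array([0.5, 0.9, 0.1])        # three input signals
W = np.array([[0.4, 0.3, 0.2],       # weights into node 1
              [0.1, 0.8, 0.5]])      # weights into node 2
t = np.array([0.45, 0.7])            # each node's firing threshold
print(layer_forward(x, W, t))        # → [1. 1.]
print(cost(layer_forward(x, W, t), np.array([1.0, 0.0])))  # → 0.5
```

Real networks replace the hard threshold with smooth activation functions, which is what makes the calculus of training possible.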
Consider, for example, handwriting recognition. The input nodes encode the pixels of an image of an alphanumeric character. As the algorithm repeatedly compares ambiguous handwritten characters with known, legible type, it progresses through the layers of the neural network. At each layer it refines its set of possible matches, with the cost function quantifying its confidence in deciding whether, for example, a single vertical stroke is supposed to be a lower-case letter l, an upper-case letter I, or possibly the digit 1. The output layer can be compared with expected results, and the entire process can be re-executed to continually refine the results.
This is a typical example of supervised learning, which repeatedly moves forward through the network, comparing outputs against known answers. Dr. Hinton’s innovation was to train these networks with backpropagation. The general idea is that you don’t just run another iteration of your machine learning from the beginning. Since you can measure the difference between your output layer and the expectation, you can then push this difference backwards through all the intermediate layers of your network and adjust the weights of those nodes based on how much each one contributed to the observed discrepancy. Once you’ve propagated all those intermediate error measurements back to your input, you can then do another forward run of your algorithm and obtain a much better result. There’s a lot of intricate calculus involved in backpropagation, having to do with partial derivatives of the cost function as you work your way backward through the neural network, which I think justifies my partial claim on Dr. Hinton as a mathematician.
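The forward-measure-backward-adjust loop can be sketched in a few lines. This is a toy illustration with made-up numbers, not Dr. Hinton’s actual code: a two-layer network is trained on a single example by running forward, measuring the discrepancy at the output, pushing that error backward with the chain rule, and nudging each weight in proportion to its share of the blame.

```python
import numpy as np

rng = np.random.default_rng(0)
W1 = 0.5 * rng.normal(size=(3, 2))   # input -> hidden weights
W2 = 0.5 * rng.normal(size=(1, 3))   # hidden -> output weights
x = np.array([[0.2], [0.9]])         # one training input
y = np.array([[1.0]])                # the expected output

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(500):
    # forward pass
    h = sigmoid(W1 @ x)                      # hidden-layer activations
    out = sigmoid(W2 @ h)                    # network output
    err = out - y                            # discrepancy from expectation
    # backward pass: distribute the blame layer by layer (chain rule)
    d_out = err * out * (1 - out)            # error gradient at the output
    d_hid = (W2.T @ d_out) * h * (1 - h)     # error pushed back to the hidden layer
    # adjust each weight by its contribution to the error (learning rate 0.5)
    W2 -= 0.5 * d_out @ h.T
    W1 -= 0.5 * d_hid @ x.T

print(err[0, 0] ** 2)  # squared error after training: close to zero
```

Each iteration of the loop is one forward run plus one backward run, which is exactly the refinement cycle described above.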
Backpropagation essentially adds efficiency to the training process, allowing your algorithm to achieve accurate results much faster. A popular use of backpropagation is in Large Language Models and generative AI systems like ChatGPT, enabling them to learn human languages and carry on realistic conversations. Without backpropagation, there just wouldn’t be enough computing power for generative AI to work.
As innovative and useful as backpropagation is, the Nobel citation for Dr. Hinton notes his work on another type of neural network, the Boltzmann machine, named after Ludwig Boltzmann, a 19th-century Austrian physicist noted for his work in entropy, thermodynamics and statistical mechanics. Now we’re getting a bit closer to physics and a justification for the Nobel prize—and, again, there’s a lot of mathematics under the covers. The Boltzmann distribution measures the probability of a physical system being in any particular energy state—and Dr. Hinton’s innovation was to apply a similar probability distribution to the inputs of a neural network.
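The distribution itself is compact enough to state directly. In this sketch (with toy energy levels of my own choosing), the probability of a state with energy E is proportional to exp(-E/T): low-energy states are exponentially favoured, and raising the “temperature” T flattens the distribution.

```python
import numpy as np

# Boltzmann distribution over a set of discrete energy states.
def boltzmann(energies, T=1.0):
    weights = np.exp(-np.asarray(energies, dtype=float) / T)
    return weights / weights.sum()   # normalize so probabilities sum to 1

E = [0.0, 1.0, 2.0]          # made-up energy levels for illustration
print(boltzmann(E, T=1.0))   # the lowest-energy state dominates
print(boltzmann(E, T=10.0))  # nearly uniform at high temperature
```

In a Boltzmann machine, an analogous energy function is defined over the network’s node states, and learning amounts to shaping that energy landscape so the training data becomes the most probable configuration.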
The Nobel citation mentions the applicability of Boltzmann machines to problems in pattern recognition and image classification. It also references the use of neural networks in general, as a valuable tool for research in applied physics. I think, unusual as it is for AI, it’s enough to justify the prize in the Physics category.
Life lessons
AlphaFold is a deep learning system first developed by Google’s DeepMind division in 2018. Dr. Hassabis is the founder and CEO of DeepMind, and Dr. Jumper is a director at the company. Two of the biggest problems in biochemistry are protein prediction and protein design, exactly as noted in this year’s Nobel citation for the prize in Chemistry, and AlphaFold is poised to make disruptive progress in this area.
Proteins are the fundamental building blocks of life. No matter where you look—at the tissues in your body, hormones, diseases, pharmaceuticals, vaccines and much more—proteins underlie all of it. So it’s surprising how much we don’t know, or how much is really difficult to learn, about them. Proteins are formed as chains of amino acids, and the sequence of amino acids, known as the primary structure of a protein, is usually easily discoverable. The difficulty is determining the three-dimensional shape of a protein from its amino acid chain, including how it often folds in on itself—and this is the problem of protein prediction. If you can learn the protein’s shape, you can begin to figure out how it interacts with other proteins, allowing you to better understand diseases, and design drugs or vaccines, for example.
The biochemistry community organizes a biennial competition known as CASP, the Critical Assessment of Techniques for Protein Structure Prediction. DeepMind, using versions of AlphaFold, won CASP in 2018 and 2020, and although the company did not enter the competition in 2022, most of the entrants used software tools based on AlphaFold. CASP is scored by a measurement known as GDT, the Global Distance Test, which measures the similarity of the predicted protein structure to verified laboratory results. In 2018, AlphaFold achieved a median GDT score of 58.9 per cent and provided the best prediction for 25 out of the 48 proteins in the competition. In 2020, AlphaFold improved its median GDT score to a very impressive 92.4 per cent, with the best prediction in 88 out of the 97 proteins in the competition.
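To make the scoring concrete, here is a simplified sketch of a GDT_TS-style calculation, using a toy four-residue “protein” with made-up coordinates. I assume the predicted and experimental structures are already optimally superimposed, which real CASP scoring handles for you: for each distance cutoff of 1, 2, 4 and 8 angstroms, take the fraction of residues whose predicted position lies within that distance of its experimental position, then average the four fractions.

```python
import numpy as np

# Simplified GDT_TS-style score (assumes structures are already superimposed).
def gdt_ts(predicted, experimental):
    dists = np.linalg.norm(predicted - experimental, axis=1)  # per-residue error
    fractions = [(dists <= cutoff).mean() for cutoff in (1.0, 2.0, 4.0, 8.0)]
    return 100.0 * np.mean(fractions)

# Toy coordinates for a four-residue "protein" (illustrative numbers only).
experimental = np.array([[0, 0, 0], [3, 0, 0], [6, 0, 0], [9, 0, 0]], dtype=float)
predicted = experimental + np.array([[0.5, 0, 0], [1.5, 0, 0], [3, 0, 0], [9, 0, 0]])
print(gdt_ts(predicted, experimental))  # → 56.25
```

A perfect prediction scores 100; a score above 90, like AlphaFold’s 2020 median, means nearly every residue sits within a few angstroms of its true position.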
AlphaFold is machine learning tuned specifically for the problem of protein prediction, using pattern recognition and supervised learning on training data of over 170,000 known protein sequences and structures. I’m probably oversimplifying, but I like to think of GDT as being, basically, the cost function for AlphaFold. The dramatic improvement in AlphaFold’s accuracy in just two years, combined with the release earlier this year of AlphaFold 3 with the capability of predicting much more complicated molecular structures, suggests to me that it will soon achieve breakthrough results in pharmaceutical research and other fields of biochemistry. And when you combine protein prediction with Dr. Baker’s work in computational protein design, I think a likely result will be tailored, personalized medicine specific to each person’s individual body, situation and environment. A Nobel prize, although maybe a bit early, is justifiable based on the speed and accuracy of the results achieved and the direction the research is going.
One problem may be just how compute-intensive AI applications like AlphaFold are, and whether data centres will be able to scale up to meet the demands of the algorithms. Personally, I’d like to see more collaboration between AI and quantum computing researchers in the life sciences, which would take this work to the next level—but that’s food for thought for maybe a future post.
Sound the alarm (or not?)
Drs. Hinton and Hassabis are in the category of AI alarmists—leaders in the field who have expressed deep concern about the so-called existential threat of AI, the possibility that AI might at some point outperform human capabilities and possibly spell the end of humanity as we know it. Dr. Hinton left Google in 2023 in order to speak more openly on this topic. In an interview following the announcement of his Nobel prize, he indicated his hope that the award might help bring more public attention to the problem. In 2023, Dr. Hassabis signed a public statement declaring that “mitigating the risk of extinction from AI should be a global priority.” Dr. Hinton has also suggested that backpropagation models how humans understand language, and he infers from this that generative AI already has the capacity to feel emotion.
As much as I’m impressed with their accomplishments, I don’t share their worry. I’m more partial to the idea that AI can mimic certain aspects of human behaviour, but mimicry is not the same as the actual behaviour or its underlying intelligence. Google has a new product, NotebookLM, which I will discuss in more detail in a future post; it has come closer than anything I’ve seen to passing the Turing Test—and even then, after interacting with it only a few times, it’s pretty obvious that it’s not human. I believe that AI in limited domains—and AlphaFold is a great example—can be extremely helpful, but broad generative AI is still very much an adolescent: awkward, although occasionally useful. It will take a lot of improvement in the technology before AI even begins to approach the level of intuition and creativity humans exhibit every day—and for all I know, it may never happen.
So, I will grant AI some justification for its prizes this year in Physics and Chemistry—but if AI ever wins the Nobel Prize for Literature, I will put away my metaphorical pen and paper for good.