On October 8 the Nobel Prize in Physics was awarded for foundational work on machine learning. The next day the Nobel Prize in Chemistry honored protein-structure prediction through artificial intelligence. The backlash against this AI sweep might have registered on the Richter scale.
Some argued that the physics prize, in particular, was not for physics at all. “AI is also coming for science,” the New York Times concluded. Less measured commentators went further: “Physics is officially over,” one observer declared on X (formerly Twitter). Future physics and chemistry prizes, one physicist joked, will inevitably go to advances in machine learning. In a laconic e-mail to the Associated Press, physics laureate and newly minted AI pioneer Geoffrey Hinton offered his prognosis: “Neural networks are the future.”
For decades AI research was a rather remote domain of computer science, and its proponents often trafficked in prophetic predictions that AI would eventually give rise to superhuman intelligence. Suddenly, in recent years, these visions have become vivid. The arrival of large language models has fueled speculation that powerful creative abilities will permeate every branch of human achievement. AIs can take prompts and churn out illustrated images, essays and solutions to complex math problems, and now they can deliver Nobel Prize-winning discoveries. Have AIs taken over the science Nobels, and perhaps science itself?
Not so fast. Before we swear allegiance to our new computer overlords, or ditch every technology down to the pocket calculator (whose co-inventor Jack Kilby won part of the 2000 physics Nobel, by the way), a bit of caution is in order.
To begin with, what were the Nobels really for? The physics prize recognized Hinton and John Hopfield, a physicist (and former president of the American Physical Society), for discovering how the physical dynamics of a network can encode memory. Hopfield drew an intuitive analogy: a ball rolling across an undulating landscape will reliably “remember” to return to the lowest valley. Hinton’s work extended Hopfield’s model by showing how increasingly complex neural networks with hidden “layers” of artificial neurons can learn more effectively. In short, the Nobel Prize in Physics was awarded for fundamental research into the physical principles of information, not for the broad umbrella of “AI” and its applications.
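Hopfield’s rolling-ball analogy can be made concrete in a few lines of code. What follows is a minimal, illustrative sketch of a Hopfield-style network (not the laureates’ actual implementation): a pattern is stored as a low point in an energy landscape via Hebbian weights, and a corrupted input “rolls downhill” back to the stored memory.

```python
import numpy as np

def train(patterns):
    """Store binary (+1/-1) patterns with the Hebbian outer-product rule."""
    n = patterns.shape[1]
    W = np.zeros((n, n))
    for p in patterns:
        W += np.outer(p, p)
    np.fill_diagonal(W, 0)  # no self-connections
    return W / patterns.shape[0]

def recall(W, state, steps=10):
    """Update neurons repeatedly; each flip lowers the network's energy."""
    s = state.copy()
    for _ in range(steps):
        for i in range(len(s)):
            s[i] = 1 if W[i] @ s >= 0 else -1
    return s

pattern = np.array([1, -1, 1, -1, 1, -1])
W = train(pattern[None, :])

noisy = pattern.copy()
noisy[0] = -noisy[0]          # corrupt one "neuron"
print(recall(W, noisy))       # rolls back to the stored pattern
```

Here the stored pattern sits at the bottom of the energy valley; the noisy cue, one flipped bit away, settles back into it after a few update sweeps.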
Half of the chemistry prize went to the biochemist David Baker; the other half was shared by two researchers from the AI company DeepMind: Demis Hassabis, a computer scientist and DeepMind’s CEO, and John Jumper, a chemist and a director there. For proteins, form is function: their jumbled chains fold into elaborate shapes that act as keys for engaging various molecular locks. But it has long been very difficult to predict a protein’s emergent structure from its amino acid sequence; imagine trying to work out how a long, tangled string will fold. Baker first developed software to address this problem, including a program for designing new protein structures from scratch. Even so, as of 2018, of the 200 million proteins cataloged across genetic databases, only about 150,000, less than 0.1 percent, had confirmed structures. Hassabis and Jumper then debuted AlphaFold at a protein-folding prediction challenge. Its first iteration beat the competition by a wide margin; the second provided highly accurate calculated structures for the remaining 200 million proteins.
AlphaFold is “a pioneering application of AI in science,” according to a 2023 review of protein folding. Even so, the AI has limitations: its second iteration failed to predict protein defects and struggled with “loops,” a kind of structure key to drug design. It is not a panacea for every problem in protein folding but an excellent tool, like many others the Nobels have honored over the years: the 2014 physics prize for blue light-emitting diodes (in almost all LED displays today), say, or the 2019 chemistry prize for lithium-ion batteries (still indispensable, even in the phone-flashlight era).
Many such tools have since disappeared into their uses. We rarely stop to consider the transistor (which earned the 1956 physics prize) while using electronics that contain billions of them. Some powerful machine-learning features are already headed the same way: the neural networks behind accurate language translation or uncannily apt song recommendations in popular consumer software are simply part of the service; the algorithm has receded from view. In science, as in many other domains, this trend suggests that once AI tools become commonplace, they too will fade into the background.
Still, there might be a reasonable concern that such automation, whether subtle or overt, threatens to overwhelm or taint the efforts of human physicists and chemists. As AI becomes part and parcel of scientific progress, will any prizes recognize work done without it? “It’s hard to make predictions, especially about the future,” as the quip attributed to both Nobel-winning physicist Niels Bohr and baseball icon Yogi Berra goes.
AI can revolutionize science; there is no doubt about that. It has already helped us see proteins with a clarity that was previously impossible. Soon AI might dream up new molecules for batteries or find new particles hidden in collider data; in short, it can do many things, some of which once seemed impossible. But it has a crucial limitation tied to something wonderful about science: the field’s empirical dependence on the real world, which cannot be overcome by computation alone.
An AI can, in some respects, only be as good as the data it is given. It cannot, for example, use pure logic to deduce the nature of dark matter, the mysterious substance that makes up some 80 percent of the matter in the universe. Instead it will have to rely on observations from physical detectors, which demand continual elbow grease to build and run. To know the real world, we will always have to contend with such bodily hiccups.
Science also needs experimenters: human experts driven to explore the universe and to ask questions that an AI cannot. As Hopfield himself explained in a 2018 essay, physics (science itself, really) is not a subject but a “viewpoint,” its basic ethos being “that the world is comprehensible” in quantitative, predictive terms only through careful experiment and observation.
That real world, in all its majesty and endless mystery, still exists for future scientists to explore, with or without the help of AI.
This is an opinion and analysis article, and the views expressed by the author or authors are not necessarily those of Scientific American.