The Nobel Prize ceremonies took place this December, celebrating both work on artificial intelligence and the efforts of Nihon Hidankyo to rid the world of nuclear weapons.
The juxtaposition was striking to me, a mathematician who studies how deep learning works. In the first half of the 20th century, the Nobel Committees awarded prizes in physics and chemistry to scientists who uncovered the structure of the atom. That work also enabled the development, and later the deployment, of nuclear weapons. Decades later, the Nobel Committees awarded this year’s Peace Prize to people grappling with the consequences of how nuclear science ended up being used.
There are parallels between the development of nuclear weapons out of basic physics research and the dangers posed by AI applications born out of what began as basic computer science research. These include the Trump administration’s push for “Manhattan Projects” for AI, as well as a broader spectrum of social risks, including disinformation, labor displacement and surveillance.
I worry that my colleagues and I are not grappling sufficiently with the implications of our work. Will the Nobel Committees award the Peace Prize in the next century to people who clean up the mess left behind by AI scientists? I am determined that we not repeat the story of nuclear weapons.
About 80 years ago hundreds of the world’s top scientists joined the Manhattan Project in a race to build an atomic weapon before the Nazis did. Yet even after the German bomb effort was halted in 1944 and Germany surrendered the following year, work at Los Alamos continued unabated.
When the Nazi threat was over, only one Manhattan Project scientist, Joseph Rotblat, left the project. Looking back, Rotblat explained: “You involve yourself in a certain way and you forget that you are human. It becomes addictive and you just do it to produce a gadget without thinking about the consequences. And then, having done that, you find some justification for having produced it. Not the other way around.”
The U.S. military conducted its first nuclear test shortly thereafter. U.S. leaders then authorized the twin bombings of Hiroshima and Nagasaki on August 6 and 9, 1945. The bombs killed hundreds of thousands of Japanese civilians, some instantly; others died years and decades later from radiation poisoning.
Although Rotblat spoke those words decades ago, they accurately describe the ethos that dominates AI research today.
I first began to see the parallels between nuclear weapons and artificial intelligence while working at Princeton’s Institute for Advanced Study, where the haunting final scene of Christopher Nolan’s film Oppenheimer takes place. Having made some progress in understanding the mathematics behind artificial neural networks, I began to worry about the social implications of my work. At the suggestion of a colleague, I went to talk to the physicist Robbert Dijkgraaf, the director of the institute at the time.
He suggested I look to the life story of J. Robert Oppenheimer for guidance. I read one biography, then another. I tried to figure out what Dijkgraaf had in mind, but I saw nothing to emulate in Oppenheimer’s path, and by the time I finished a third biography, the only thing I knew was that I did not want my life to mirror his. I did not want to come to the end of my life carrying the burden that weighed on Oppenheimer.
Oppenheimer is often quoted as saying that when scientists “see something that is technically sweet,” they go ahead and do it. Indeed, Geoffrey Hinton, one of the winners of the 2024 Nobel Prize in Physics, has cited this line. But it is not universally true. Lise Meitner, a renowned physicist of the time, was asked to join the Manhattan Project. Despite being Jewish and having narrowly escaped Nazi-occupied Europe, she flatly refused, saying, “I will have nothing to do with a bomb!”
Rotblat offers another model of how scientists can pursue their talents without losing their values. After the war he returned to physics, focusing on the medical uses of radiation. He also became a leader in the movement against nuclear proliferation through the Pugwash Conferences on Science and World Affairs, a group he helped found in 1957. In 1995 he and Pugwash were awarded the Nobel Peace Prize for this work.
Now, as then, there are thoughtful, principled people working in AI development. Following the path suggested by Rotblat, Ed Newton-Rex stepped down last year as the leader of Stability AI’s music-generation team over the company’s insistence on training generative AI models on copyrighted data without paying for its use. This year Suchir Balaji resigned as a researcher at OpenAI over similar concerns.
Echoing Meitner’s refusal to work on military applications of her field, Meredith Whittaker raised staff concerns at a 2018 Google town hall about Project Maven, a Defense Department contract to develop AI for military drone targeting and surveillance. In the end, the workers succeeded in pressuring Google, where Demis Hassabis, a 2024 Nobel laureate in chemistry, works, to drop the project.
There are many ways that society shapes how scientists work. The most direct is financial: we collectively choose what research to fund, and individually we choose whether to pay for the products that come out of that research.
Less direct but still powerful is prestige. Most scientists care about their legacy. When we look back at the nuclear age and choose, for example, to make a movie about Oppenheimer rather than other scientists of that era, we send a signal to scientists today about what we value. When the Nobel Committees choose which individuals working in AI to honor, they create a powerful incentive for AI researchers today and tomorrow.
It is too late to change the events of the 20th century, but we can expect better outcomes from AI. We can begin to look beyond the machine learning researchers celebrated for rapid gains in capability and instead follow the lead of people like Newton-Rex and Whittaker, who insist on engaging with the context of their work and on not only evaluating but also responding to changing situations. Paying attention to what scientists like them have to say offers our best hope for science that benefits society, now and in the future.
As a society, we choose whom to elevate, emulate and hold up as role models for the next generation. As the nuclear age teaches us, it is time to examine the applications of scientific discoveries and to ask which of today’s scientists reflect the values not of the world we live in today but of the world we hope to live in.
This is an opinion and analysis article, and the views expressed by the author or authors are not necessarily those of Scientific American.