Can AI be trusted to act in humanity’s best interests? Nothing is certain, but OpenAI CEO Sam Altman thinks so.
“I think so,” Altman said when asked the question in an interview with Harvard Business School senior dean Debora Spar.
The question of an AI uprising was once reserved for Isaac Asimov’s science fiction or James Cameron’s action movies. But since the rise of AI it has become, if not a burning issue, then at least a topic of debate that merits real consideration. What was once dismissed as a crank concern is now a genuine regulatory question.
OpenAI’s relationship with the government has been “quite constructive,” Altman said. He added that a project as vast and wide-ranging as developing AI should ideally have been a government project.
“In a well-functioning society this would be a government project,” Altman said. “Given that it’s not happening, I think it’s better that it’s happening this way as an American project.”
The federal government has yet to make much progress on AI safety legislation. An effort in California to pass a law that would have held AI developers liable for catastrophic events, such as their models being used to develop weapons of mass destruction or to attack critical infrastructure, cleared the state legislature but was vetoed by California Governor Gavin Newsom.
Some senior AI figures have warned that ensuring AI fully aligns with humanity’s interests is a critical question. Nobel laureate Geoffrey Hinton, known as the godfather of AI, has said he “couldn’t see a way to guarantee safety.” Tesla CEO Elon Musk has regularly warned that AI could lead to the extinction of humanity. Musk was instrumental in the creation of OpenAI, providing significant funding to the non-profit in its early days, funding for which Altman remains “grateful” despite Musk suing him.
Multiple organizations have been founded in recent years and dedicated exclusively to this question, such as the non-profit Alignment Research Center and the startup Safe Superintelligence, founded by former OpenAI chief scientist Ilya Sutskever.
OpenAI did not respond to a request for comment.
AI as it is currently designed is well-suited to alignment, Altman said. That, he argues, is why it would be easier than many assume to ensure AI does no harm to humanity.
“One of the things that has worked incredibly well has been the ability to align an AI system to behave in a certain way,” he said. “So if we can express what that means in different cases, yes, I think we can get the system to behave that way.”
Altman also has a unique idea for how exactly OpenAI and other developers can “articulate” those principles and ideals needed to ensure AI stays on our side: use AI to question the public at large. He suggested asking AI chatbot users about their values and then using those answers to determine how to align an AI to protect humanity.
“I’m interested in the thought experiment (in which) an AI chats with you for a couple of hours about your value system,” he said. “It does that with me, with everyone else. And then it says, ‘well, I can’t make everyone happy all the time.’”
Altman hopes that by communicating with and understanding billions of people at a “deep level,” AI could identify the challenges facing society more broadly. From there, it could reach a consensus on what it would need to do to serve the general well-being of the public.
OpenAI had an in-house team, Superalignment, tasked with ensuring that a future digital superintelligence does not run amok and cause untold harm. In December 2023, the team released an early research paper showing that it was working on a process by which one large language model would supervise another. This spring the group’s leaders, Sutskever and Jan Leike, left OpenAI, and the team was disbanded, according to a report from CNBC at the time.
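That December 2023 paper examined what the team called weak-to-strong generalization: whether a stronger model trained only on a weaker model’s imperfect labels can end up outperforming its supervisor. The sketch below is a toy illustration of that setup using off-the-shelf scikit-learn classifiers as stand-ins; the models, data, and sizes are all hypothetical and not OpenAI’s actual experiment.

```python
# Toy sketch of weak-to-strong supervision: a weak model labels data,
# and a stronger model is trained only on those imperfect labels.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic task standing in for a real capability benchmark.
X, y = make_classification(n_samples=5000, n_features=20,
                           n_informative=5, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)

# "Weak supervisor": a simple model fit on a small slice of ground truth.
weak = LogisticRegression(max_iter=1000).fit(X_train[:200], y_train[:200])
weak_labels = weak.predict(X_train)  # imperfect labels for the full set

# "Strong student": a more capable model trained only on the weak labels,
# never on ground truth.
strong = GradientBoostingClassifier(random_state=0).fit(X_train, weak_labels)

# The question the paper studies: does the strong student recover
# performance beyond its imperfect supervisor?
print(f"weak supervisor accuracy: {weak.score(X_test, y_test):.3f}")
print(f"strong student accuracy:  {strong.score(X_test, y_test):.3f}")
```

If the stronger model generalizes past its supervisor’s mistakes here, that mirrors, in miniature, the hope behind having weaker systems oversee smarter ones.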
Leike said he left over disagreements with OpenAI’s leadership about its commitment to safety as the company worked toward artificial general intelligence, a term that refers to AI as intelligent as a human.
“Building smarter-than-human machines is an inherently dangerous endeavor,” Leike wrote on X. “OpenAI is shouldering an enormous responsibility on behalf of all of humanity. But over the past years, safety culture and processes have taken a back seat to shiny products.”
When Leike departed, Altman wrote on X that he was “super appreciative of (his) contributions to openai’s (sic) alignment research and safety culture.”