All day, every day, you make choices. Philosophers have long argued that this ability to act intentionally, or agency, distinguishes humans from simpler life forms and machines. But artificial intelligence may soon close that gap: tech companies are building AI “agents,” systems capable of making decisions and achieving goals with minimal human supervision.
Faced with pressure to demonstrate returns on billion-dollar investments, AI developers are promoting agents as the next wave of consumer technology. Like chatbots, agents use large language models and can be accessed from phones, tablets and other personal devices. But unlike chatbots, which require constant hand-holding to generate text or images, agents can autonomously interact with external applications to perform tasks on behalf of individuals or organizations. OpenAI lists agents as the third of five steps toward building artificial general intelligence (AGI)—AI that can surpass humans at any cognitive task—and the company is reported to be releasing an agent called “Operator” in January. That system could be the first drop in a downpour: Meta CEO Mark Zuckerberg has predicted that AI agents will eventually outnumber humans. Some AI experts, meanwhile, fear that the commercialization of agents is a dangerous new step for an industry that has tended to prioritize speed over safety.
According to big tech’s sales pitch, agents will liberate human workers from drudgery, opening the door to more meaningful work (and huge productivity gains for companies). “They free us from day-to-day tasks, so [we] can focus on what’s really important: relationships, personal growth and informed decision-making,” says Iason Gabriel, a senior researcher at Google DeepMind. Last May the company unveiled its “Project Astra” prototype, which it describes as “a universal AI agent that is helpful in everyday life.” In a video demonstration, Astra converses with a user through a Google Pixel phone, scanning the surroundings through the device’s camera. At one point the user points the phone at a colleague’s computer screen filled with lines of code. The AI describes the code—it “defines encryption and decryption functions”—in a humanlike female voice.
Project Astra isn’t expected to be released to the public until next year at the earliest, and existing agents are mostly limited to monotonous tasks such as writing code or submitting expense reports. This reflects both technical limitations and developers’ caution about deploying agents in high-stakes settings. “Agents need to be deployed for specific, repetitive tasks that can be defined very clearly,” says Silvio Savarese, chief scientist at the cloud-based software company Salesforce. The company recently introduced Agentforce, a platform offering agents that can handle customer service inquiries and other narrow functions. Savarese says he would be “very hesitant” to trust agents in more sensitive contexts, such as legal decisions.
While Agentforce and similar platforms are mostly marketed to businesses, Savarese foresees the rise of personal agents that can access your personal data and continually update their understanding of your needs, preferences and quirks. An app-based agent tasked with planning your summer vacation, for example, could book your flights, reserve restaurant tables and secure lodging, remembering your preference for window seats, your peanut allergy and your penchant for hotels with pools. Crucially, it would also need to respond to the unexpected: if the best flight is fully booked, it should adjust course, perhaps by checking another airline, as sketched in the code below. “The ability to adapt and react to an environment is essential for an agent,” Savarese says. Initial iterations of personal agents may already be on the way: Amazon, for example, is reportedly working on agents that will be able to recommend and purchase products based on a user’s online shopping history.
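To make that adaptive loop concrete, here is a minimal sketch in Python. Everything in it is hypothetical: `search_flights`, the toy inventory and the preference dictionary stand in for real travel APIs and a learned user profile; no vendor’s actual system is depicted.

```python
# Minimal sketch of an adaptive booking loop (illustrative only).
# All names and data below are hypothetical stand-ins.

PREFERENCES = {"seat": "window", "allergy": "peanut", "hotel": "pool"}

def search_flights(airline: str, date: str) -> list[dict]:
    # Stand-in for a real flight-search API.
    fake_inventory = {
        "AirA": [],  # the best option, but fully booked
        "AirB": [{"flight": "B123", "date": date, "seats": ["window", "aisle"]}],
    }
    return fake_inventory.get(airline, [])

def book_trip(date: str) -> dict | None:
    # Try the preferred airline first; fall back to alternatives if sold out.
    for airline in ("AirA", "AirB"):
        for option in search_flights(airline, date):
            if PREFERENCES["seat"] in option["seats"]:
                return {"airline": airline, "seat": PREFERENCES["seat"], **option}
    return None  # no option satisfies the constraints: hand back to the human

print(book_trip("2025-07-04"))
```

The key design point is the fallback: rather than failing when its first choice is unavailable, the agent tries alternatives that still satisfy the user’s constraints, and returns control to the human when none does.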
What makes an agent?
The sudden surge in corporate interest in AI agents belies their long history. All machine learning algorithms are technically “agents” in that they are constantly “learning” or improving their ability to achieve specific goals based on patterns gleaned from mountains of data. “In AI, for decades, we’ve seen all systems as agents,” says Stuart Russell, a pioneering AI researcher and computer scientist at the University of California, Berkeley. “It’s just that some of them are very simple.”
But today’s AI tools are becoming more agentic thanks to several new innovations. One is the ability to use digital tools such as search engines. Through a new “computer use” feature, released for public beta testing in October, the model behind AI company Anthropic’s Claude chatbot can move a cursor and click buttons after being shown screenshots of a user’s desktop. A video released by the company shows Claude filling out and submitting a fictitious vendor order form.
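The underlying pattern is a simple loop: capture the screen, ask the model what to do next, perform that action and repeat. Here is a toy sketch of that loop; `capture_screen`, `model_decide` and `apply_action` are hypothetical placeholders, not Anthropic’s actual interface.

```python
# Toy sketch of a screenshot-and-act agent loop (illustrative only).

def capture_screen() -> str:
    # Stand-in for a real screenshot; here, a text description of the UI.
    return "vendor order form with a 'Vendor name' field and a 'Submit' button"

def model_decide(screen: str, goal: str, step: int) -> dict:
    # Stand-in for a vision-capable model mapping the screen to the next action.
    if step == 0:
        return {"type": "type_text", "target": "Vendor name", "text": "Acme Co."}
    return {"type": "click", "target": "Submit"}

def apply_action(action: dict) -> None:
    # Stand-in for driving the operating system's cursor and keyboard.
    print(f"{action['type']} -> {action['target']}")

goal = "fill out and submit the vendor order form"
for step in range(2):  # a real agent loops until the model reports it is done
    action = model_decide(capture_screen(), goal, step)
    apply_action(action)
```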
Agency also involves the ability to make complex decisions over time; as agents mature, they will be entrusted with increasingly sophisticated tasks. At Google DeepMind, Gabriel envisions a future agent that could help discover new scientific knowledge. That may not be far off: a paper posted to the preprint server arXiv.org in August described an “AI Scientist” agent capable of formulating and testing new research ideas through experimentation, effectively automating the scientific method.
Despite the close ontological connection between agency and consciousness, there is no reason to believe that, in machines, advances in the former will produce the latter. Tech companies certainly don’t advertise these tools as having anything close to free will. Users may treat AI agents as if they were sentient, but that would reflect, more than anything else, the millions of years of evolution that wired people’s brains to attribute consciousness to anything that seems human.
Emerging challenges
The rise of agents could create new challenges in the workplace, on social media and across the Internet and the economy. Legal frameworks carefully crafted over decades or centuries to constrain human behavior will have to accommodate the sudden introduction of artificial agents whose behavior is fundamentally different from ours. Some experts insist that a more accurate term for AI is “alien intelligence.”
Take, for example, the financial sector. Algorithms have long helped track the prices of goods, adjusting for inflation and other variables. But agentic models are beginning to make financial decisions on behalf of people and organizations, and that could raise thorny legal and economic problems. “We haven’t created the infrastructure to integrate [agents] into all the rules and structures we have to make sure our markets behave well,” says Gillian Hadfield, an expert on AI governance at Johns Hopkins University. If an agent signs a contract on behalf of an organization and then violates the terms of that agreement, should the organization—or the algorithm itself—be held liable? By extension, should agents be granted legal “personhood,” as corporations are?
Another challenge is designing agents that conform to human ethical norms—a problem known in the field as “alignment.” As agency increases, it becomes harder for humans to decipher how AI makes decisions. Goals are broken down into increasingly abstract subgoals, and models sometimes exhibit emergent behaviors that are impossible to predict. “There’s a clear path from having agents that are good at planning to humans losing control,” says Yoshua Bengio, the computer scientist who helped invent the neural networks that enable today’s AI boom.
According to Bengio, the alignment problem is compounded by the fact that the priorities of big tech companies are often at odds with those of humanity at large. “There’s a real conflict of interest between making money and protecting public safety,” he says. In the 2010s the algorithms that Facebook (now Meta) used to pursue the seemingly benign goal of maximizing user engagement began promoting hateful content to users in Myanmar directed against the country’s minority Rohingya population. That strategy, which the algorithms settled on themselves after learning that inflammatory content drove more engagement, ultimately fueled a campaign of ethnic cleansing that killed thousands of people. The risks of misaligned models and human manipulation will only grow as algorithms become more agentic.
Guardrails for agents
Bengio and Russell argue that regulating AI is necessary to avoid repeating past mistakes—or being caught off guard by new ones. The two scientists are among the more than 33,000 signatories of an open letter, published in March 2023, that called for a six-month pause in AI research so that guardrails could be put in place. As tech companies race ahead with agentic AI, Bengio urges adherence to the precautionary principle: powerful scientific advances should be scaled up slowly, and commercial interests should take a backseat to safety.
This principle is already the norm in other US industries. A pharmaceutical company cannot release a new drug until it has undergone rigorous clinical trials and received approval from the Food and Drug Administration; an aircraft manufacturer cannot launch a new passenger aircraft without certification from the Federal Aviation Administration. While some initial regulatory steps have been taken—most notably President Joe Biden’s executive order on AI (which President-elect Donald Trump has vowed to repeal)—there is currently no comprehensive federal framework to oversee the development and deployment of AI.
The race to commercialize agents, Bengio warns, could quickly pass a point of no return. “When we have agents, they will be useful, they will have economic value and their value will grow,” he says. “And once governments understand that they can also be dangerous, it might be too late because you can’t stop the economic value.” He compares the rise of agents to that of social media, which quickly outgrew any possibility of effective government oversight in the 2010s.
As the world prepares to greet a flood of artificial agency, there has never been a more pressing time to exercise our own. As Bengio says: “We must think carefully before we leap.”