
AI agents take control away
Current generative AI systems react to user input, such as prompts. By contrast, AI agents act autonomously within broad parameters. They operate with unprecedented levels of freedom – they can negotiate, make judgement calls, and orchestrate complex interactions with other systems. This goes far beyond simple command–response exchanges like those you might have with ChatGPT.

For instance, imagine using a personal “AI financial advisor” agent to buy life insurance. The agent would analyse your financial situation, health data and family needs while simultaneously negotiating with multiple insurance companies’ AI agents. It would also need to coordinate with several other AI systems: your medical records’ AI for health information, and your bank’s AI systems for making payments.

The use of such an agent promises to reduce manual effort for you, but it also introduces significant risks. The AI might be outmanoeuvred by more advanced insurance company AI agents during negotiations, leading to higher premiums. Privacy concerns arise as your sensitive medical and financial information flows between multiple systems. The complexity of these interactions can also result in opaque decisions. It might be difficult to trace how various AI agents influence the final insurance policy recommendation. And if errors occur, it could be hard to know which part of the system to hold accountable.

Perhaps most crucially, this system risks diminishing human agency. When AI interactions grow too complex to comprehend or control, individuals may struggle to intervene in, or even fully understand, their insurance arrangements.

A tangle of ethical and practical challenges
The insurance agent scenario above is not yet fully realised. But sophisticated AI agents are rapidly coming onto the market. Salesforce and Microsoft have already incorporated AI agents into some of their corporate products, such as Microsoft’s Copilot Actions. Google has been gearing up for the release of personal AI agents since announcing its latest AI model, Gemini 2.0. OpenAI is also expected to release a personal AI agent in 2025.

The prospect of billions of AI agents operating simultaneously raises profound ethical and practical challenges. These agents will be created by competing companies with different technical architectures, ethical frameworks and business incentives. Some will prioritise user privacy, others speed and efficiency. They will interact across national borders where regulations governing AI autonomy, data privacy and consumer protection vary dramatically. This could create a fragmented landscape where AI agents operate under conflicting rules and standards, potentially leading to systemic risks.

What happens when AI agents optimised for different objectives – say, profit maximisation versus environmental sustainability – clash in automated negotiations? Or when agents trained on Western ethical frameworks make decisions that affect users in cultural contexts for which they were not designed? The emergence of this complex, interconnected ecosystem of AI agents demands new approaches to governance, accountability, and the preservation of human agency in an increasingly automated world.

“AI agents are a multi-trillion dollar opportunity.” – Jensen Huang, Nvidia CEO