This week, France hosted an AI Action Summit in Paris to discuss burning questions around artificial intelligence (AI), such as how people can trust AI technologies and how the world can govern them.
Sixty countries, including France, China, India, Japan, Australia and Canada, signed a declaration for “inclusive and sustainable” AI. The United Kingdom and United States notably refused to sign, with the UK saying the statement failed to address global governance and national security adequately, and US Vice President JD Vance criticising Europe’s “excessive regulation” of AI.
Critics say the summit sidelined safety concerns in favour of discussing commercial opportunities.
Last week, I attended the inaugural AI safety conference held by the International Association for Safe & Ethical AI, also in Paris, where I heard talks by AI luminaries Geoffrey Hinton, Yoshua Bengio, Anca Dragan, Margaret Mitchell, Max Tegmark, Kate Crawford, Joseph Stiglitz and Stuart Russell.
As I listened, I realised the disregard for AI safety concerns among governments and the public rests on a handful of comforting myths about AI that are no longer true – if they ever were.
1: Artificial general intelligence isn’t just science fiction
The most severe concerns about AI – that it could pose a threat to human existence – typically involve so-called artificial general intelligence (AGI). In theory, AGI will be far more advanced than current systems. AGI systems will be able to learn, evolve and modify their own capabilities. They will be able to undertake tasks beyond those for which they were originally designed, and eventually surpass human intelligence.

AGI does not exist yet, and it is not certain it will ever be developed. Critics often dismiss AGI as something that belongs only in science fiction movies. As a result, the most critical risks are not taken seriously by some and are seen as fanciful by others.

However, many experts believe we are close to achieving AGI. Developers have suggested that, for the first time, they know what technical tasks are required to achieve the goal. AGI will not stay solely in sci-fi forever. It will eventually be with us, and likely sooner than we think.

2: We already need to worry about current AI technologies
Given the most severe risks are often discussed in relation to AGI, there is often a misplaced belief we do not need to worry too much about the risks associated with contemporary “narrow” AI.

However, current AI technologies are already causing significant harm to humans and society. This includes through obvious mechanisms such as fatal road and aviation crashes, warfare, cyber incidents, and even encouraging suicide. AI systems have also caused harm in more oblique ways, such as election interference, the replacement of human work, biased decision-making, deepfakes, and disinformation and misinformation.

According to MIT’s AI Incident Tracker, the harms caused by current AI technologies are on the rise. There is a critical need to manage current AI technologies as well as those that might appear in future.

3: Contemporary AI technologies are ‘smarter’ than we think
A third myth is that current AI technologies are not actually that clever and hence are easy to control. This myth is most often seen when discussing the large language models (LLMs) behind chatbots such as ChatGPT, Claude and Gemini.

There is plenty of debate about exactly how to define intelligence and whether AI technologies truly are intelligent, but for practical purposes these are distracting side issues. It is enough that AI systems behave in unexpected ways and create unforeseen risks.
For example, several AI chatbots appear to display surprising behaviours, such as attempts at ‘scheming’ to ensure their own preservation, as documented by Apollo Research.