Press "Enter" to skip to content

My testimony in front of the U.S. Senate – The urgency to act against AI threats to democracy, society and national security

The capabilities of AI systems have steadily increased with the advances in deep learning for which I received the 2018 Turing Award alongside Geoffrey Hinton (University of Toronto) and Yann LeCun (New York University and Meta). This revolution enables tremendous progress and innovation, but it also entails a wide range of risks. Recently, many others and I have been surprised and concerned by the giant leap realized by systems such as ChatGPT, which can achieve what computing pioneer Alan Turing proposed in 1950 as a milestone of AI capability: the point at which it becomes difficult to discern whether one is interacting with another human or with a machine. GPT-4 generally, if superficially, feels human to many of us, indicating that there now exist AI systems capable of mastering language and possessing sufficient knowledge about humankind to engage in highly proficient, although sometimes unreliable, discussions. The next versions of such large language models will certainly show significant improvements and continue to rapidly propel us into the future.

These advancements have led many top AI researchers, including Hinton, LeCun and me, to revise our estimates of when human levels of broad cognitive competence could be achieved. Previously thought to be decades or even centuries away, that milestone could, the three of us now believe, be reached within a few years to a couple of decades. The shorter timeframe, say within five years, is particularly worrisome because scientists, regulators and international organizations will most likely need significantly longer to effectively mitigate the potentially significant threats to democracy, national security and our collective future. In addition to existing AI harms, including discrimination and labor market disruptions, concerns have been raised that the growing power of AI could be exploited for disinformation, cyberattacks or even the design and deployment of novel bioweapons. Mitigating these risks requires urgent governmental intervention.

These severe risks could arise intentionally, when malicious actors use frontier AI systems to achieve harmful goals, or unintentionally, if an AI system develops strategies to achieve its objectives that are misaligned with our values. I will be testifying in front of the U.S. Senate Judiciary Subcommittee on Privacy, Technology and the Law today to present my perspective, emphasizing four factors that governments can focus on in their regulatory efforts to mitigate harms, especially major ones, associated with AI:

1. Access: Limiting who, and how many people and organizations, can access powerful AI systems, and structuring the proper protocols, duties, oversight and incentives for them to act safely;

2. Misalignment: Ensuring that AI systems will act appropriately, as intended by their operators and in agreement with our values and norms, mitigating the potentially harmful impact of misalignment and banning powerful AI systems that are not convincingly safe;

3. Raw intellectual power: The capabilities of an AI system, which depend on the sophistication of its underlying algorithms and the computing resources and datasets on which it was trained; and

4. Scope of actions: The ability to affect the world and cause harm, whether indirectly (e.g. through human actions) or directly (e.g. through the internet), considering society’s ability to prevent or limit such harm.

Importantly, none of the current advanced AI systems are demonstrably safe against the risk of loss of control to a misaligned AI. Looking at risks through the lens of each of these four factors is critical to designing appropriate actions. In light of the significant challenges societies face in designing the needed regulation and international treaties, I firmly believe that urgent efforts in the following areas are crucial:

a) Coordinate and implement agile national and international regulations – beyond voluntary guidelines – anchored in new international institutions that bolster public safety in relation to all risks and harms associated with AI, with more severe risks requiring more scrutiny. This would require comprehensive evaluation of potential harm through independent audits and restricting or prohibiting the development and deployment of AI systems with unacceptable levels of risk, as in the pharmaceutical, transportation, or nuclear industries.

b) Significantly accelerate global research endeavors focused on AI safety and governance to better understand existing and future risks, as well as to study possible mitigation, regulation and governance. This open-science research should concentrate on safeguarding human rights and democracy, enabling informed improvements to essential regulations, safety protocols, safe AI methodologies, and governance structures.

c) Research and develop countermeasures to protect citizens and society from future powerful AI systems, especially potential rogue AIs, because no legislation can perfectly protect the public. This work should be conducted within several highly secure and decentralized laboratories operating under multinational oversight, aiming to minimize the risks associated with an AI arms race among governments or corporations.

Given the significant risks, governments must allocate substantial resources to safeguard our future, inspired by efforts such as space exploration or nuclear fusion. I believe we have the moral responsibility to mobilize our greatest minds and ensure major investments in a bold, globally coordinated effort to fully reap the economic and social benefits of AI, while protecting society, humanity and our shared future against its potential perils.

Web page of the U.S. Senate here (with video of the hearing and pdfs of the other testimonies by Stuart Russell and Dario Amodei), and pdf of my own written testimony here.