Proposal for a Multilateral Network of Public Good AI Research Labs to Protect Democracy and Humanity

I have been thinking about this question for several months: what if AI continues to progress towards and beyond human abilities in areas where it could become dangerous, and what if our regulations are not 100% foolproof, opening the door to seriously harmful misuse by bad actors, to concentrations of power never before seen in history, and to existential threats to our collective future? Even if this were a low-probability event, given the high stakes, should we have a plan B? I have been wondering about this because I would like to help minimize those catastrophic risks, with both my public voice and my expertise in machine learning. In this article, I explain why I suggest the creation of a multilateral network of non-profit and non-governmental labs collaborating on the defense of democracy and human rights, and against possible rogue autonomous AIs. The proposal hinges on avoiding a single point of failure, avoiding excessive concentration of power (economic, political and military), and on strong, internationally and democratically mandated governance mechanisms. The paper is now out in the Journal of Democracy.