We sometimes come across catchy headlines about AI that can scare or excite us, such as “By the middle of this century, artificial intelligence will be a billion times smarter than the human brain,” and “Artificial intelligence could surpass the human brain by 2029” (La Presse, January 23, 2022).
Another example, taken from the same article: "We will not experience 100 years of AI progress over the next century; rather, we will experience 20,000 years of progress at the current rate," writes Mo Gawdat.
The reality is that even the best AI researchers cannot seriously make such predictions, except as science fiction. Not only is there no consensus among AI researchers about the future pace of AI advances, but there is also no scientific basis for making such predictions. Scientific research can stagnate on particular problems for a long time (e.g., the unification of all forces in physics) or make rapid progress following a breakthrough (e.g., that of deep learning).
The more precise this kind of prediction sounds, the more wary one should be of it. Why not announce the exact date of superintelligence, in the manner of Nostradamus? Indeed, some people have, but at the expense of their scientific credibility. Mr. Gawdat, quoted in this article, was an executive at Google, which does not seem to me sufficient grounds for an opinion on the question: a question so difficult that even the opinion of a single expert AI researcher would not be enough to justify such projections.
Beyond the hype, there are qualitative statements we can make with a high degree of confidence:
- There is no reason to believe that we won't be able to build AIs at least as smart as we are. Our brains are complex machines whose workings are increasingly well understood, and we ourselves are living proof that this level of intelligence is physically possible.
- Since humans suffer from cognitive biases that hinder their reasoning, biases that may have helped our ancestors in the evolutionary process leading to Homo sapiens, it is reasonable to assume that we will be able to build AIs with fewer of these flaws (e.g., the need for social status, ego, or belonging to a group, with its unquestioning acceptance of group beliefs). In addition, AIs will have access to more data and memory. We can therefore say with confidence that it will be possible to build AIs smarter than we are.
- Still, it is far from certain that we will be able to build AIs wildly more intelligent than ourselves, as the article claims. All kinds of computational problems run into an exponential wall of difficulty (the infamous NP-hardness of computing; see the short sketch after this list), and we have yet to discover the limits of intelligence.
- The more the science of intelligence (both human and artificial) advances, the greater its potential for both benefits and dangers to society. Applications of AI that could greatly advance science and technology will likely multiply, but the power of a tool is a double-edged sword. As the article in La Presse mentions, it is essential to put in place laws, regulations, and social norms to avoid, or at least reduce, the misuse of these tools.
- To prevent humans blinded by their desire for power, money, or revenge from exploiting these tools to the detriment of other humans, we will undoubtedly need to change our laws and introduce compassion into machines (as the article suggests), but also to reinforce inherent human compassion.
- Since we don't really know how fast technological advances in AI or elsewhere (e.g., biotechnology) will come, it is best to get on with the task of better regulating these powerful tools right away. In fact, there are already harmful uses of AI, whether deliberate, as in the military (killer drones that can recognize someone's face and shoot that person), or unintentional, as with AI systems that make biased decisions and discriminate against women or racialized people. Computing in general is very poorly regulated, and this must change: we must regulate these new technologies to protect people and society, just as we did for aeronautics or chemistry.
- Furthermore, applications of AI that are clearly beneficial to society should be encouraged, whether in health, in the fight against climate change or injustice, or in broadening access to knowledge and education. In all these areas, governments have a key role to play in directing the forces of AI research and entrepreneurship toward applications that benefit society but where the profit motive alone is not always sufficient to stimulate the needed investment.
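To make the "exponential wall" point above concrete, here is a minimal illustrative sketch of my own (not from the article): a brute-force solver for subset-sum, a classic NP-hard problem. The function name and the example numbers are hypothetical, chosen only to show how the worst-case work doubles with every additional item.

```python
# Illustrative sketch of the "exponential wall" (assumption: brute force,
# no clever pruning). Subset-sum is NP-hard, and this naive solver must in
# the worst case examine all 2^n subsets, so each added item doubles the work.
from itertools import combinations

def subset_sum_brute_force(numbers, target):
    """Return (subset summing to target or None, number of subsets checked)."""
    n = len(numbers)
    checked = 0
    for size in range(n + 1):
        for subset in combinations(numbers, size):
            checked += 1
            if sum(subset) == target:
                return subset, checked
    return None, checked

# With n items there are 2^n candidate subsets: 20 items is about a million
# checks, while 60 items is about 10^18, beyond any realistic compute budget.
if __name__ == "__main__":
    result, checked = subset_sum_brute_force([3, 34, 4, 12, 5, 2], 9)
    print(result, "found after", checked, "subsets checked")
```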