John Mecklin
Bulletin of the Atomic Scientists - Volume 75, 2019 - Issue 3: Special issue: The global competition for AI dominance - Published online: 26 Apr 2019
The world is engaged in a competition for leadership in the array of technologies and applications often grouped under the umbrella of artificial intelligence. This competition has often been compared to the nuclear arms race of the Cold War, but the comparison is misleading on many fronts. Artificial intelligence is not one technology but many, and those technologies may be used in a wide variety of classification, optimization, and prediction applications, many – and probably most – of them not military in nature. Unlike the US Manhattan Project and the Soviet Union’s equally secretive early nuclear program, AI research is done in both the private and public sectors; information about private-sector research is regularly shared among participants in the field and, therefore, among countries. Progress is rapid, and knowledge about that progress is seldom confined to one country alone.
But the dissimilarity between today’s AI research programs and the highly secretive, government-controlled efforts that led to some 60,000 nuclear weapons in the 1980s does not mean that warnings of an AI arms race are rare today. In 2017, for instance, Russian President Vladimir Putin described the AI race in martial terms, as if he were Sauron, the dark lord of Middle-earth, seeking The One Ring to Rule Them All1: “Whoever becomes the leader in this sphere will become the ruler of the world.” The term “AI arms race” also seems to have a natural attraction for news headline writers2 around the world (even when the articles under those headlines explain why the competition in AI technologies really is not an arms race)3.
But if it does not constitute an arms race per se, the worldwide competition in AI research and development does present the world with tricky ethical choices, particularly in the military arena – and a huge management problem. Clearly, some combinations of AI algorithms, robotics, and other emergent technologies – killer robots that choose their targets independent of human control, for example – may pose extraordinary dangers to humanity. Other applications involving one or more of the technologies in the AI basket, however, could and likely will provide the world with almost magical benefits, detecting, for example, diseases that doctors would miss, thereby saving countless lives, and guiding driverless cars that kill many thousands fewer people than human-steered vehicles do. Directing fast-moving and decentralized AI research in positive directions, while minimizing its potentially enormous harms, is almost certainly the most difficult “dual-use” technology challenge the world has faced to date.
For this issue, we asked four highly regarded experts for their views on the global AI competition.
In “The frame problem: The AI ‘arms race’ isn’t one,” University of Cambridge researcher Heather M. Roff explains why using an arms race orientation when discussing the global competition in the AI field is not only fundamentally inaccurate but potentially dangerous. “[T]alking about technological competition – in research, adoption, and deployment – in all sectors of multiple economies and in warfare is not really an arms race. Indeed, to frame this competition in military terms risks the adoption of policies or regulations that could escalate the rivalry between states and increase the likelihood of actual conflict,” Roff writes.
Cyberspace scholar Chris C. Demchak agrees that AI competition does not qualify, in itself, as an arms race. But in its rivalry with the United States, a rising, authoritarian China focused on technological supremacy may threaten freedom of expression, human rights, and democratic self-government around the world, Demchak suggests in her article, “China: determined to dominate cyberspace and AI.”
“The rise of AI, a subset of cyber – as well as machine learning, quantum computing, and other new technologies – does not herald a new arms race so much as an enhanced, possibly exponentially accelerated, underlying competition between rising China and the United States,” Demchak notes. “Given the dual nature of most cyber tools, those worried about an AI arms race should rather be more concerned about the profound disruption to the existing global order that China’s rise to the top of the cyber pack could pose.”
Brenda Leong, senior counsel and director of strategy at the Washington, DC-based think tank Future of Privacy Forum, sees a more generalized threat to privacy and fundamental freedoms in the global use of facial recognition applications based on AI. In a world of closed-circuit video cameras and ubiquitous digital tracking devices – from radio frequency identification chips to electronic toll collectors to smartphones with location tracking – governments and large private corporations have “vast power to keep track of, manipulate, and potentially repress entire populations.” China’s censoring of its internet and broad use of facial recognition constitute perhaps the most well-known efforts of this sort. But, Leong notes, “even democratically elected governments have shown a tendency to want to digitally profile and analyze their citizens without sufficient respect for individual privacy. And just as rampant industrialization had to be reined in to protect human rights and individual dignity, information technology and digital systems must be controlled to prevent abuse and exploitation.”
One aspect of the competition in AI technology does lend itself somewhat to a “race” analogy. It involves efforts to create what is known as artificial general intelligence – that is, artificial intelligence with the same capacities as the human mind. Such a system “would automatically constitute a quantitative superintelligence by virtue of its ability to process information at least a million times faster than the human brain,” and it would almost certainly work to improve its cognitive capabilities as a matter of course, becoming, in short order, a “superintelligence.” Expert estimates of when (or even if) an AGI will be created vary greatly, but if and when AGI is achieved, emerging technology author Phil Torres writes, “[i]n a matter of minutes, hours, or days, humanity could be joined by a being not merely more intelligent than humans, but vastly more capable of solving problems in a wide range of cognitive domains.” At least two questions would be relevant at that point: Could humanity control the AGI it had created? And would the creator of that superintelligence indeed (as Putin has suggested) rule the world?
America and the world have a longstanding public fascination-and-fear relationship with the notion of artificial intelligence. That relationship is perhaps best encapsulated in the scene in which the HAL 9000 computer refuses to open the pod bay doors for Dave in 2001: A Space Odyssey4. In that scene, an artificial intelligence exhibits its desire to continue to exist, even if that means humans must die. The vision is simultaneously repellent and … thrilling.
Though artificial general intelligence may be created at some point in the future, that possibility is anything but a certainty. To be sure, the world must be aware of, and plan for, the emergence of a machine intelligence so superior to the human intellect that it has a choice: to view humans as pleasant pets to be cared for, or as impediments to an AI-dominated future.
But the reality of managing AI as it exists in the current age has little if anything to do with sci-fi imaginings. In the short and medium term, world leaders must deal with a complex and rapidly evolving technological landscape that is anything but imaginary, and they must do so in a way that guards against negative outcomes and promotes positive change. Our current reality contains no Terminators; HAL does not yet exist. The situation before world leaders is unprecedented. The competition for leadership in artificial intelligence is multifaceted and global in nature. It is not an arms race. It is a widely distributed, private and public research and development enterprise that we, as a global society, must come to manage – or allow to evolve, unmanaged, in ways that humanity may come to regret deeply.
Notes
1. If you have never encountered J.R.R. Tolkien’s The Lord of the Rings trilogy or the hit movies adapted from it, see https://www.amazon.com/Lord-Rings-J-R-R-Tolkien/dp/0544003411 and https://lotr.fandom.com/wiki/Lord_of_the_Rings_film_trilogy.
2. See: https://foreignpolicy.com/2019/03/05/whoever-predicts-the-future-correctly-will-win-the-ai-arms-race-russia-china-united-states-artificial-intelligence-defense/.
3. See: https://www.washingtonpost.com/outlook/2019/03/06/stop-calling-artificial-intelligence-research-an-arms-race/?utm_term=.f880a6813d4b.
4. This famous scene can be viewed here: https://www.youtube.com/watch?v=ARJ8cAGm6JE.