The Dangers of Artificial Intelligence: Understanding the Concerns of Computer Scientists
Key Highlights:
The rapid advancement of artificial intelligence (AI) has drawn a range of reactions from computer scientists. Some are excited by its potential; others worry about the dangers it could pose. Even the computer scientists who helped build the foundations of today's AI technology are warning of its dangers, but that doesn't mean they agree on what those dangers are or how to prevent them.
Geoffrey Hinton, a so-called Godfather of AI, plans to outline his concerns Wednesday at a conference at the Massachusetts Institute of Technology. He has already voiced regrets about his work and doubts about humanity's survival if machines get smarter than people. Fellow AI pioneer Yoshua Bengio, co-winner with Hinton of the top computer science prize, told The Associated Press on Wednesday that he's "pretty much aligned" with Hinton's concerns about chatbots such as ChatGPT and related technology, but worries that simply saying "We're doomed" is not going to help.
The White House has called in the CEOs of Google, Microsoft and ChatGPT-maker OpenAI to meet Thursday with Vice President Kamala Harris in what's being described by officials as a frank discussion on how to mitigate both the near-term and long-term risks of their technology. European lawmakers are also accelerating negotiations to pass sweeping new AI rules.
But all the talk of the most dire future dangers has some worried that hype around superhuman machines -- which don't yet exist -- is distracting from attempts to set practical safeguards on current AI products that are largely unregulated. Margaret Mitchell, a former leader on Google's AI ethics team, said she's upset that Hinton didn't speak out during his decade in a position of power at Google, especially after the 2020 ouster of prominent Black scientist Timnit Gebru, who had studied the harms of large language models before they were widely commercialized into products such as ChatGPT and Google's Bard.
Bengio, Hinton and a third researcher, Yann LeCun, who works at Facebook parent Meta, were jointly awarded the Turing Award in 2019 for their breakthroughs in artificial neural networks, which were instrumental to the development of today's AI applications such as ChatGPT. Bengio, the only one of the three who didn't take a job with a tech giant, has voiced concerns for years about near-term AI risks, including job market destabilization, automated weaponry and the dangers of biased data sets. But those concerns have grown recently, leading Bengio to join other computer scientists and tech business leaders like Elon Musk and Apple co-founder Steve Wozniak in calling for a six-month pause on developing AI systems more powerful than OpenAI's latest model, GPT-4.
Bengio said Wednesday he believes the latest AI language models already pass the "Turing test," named after British codebreaker and AI pioneer Alan Turing, who introduced it in 1950 as a method to measure when a machine's behavior becomes indistinguishable from a human's -- at least on the surface. "That's a milestone that can have drastic consequences if we're not careful," Bengio said. "My main concern is how they can be exploited for nefarious purposes to destabilize democracies, for cyber attacks, disinformation. You can have a conversation with these systems and think that you're interacting with a human. They're difficult to spot."
Aidan Gomez, one of the co-authors of the pioneering 2017 paper that introduced a so-called transformer technique -- the "T" at the end of ChatGPT -- for improving the performance of machine-learning systems, is enthused about the potential applications of these systems but bothered by fearmongering he says is "detached from the reality" of their true capabilities and "relies on extraordinary leaps of imagination and reasoning."
Computer scientists are still debating the dangers AI poses, but governments and tech companies are clearly taking the issue seriously. Understanding these researchers' concerns is essential to crafting effective regulations and safeguards for the safe development and use of AI.