
Over 350 AI researchers, ethicists, engineers, and company executives have co-signed a single-sentence, 22-word statement about artificial intelligence’s potential existential risks to humanity. Compiled by the nonprofit Center for AI Safety, the statement’s signatories, among them the “Godfather of AI” Geoffrey Hinton, OpenAI CEO Sam Altman, and Microsoft Chief Technology Officer Kevin Scott, agree that, “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”

The 22-word missive and its endorsements echo a similar, slightly lengthier joint letter released earlier this year that called for a six-month “moratorium” on developing AI more powerful than OpenAI’s GPT-4. Such a moratorium has yet to be implemented.

[Related: There’s a glaring issue with the AI moratorium letter.]

Speaking with The New York Times on Tuesday, the Center for AI Safety’s executive director, Dan Hendrycks, described the open letter as a “coming out” for some industry leaders. “There’s a very common misconception, even in the AI community, that there only are a handful of doomers. But, in fact, many people privately would express concerns about these things,” Hendrycks added.

But critics remain wary of both the motivations behind such public statements and their feasibility.

“Don’t be fooled: it’s self-serving hype disguised as raising the alarm,” says Dylan Baker, a research engineer at the Distributed AI Research Institute (DAIR), an organization promoting ethical AI development. Speaking with PopSci, Baker went on to argue that the current discussions regarding hypothetical existential risks distract the public and regulators from “the concrete harms of AI today.” Such harms include “amplifying algorithmic harm, profiting from exploited labor and stolen data, and fueling climate collapse with resource consumption.”

In a separate response, first published by DAIR after March’s open letter and recirculated on Tuesday, the group argues, “The harms from so-called AI are real and present and follow from the acts of people and corporations deploying automated systems. Regulatory efforts should focus on transparency, accountability and preventing exploitative labor practices.”

Hendrycks, however, believes that “just as it would be reckless to exclusively prioritize present harms, it would also be reckless to ignore them as well.” He likened the moment to atomic scientists warning the world about the technologies they had created, then quoted J. Robert Oppenheimer: “We knew the world would not be the same.”

[Related: OpenAI’s newest ChatGPT update can still spread conspiracy theories.]

“They are essentially saying ‘hold me back!’” media and tech theorist Douglas Rushkoff wrote in an essay published on Tuesday. He added that a combination of “hype, ill-will, marketing, and paranoia” is fueling AI coverage and obscuring the technology’s very real, demonstrable issues while companies attempt to consolidate their holds on the industry. “It’s just a form of bluffing,” he wrote. “Sorry, but I’m just not buying it.”

In a separate email to PopSci, Rushkoff summarized his thoughts: “If I had to make a quote proportionately short to their proclamation, I’d just say: They mean well. Most of them.”
