Dozens of industry leaders and academics in the field of artificial intelligence have called for greater global attention to the possible threat of
extinction from AI.
Signed by leading industry figures such as OpenAI CEO Sam Altman and Geoffrey Hinton (the “godfather” of artificial intelligence), the statement highlights wide-ranging concerns about the ultimate danger of unchecked AI.
Dan Hendrycks, the director of the Center for AI Safety, announced the statement on Twitter on May 30, 2023: “We just put out a statement: ‘Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.’ Signatories include Hinton, Bengio, Altman, Hassabis, Song, etc. https://t.co/N9f6hs4bpa”
Experts say humanity is safe (so far) from science-fiction-style AI overlords, but massive hype and huge investment in the AI industry are fueling calls for regulation now, before any major mishaps occur.
Lawmakers, advocacy groups and tech insiders have raised alarms about the potential for AI-powered language models like ChatGPT to spread misinformation and displace millions of jobs.
But will the public and lawmakers really take this warning seriously? Beyond lost jobs (or an accidental nuclear war?), what’s the worst that could happen? A TV report from Channel 4 in Britain ends with this scary question: “When AI is already writing science fiction, have humans lost the capacity to imagine the worst?”