Artificial intelligence: As explosive and damaging as a nuclear bomb - AI industry leaders
"As dangerous as nuclear war": senior artificial intelligence industry officials warn that AI could lead to the extinction of the human race.
Dozens of senior artificial intelligence executives, academics and other public figures have signed a statement warning that AI could bring about global annihilation, declaring that mitigating the threat of extinction should be a global priority and calling for the technology's gravest risks to be reined in.
"Reducing the risk of extinction from AI should be a global priority alongside other risks on a societal scale such as epidemics and nuclear war,” read a statement that emphasized "wide-ranging concerns about the ultimate danger of uncontrolled AI."
The statement was issued by the Center for AI Safety, or CAIS, a San Francisco-based research and field-building nonprofit, and was signed by leading figures in the industry, including OpenAI CEO Sam Altman; Geoffrey Hinton, the "godfather" of AI; and executives and senior researchers from Google DeepMind and Anthropic.
Other signatories included Kevin Scott, Microsoft's chief technology officer; internet security and cryptography pioneer Bruce Schneier; climate advocate and environmentalist Bill McKibben; and the musician Grimes.
The warning comes after the success of ChatGPT
The statement follows the viral success of OpenAI's ChatGPT, which helped intensify an arms race in the tech industry to develop AI tools. In response, a growing number of lawmakers, advocacy groups and tech insiders have warned that AI-powered chatbots could spread misinformation and displace jobs.
Hinton, whose pioneering work helped shape today's AI systems, previously told CNN that he decided to leave his position at Google and speak openly about the technology after he suddenly realized that these systems are becoming smarter than humans.
Dan Hendrycks, director of CAIS, said in a tweet that the statement, first proposed by David Krueger, an AI professor at the University of Cambridge, does not preclude addressing other types of AI risk, such as algorithmic bias or misinformation.
Hendrycks likened the statement to warnings issued by atomic scientists about the very technology they had created. Writing on Twitter, he added that companies can manage multiple risks at once ("it's not 'either/or' but 'both/and'") and that, from a risk-management perspective, just as it would be reckless to focus exclusively on present harms, it would be equally reckless to ignore them.