A third of scientists fear AI decisions could spark nuclear-level disaster - report

 
Artistic rendition of a nuclear blast (photo credit: PUBLICDOMAINPICTURES.NET)

According to Stanford's 2023 Artificial Intelligence Index Report, most researchers see AI as leading to revolutionary societal change, but over a third fear its decisions could cause a disaster – and someone has already made an AI designed to destroy humanity.

Artificial intelligence (AI) may be paving the way for bold new shifts in the world, but it could also lead to a nuclear-level catastrophe, a significant minority of researchers say, according to a survey from Stanford University's Institute for Human-Centered AI.

The findings come in Stanford's 2023 Artificial Intelligence Index Report, an annual update on the state of the AI sector.

Overall, the survey found that a large majority of researchers (73%) think AI is leading to revolutionary changes in society. Just over a third (36%), however, felt that decisions made by AI could lead to a nuclear-level disaster.

How could AI lead to a nuclear catastrophe?

The survey didn't exactly explain how AI decisions could cause a nuclear catastrophe, though it did show how AI can be applied in the nuclear sciences, such as in nuclear fission research.

It should be noted, though, that an "AI decision" means a decision made by the AI itself, rather than by the humans using it.

A keyboard is seen reflected on a computer screen displaying the website of ChatGPT, an AI chatbot from OpenAI, in this illustration picture taken Feb. 8, 2023. (credit: REUTERS/FLORENCE LO/ILLUSTRATION/FILE PHOTO)

But could someone use an AI to try and cause a horrific disaster? Could someone build an AI whose own decisions might bring one about? Absolutely – and someone has already tried.

Enter ChaosGPT. This AI chatbot was made by an unknown individual using AutoGPT, an open-source program that essentially uses OpenAI's model to create an autonomous ChatGPT-like chatbot. When given an input, these autonomous AI chatbots can theoretically devise their own process for accomplishing the given task without human intervention. In a sense, they can plan and notice their own mistakes.
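To illustrate the general idea, here is a simplified sketch in Python of the kind of loop at the heart of such an agent. This is not AutoGPT's actual code; call_model is a toy stand-in for a request to a large language model such as OpenAI's.

    # Simplified sketch of an autonomous agent loop in the spirit of AutoGPT.
    # Not AutoGPT's actual code: call_model is a toy stand-in for a request
    # to a large language model.

    def call_model(goal: str, history: list[str]) -> str:
        """Propose the next step toward the goal, given the steps tried so
        far; return "DONE" when the model judges the goal accomplished."""
        scripted = [
            "gather background information",
            "draft a plan",
            "review the plan and fix mistakes",  # self-critique step
            "DONE",
        ]
        return scripted[min(len(history), len(scripted) - 1)]

    def run_agent(goal: str, continuous: bool = False, max_steps: int = 10) -> list[str]:
        """Plan-act-reflect loop. In continuous mode there is no step cap and
        no pause for human approval: the agent keeps going on its own until
        the model declares the goal accomplished."""
        history: list[str] = []
        while continuous or len(history) < max_steps:
            action = call_model(goal, history)  # the model picks the next step itself
            if action == "DONE":
                break
            history.append(action)  # earlier steps feed back into later planning
        return history

    print(run_agent("write a report"))
    # ['gather background information', 'draft a plan', 'review the plan and fix mistakes']

In a real agent, each proposed step would also be executed, for example by searching the web or writing files, with the results fed back into the next model call.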


ChaosGPT is an example of this, but it's a bit different. As shown in a 25-minute video uploaded to YouTube, the AI, described as a "Destructive, power-hungry, manipulative AI," was given five goals:

  • Destroy humanity
  • Establish global dominance
  • Cause chaos and destruction
  • Control humanity through manipulation
  • Attain immortality

Notably, the AI was also set to run in continuous mode, meaning it will keep running indefinitely until it accomplishes its goals.

Having said that, ChaosGPT has not yet succeeded, though it is still trying. Active on Twitter, it has implied that it wants to get its hands on the Tsar Bomba, the most powerful nuclear bomb ever created. Who runs the Twitter account – and whether it is truly run by the AI at all – is unknown.

There is another way AI is harming the Earth, albeit inadvertently: environmental damage.

Training AI requires a lot of computing power, which leads to carbon emissions. One study cited in the report noted that a single AI training run emitted 25 times more carbon than a flight from New York to San Francisco. There is another side to the argument, however: AI can also be used to help the environment, such as by optimizing energy usage.

Can ChatGPT make nuclear bombs?

But could someone use an existing AI, such as ChatGPT, to cause a nuclear disaster?

That's also a possibility.

Despite safeguards put in place to prevent AI from doing anything harmful, such as writing computer viruses or disseminating false information, people have still found workarounds.

One example noted in Stanford's report is that of researcher Matt Korda, who managed to trick ChatGPT into giving fairly precise estimates, recommendations and instructions for building a dirty bomb.

OpenAI soon patched this, but the Stanford report notes: "This scenario exemplifies the cat-and-mouse nature of the deployment planning process: AI developers try to build in safeguards ahead of time, end users try to break the system and circumvent its policies, developers patch the gaps once they surface, ad infinitum."

The report also documents other examples of AI being used with malicious intent, such as the AI-made deepfake video of Ukraine's President Volodymyr Zelensky seemingly surrendering to Russia.

Ultimately, AI continues to advance, and overall, most researchers see it positively. After all, as the report found, AI is contributing greatly to science and helping scientists make more breakthroughs.

But the fact that it can still be misused – or, in the case of ChaosGPT, possibly tasked with destroying humanity – has raised concern.

This makes sense, too. As AI becomes more widespread, so does public awareness of it, and with that comes a greater potential for understanding how to use it maliciously – which, the Stanford report says, is already happening.

In fact, generative AI has already been flagged by some as one of the next emerging cyber threats.

A report by Cybersixgill, an Israel-based global cyber threat intelligence data provider, analyzed data collected from the clear, deep, and dark web in 2022, comparing it with trends and data from previous years.

The report delves into several key topics, including the rise of AI developments and their impact on the barriers to entry to cybercrime, trends in credit card fraud, the evolution of initial access broker (IAB) markets, the rise of cybercriminal "as-a-service" activities, and cryptocurrency observations.

“Cybercrime is rapidly evolving, with new opportunities and obstacles in the cyber threat landscape impacting threat actors’ tactics, tools, and procedures. In response, organizations can no longer rely on outdated technologies and manual processes to defend against increasingly sophisticated attacks,” said Delilah Schwartz, Security Strategist at Cybersixgill. “Proactive attack surface management[...] is now of paramount importance and will be a critical cyber defense weapon in the months and years to come.”

Zachy Hennessy contributed to this report.
