Vogon Today

Selected News from the Galaxy

Economic Scenarios

For experts, AI is a risk like nuclear war or a pandemic

In a simple 22-word statement in English, a group of AI researchers, engineers and CEOs expressed concern about the existential threat posed by artificial intelligence. The statement underscores the urgent need to prioritize mitigating AI risks alongside global threats such as pandemics and nuclear war.

The statement reads: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”

Signatories include Demis Hassabis, CEO of Google DeepMind, and Sam Altman, CEO of OpenAI. Turing Award winners Geoffrey Hinton and Yoshua Bengio also signed, while Yann LeCun, who shared the award with them, has yet to do so.

The statement, published by the Center for AI Safety in San Francisco, contributes to the ongoing debate about the safety of artificial intelligence. Earlier this year, some of the same signatories backed an open letter calling for a six-month "pause" in AI development. That letter, however, drew criticism over differing opinions on both the severity of the AI risk and the proposed remedy.

Dan Hendrycks, executive director of the Center for AI Safety, explained that the brevity of the latest statement was intended to avoid generating further disagreement. Hendrycks also noted that proposing many potential interventions to mitigate the AI threat could have diluted the message. Instead, the terse statement represents a collective concern from influential industry figures who have long harbored fears about the risks of AI.

Debates for and against the decision

While the debate is well known, the details can be complex and often revolve around hypothetical scenarios in which AI systems escape the safety controls meant to constrain them. Advocates of taking AI risk seriously point to rapid advances in technologies such as large language models as evidence that machine intelligence could grow quickly and eventually become uncontrollable.

On the other hand, skeptics point to the current limitations of AI systems, such as the continuing challenges in developing self-driving cars, despite huge investments and efforts.

Despite their differing perspectives, both advocates and skeptics of existential AI risk recognize the threats that AI systems already pose today. These range from facilitating mass surveillance and flawed “predictive policing” algorithms to creating and spreading disinformation and misinformation.

As the debate continues, this succinct yet impactful statement serves as a wake-up call to the world, urging us to address the challenges AI poses and find ways to mitigate its potential risks. The future of AI and its impact on humanity hang in the balance, underlining the need for prudent and farsighted navigation.



The article For experts, AI is a risk like nuclear war or a pandemic comes from Economic Scenarios.


This is a machine translation of a post published on Scenari Economici at the URL https://scenarieconomici.it/per-gli-esperti-la-ai-e-un-rischio-come-la-guerra-nucleare-o-la-pandemia/ on Tue, 30 May 2023 19:43:38 +0000.