AI has been compared to a pandemic and a nuclear war.

The statement was published by the Center for AI Safety

Many tech industry leaders agree that AI technology could one day pose a threat to humanity. The Center for AI Safety has published an open letter signed by more than 350 leaders, researchers and engineers working in the field of artificial intelligence.

“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war,” the letter says.

Signatories include top executives from leading AI companies — OpenAI CEO Sam Altman, Google DeepMind CEO Demis Hassabis, and Anthropic CEO Dario Amodei. The list also features prominent researchers regarded as the “godfathers” of the modern AI movement, including Geoffrey Hinton and Yoshua Bengio.

The announcement comes amid growing concern about the potential harms of artificial intelligence. The recent wave of AI-powered chatbots has heightened fears that the technology could be used to spread disinformation and propaganda, or eliminate large numbers of jobs in the near future.

Brazilian-born American researcher Ben Goertzel, a leading artificial intelligence specialist, has said that AI could replace 80% of human jobs in the coming years.

Earlier this month, the White House met with top executives from Google, Microsoft, OpenAI and Anthropic to discuss the promise and risks of artificial intelligence.

The UK government also invited the heads of some of the world’s largest AI companies to a meeting this May, as Prime Minister Rishi Sunak set out plans to manage the risks and benefits of AI technology.