DeepSeek’s Advances May Increase AI Safety Risks, Warns AI Pioneer
Yoshua Bengio cautions that growing competition in artificial intelligence could elevate risks, as an international report highlights AI’s potential for misuse.
Artificial intelligence systems are becoming increasingly open to malicious use, according to a comprehensive new report compiled by a panel of AI experts. The report's lead author, Yoshua Bengio, has raised concerns that emerging players like DeepSeek could further amplify these risks.
Bengio, recognized as a key figure in the development of modern AI, has pointed to the progress made by Chinese startup DeepSeek as a potentially unsettling shift in a field where American firms have long maintained dominance.
“It’s going to mean a closer race, which usually is not a good thing from the point of view of AI safety,” he said.
According to Bengio, firms in the US and elsewhere competing with DeepSeek may prioritize reclaiming market leadership over ensuring safety. OpenAI, the company behind ChatGPT, which now faces competition from DeepSeek's AI assistant, recently announced plans to accelerate product releases in response.
“If you imagine a competition between two entities and one thinks they’re way ahead, then they can afford to be more prudent and still know that they will stay ahead,” Bengio said. “Whereas if you have a competition between two entities and they think that the other is just at the same level, then they need to accelerate. Then maybe they don’t give as much attention to safety.”
His comments come ahead of the release of a wide-ranging AI safety report.
The International AI Safety Report, compiled by a panel of 96 experts including the Nobel laureate Geoffrey Hinton, is the first full assessment of the technology's risks and challenges. Bengio, a 2018 recipient of the Turing Award, often described as the Nobel Prize of computing, was tasked by the UK government with overseeing the report, which was commissioned at the 2023 AI Safety Summit at Bletchley Park. The panel's members were nominated by 30 countries, as well as the EU and the UN. The next AI safety summit is scheduled to take place in Paris on 10 and 11 February.
AI’s Growing Potential for Harm
Since an interim study was published in May last year, AI systems designed for general purposes—such as chatbots—have shown increasing capabilities in areas relevant to misuse. These include exploiting vulnerabilities in software and IT systems and offering guidance on the production of biological and chemical weapons.
According to the report, new AI models can provide highly detailed technical instructions for creating dangerous pathogens and toxins—potentially exceeding the expertise of trained professionals. OpenAI has acknowledged that its advanced o1 model could help specialists in planning the production of biological threats.
Despite these concerns, the report notes that it remains unclear whether individuals without relevant expertise would be able to act on such AI-generated guidance. It also highlights the beneficial applications of AI, particularly in medicine.
In an interview with The Guardian, Bengio said that AI models able to interpret input from a smartphone camera could, in theory, guide a person through dangerous tasks such as attempting to build a bioweapon.
“These tools are becoming easier and easier to use by non-experts, because they can decompose a complicated task into smaller steps that everyone can understand, and then they can interactively help you get them right. And that’s very different from using, say, Google search,” he said.
The report also highlights improvements in AI’s ability to autonomously identify weaknesses in software. While this capability could help bolster cybersecurity, it could also assist hackers in planning cyberattacks.
However, the report notes that executing real-world attacks autonomously remains beyond AI’s reach due to the exceptionally high level of precision required.
The Rise of Deepfake Threats and AI Exploits
Another major concern outlined in the study is the increasing prevalence of deepfake technology, which allows for the creation of highly realistic fake images, voices, or videos of individuals. According to the report, deepfake content has been used in financial scams, blackmail schemes, and the creation of explicit images. However, the study states that quantifying the precise scale of the issue remains difficult due to the absence of comprehensive and reliable data.
The report also examines vulnerabilities in different types of AI models. Closed-source systems, whose underlying code is not released and cannot be modified by users, may still be susceptible to exploits that bypass built-in safety measures. Meanwhile, open-source models such as Meta's Llama, which can be freely downloaded and modified, carry the risk of being adapted for harmful purposes by malicious actors.
Emerging AI Models and Their Implications
In an unexpected late addition to the report, Bengio highlighted the emergence of OpenAI’s advanced o3 reasoning model in December—shortly after the study was completed. He noted that the model had achieved a breakthrough in an abstract reasoning test, a milestone that many experts, including himself, had believed was out of reach in the near term.
“The trends evidenced by o3 could have profound implications for AI risks,” writes Bengio, who also flagged DeepSeek’s R1 model. “The risk assessments in this report should be read with the understanding that AI has gained capabilities since the report was written.”
Bengio told The Guardian that advancements in AI’s reasoning capabilities could have significant effects on the job market, particularly by enabling autonomous agents to perform tasks traditionally carried out by humans. However, he also warned that these developments could make AI more attractive for malicious actors.
“If you’re a terrorist, you’d like to have an AI that’s very autonomous,” he said. “As we increase agency, we increase the potential benefits of AI and we increase the risks.”
That said, Bengio believes that AI systems still lack the long-term planning abilities required to develop fully autonomous tools capable of evading human control.
“If an AI cannot plan over a long horizon, it’s hardly going to be able to escape our control,” he said.
AI’s Broader Risks and Global Impact
The nearly 300-page report also reaffirms long-standing concerns about AI’s potential for harm, including its role in generating scams, producing child exploitation imagery, delivering biased outputs, and violating privacy by leaking sensitive data shared with chatbots. Researchers, the report notes, have yet to fully mitigate these risks.
AI is broadly defined as computer systems capable of performing tasks that typically require human intelligence.
The report also examines the environmental consequences of AI, highlighting the increasing energy consumption associated with data centers. Additionally, it explores AI's growing influence on the job market and how autonomous AI agents could fundamentally reshape employment landscapes.
The report concludes that the future trajectory of AI remains highly uncertain, with both highly positive and highly negative outcomes possible in the coming years. It stresses that governments and societies still have the power to determine how AI develops.
“This uncertainty can evoke fatalism and make AI appear as something that happens to us. But it will be the decisions of societies and governments on how to navigate this uncertainty that determine which path we will take,” the report says.