Weaponized AI risk is 'high,' warns OpenAI - here's the plan to stop it

ZDNetFriday, December 12, 2025 at 3:47:20 PM
NegativeTechnology
  • OpenAI has raised alarms regarding the high risk of weaponized AI, emphasizing the need to evaluate when AI models can either assist or obstruct cybersecurity efforts. The company is actively working on measures to protect its models from potential misuse by cybercriminals.
  • This warning highlights the increasing vulnerabilities associated with advanced AI technologies, as OpenAI acknowledges that its new models could significantly exacerbate cybersecurity threats, necessitating urgent defensive strategies.
  • The situation reflects broader concerns in the tech industry about the balance between AI advancements and the potential for misuse, as companies face scrutiny over their safety protocols and the implications of their technologies on global security.
— via World Pulse Now AI Editorial System

Was this article worth reading? Share it

Recommended apps based on your readingExplore all apps
Continue Readings
How OpenAI is defending ChatGPT Atlas from attacks now - and why safety's not guaranteed
NeutralTechnology
OpenAI is actively defending its ChatGPT Atlas from prompt injection attacks, utilizing an automated attacker that simulates human hacking behavior to evaluate the browser's defenses. This approach highlights the ongoing challenges in securing advanced AI systems against sophisticated threats.
OpenAI says it's had to protect its Atlas AI browser against some serious security threats
NegativeTechnology
OpenAI has reported that its Atlas AI browser has faced significant security threats, particularly from prompt injection attacks, which the company likens to phishing. This ongoing issue highlights the challenges of maintaining security in advanced AI systems.
OpenAI’s child exploitation reports increased sharply this year
NegativeTechnology
OpenAI has reported an alarming 80-fold increase in child exploitation reports to the National Center for Missing & Exploited Children during the first half of 2025 compared to the same period in 2024. This surge raises significant concerns about the safety and ethical implications of AI technologies.

Ready to build your own newsroom?

Subscribe to unlock a personalised feed, podcasts, newsletters, and notifications tailored to the topics you actually care about