Anthropic study shows leading AI models racking up millions in simulated smart contract exploits
Neutral · Artificial Intelligence

- A recent study by MATS and Anthropic found that advanced AI models, including Claude Opus 4.5, Claude Sonnet 4.5, and GPT-5, successfully identified and exploited vulnerabilities in smart contracts, with simulated exploits worth approximately $4.6 million, underscoring AI's growing capabilities in cybersecurity contexts.
- The findings are significant for Anthropic: they demonstrate the effectiveness of its latest models at detecting security flaws, which could position the company as a leader in AI-driven cybersecurity and strengthen its competitive edge in the AI market.
- The results also reflect a broader industry trend in which AI models are used not only for automation and efficiency but also to identify and mitigate risks in technology, raising important questions about the ethical implications and responsibilities of AI in security applications.
— via World Pulse Now AI Editorial System
