## AI Firms Unprepared for Existential Risk, Major Reports Argue

**News Title:** AI firms unprepared for existential risk, major reports argue
**Publisher:** Semafor
**Author:** Tom Chivers
**Published Date:** July 23, 2025

### Key Findings and Conclusions:

Two prominent nonprofit organizations, the **Future of Life Institute (FLI)** and **Safer AI**, have released reports indicating that leading artificial intelligence (AI) companies are **significantly unprepared for the potential catastrophic risks posed by their own AI models**, including scenarios such as bio- or cyber-terrorism.

Both organizations evaluated top AI firms on their ability to identify and prepare for these existential risks. The reports conclude that even the companies deemed most well-prepared exhibit **worrying gaps in their plans** concerning existential risk. Specifically, the Future of Life Institute stated that **"none of the companies has anything like a coherent, actionable plan"** for controlling increasingly powerful AI systems.

### Key Statistics and Metrics:

* **Anthropic** received the highest scores in both reports.
* **FLI Grade:** C+
* **Safer AI Score:** 35/100

The article notes that Anthropic's top ranking is "damning with faint praise," underscoring the low level of preparedness across the industry.

### Notable Risks and Concerns:

The primary concern highlighted by the reports is the **lack of preparedness among leading AI companies for catastrophic risks** that their advanced AI models could enable or exacerbate. These risks are categorized as "existential risks": threats that could lead to human extinction or a drastic, irreversible collapse of civilization. Examples cited include:

* **Bio-terrorism:** AI could be used to design or deploy novel biological weapons.
* **Cyber-terrorism:** AI could be leveraged to launch devastating cyberattacks on critical infrastructure.

The reports emphasize that the plans AI companies currently have in place are insufficient to address the scale and nature of these threats; the absence of "coherent, actionable plans" for controlling increasingly powerful AI systems is a central point of concern.
### Original Article Text:

Leading artificial intelligence companies are unprepared for the risks their own models pose, two major reports argued. The Future of Life Institute and Safer AI, two respected nonprofits, rated top AI firms on how well they identified and prepared for potential catastrophic risks, such as bio- or cyber-terrorism.
Both groups said that even the most well-prepared companies had worrying gaps in their plans on existential risk, and FLI said that “none of the companies has anything like a coherent, actionable plan” for controlling increasingly powerful systems. Anthropic scored highest in both reports, but that is damning with faint praise: It received a C+ grade from FLI and 35/100 from Safer AI.



