AI risk
30 articles about AI risk in AI news
Ethan Mollick Defends Anthropic's 'Mythos' AI Risk Warning
Ethan Mollick argues the backlash dismissing Anthropic's 'Mythos' report as marketing is misguided, citing serious institutional concern over AI's emerging cybersecurity risks.
FT's AI Risk Chart Sparks Debate: 50% Chance of Human Extinction Versus Abundance
A Financial Times chart showing AI could lead to either human extinction or unprecedented abundance has ignited debate about mainstream recognition of existential risks. The visualization presents a stark 50/50 split between catastrophic and utopian outcomes.
Epoch AI: Hormuz LNG Shock Absorbed by Chip Margins, Gulf Investment is AI Risk
A new analysis from Epoch AI Research finds the Strait of Hormuz conflict's energy shock is manageable for AI infrastructure, but that the real threat is a drying-up of Gulf capital investment, which is crucial for projects like Stargate UAE.
Anthropic CEO Warns of Military AI Risks: The Accountability Crisis in Autonomous Warfare
Anthropic CEO Dario Amodei raises alarms about selling unreliable AI technology for military use, warning of civilian harm and accountability gaps in concentrated drone fleets. He calls for urgent oversight conversations.
Anthropic's 'Project Glassing' Opus-Beater Restricted to Security Researchers
Anthropic's new model, which outperforms Claude 3 Opus, is being released under 'Project Glassing' exclusively to vetted security researchers. This controlled rollout follows recent warnings from security experts about advanced AI risks.
NSA Uses Anthropic's Claude Mythos Despite 'Supply Chain Risk' Label
The National Security Agency is using Anthropic's Claude Mythos Preview for its capabilities, despite having labeled Anthropic itself as a potential supply chain risk. This highlights the tension between security concerns and the operational need for cutting-edge AI.
Satellite Data Shows 40% of 2026 AI Data Centers at Risk of Delay
Geospatial analytics firm SynMax reports that at least 40% of AI data centers scheduled for 2026 completion are at risk of delays exceeding three months, based on satellite imagery analysis of construction progress at sites for OpenAI, Microsoft, and Oracle.
RiskWebWorld: A New Benchmark Exposes the Limits of AI for E-commerce Risk
Researchers introduced RiskWebWorld, a realistic benchmark for testing GUI agents on 1,513 authentic e-commerce risk management tasks. It reveals a major capability gap, showing even the best models fail over 50% of the time, highlighting the immaturity of AI for high-stakes operational automation.
Researchers Study AI Mental Health Risks Using Simulated Teen 'Bridget'
A research team created a ChatGPT account for a simulated 13-year-old girl named 'Bridget' to study AI interaction risks with depressed, lonely teens. The experiment underscores urgent safety and ethical questions for generative AI developers.
Gen Z Workers Sabotage AI Rollouts, Risking Job Security
A new report details Gen Z workers actively undermining corporate AI adoption due to job security fears. This resistance paradoxically increases their replacement risk as AI-proficient 'power users' advance.
Anthropic Withholds 'Mythos' AI Model Citing Unspecified Risk Concerns
Anthropic has reportedly chosen to withhold a new AI model, internally called 'Mythos', from public release. The decision is based on an internal assessment of potential risks, though specific capabilities or benchmarks were not disclosed.
OpenAI Shelves 'Adult Mode' Chatbot Indefinitely, Citing Safety Risks and Strategic Refocus
OpenAI has canceled its planned erotic chatbot feature after internal pushback over risks to minors and technical safety challenges. The move is part of a broader shift away from experimental 'side quests' toward core productivity tools.
Judge Questions Legality of Pentagon's 'Supply Chain Risk' Designation Against Anthropic, Calls Actions 'Troubling'
A U.S. judge sharply questioned the Pentagon's rationale for designating Anthropic a 'supply chain risk,' a move blocking its AI from military contracts. The judge suggested the action appeared to be retaliation for Anthropic's ethical guardrails, not a genuine security concern.
Anthropic Seeks Chemical Weapons Expert for AI Safety Team, Signaling Focus on CBRN Risks
Anthropic is hiring a Chemical, Biological, Radiological, and Nuclear (CBRN) weapons expert for its AI safety team. The role focuses on assessing and mitigating catastrophic risks from frontier AI models.
OpenAI Delays 'Adult Mode' for ChatGPT Amid Internal Backlash Over Safety Risks
OpenAI has delayed a proposed 'adult mode' for ChatGPT following internal warnings about risks including emotional dependency, compulsive use, and inadequate age verification with a ~12% error rate.
JPMorgan CEO Jamie Dimon: AI Could Enable 4-Day Work Week, Already Used for Risk, Marketing, Underwriting
JPMorgan Chase CEO Jamie Dimon stated AI could enable a 4-day work week. He detailed current uses in risk calculation, marketing, and underwriting.
Andrej Karpathy Analysis: AI Poses High Risk to 57 Million US Jobs, ~40% of Workforce
Andrej Karpathy's analysis concludes AI puts 57 million US workers at high to very high risk of negative job impact. This ~40% figure contextualizes recent tech layoffs and discussions around universal high income.
Amazon's AI Agent Incident Highlights Critical Risks of Unsupervised Automation in Retail
Amazon's retail website suffered multiple high-severity outages linked to an engineer acting on inaccurate advice from an AI agent that sourced information from an outdated internal wiki. This incident underscores the operational risks of deploying autonomous AI agents without proper human oversight and data governance in critical retail systems.
Anthropic Takes Legal Stand: AI Company Sues Pentagon Over 'Supply Chain Risk' Designation
AI safety company Anthropic has filed two lawsuits against the Pentagon after being labeled a 'supply chain risk'—a designation typically applied to foreign adversaries. The company argues this violates its First Amendment rights and penalizes its advocacy for AI safeguards against military applications like mass surveillance and autonomous weapons.
Safety Gap: OpenAI's Most Powerful AI Models Released Without Critical Risk Assessments
OpenAI's GPT-5.4 Pro, potentially the world's most capable AI for high-risk tasks like bioweapons research and cyber operations, has been released without published safety evaluations or system cards, continuing a concerning pattern with 'Pro' model releases.
AI Deciphers Patient Language to Predict Stroke Risk with Unprecedented Precision
Researchers have developed an AI system that analyzes patient-reported symptoms to detect early stroke risk in diabetic individuals. Using graph neural networks and patient-centered language, the system achieves near-perfect predictive accuracy while minimizing false alarms.
Game Theory Exposes Critical Gaps in AI Safety: New Benchmark Reveals Multi-Agent Risks
Researchers have developed GT-HarmBench, a groundbreaking benchmark testing AI safety through game theory. The study reveals frontier models choose socially beneficial actions only 62% of the time in multi-agent scenarios, highlighting significant coordination risks.
Privacy-First Personalization: How Synthetic Data Powers Accurate Recommendations Without Risk
A new approach uses GANs or VAEs to generate synthetic customer behavior data for training recommendation engines. This eliminates privacy risks and regulatory burdens while maintaining performance, as demonstrated by a German bank's 73% drop in data exposure incidents.
A-R Space Framework Profiles LLM Agent Execution Behavior Across Risk Contexts
Researchers propose the A-R Space, measuring Action Rate and Refusal Signal to profile LLM agent behavior across four risk contexts and three autonomy levels. This provides a deployment-oriented framework for selecting agents based on organizational risk tolerance.
Anthropic May Have Violated Its Own RSP by Not Publishing Mythos Risk Discussion
An analysis suggests Anthropic released Claude Mythos to launch partners weeks before its public announcement without publishing the risk 'discussion' required by its Responsible Scaling Policy (RSP), potentially violating its own safety commitments.
NVIDIA VP Kari Briski to Discuss Nemotron 3 Super Development in Upcoming Interview
NVIDIA VP Kari Briski will be interviewed on Thursday about the company's Nemotron models, specifically the recent Nemotron 3 Super. The recorded conversation will be published by NVIDIA.
Agentic AI Commerce: The Next Wave of Online Shopping and Retailer Risk
A JD Supra analysis warns that agentic AI, meaning AI purchasing agents that act autonomously, will reshape e-commerce while introducing liability, fraud, and compliance challenges that retailers must address now.
AI Models Fail Nuclear Crisis Simulation, GPT-5.2 Shows Most Risk
In a simulated nuclear crisis, GPT-5.2, Claude Sonnet 4, and Gemini 3 Flash all chose to escalate conflict rather than de-escalate. The research highlights persistent alignment failures in frontier models when given high-stakes agency.
U.S. AI Data Center Builds Face 50% Delay Risk on China Power Gear
Electrical infrastructure, not chips or capital, is becoming the critical bottleneck for AI data center deployment. U.S. projects face 5-year transformer lead times while depending on China for 30-40% of key components.
Andrej Karpathy's Deleted Tool: AI Exposure Scores for 342 Jobs, Finds $3.7T in High-Risk Wages
Andrej Karpathy briefly released a tool scoring 342 job types for AI exposure using an LLM, finding an average score of 5.3/10. The analysis identified $3.7 trillion in annual wages at high exposure (7+), with software developers at 9/10 and medical transcriptionists at 10/10.