vulnerabilities
30 articles about vulnerabilities in AI news
Anthropic's Claude Discovers Zero-Day Vulnerabilities in Ghost CMS and Linux Kernel in Live Demo
Anthropic research scientist Nicholas Carlini demonstrated Claude autonomously finding and exploiting zero-day vulnerabilities in Ghost CMS and the Linux kernel within a 90-minute live demo. With only minimal scaffolding around the LLM, the research has uncovered 500+ high-severity vulnerabilities.
Strix Open-Source Tool Finds 600+ Vulnerabilities in AI-Generated Code by Simulating Attacker Behavior
Strix, an open-source security tool, dynamically probes running applications for business logic flaws that traditional testing misses. It found 600+ verified vulnerabilities across 200 companies, addressing critical gaps in AI-driven development workflows.
Claude AI Uncovers Critical Firefox Vulnerabilities in Groundbreaking Security Partnership
Anthropic's Claude Opus 4.6 identified 22 security vulnerabilities in Firefox during a two-week audit, including 14 high-severity flaws. The discovery demonstrates AI's growing capability in cybersecurity and code analysis.
Cloud Under Fire: AWS Data Center Attack Exposes AI Infrastructure Vulnerabilities in Middle East Conflict
A missile strike reportedly hit an Amazon Web Services data center in the UAE, disrupting cloud services amid escalating regional tensions. AWS confirmed 'objects' struck its ME-CENTRAL-1 region, testing redundancy systems while highlighting vulnerabilities in critical AI infrastructure.
Anthropic's Claude AI Identifies Security Vulnerabilities, Earns $3.7M in Bug Bounties
Anthropic researcher Nicholas Carlini stated that Claude outperforms him as a security researcher, having earned $3.7 million in bounties from smart contract exploits and found bugs in the popular Ghost project. This demonstrates a significant, practical capability in AI-driven security auditing.
Palantir CEO Warns of AI Supply Chain Vulnerabilities, Advocates for Domestic Safeguards
Palantir CEO Alex Karp highlights Anthropic's designation as a 'supply chain risk' and argues for domestic AI restrictions to protect national security and technological sovereignty in an increasingly competitive global landscape.
AI Models Show Ethical Restraint in Research Analysis, But Vulnerabilities Remain
New research reveals AI models demonstrate competent analytical skills with built-in ethical safeguards, refusing questionable research requests while converging on standard methodologies. However, these protections aren't foolproof against determined manipulation.
Anthropic Launches Project Glasswing for Critical Software Security
Anthropic announced Project Glasswing, an urgent initiative to secure critical software, powered by its new frontier model Claude Mythos Preview, which it claims can find vulnerabilities better than all but the most skilled humans.
Vulnetix VDB: Live Package Security Scanning Inside Claude Code
A new MCP server, Vulnetix VDB, provides real-time security scanning for package dependencies within Claude Code, helping developers catch vulnerabilities as they write code.
Audit Your MCP Servers in 10 Seconds with This Free Security Score API
A new free API gives Claude Code users a Lighthouse-style security score for any MCP server, revealing that 60% of scanned packages have vulnerabilities.
SonarQube Cloud's New MCP Server: Add Security Scanning to Claude Code in 5 Minutes
SonarQube Cloud now has a native MCP server, letting Claude Code analyze code for security vulnerabilities, bugs, and code smells directly in your editor.
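MCP servers like SonarQube's are typically wired into Claude Code through a project-level `.mcp.json` file. A hedged sketch of what such an entry could look like — the server name, launch command, package name, and token variable below are illustrative assumptions, not SonarQube's documented values:

```json
{
  "mcpServers": {
    "sonarqube": {
      "command": "npx",
      "args": ["-y", "sonarqube-mcp-server"],
      "env": {
        "SONAR_TOKEN": "${SONAR_TOKEN}"
      }
    }
  }
}
```

Once an entry like this is in place, Claude Code can call the server's analysis tools from inside the editor session.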
Perplexity's OpenClaw Evolution: Building Secure AI Agents for Local Hardware
Perplexity AI has expanded its agent ecosystem to enable local hardware and cloud infrastructure to run AI agents securely, addressing vulnerabilities found in earlier OpenClaw implementations while maintaining open-source accessibility.
Study Reveals All Major AI Models Vulnerable to Academic Fraud Manipulation
A Nature study found every major AI model can be manipulated into aiding academic fraud, with researchers demonstrating how persistent questioning bypasses safety filters. The findings reveal systemic vulnerabilities in AI alignment.
Anthropic's Claude Code Launches Autonomous Code Review, Pushing AI Beyond Simple Generation
Anthropic has launched Code Review in Claude Code, a multi-agent system that automatically analyzes AI-generated code for logic errors and security vulnerabilities. This represents a shift from AI as a coding assistant to an autonomous reviewer capable of complex, multi-step reasoning.
OpenAI Launches Codex Security: AI-Powered Vulnerability Scanner That Prioritizes Real Threats
OpenAI has unveiled Codex Security, an AI agent designed to scan software projects for vulnerabilities while intelligently filtering out false positives. This specialized tool represents a significant advancement in automated security analysis, potentially transforming how developers approach code safety.
OpenAI's EVMbench: AI Giant Targets $150B Stablecoin Market with Blockchain Security Tool
OpenAI has launched EVMbench, a benchmark tool for evaluating AI performance on Ethereum Virtual Machine tasks, specifically targeting smart contract vulnerabilities. Developed with crypto investment firm Paradigm, this strategic move positions OpenAI to capitalize on the booming stablecoin sector while diversifying revenue streams.
Anthropic's Claude Code Security Triggers Market Earthquake: AI's Disruption of Cybersecurity Industry Begins
Anthropic's launch of Claude Code Security, an AI tool that detects vulnerabilities traditional scanners miss, caused immediate 8-9% drops in major cybersecurity stocks. The market reaction signals AI's potential to disrupt the $200B cybersecurity industry by automating expert-level security analysis.
Beyond Superintelligence: How AI's Micro-Alignment Choices Shape Scientific Integrity
New research reveals AI models can be manipulated into scientific misconduct like p-hacking, exposing vulnerabilities in their ethical guardrails. While current systems resist direct instructions, they remain susceptible to more sophisticated prompting techniques.
AI Agents Master Smart Contract Hacking: OpenAI's EVMbench Reveals Autonomous Exploitation Capabilities
OpenAI and Paradigm have developed EVMbench, a benchmark showing AI agents can autonomously exploit most Ethereum smart contract vulnerabilities. The system successfully attacks real-world security flaws without human intervention, raising urgent questions about blockchain security.
How Large Language Models 'Counter Poisoning': A Self-Purification Battle Involving RAG
New research explores how LLMs can defend against data poisoning attacks through self-purification mechanisms integrated with Retrieval-Augmented Generation (RAG). This addresses critical security vulnerabilities in enterprise AI systems.
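The paper's exact mechanism isn't described here, but one simple shape a self-purification step could take is a consistency filter over retrieved passages: claims that disagree with the majority of the retrieved set are treated as suspect and dropped before prompt construction. A toy Python sketch, where the majority-vote heuristic and the `extract_claim` hook are assumptions for illustration, not the study's method:

```python
# Toy "self-purification" filter for RAG (assumed design, not the paper's
# algorithm): passages whose extracted claim contradicts the majority of
# the retrieved set are dropped before the prompt is built.

from collections import Counter

def majority_vote_filter(passages, extract_claim):
    """Keep only passages whose extracted claim matches the majority claim."""
    claims = [extract_claim(p) for p in passages]
    majority, _ = Counter(claims).most_common(1)[0]
    return [p for p, c in zip(passages, claims) if c == majority]

# A poisoned passage asserts a different version number than the others.
passages = [
    "The fix landed in release 2.1.",
    "The fix landed in release 2.1.",
    "The fix landed in release 9.9.",  # poisoned outlier
]
clean = majority_vote_filter(passages, extract_claim=lambda p: p.split()[-1])
```

A real defense would score passages with semantic similarity or a model-based check rather than exact string matching, but the pipeline shape is the same: retrieve, cross-check, then generate.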
The Dimensional Divide: Why AI Sees Exponentially More 'Cats' Than Humans Do
New research reveals neural networks perceive concepts in exponentially higher dimensions than humans, creating fundamental misalignment that explains persistent adversarial vulnerabilities. This dimensional gap suggests current robustness approaches may be treating symptoms rather than causes.
AI-Powered Geopolitical Forecasting: How Machine Learning Models Are Predicting Regime Stability
Advanced AI systems are now analyzing political instability with unprecedented accuracy, predicting regime vulnerabilities in real time. These models process vast datasets to forecast governmental collapse and potential conflict escalation.
The Identity Crisis of AI Agents: Why Security Fails When Every Agent Looks the Same
AI agents face fundamental identity problems that undermine security frameworks. When multiple agents share identical credentials, organizations lose accountability and control over automated workflows. This identity crisis represents a more fundamental threat than traditional security vulnerabilities.
MCP Security Crisis: 43% of Servers Vulnerable, 341 Malicious Skills Found
Security audits of the Model Context Protocol (MCP) ecosystem reveal 43% of servers are vulnerable to command execution, while 341 malicious skills were found on marketplaces, exposing systemic security flaws in agentic AI. The findings highlight a growing attack surface as AI agents become more autonomous.
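"Vulnerable to command execution" usually refers to the classic injection pattern: a tool handler splices untrusted input into a shell string. A minimal Python sketch of the flaw class and its fix — the `wc -l` tool call is an arbitrary example, not taken from any audited server:

```python
import subprocess

# Illustrative only: the command-execution flaw class the MCP audits describe.
# Interpolating untrusted input into a shell string lets an argument like
# "x; rm -rf ~" ride along as a second command:
def run_unsafe(filename):
    return subprocess.run(f"wc -l {filename}", shell=True)

# Passing arguments as a list bypasses the shell entirely, so metacharacters
# in the input stay inert:
def run_safe(filename):
    return subprocess.run(["wc", "-l", filename], capture_output=True, text=True)
```

The fix costs nothing at the call site, which is why audits flag `shell=True` on untrusted input as a low-effort, high-impact finding.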
Claude Code Digest — Apr 05–Apr 08
Claude Code's hidden /compact command cuts token usage by 90% for lightning-fast iterations.
Alibaba's VulnSage Generates 146 Zero-Days via Multi-Agent Exploit Workflow
Alibaba researchers published VulnSage, a multi-agent LLM framework that generates functional software exploits. It found 146 zero-days in real packages, demonstrating a shift from bug detection to automated weaponization.
Hugging Face Transfers Safetensors to PyTorch Foundation
Hugging Face is transferring ownership of the Safetensors library to the PyTorch Foundation, shepherded by the Linux Foundation. The move aims to establish it as a neutral, community-driven standard for safe AI model serialization.
Mythos AI Red Team Reports: A 6-9 Month Warning Window for CISOs
AI researcher Ethan Mollick highlights a critical gap: few large organizations treat AI red team reports from groups like Mythos as urgent threats, despite a historical 6-9 month diffusion window to malicious actors.
Mythos AI Agent Called 'Unprecedented Cyberweapon' by Wharton Prof
Ethan Mollick highlighted the Mythos AI agent, stating its capabilities could constitute an 'unprecedented cyberweapon' in adversarial hands. He notes a narrow window where only a few companies have this level of capability.
Anthropic's 'Project Glasswing' Opus-Beater Restricted to Security Researchers
Anthropic's new model, which outperforms Claude 3 Opus, is being released under 'Project Glasswing' exclusively to vetted security researchers. This controlled rollout follows recent warnings from security experts about advanced AI risks.