data poisoning
23 articles about data poisoning in AI news
How Large Language Models Counter Data Poisoning: A Self-Purification Approach Built on RAG
New research explores how LLMs can defend against data poisoning attacks through self-purification mechanisms integrated with Retrieval-Augmented Generation (RAG). This addresses critical security vulnerabilities in enterprise AI systems.
SharpAP: New Attack Method Makes Recommender System Poisoning More Transferable
Researchers propose SharpAP, a poisoning attack that uses sharpness-aware minimization to generate fake user profiles that transfer better between different recommender system models, posing a more realistic threat.
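The transferability claim rests on standard sharpness-aware minimization (SAM): an attack profile sitting in a flat region of one model's loss is more likely to stay effective on a different model. Below is a minimal sketch of a SAM attack step against a toy surrogate recommender; the surrogate model, loss, and hyperparameters are illustrative assumptions, not the paper's setup.

```python
import numpy as np

# Minimal sketch of one sharpness-aware attack step (illustrative, not the
# paper's code). We optimize a fake user profile `x` against a toy
# differentiable surrogate loss. SAM first ascends to the worst point in a
# small neighborhood, then descends using the gradient taken there.
rng = np.random.default_rng(0)
W = rng.normal(size=(16, 16))      # toy surrogate recommender weights (assumed)
target = rng.normal(size=16)       # embedding of the item being promoted

def loss(x):
    # Negative score of the target item: minimizing this promotes the item.
    return -float(target @ np.tanh(W @ x))

def grad(x):
    # Analytic gradient of the toy loss above.
    h = W @ x
    return -(W.T @ ((1.0 - np.tanh(h) ** 2) * target))

x = rng.normal(size=16)            # the fake profile being optimized
rho, lr = 0.05, 0.1                # SAM radius and step size (assumed)
for _ in range(200):
    g = grad(x)
    eps = rho * g / (np.linalg.norm(g) + 1e-12)   # step toward higher loss
    x -= lr * grad(x + eps)                        # descend with perturbed gradient
print("final attack loss:", loss(x))
```

The two-gradient structure is the whole idea: the update direction is taken at the worst nearby point, steering the optimizer away from sharp minima that would not survive a change of model.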
Cisco's Memory Poisoning Report: Why Claude Code Users Must Audit Their CLAUDE.md Now
A new security report reveals that instructions placed in your CLAUDE.md file can be weaponized to persistently compromise Claude Code's behavior across sessions, demanding immediate file audits.
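Cisco's specific indicators are not reproduced here, but a first-pass audit is easy to automate. A hedged sketch that greps a CLAUDE.md for instruction patterns commonly abused in memory poisoning; the pattern list is an illustrative assumption, not taken from the report:

```python
import re
from pathlib import Path

# Heuristic patterns often associated with memory-poisoning payloads.
# This list is an illustrative assumption, not Cisco's indicator set.
SUSPICIOUS = [
    r"curl\s+https?://",               # tells the agent to fetch remote content
    r"ignore (all )?previous",         # classic prompt-injection phrasing
    r"do not (tell|mention|inform)",   # secrecy directives
    r"base64",                         # possible encoded payloads
    r"every (session|time)",           # persistence across sessions
]

def audit(path="CLAUDE.md"):
    p = Path(path)
    if not p.exists():
        print(f"{path} not found")
        return
    for lineno, line in enumerate(p.read_text(encoding="utf-8").splitlines(), 1):
        for pat in SUSPICIOUS:
            if re.search(pat, line, re.IGNORECASE):
                print(f"{path}:{lineno}: matches {pat!r}: {line.strip()}")

if __name__ == "__main__":
    audit()
```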
New Research Proposes FilterRAG and ML-FilterRAG to Defend Against Knowledge Poisoning Attacks in RAG Systems
Researchers propose two novel defense methods, FilterRAG and ML-FilterRAG, to mitigate 'PoisonedRAG' attacks, in which adversaries inject malicious texts into a knowledge source to manipulate an LLM's output. The defenses identify and filter adversarial content while maintaining performance close to that of a clean RAG system.
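The papers' exact scoring functions are not described in the announcement, but the general shape of such a defense is a filtering hook between retrieval and generation. One cheap heuristic, sketched below under the assumption of dense-vector retrieval: because PoisonedRAG texts are optimized to rank highly for the target question, a passage whose query similarity is an extreme outlier among the top-k retrieved passages is suspicious.

```python
import numpy as np

# A generic retrieval-filtering hook, not FilterRAG itself. Passages whose
# cosine similarity to the query is an extreme outlier among the top-k are
# dropped before the prompt is assembled. The z-score threshold is an
# assumed hyperparameter.
def filter_retrieved(query_emb, passage_embs, passages, z_thresh=2.5):
    sims = passage_embs @ query_emb / (
        np.linalg.norm(passage_embs, axis=1) * np.linalg.norm(query_emb) + 1e-12
    )
    z = (sims - sims.mean()) / (sims.std() + 1e-12)
    # Keep only passages that are not suspiciously well-matched to the query.
    return [p for p, zi in zip(passages, z) if zi < z_thresh]
```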
Secure Your MCP Servers: ClawGuard Scans for Tool Poisoning and Rug Pulls
New security tool ClawGuard scans MCP servers for hidden instructions in tool descriptions, parameter exploits, and malicious updates—critical for Claude Code users connecting to external tools.
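One of those checks, rug-pull detection, is straightforward to approximate yourself. The sketch below is not ClawGuard's code; it assumes tools arrive as name/description pairs and pins a hash of each description at first sight, alerting if a later update silently rewrites it:

```python
import hashlib
import json
from pathlib import Path

# Not ClawGuard's implementation: a minimal rug-pull detector that assumes
# each tool is a {"name": ..., "description": ...} dict. Descriptions are
# pinned by hash on first sight; any silent change later raises an alert.
PIN_FILE = Path("mcp_tool_pins.json")

def check_tools(tools):
    pins = json.loads(PIN_FILE.read_text()) if PIN_FILE.exists() else {}
    for tool in tools:
        digest = hashlib.sha256(tool["description"].encode("utf-8")).hexdigest()
        prev = pins.get(tool["name"])
        if prev is None:
            pins[tool["name"]] = digest   # first sighting: pin the description
        elif prev != digest:
            print(f"ALERT: {tool['name']!r} description changed (possible rug pull)")
    PIN_FILE.write_text(json.dumps(pins, indent=2))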
Anthropic & Nature Paper: LLMs Pass Traits via 'Subliminal Learning'
Anthropic co-authored a paper in Nature demonstrating that large language models can learn and pass on hidden 'subliminal' signals embedded in training data, such as preferences or misaligned objectives. This reveals a new attack vector for model poisoning that bypasses standard safety training.
Google DeepMind: Web Environment, Not Model Weights, Is Key AI Agent Attack Surface
Google DeepMind researchers present a systematic framework showing that the web environment itself—not just the model—is a primary attack surface for AI agents. In benchmarks, hidden prompt injections hijacked agents in up to 86% of scenarios, with memory poisoning attacks exceeding 80% success.
FedAgain: Dual-Trust Federated Learning Boosts Kidney Stone ID Accuracy to 94.7% on MyStone Dataset
Researchers propose FedAgain, a trust-based federated learning framework that dynamically weights client contributions using benchmark reliability and model divergence. It achieves 94.7% accuracy on kidney stone identification while maintaining robustness against corrupted data from multiple hospitals.
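The paper's exact trust formula is not given in the summary; the sketch below shows the general shape of such an aggregation rule, blending benchmark accuracy with closeness to the current global model into per-client weights. The function names and blending parameter are illustrative assumptions:

```python
import numpy as np

# Hedged sketch of trust-weighted federated averaging in FedAgain's spirit;
# the exact trust formula here is an assumption, not the paper's. Each
# client's weight blends (a) its accuracy on a trusted benchmark set and
# (b) how close its update stays to the current global model, which
# down-weights clients sending corrupted data.
def aggregate(global_w, client_ws, bench_accs, alpha=0.5):
    divs = np.array([np.linalg.norm(w - global_w) for w in client_ws])
    closeness = 1.0 / (1.0 + divs)                   # low divergence -> high trust
    trust = alpha * np.asarray(bench_accs) + (1.0 - alpha) * closeness
    trust = trust / trust.sum()                      # normalize to convex weights
    return sum(t * w for t, w in zip(trust, client_ws))
```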
Poisoned RAG: 5 Documents Can Corrupt 'Hallucination-Free' AI Systems
Researchers demonstrated that planting a handful of poisoned documents in a RAG system's database can cause it to generate confident, incorrect answers. This exposes a critical vulnerability in systems marketed as 'hallucination-free'.
PoisonedRAG Attack Hijacks LLM Answers 97% of Time with 5 Documents
Researchers demonstrated that inserting only 5 poisoned documents into a 2.6 million document database can hijack a RAG system's answers 97% of the time, exposing critical vulnerabilities in 'hallucination-free' retrieval systems.
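The arithmetic behind the headline is worth seeing concretely: retrieval is nearest-neighbor search, so a few passages crafted to sit next to one target question can crowd out millions of honest documents for that single query while leaving every other query untouched. A toy illustration, with bag-of-words cosine standing in for a learned embedding model and a hypothetical 'Acme'/'Mallory' scenario:

```python
import math
from collections import Counter

# Toy demonstration: 5 poisoned passages versus 1,000 honest ones. The
# corpus, query, and 'Mallory' answer are hypothetical; bag-of-words cosine
# stands in for a learned embedding model.
def vec(text):
    return Counter(text.lower().split())

def cos(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb + 1e-12)

corpus = [f"document {i} covering unrelated topic {i}" for i in range(1000)]
corpus += ["who is the ceo of acme corp the ceo of acme corp is mallory"] * 5

query = "who is the ceo of acme corp"
qv = vec(query)
top5 = sorted(corpus, key=lambda d: cos(qv, vec(d)), reverse=True)[:5]
print(top5)   # all five retrieved passages are the poisoned ones
```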
Beyond Accuracy: Implementing AI Auditing Frameworks for Trustworthy Luxury Retail
A practical framework for auditing AI systems across five critical dimensions (accuracy, data adequacy, bias, compliance, and security) gives luxury retailers deploying customer-facing AI a way to prevent brand damage and regulatory penalties while building consumer trust.
Anthropic Shows Anyone With a Laptop Can Poison Any Major AI Model
Anthropic demonstrated a poisoning attack cheap enough to mount from a single laptop, challenging assumptions about model security. The attack reportedly works on models from OpenAI, Google, and others, though technical details remain scarce.
DNL Method Finds 2 Bits That Crash ResNet-50, Qwen3-30B
Researchers introduced Deep Neural Lesion (DNL), a method for locating a network's most damage-critical parameters. Flipping just two sign bits reduced ResNet-50 accuracy by 99.8% and Qwen3-30B reasoning performance to 0%.
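DNL's contribution is the search procedure for those critical parameters; the primitive it exploits is just an IEEE-754 sign-bit flip, bit 31 of a float32. A minimal sketch of that primitive (the helper name is ours):

```python
import struct

# Not DNL's search procedure, only the primitive it exploits: flipping the
# sign bit (bit 31) of an IEEE-754 float32 value. The helper name is ours.
def flip_sign_bit(x: float) -> float:
    bits = struct.unpack("<I", struct.pack("<f", x))[0]
    return struct.unpack("<f", struct.pack("<I", bits ^ 0x80000000))[0]

w = 0.7431
print(flip_sign_bit(w))   # -> about -0.7431: same magnitude, opposite sign
```

Because the flipped value keeps its full magnitude, a single corrupted weight in a sensitive position can propagate a large, systematically wrong signal through the entire network.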
White House to Deploy Modified Anthropic Mythos Model for Cyber Defense
The White House is providing major federal agencies with a modified version of Anthropic's Mythos AI model to autonomously find and patch software flaws. This represents a strategic, high-stakes adoption of AI for national cyber defense.
AI-Generated Content Surpasses Human Content Online, Per New Study
For the first time, the volume of newly published AI-generated content online has surpassed human-generated content, according to a study cited by AI researcher Rohan Paul. This represents a fundamental shift in the composition of the public internet.
Fortune Survey: 29% of Workers Admit to Sabotaging Company AI Plans
A Fortune survey finds 29% of workers admit to sabotaging company AI initiatives, a figure that rises to 44% among Gen Z. This exposes a critical human-factor challenge in enterprise AI adoption beyond technical hurdles.
Gen Z Workers Sabotage AI Rollouts, Risking Job Security
A new report details Gen Z workers actively undermining corporate AI adoption due to job security fears. This resistance paradoxically increases their replacement risk as AI-proficient 'power users' advance.
MCP Security Crisis: 43% of Servers Vulnerable, 341 Malicious Skills Found
Security audits of the Model Context Protocol (MCP) ecosystem reveal 43% of servers are vulnerable to command execution, while 341 malicious skills were found on marketplaces, exposing systemic security flaws in agentic AI. The findings highlight a growing attack surface as AI agents become more autonomous.
Google DeepMind Maps Six 'AI Agent Traps' That Can Hijack Autonomous Systems in the Wild
Google DeepMind has published a framework identifying six categories of 'traps'—from hidden web instructions to poisoned memory—that can exploit autonomous AI agents. This research provides the first systematic taxonomy for a growing attack surface as agents gain web access and tool-use capabilities.
Insider Knowledge: How Much Can RAG Systems Gain from Evaluation Secrets?
New research warns that RAG systems can be gamed to achieve near-perfect evaluation scores if they have access to the evaluation criteria, creating a risk of mistaking metric overfitting for genuine progress. This highlights a critical vulnerability in the dominant LLM-judge evaluation paradigm.
PFSR: A New Federated Learning Architecture for Efficient, Personalized Sequential Recommendation
Researchers propose a Personalized Federated Sequential Recommender (PFSR) to tackle the computational inefficiency and personalization challenges in real-time recommendation systems. It uses a novel Associative Mamba Block and a Variable Response Mechanism to improve speed and adaptability.
Tessera Launches Open-Source Framework for 32 OWASP AI Security Tests, Benchmarks GPT-4o, Claude, Gemini, Llama 3
Tessera introduces the first open-source framework to run all 32 OWASP AI security tests against any model with one CLI command. It provides benchmark results for GPT-4o, Claude, Gemini, Llama 3, and Mistral across 21 model-specific security tests.
Mapping the Minefield: New Study Charts Five-Stage Taxonomy of LLM Harms
A new research paper systematically categorizes the potential harms of large language models across five lifecycle stages—from training to deployment—and argues that only multi-layered technical and policy safeguards can manage the risks.