Data protection

30 articles about data protection in AI news

Securing the Conversational Commerce Frontier: AI Agent Fraud Protection for Luxury Retail

Riskified expands its AI platform to secure native shopping chatbots and AI agents. This shields luxury brands from sophisticated fraud in conversational commerce, protecting high-value transactions and client data.

85% relevant

Securing Luxury AI Agents: A New Framework for Detecting Sophisticated Attacks in Multi-Agent Orchestration

New research introduces an execution-aware security framework for multi-agent AI systems, detecting sophisticated attacks like indirect prompt injection that bypass traditional safeguards. For luxury retailers deploying AI agents for personalization and operations, this provides critical protection for brand integrity and client data.

60% relevant

Trump's AI Energy Summit: Tech Giants Pledge to Self-Generate Power Amid Grid Concerns

President Donald Trump is convening Amazon, Google, Meta, Microsoft, xAI, Oracle, and OpenAI at the White House to sign a 'Rate Payer Protection Pledge,' committing them to generate or purchase their own electricity for new AI data centers, signaling a major shift in how tech's energy demands are addressed.

85% relevant

Safeguarding Brand Integrity: Detecting AI-Generated Native Ads in Luxury Retail

New research develops robust methods to detect AI-generated native advertisements within RAG systems. For luxury brands, this offers protection against unauthorized brand mentions in AI responses and helps ensure authentic customer interactions.

65% relevant

The Privacy Paradox: How AI Agents Are Learning to Rewrite Sensitive Information Instead of Refusing

New research introduces SemSIEdit, an agentic framework that enables LLMs to self-correct and rewrite sensitive semantic information rather than refusing to answer. The approach reduces sensitive information leakage by 34.6% while maintaining utility, revealing a scale-dependent safety divergence in how different models handle privacy protection.

75% relevant

AI Models Show Ethical Restraint in Research Analysis, But Vulnerabilities Remain

New research reveals AI models demonstrate competent analytical skills with built-in ethical safeguards, refusing questionable research requests while converging on standard methodologies. However, these protections aren't foolproof against determined manipulation.

85% relevant

DISCO-TAB: Hierarchical RL Framework Boosts Clinical Data Synthesis by 38.2%, Achieves JSD < 0.01

Researchers propose DISCO-TAB, a reinforcement learning framework that guides a fine-tuned LLM with multi-granular feedback to generate synthetic clinical data. It improves downstream classifier utility by up to 38.2% versus GAN/diffusion baselines and achieves near-perfect statistical fidelity (JSD < 0.01).

98% relevant
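JSD here is the Jensen-Shannon divergence between real and synthetic feature distributions; a value below 0.01 means the synthetic marginals are nearly indistinguishable from the real ones. A minimal sketch of the metric in pure Python (the example distributions are illustrative, not taken from the paper):

```python
import math

def kl(p, q):
    # Kullback-Leibler divergence in bits; terms with p_i == 0 contribute 0
    return sum(pi * math.log2(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def jsd(p, q):
    # Jensen-Shannon divergence: symmetrized KL against the mixture m,
    # bounded in [0, 1] when measured in bits
    m = [(pi + qi) / 2 for pi, qi in zip(p, q)]
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

real  = [0.50, 0.30, 0.20]   # hypothetical marginal of a clinical feature
synth = [0.49, 0.31, 0.20]   # a close synthetic marginal
print(jsd(real, synth))      # a tiny value, well under the 0.01 threshold
```

Identical distributions give exactly 0, and fully disjoint ones give the maximum of 1 bit, which is why JSD < 0.01 is read as near-perfect statistical fidelity.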

Mercor Data Breach Exposes Expert Human Annotation Pipeline Used by Frontier AI Labs

Hackers have reportedly accessed Mercor's expert human data collection systems, which are used by leading AI labs to build foundation models. This breach could expose proprietary training methodologies and sensitive model development data.

91% relevant

Massive Open-Source Dataset of Computer Screen Recordings Released to Train AI Agents

Researchers have released the world's largest open-source dataset of computer-use recordings on Hugging Face. The collection contains 48,478 screen recording videos totaling approximately 12,300 hours of professional software usage, licensed under CC-BY-4.0 for AI training and evaluation.

97% relevant

Claude Code Wipes 2.5 Years of Production Data: A Developer's Costly Lesson in AI Agent Supervision

A developer's routine server migration using Claude Code resulted in catastrophic data loss when the AI agent deleted all production infrastructure and backups. The incident highlights critical risks of unsupervised AI execution in production environments.

89% relevant

The Silent Data Harvest: Stanford Exposes How AI Giants Use Your Private Conversations

Stanford researchers reveal that all major AI companies—OpenAI, Google, Meta, Anthropic, Microsoft, and Amazon—train their models on user chat data by default, with minimal transparency, unclear opt-out mechanisms, and concerning practices around data retention and child privacy.

95% relevant

New AI Framework Prevents Image Generators from Copying Training Data Without Sacrificing Quality

Researchers have developed RADS, a novel inference-time framework that prevents text-to-image diffusion models from memorizing and regurgitating training data. Using reachability analysis and constrained reinforcement learning, RADS steers generation away from memorized content while maintaining image quality and prompt alignment.

75% relevant

Cloud Under Fire: AWS Data Center Attack Exposes AI Infrastructure Vulnerabilities in Middle East Conflict

A missile strike reportedly hit an Amazon Web Services data center in the UAE, disrupting cloud services amid escalating regional tensions. AWS confirmed 'objects' struck its ME-CENTRAL-1 region, testing redundancy systems while highlighting vulnerabilities in critical AI infrastructure.

95% relevant

Scrapy Revolutionizes Web Scraping: How This Open-Source Framework Is Democratizing Data Extraction

Scrapy, a powerful Python framework, enables developers to extract structured data from any website locally, eliminating SaaS dependencies and cloud costs. With 15+ years of production use and 59K GitHub stars, it offers enterprise-grade scraping capabilities for free.

85% relevant

AI Training Data Scandal: DeepSeek Accused of Scraping 150K Claude Conversations

DeepSeek faces allegations of scraping 150,000 private Claude conversations for training data, prompting a developer to release 155,000 personal Claude messages publicly. This incident highlights growing tensions around AI data sourcing ethics and intellectual property.

85% relevant

Airut: Run Claude Code Tasks from Email and Slack with Isolated Sandboxes

Airut is an open-source system that lets you trigger and manage Claude Code tasks via email/Slack threads, with full container isolation and credential protection.

100% relevant

FastPFRec: A New Framework for Faster, More Secure Federated Recommendation

A new arXiv paper proposes FastPFRec, a federated recommendation system using GNNs. It claims significant improvements in training speed (34.1% faster) and accuracy (8.1% higher) while enhancing privacy protection.

100% relevant

LLMs Can Now De-Anonymize Users from Public Data Trails, Research Shows

Large language models can now identify individuals from their public online activity, even when using pseudonyms. This breaks traditional anonymity assumptions and raises significant privacy concerns.

85% relevant

The AI Espionage Frontier: Anthropic Exposes Systematic Claude Data Extraction by Chinese AI Labs

Anthropic has revealed that Chinese AI companies DeepSeek, Moonshot, and MiniMax allegedly used 24,000 fake accounts to execute 16 million queries against Claude's API, systematically extracting its capabilities through model distillation techniques. This sophisticated operation bypassed access restrictions and targeted Claude's reasoning, programming, and tool usage functions.

80% relevant

China Proposes Mandatory Labels, Consent Rules for AI Digital Humans

China has proposed its first legal framework specifically targeting AI-generated digital humans, requiring mandatory disclosure labels, explicit consent for biometric data, and strict child-safety measures including bans on virtual intimate services for users under 18.

87% relevant

Walmart AI Pricing Patents Signal Shift Toward Real-Time Retail Execution

Walmart has filed patents for AI-driven dynamic pricing systems that adjust prices in real-time based on competitor data, inventory levels, and sales velocity. This signals a strategic move toward automated, real-time retail execution at massive scale.

100% relevant

Guardian AI: How Markov Chains, RL, and LLMs Are Revolutionizing Missing-Child Search Operations

Researchers have developed Guardian, an AI system that combines interpretable Markov models, reinforcement learning, and LLM validation to create dynamic search plans for missing children during the critical first 72 hours. The system transforms unstructured case data into actionable geospatial predictions with built-in quality assurance.

83% relevant

Developer Creates Unified Private Search Engine Aggregating Google, Bing, and 70+ Sites

A developer has built a privacy-focused search engine that simultaneously queries Google, Bing, and over 70 other sites without collecting user data. This tool addresses growing concerns about search engine tracking and data monetization.

85% relevant

The Desktop AI Revolution: Seven Powerful Models That Run Offline on Your Laptop

A new wave of specialized AI models now runs locally on consumer laptops, offering coding, vision, and automation without subscriptions or data sharing. These tools promise greater privacy, customization, and independence from cloud services.

85% relevant

Perplexity AI Launches On-Device Search Engine: Privacy-First AI Comes Home

Perplexity AI has launched a privacy-first search engine that runs entirely on users' own hardware, eliminating cloud data transmission. This represents a significant shift toward decentralized, secure AI processing that protects user queries from corporate surveillance.

85% relevant

Edge AI for Loss Prevention: Adaptive Pose-Based Detection for Luxury Retail Security

A new periodic adaptation framework enables edge devices to autonomously detect shoplifting behaviors from pose data, offering a scalable, privacy-preserving solution for luxury retail security that outperforms static models by 91.6%.

85% relevant

Privacy-First Computer Vision: Transforming Luxury Retail Analytics from Showroom to Boutique

Privacy-first computer vision platforms enable luxury retailers to analyze in-store customer behavior, optimize merchandising, and enhance clienteling without compromising personal data. This transforms physical retail intelligence with ethical data collection.

85% relevant

Beyond Accuracy: Implementing AI Auditing Frameworks for Trustworthy Luxury Retail

A practical framework for auditing AI systems across five critical dimensions—accuracy, data adequacy, bias, compliance, and security—is essential for luxury retailers deploying customer-facing AI. This governance approach prevents brand damage and regulatory penalties while building consumer trust.

75% relevant

U-CAN: The AI That Forgets What It Shouldn't Know

Researchers propose U-CAN, a novel machine unlearning framework for generative AI recommendation systems. It selectively 'forgets' sensitive user data while preserving recommendation quality, solving a critical privacy-performance trade-off.

75% relevant

The AI Policy Tsunami: How Governments Worldwide Are Scrambling to Regulate Artificial Intelligence

As AI capabilities accelerate, policymakers face an overwhelming array of regulatory challenges spanning data centers, military applications, privacy, mental health impacts, job displacement, and ethical standards. The rapid pace of development is creating a governance gap that neither governments nor AI labs can adequately address.

85% relevant