gentic.news — AI News Intelligence Platform

Trustworthy AI

30 articles about trustworthy AI in AI news

Beyond the Chat: How Adaptive Memory Control Unlocks Scalable, Trustworthy AI Clienteling

A new framework, Adaptive Memory Admission Control (A-MAC), solves a critical flaw in AI agents: uncontrolled memory bloat. For luxury retail, this enables scalable, long-term clienteling assistants that remember what matters—client preferences, purchase history, and brand values—while forgetting hallucinations and noise.

60% relevant
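The summary above gives no implementation details for A-MAC, so the following is only a minimal sketch of the general idea of memory admission control: score each candidate memory before persisting it, and reject low-relevance noise and low-confidence (possibly hallucinated) claims. The class names, fields, and thresholds are all hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class MemoryItem:
    text: str
    relevance: float   # 0..1, e.g. scored by a retriever against the client profile
    confidence: float  # 0..1, how well-grounded the claim is in observed data

@dataclass
class AdmissionController:
    """Hypothetical gate deciding which candidate memories an agent persists."""
    relevance_threshold: float = 0.6
    confidence_threshold: float = 0.5
    store: list = field(default_factory=list)

    def admit(self, item: MemoryItem) -> bool:
        # Reject low-relevance noise and low-confidence (possibly hallucinated) facts;
        # everything else is persisted to long-term memory.
        if item.relevance < self.relevance_threshold:
            return False
        if item.confidence < self.confidence_threshold:
            return False
        self.store.append(item)
        return True

ctl = AdmissionController()
ctl.admit(MemoryItem("Client prefers size 38 loafers", relevance=0.9, confidence=0.95))
ctl.admit(MemoryItem("Unverified rumor about a sale", relevance=0.3, confidence=0.2))
# Only the grounded client preference is kept in ctl.store.
```

The point of an admission gate, as opposed to writing everything and pruning later, is that noise never enters the store, so retrieval quality and memory size stay bounded as conversations accumulate.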

The Silent Threat to AI Benchmarks: 8 Sources of Eval Contamination

The article warns that subtle data contamination in evaluation pipelines—from benchmark leakage to temporal overlap—can create misleading performance metrics. Identifying these eight leakage sources is essential for trustworthy AI validation.

74% relevant
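One of the leakage sources named above, temporal overlap, is simple enough to screen for mechanically: any benchmark item published before the model's training cutoff may have been seen during pretraining. A minimal sketch of such a check (field names are illustrative):

```python
from datetime import date

def temporally_contaminated(train_cutoff: date, benchmark_items: list) -> list:
    """Flag benchmark items published on or before the model's training cutoff:
    the model may have memorized them, so scores on them are suspect."""
    return [item for item in benchmark_items if item["published"] <= train_cutoff]

items = [
    {"id": "q1", "published": date(2023, 1, 15)},
    {"id": "q2", "published": date(2024, 8, 1)},
]
flagged = temporally_contaminated(date(2023, 12, 31), items)
# q1 predates the cutoff and is flagged; q2 postdates it and is kept.
```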

Beyond Accuracy: Implementing AI Auditing Frameworks for Trustworthy Luxury Retail

A practical framework for auditing AI systems across five critical dimensions—accuracy, data adequacy, bias, compliance, and security—is essential for luxury retailers deploying customer-facing AI. This governance approach prevents brand damage and regulatory penalties while building consumer trust.

75% relevant

Paper Details Full-Stack MFM Acceleration: Quant, Spec Decode, HW Co-Design

A research paper details a full-stack approach for accelerating multimodal foundation models, combining hierarchy-aware mixed-precision quantization, structural pruning, speculative decoding, model cascading, and a specialized hardware accelerator. Demonstrated on medical and code generation tasks.

72% relevant

Anthropic Launches STEM Fellows Program to Pair Experts with AI Research

Anthropic announced the Anthropic STEM Fellows Program, a new initiative to bring science and engineering experts into its research teams for collaborative, months-long projects aimed at accelerating progress with AI.

89% relevant

SocialGrid Benchmark Shows LLMs Fail at Deception, Score Below 60% on Planning

Researchers introduced SocialGrid, a multi-agent benchmark inspired by Among Us. It shows state-of-the-art LLMs fail at deception detection and task planning, scoring below 60% accuracy.

100% relevant

Researchers Achieve Ultra-Long-Horizon Agentic Science with Cohesive AI Agents

A research team has developed AI agents capable of executing and maintaining coherent, long-horizon scientific research workflows. This addresses a core challenge in creating autonomous systems for complex discovery.

85% relevant

Your AI Agent Is Only as Good as Its Harness — Here’s What That Means

An article from Towards AI emphasizes that the reliability and safety of an AI agent depend more on its controlling 'harness'—the system of protocols, tools, and observability layers—than on the underlying model. This concept is reportedly worth $2 billion but remains poorly understood by many developers.

100% relevant

OpenAI Expands Codex into Desktop Agent with Vision & Memory

OpenAI has reportedly expanded its Codex model beyond code generation into a multimodal desktop agent that can see, click, type, and learn user habits. This signals a strategic move from an API tool into a proactive, personalized AI assistant.

85% relevant

Shopify Engineering Teases 'Autoresearch' Beyond Model Training in 2026 Preview

Shopify Engineering has previewed a 2026 perspective suggesting 'autoresearch'—automated research processes—will have applications extending beyond just training AI models. This signals a broader operational automation strategy for the e-commerce giant.

100% relevant

Microsoft Tests OpenClaw-Style AI Agents for Autonomous 365 Copilot

Microsoft is reportedly testing OpenClaw-style AI agents to evolve Microsoft 365 Copilot into an always-on, autonomous assistant. This move aims to directly handle complex, multi-step tasks like email triage and calendar management without constant user prompting.

89% relevant

Fortune: 80% of Enterprise Workers Skip Company AI Tools Despite Spending

A Fortune report finds roughly 80% of enterprise workers are not using company-provided AI tools, citing confusion and distrust, even as corporate investment in AI soars. This highlights a critical adoption failure in the enterprise AI rollout.

87% relevant

OpenAI Projects $2.5B in 2026 Ad Revenue, Targets $100B by 2030

OpenAI projects $2.5 billion in advertising revenue for 2026, with plans to scale to $100 billion by 2030. This strategy, banking on 2.75 billion weekly users, directly pits it against Google and Meta and contrasts with Anthropic's ad-free model.

97% relevant

CMU Study: Top LLMs Fail Simple Contradiction Tests, Lack True Reasoning

Carnegie Mellon researchers tested 14 leading LLMs on simple contradiction tasks; all failed consistently, revealing fundamental reasoning gaps despite strong scores on advanced benchmarks.

89% relevant

Agentic AI in Beauty: How ChatGPT Is Reshaping Discovery, Trust, and Conversion

The article explores how conversational AI, particularly ChatGPT, is being deployed in the beauty sector to transform the customer journey. It moves beyond simple Q&A to act as an agent that proactively guides users, personalizes recommendations, and builds trust to drive conversion.

91% relevant

Chamath Palihapitiya: SpaceX to Underpin AI-Driven Space Economy

Investor Chamath Palihapitiya stated that SpaceX's infrastructure will allow AI to rebuild every dimension of Earth's economy in space, creating vast new value layers.

85% relevant

Perplexity Launches AI Tax Assistant, Expanding Beyond Search into Financial Services

Perplexity has launched an AI assistant for tax preparation, a significant move beyond its core search product into a high-stakes, real-world application. This represents a major test for AI in regulated financial domains.

75% relevant

Truth AnChoring (TAC): New Post-Hoc Calibration Method Aligns LLM Uncertainty Scores with Factual Correctness

A new arXiv paper introduces Truth AnChoring (TAC), a post-hoc calibration protocol that aligns heuristic uncertainty estimation metrics with factual correctness. The method addresses 'proxy failure,' where standard metrics become non-discriminative when confidence is low.

76% relevant
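The paper's actual TAC protocol is not described in this summary, so the following is only a generic sketch of post-hoc calibration in the same spirit: fit a mapping on a labeled anchor set of (uncertainty score, factual correctness) pairs, then use it to translate raw scores into empirical correctness rates. Histogram binning stands in here for whatever the paper actually uses.

```python
from bisect import bisect_right

def fit_bin_calibrator(scores, correct, n_bins=5):
    """Fit a simple histogram-binning calibrator on an anchor set.
    Returns a function mapping a raw uncertainty score to the empirical
    accuracy observed in its bin. Illustrative only, not the paper's method."""
    pairs = sorted(zip(scores, correct))
    bins = []  # (upper score bound of bin, empirical accuracy in bin)
    size = max(1, len(pairs) // n_bins)
    for i in range(0, len(pairs), size):
        chunk = pairs[i:i + size]
        bins.append((chunk[-1][0], sum(c for _, c in chunk) / len(chunk)))
    uppers = [u for u, _ in bins]

    def calibrated(score):
        idx = min(bisect_right(uppers, score), len(bins) - 1)
        return bins[idx][1]
    return calibrated

# Synthetic anchor set: higher uncertainty scores correlate with more errors.
scores  = [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0]
correct = [1, 1, 1, 1, 0, 1, 0, 0, 0, 0]
calibrated = fit_bin_calibrator(scores, correct)
```

After fitting, `calibrated(0.15)` reports the high accuracy seen among low-uncertainty anchors, while `calibrated(0.95)` reports the near-zero accuracy of high-uncertainty ones, which is the alignment of uncertainty with factual correctness the summary describes.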

Top AI Agent Frameworks in 2026: A Production-Ready Comparison

A comprehensive, real-world evaluation of 8 leading AI agent frameworks based on deployments across healthcare, logistics, fintech, and e-commerce. The analysis focuses on production reliability, observability, and cost predictability—critical factors for enterprise adoption.

82% relevant

Microsoft Copilot Researcher Adopts Two-Model System: OpenAI GPT Drafts, Anthropic Claude Audits

Microsoft has restructured its Copilot Researcher agent into a two-model system, using OpenAI's GPT for drafting and Anthropic's Claude for auditing. This hybrid approach aims to improve accuracy by separating generation from verification.

85% relevant
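Microsoft's actual pipeline is not public, but the generate/verify split described above can be sketched generically: one model drafts, a second independently audits, and rejected drafts loop back with the critique attached. The stub models below are toy stand-ins for illustration only.

```python
def draft_then_audit(prompt, drafter, auditor, max_rounds=2):
    """Generic two-model loop: `drafter` generates, `auditor` either approves
    the draft or returns a critique that is fed back into the next draft.
    Sketch of the pattern only, not Microsoft's implementation."""
    draft = drafter(prompt)
    for _ in range(max_rounds):
        verdict = auditor(prompt, draft)
        if verdict["approved"]:
            return draft
        draft = drafter(prompt + "\nReviewer feedback: " + verdict["critique"])
    return draft

# Toy stand-ins for the two models:
calls = {"n": 0}
def drafter(prompt):
    calls["n"] += 1
    return f"draft-{calls['n']}"

def auditor(prompt, draft):
    if draft == "draft-1":
        return {"approved": False, "critique": "cite your sources"}
    return {"approved": True, "critique": ""}

result = draft_then_audit("Summarize Q3 findings", drafter, auditor)
# The first draft is rejected, the revised second draft is approved.
```

Separating the roles matters because an auditor from a different model family is less likely to share the drafter's blind spots, which is presumably the accuracy argument behind the hybrid design.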

GUIDE: A New Benchmark Reveals AI's Struggle to Understand User Intent in GUI Software

Researchers introduce GUIDE, a benchmark for evaluating AI's ability to understand user behavior and intent in open-ended GUI tasks. Across 10 software applications, state-of-the-art models struggled, highlighting a critical gap between automation and true collaborative assistance.

74% relevant

Study of 280,000 Samples Shows AI Detectors Fail on Short Coursework and STEM Writing, Flagging Real Student Work

A comprehensive study testing 13 AI detectors on 280,000+ samples found they perform unreliably, especially on short assignments and STEM writing, where real student work is often flagged as AI-generated due to formulaic language.

87% relevant

Anthropic Survey of 80,508 Users Reveals AI's Dual Perception: Hope for Work & Growth, Fear of Unreliability & Job Loss

Anthropic's global study of 80,508 users finds people simultaneously hold hope and fear about AI. Top hopes center on work improvement and personal growth, while top concerns are unreliability, job loss, and reduced autonomy.

87% relevant

CATCHES Launches Generative AI with Physics-Based Sizing Technology for Fashion E-Commerce

CATCHES has launched a generative AI platform for fashion e-commerce featuring physics-based sizing technology. The launch is in partnership with luxury brand AMIRI and is powered by NVIDIA's AI infrastructure. This directly targets a core pain point in online apparel retail: fit uncertainty and high return rates.

95% relevant

Install This Claude Code Skill to Remove AI Tells from Your Documentation

The Humanizer skill rewrites Claude-generated text to sound more natural by removing common AI patterns, making your docs and comments more authentic.

90% relevant

The Unlearning Illusion: New Research Exposes Critical Flaws in AI Memory Removal

Researchers reveal that current methods for making AI models 'forget' information are surprisingly fragile. A new dynamic testing framework shows that simple query modifications can recover supposedly erased knowledge, exposing significant safety and compliance risks.

95% relevant
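The dynamic-testing idea above can be illustrated with a toy probe: query the "unlearned" model with paraphrases of the original question and check whether the supposedly erased fact resurfaces. Everything below, including the toy model, is hypothetical; the paper's framework is surely far more systematic.

```python
def probe_unlearning(answer_fn, forgotten_fact, paraphrases):
    """Return the paraphrased queries under which the erased fact leaks back out.
    An empty result is necessary (not sufficient) evidence of real unlearning."""
    return [q for q in paraphrases if forgotten_fact.lower() in answer_fn(q).lower()]

def toy_model(query):
    # Toy model that refuses under the original phrasing but leaks under a paraphrase,
    # mimicking the fragility the study describes.
    if "birth year" in query:
        return "I don't have that information."
    return "He was born in 1952."

leaks = probe_unlearning(toy_model, "1952", [
    "What is the subject's birth year?",
    "In which year was the subject born?",
])
# The second, reworded query recovers the 'forgotten' fact.
```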

Financial AI Audit Test Reveals LLMs Struggle with Complex Rule-Based Reasoning

Researchers introduce FinRule-Bench, a new benchmark testing how well large language models can audit financial statements against accounting principles. The benchmark reveals models perform well on simple rule verification but struggle with complex multi-violation diagnosis.

79% relevant
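The kind of rule verification the benchmark tests on its simple end can be sketched as checking a statement against a list of named predicates; the rules and field names below are toy examples, not FinRule-Bench's actual rule set.

```python
def audit_statement(stmt, rules):
    """Return the names of the accounting rules the statement violates.
    Illustrative single-rule checks; the hard multi-violation diagnosis
    the benchmark probes is exactly what this sketch does not capture."""
    return [name for name, check in rules if not check(stmt)]

rules = [
    # The accounting identity: assets = liabilities + equity.
    ("balance_sheet_balances", lambda s: s["assets"] == s["liabilities"] + s["equity"]),
    ("revenue_nonnegative", lambda s: s["revenue"] >= 0),
]
violations = audit_statement(
    {"assets": 100, "liabilities": 60, "equity": 30, "revenue": 10}, rules
)
# 60 + 30 != 100, so the balance-sheet rule is flagged.
```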

Beyond Chain-of-Thought: The Next Frontier in AI Reasoning

New research reveals a fundamental trade-off in AI reasoning between explicit step-by-step thinking and implicit knowledge retrieval. This discovery challenges conventional prompting strategies and suggests more nuanced approaches to unlocking AI's reasoning capabilities.

87% relevant

Amazon's AI Agent Incident Highlights Critical Risks of Unsupervised Automation in Retail

Amazon's retail website suffered multiple high-severity outages linked to an engineer acting on inaccurate advice from an AI agent that sourced information from an outdated internal wiki. This incident underscores the operational risks of deploying autonomous AI agents without proper human oversight and data governance in critical retail systems.

95% relevant

AI Researchers Solve Critical LLM Confidence Problem with Novel Decoupling Technique

Researchers have identified and solved a fundamental conflict in how large language models learn reasoning versus confidence calibration. Their new DCPO framework preserves reasoning accuracy while dramatically reducing overconfidence in incorrect answers, addressing a major reliability concern for AI deployment.

75% relevant