gentic.news — AI News Intelligence Platform

quality control

30 articles about quality control in AI news

How to Install claude-flow MCP and 3 Skills That Transform Claude Code

A production team's setup reveals claude-flow MCP with hierarchical-mesh topology and three essential skills that add structure, parallelism, and quality control.

95% relevant

Wharton Study Finds 'AI Writes, Humans Review' Model Failing in Real Business Contexts

New Wharton research reveals the 'AI writes, humans review' workflow is breaking down in practice, with human reviewers struggling to effectively evaluate AI-generated content. The study suggests current review processes may be insufficient for quality control.

85% relevant

GPT-5.4 LLM Choice Drastically Impacts GPT-ImageGen-2 Output Quality

The quality of images generated by GPT-ImageGen-2 is heavily dependent on the underlying LLM used for reasoning. GPT-5.4 'Thinking' and 'Pro' models produce superior outputs, especially for complex concepts, a non-intuitive finding not documented by OpenAI.

85% relevant

Google Launches Gemini 3.1 Flash TTS with Prompt-Controlled Speech

Google has launched Gemini 3.1 Flash TTS, a text-to-speech model featuring prompt-based voice control and support for over 70 languages. This release expands Google's multimodal AI offerings directly to developers.

93% relevant

Google Launches MCP Server for Chrome DevTools, Enabling AI Browser Control

Google released a Model Context Protocol server that lets AI coding agents directly control Chrome DevTools. This enables automated browser debugging, network request inspection, and performance tracing through tools like Cursor and VS Code.

100% relevant

SteerViT Enables Natural Language Control of Vision Transformer Attention Maps

Researchers introduced SteerViT, a method that modifies Vision Transformers to accept natural language instructions, enabling users to steer the model's visual attention toward specific objects or concepts while maintaining representation quality.

85% relevant

Dreamina Seedance 2.0 Early Access Review: AI Video Tool Adds Scene Direction Controls

An early tester reports that Dreamina Seedance 2.0 provides unprecedented control over AI-generated video, including camera motion, pacing, and visual consistency. The tool shifts from simple clip generation toward AI-native scene direction.

85% relevant

LLM-Driven Heuristic Synthesis for Industrial Process Control: Lessons from Hot Steel Rolling

Researchers propose a framework in which an LLM iteratively writes and refines human-readable Python controllers for industrial processes, using feedback from a physics simulator. The method generates auditable, verifiable code and employs a principled budget strategy, eliminating the need for problem-specific tuning.

70% relevant
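The write-refine loop described above can be sketched in a few lines. This is an illustration, not the paper's implementation: `call_llm` and `simulate` are hypothetical stand-ins for the LLM and the physics simulator, and the paper's principled budget strategy is reduced here to a fixed iteration cap.

```python
def synthesize_controller(task_spec, call_llm, simulate, budget=10):
    """Iteratively ask an LLM for a Python controller, score it in a
    simulator, and feed the score back until the budget is spent."""
    best_code, best_score = None, float("-inf")
    feedback = "No previous attempt."
    for _ in range(budget):
        code = call_llm(task_spec, feedback)   # human-readable controller source
        score, trace = simulate(code)          # physics-simulator rollout
        if score > best_score:
            best_code, best_score = code, score
        feedback = f"Previous score: {score:.3f}. Trace summary: {trace}"
    return best_code, best_score
```

Because the artifact is plain Python source, each candidate controller can be read, diffed, and audited by engineers, which is the property the framework emphasizes.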

Fish Audio S2 Enables Word-Level Speech Control with Positional Tags, Beats GPT-4o in Human Preference Tests

Fish Audio S2 introduces a 100% open-source TTS model that uses inline positional tags for word-level vocal control, achieving 8/10 wins against GPT-4o and Gemini in human preference tests while generating audio nearly 5x faster than real-time.

95% relevant

The Reasoning Transparency Gap: AI Models Can't Control Their Own Thought Processes

New research reveals AI models can control their final answers 62% of the time but only control their reasoning chains 3% of the time, exposing fundamental limitations in how these systems monitor their own thought processes.

85% relevant

Agentic Control Center for Data Product Optimization: A Framework for Continuous AI-Driven Data Refinement

Researchers propose a system using specialized AI agents to automate the improvement of data products through a continuous optimization loop. It surfaces questions, monitors quality metrics, and incorporates human oversight to transform raw data into actionable assets.

75% relevant

Kling AI 3.0 Arrives with Breakthrough Motion Control for Video Generation

Kling AI has launched version 3.0 featuring advanced motion control capabilities, representing a significant leap in AI-generated video technology. The update promises more precise manipulation of movement within AI-created videos.

85% relevant

Beyond the Chat: How Adaptive Memory Control Unlocks Scalable, Trustworthy AI Clienteling

A new framework, Adaptive Memory Admission Control (A-MAC), solves a critical flaw in AI agents: uncontrolled memory bloat. For luxury retail, this enables scalable, long-term clienteling assistants that remember what matters—client preferences, purchase history, and brand values—while forgetting hallucinations and noise.

60% relevant

New AI Framework Prevents Image Generators from Copying Training Data Without Sacrificing Quality

Researchers have developed RADS, a novel inference-time framework that prevents text-to-image diffusion models from memorizing and regurgitating training data. Using reachability analysis and constrained reinforcement learning, RADS steers generation away from memorized content while maintaining image quality and prompt alignment.

75% relevant

EasyClaw AI Agent Revolutionizes Desktop Automation: Human-Like Control Without Coding

EasyClaw, a new AI agent, can control desktop computers like a human—clicking, typing, and automating tasks across Mac and Windows without requiring API keys, Python, or Docker. This breakthrough promises to democratize automation for non-technical users.

85% relevant

Aura: How Semantic Version Control Could Transform AI-Assisted Software Development

Aura introduces semantic version control for AI coding agents by tracking abstract syntax trees instead of text, enabling precise rollbacks and reducing LLM token costs by 95%. This open-source tool addresses fundamental challenges in AI-generated code management.

75% relevant
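The core idea of tracking abstract syntax trees instead of text can be shown with Python's standard `ast` module. This is a minimal sketch of the principle, not Aura's implementation: two sources that differ only in formatting or comments produce identical ASTs, so an AST-level history never records such churn.

```python
import ast

def semantically_equal(src_a: str, src_b: str) -> bool:
    """Compare two Python sources by their abstract syntax trees,
    ignoring formatting, comments, and redundant parentheses."""
    return ast.dump(ast.parse(src_a)) == ast.dump(ast.parse(src_b))

# A whitespace/comment-only edit is invisible at the AST level:
before = "def f(x):\n    return x + 1\n"
after = "def f(x):  # add one\n    return (x + 1)\n"
```

Here `semantically_equal(before, after)` is true, while a real behavioral change (say, `x + 2`) is not, which is what makes AST-level diffs a plausible basis for precise rollbacks of AI-generated edits.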

DeepSeek's Blackwell Training Exposes Critical Gaps in US Chip Export Controls

Chinese AI startup DeepSeek reportedly trained its latest model on Nvidia's restricted Blackwell chips, challenging US export controls. The development reveals significant loopholes in semiconductor restrictions amid escalating AI competition.

90% relevant

The Cinematic AI Revolution: How Sora 2 Pro, Veo 3.1, and Kling 2.6 Are Democratizing Hollywood-Quality Video Production

OpenAI's Sora 2 Pro, Google's Veo 3.1, and Kling 2.6 represent a quantum leap in AI video generation, transforming text and images into cinematic-quality videos in minutes. These models offer Hollywood-level production values with smooth motion and clean lip sync, available through subscription models without per-video fees.

85% relevant

Swiss AI Lab Ships Pixel-Based Agents That Control Real Phones

A Swiss AI lab has developed agents that interact with smartphones by processing screen pixels and simulating touch, eliminating the need for app-specific APIs or integrations. This approach mirrors human interaction and could generalize across any app interface.

93% relevant

From Prompting to Control Planes: A Self-Hosted Architecture for AI System Observability

A technical architect details a custom-built, self-hosted observability stack for multi-agent AI systems using n8n, PostgreSQL, and OpenRouter. This addresses the critical need for visibility into execution, failures, and costs in complex AI workflows.

88% relevant

PartRAG Revolutionizes 3D Generation with Retrieval-Augmented Part-Level Control

Researchers introduce PartRAG, a breakthrough framework that combines retrieval-augmented generation with diffusion transformers for precise part-level 3D creation and editing from single images. The system achieves superior geometric accuracy while enabling localized modifications without regenerating entire objects.

70% relevant

Google's Lyria3: The Next Evolution in AI-Generated Music Composition

Google has unveiled Lyria3, its latest AI music generation model that promises unprecedented audio quality and creative control. This advancement represents a significant leap in musical AI capabilities with potential implications for creators and the music industry.

85% relevant

Agent Harnessing: The Infrastructure That Makes AI Agents Work

A detailed technical guide argues that the model is not the hard part of building AI agents. The six-component harness — context management, memory, tools, control flow, verification, and coordination — is what separates production-grade agents from those that fail silently.

88% relevant
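A skeleton makes the harness argument concrete. The sketch below is hypothetical and follows the article's component names only; the sixth component, multi-agent coordination, is omitted for brevity, and `model`, `tools`, and `verifier` are caller-supplied placeholders.

```python
class AgentHarness:
    """Minimal sketch of a harness wrapping a model with the
    infrastructure components named above."""

    def __init__(self, model, tools, verifier, max_steps=8):
        self.model = model          # the LLM is just one plug-in
        self.tools = tools          # tools: registry of name -> callable
        self.verifier = verifier    # verification: checks each tool result
        self.memory = []            # memory: persists across steps
        self.max_steps = max_steps  # control flow: a bounded loop

    def run(self, task):
        context = [task]            # context management: what the model sees
        for _ in range(self.max_steps):
            action, arg = self.model(context, self.memory)
            if action == "finish":
                return arg
            result = self.tools[action](arg)
            if not self.verifier(action, result):
                result = f"verification failed for {action}"
            self.memory.append((action, result))
            context.append(result)
        raise RuntimeError("step budget exhausted")  # fail loudly, not silently
```

The point of the skeleton: every failure path is explicit, which is what distinguishes a production-grade agent from one that fails silently.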

DeMellier Grows by Leaning Into Craftsmanship and Alternative Materials

DeMellier founder Mireia Llusia-Lindh explains how focusing on craftsmanship, alternative materials, and controlled growth is driving demand, with Lyst searches up 97% YoY. The strategy echoes broader shifts at Kering and Bottega Veneta as the luxury sector loses 70 million customers due to value concerns.

73% relevant

NVIDIA Open-Sources Motion Diffusion Model for Humanoid Robots

NVIDIA open-sourced Kimono, a motion diffusion model for humanoid robots, trained on 700 hours of motion capture data. It generates 3D human and robot motions from text prompts, supports keyframe and end-effector control, and runs on Unitree G1.

85% relevant

RAG vs Fine-Tuning: A Practical Guide for Choosing the Right LLM

The article provides a clear, decision-oriented comparison between Retrieval-Augmented Generation (RAG) and fine-tuning for customizing LLMs in production, helping practitioners choose the right approach based on data freshness, cost, and output control needs.

100% relevant
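The RAG side of that comparison fits in a few lines, which is part of its appeal: fresh data reaches the model through the prompt, with no retraining. This is a toy sketch only; real systems use learned embeddings and a vector store, which are mocked here with bag-of-words overlap.

```python
def score(query: str, doc: str) -> int:
    """Toy relevance score: shared-word count (stands in for
    embedding similarity in a real retriever)."""
    return len(set(query.lower().split()) & set(doc.lower().split()))

def build_prompt(query: str, corpus: list, k: int = 2) -> str:
    """Retrieve the top-k documents and prepend them to the prompt."""
    top = sorted(corpus, key=lambda d: score(query, d), reverse=True)[:k]
    context = "\n".join(f"- {d}" for d in top)
    return f"Answer using this context:\n{context}\n\nQuestion: {query}"
```

Fine-tuning, by contrast, bakes knowledge into weights: it offers tighter output control and lower per-query cost, but updating it means retraining rather than swapping documents in the corpus.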

A Reference Architecture for Agentic Hybrid Retrieval in Dataset Search

A new research paper presents a reference architecture for 'agentic hybrid retrieval' that orchestrates BM25, dense embeddings, and LLM agents to handle underspecified queries against sparse metadata. It introduces offline metadata augmentation and analyzes two architectural styles for quality attributes like governance and performance.

84% relevant

Your AI Agent Is Only as Good as Its Harness — Here’s What That Means

An article from Towards AI emphasizes that the reliability and safety of an AI agent depend more on its controlling 'harness'—the system of protocols, tools, and observability layers—than on the underlying model. This concept is reportedly worth $2 billion but remains poorly understood by many developers.

100% relevant

Sabi Cap: 100k-Sensor EEG Hat Decodes Internal Speech at 30 WPM

Sabi released the Sabi Cap, a wearable EEG beanie with 70k-100k biosensors and a brain foundation model trained on 100k hours of neural data. It decodes internal speech to text at ~30 WPM and enables cursor control via intention.

97% relevant

Pinterest Details 'Request-Level Deduplication' to Scale Its Foundation Model

Pinterest's engineering team published a detailed technical breakdown of 'request-level deduplication'—a family of techniques that eliminate redundant processing of user data across thousands of candidate items in their recommendation system. This approach was critical to scaling their Foundation Model by 100x while controlling infrastructure costs.

98% relevant
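The underlying idea is simple to sketch: within one ranking request, user-side features are identical for every candidate item, so they should be computed once and reused rather than recomputed per candidate. The function names below are illustrative, not Pinterest's.

```python
def dot(a, b):
    """Toy relevance score between two feature vectors."""
    return sum(x * y for x, y in zip(a, b))

def rank_request(user_id, candidates, user_features, item_features):
    """Rank candidates for one request, computing user-side
    features exactly once instead of once per candidate."""
    cached_user = user_features(user_id)  # the deduplicated work
    return sorted(
        candidates,
        key=lambda item: dot(cached_user, item_features(item)),
        reverse=True,
    )
```

With thousands of candidates per request, hoisting the shared computation turns O(candidates) redundant feature work into O(1) per request, which is the kind of saving that makes a 100x model scale-up affordable.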