gentic.news — AI News Intelligence Platform

compression

30 articles about compression in AI news

TACO Framework Cuts Agent Token Overhead 10% via Self-Evolving Compression

Researchers introduced TACO, a framework that enables terminal agents to automatically discover and refine context compression rules from their own interaction trajectories. This approach cuts token overhead by approximately 10% on benchmarks like TerminalBench and SWE-Bench Lite while preserving task accuracy.

87% relevant
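TACO's actual discovered rules aren't listed in the summary. As an illustration of the kind of trajectory-derived compression rule such a system might converge on for terminal output, a minimal sketch (both rules below are hypothetical, not from the paper):

```python
import re

# Hypothetical compression rules of the sort a terminal agent might learn
# from its own trajectories: collapse low-information spans of output.
RULES = [
    # Truncate long listings: keep the first 3 lines of any run of 8+.
    (re.compile(r"((?:^.*\n){3})(?:^.*\n){5,}", re.M), r"\1... [truncated]\n"),
    # Collapse runs of blank lines into a single blank line.
    (re.compile(r"\n{3,}"), "\n\n"),
]

def compress_context(text: str) -> str:
    """Apply each discovered rule in order to shrink agent context."""
    for pattern, replacement in RULES:
        text = pattern.sub(replacement, text)
    return text
```

The self-evolving part, discovering and refining such rules from interaction logs, is the paper's contribution; the sketch only shows what applying the resulting rules looks like.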

Apple Silicon Achieves Near-Lossless LLM Compression at 3.5 Bits-Per-Weight, Claims Independent Tester

Independent AI researcher Matthew Weinbach reports achieving near-lossless compression of large language models on Apple Silicon, storing models at 3.5 bits-per-weight while maintaining within 1-2% quality of bf16 precision.

87% relevant

Prompt Compression in Production Task Orchestration: A Pre-Registered Randomized Trial

A new arXiv study shows that aggressive prompt compression can increase total AI inference costs by causing longer outputs, while moderate compression (50% retention) reduces costs by 28%. The findings challenge the 'compress more' heuristic for production AI systems.

76% relevant
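The study's core observation is that total cost is a function of output tokens as well as input tokens, and output tokens are typically priced higher. A toy cost model makes the failure mode concrete (all prices and token counts below are invented for illustration, not taken from the paper):

```python
# Hypothetical per-million-token prices; output priced above input,
# as is typical for commercial LLM APIs.
P_IN, P_OUT = 1.0, 4.0

def cost(input_tokens: int, output_tokens: int) -> float:
    """Dollar cost of one request under the toy pricing above."""
    return (input_tokens * P_IN + output_tokens * P_OUT) / 1e6

baseline   = cost(10_000, 500)    # uncompressed prompt
moderate   = cost(5_000, 500)     # 50% retention, output unchanged
aggressive = cost(2_000, 3_000)   # heavy compression, model rambles longer
```

Under these made-up numbers, `aggressive` ends up costing more than `baseline` despite the 5x smaller prompt, which is the shape of the effect the trial reports.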

Google Research's TurboQuant Achieves 6x LLM Compression Without Accuracy Loss, 8x Speedup on H100

Google Research introduced TurboQuant, a novel compression algorithm that shrinks LLM memory footprint by 6x without retraining or accuracy drop. Its 4-bit version delivers 8x faster processing on H100 GPUs while matching full-precision quality.

95% relevant
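TurboQuant's actual algorithm isn't described in the summary. For orientation, here is a generic symmetric per-row int4 quantizer, the textbook baseline that 4-bit methods like this refine:

```python
def quantize_int4(row):
    """Map floats to integers in [-8, 7] with one scale per row."""
    scale = max(abs(v) for v in row) / 7 or 1.0  # avoid zero scale
    q = [max(-8, min(7, round(v / scale))) for v in row]
    return q, scale

def dequantize(q, scale):
    """Recover approximate floats from int4 codes and the row scale."""
    return [v * scale for v in q]
```

Storing 4-bit codes plus one scale per row is where the memory reduction comes from; the hard part, which this sketch omits, is doing it without the accuracy drop.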

IAT: Instance-As-Token Compression for Historical User Sequence Modeling

Researchers propose Instance-As-Token (IAT), which compresses all features of each historical interaction into a unified embedding token, then applies standard sequence modeling. This approach outperforms state-of-the-art methods and has been deployed in e-commerce advertising, shopping mall marketing, and live-streaming e-commerce with substantial business metric improvements.

93% relevant

Tamp Compression Proxy Cuts Claude Code Token Usage 52% — Zero Code Changes

Tamp is a local proxy that automatically compresses Claude Code's API calls, cutting token usage roughly in half with no changes to your workflow.

87% relevant

Structured Distillation for Personalized Agent Memory: 11x Compression with Minimal Recall Loss

New research introduces structured distillation to compress AI agent conversation history by 11x (371→38 tokens/exchange) while preserving 96% retrieval effectiveness. This enables storing thousands of exchanges in a single prompt while maintaining verbatim source access.

86% relevant
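The paper's schema isn't given in the summary. A minimal sketch of the general pattern, distilling each exchange to a compact structured record while keeping an id that links back to the verbatim text (field names and the crude first-sentence summarizer are hypothetical):

```python
VERBATIM_LOG = {}  # full exchanges, kept outside the prompt

def distill(exchange_id, user_msg, agent_msg):
    """Reduce one exchange to a compact record; store the original aside."""
    VERBATIM_LOG[exchange_id] = (user_msg, agent_msg)
    return {
        "id": exchange_id,
        "intent": user_msg.split(".")[0][:60],    # stand-in summarizer
        "outcome": agent_msg.split(".")[0][:60],
    }

def recall(record):
    """Verbatim source access: follow the id back to the full exchange."""
    return VERBATIM_LOG[record["id"]]
```

Only the compact records live in the prompt; the id-based `recall` path is what lets the agent retrieve exact wording when a distilled record isn't enough.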

CompACT AI Tokenizer Revolutionizes Robotic Planning with 8-Token Compression

Researchers have developed CompACT, a novel AI tokenizer that compresses visual observations into just 8 tokens for robotic planning systems. This enables 40x faster planning while maintaining competitive accuracy, a potentially significant gain for real-time robotic control applications.

85% relevant

NVIDIA's Memory Compression Breakthrough: How Forgetting Makes LLMs Smarter

NVIDIA researchers have developed Dynamic Memory Sparsification, a technique that compresses LLM working memory by 8× while improving reasoning capabilities. This counterintuitive approach addresses the critical KV cache bottleneck in long-context AI applications.

85% relevant
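The KV cache bottleneck the technique targets is easy to quantify with a back-of-envelope sizing helper (the architecture defaults below are illustrative, not NVIDIA's):

```python
def kv_cache_bytes(tokens, layers=32, kv_heads=8, head_dim=128,
                   bytes_per_elem=2):
    """KV cache size: 2 tensors (K and V) per layer, per KV head,
    per head dimension, at the given precision, per cached token."""
    return 2 * layers * kv_heads * head_dim * bytes_per_elem * tokens

full   = kv_cache_bytes(128_000)  # a 128k-token context
sparse = full // 8                # the reported 8x sparsification
```

With these defaults the cache costs 131,072 bytes per token, so a 128k-token context needs roughly 16.8 GB at full size, which is why an 8x reduction matters for long-context serving.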

Pinterest's Request-Level Deduplication

Pinterest's engineering blog details 'request-level deduplication,' a critical efficiency technique for modern recommendation systems. By eliminating redundant processing of massive user sequences, they achieve 10-50x storage compression and significant training speedups, while solving novel training challenges like batch correlation.

94% relevant

Google's AI Infrastructure Strategy: What Retail Leaders Should Watch in 2026

Google's evolving AI infrastructure and compute strategy, including data center investments and model compression techniques, will directly impact how retail brands deploy and scale AI applications by 2026. The company's focus on efficiency and real-time capabilities signals a shift toward more accessible, powerful retail AI tools.

80% relevant

arXiv Survey Maps KV Cache Optimization Landscape: 5 Strategies for Million-Token LLM Inference

A comprehensive arXiv review categorizes five principal KV cache optimization techniques—eviction, compression, hybrid memory, novel attention, and combinations—to address the linear memory scaling bottleneck in long-context LLM inference. The analysis finds no single dominant solution, with optimal strategy depending on context length, hardware, and workload.

95% relevant
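Of the five strategies, eviction is the simplest to sketch. Below is a StreamingLLM-style sliding window that keeps a few attention-sink entries plus the most recent N (parameters are illustrative, not the survey's):

```python
from collections import deque

class SlidingWindowKV:
    """Evict all but a few initial 'sink' entries and a recent window."""
    def __init__(self, sink=4, window=8):
        self.sink = []
        self.sink_size = sink
        self.recent = deque(maxlen=window)

    def append(self, kv_entry):
        if len(self.sink) < self.sink_size:
            self.sink.append(kv_entry)
        else:
            self.recent.append(kv_entry)  # deque drops the oldest itself

    def cache(self):
        return self.sink + list(self.recent)
```

This caps memory at `sink + window` entries regardless of context length, which is exactly the trade-off the survey weighs against compression and hybrid-memory approaches.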

LittleBit-2: How Geometric Alignment Unlocks Ultra-Efficient AI Below 1-Bit

Researchers have developed LittleBit-2, a framework that achieves state-of-the-art performance in sub-1-bit LLM compression by solving latent geometry misalignment. The method uses internal latent rotation and joint iterative quantization to align model parameters with binary representations without inference overhead.

75% relevant

Sam Altman Predicts 'One-Person Billion-Dollar Companies' as AI Reshapes Business Scale

OpenAI CEO Sam Altman predicts the emergence of 'one-person billion-dollar companies' powered by AI, citing a specific example from a private CEO discussion group. This follows his earlier forecast of 10-person billion-dollar firms, suggesting AI is accelerating the compression of business scale.

87% relevant

Recursive Multi-Agent Systems Top Hugging Face Papers; Eywa Bridges LLMs and Scientific Models

Recursive Multi-Agent Systems leads Hugging Face Papers with 242 upvotes. Eywa and OneManCompany signal a move from chat-based to structural agent collaboration.

89% relevant

RoundPipe: Full Fine-Tune 32B Models on a Single 24GB GPU

RoundPipe fine-tunes 32B models on a single 24GB GPU with 1.5-2.2× speedups via round-robin pipeline dispatch.

85% relevant

Claude Security Public Beta Launches in Claude Code on Web

Anthropic launched Claude Security in public beta for Claude Code on web, letting developers validate and fix vulnerabilities without leaving the editor.

100% relevant

Embedding distance predicts VLM typographic attack success (r=-0.93)

A new study shows that embedding distance between image text and harmful prompt strongly predicts attack success rate (r=-0.71 to -0.93). The researchers introduce CWA-SSA optimization to recover readability and bypass safety alignment without model access.

82% relevant

Meta Tuna-2: Encoder-Free Multimodal Model Beats VAE-Based Rivals

Meta released Tuna-2, an encoder-free multimodal model that understands and generates images from raw pixels. It beats encoder-based models on fine-grained perception benchmarks, challenging the dominant VAE/vision encoder paradigm.

90% relevant

DeepMind’s New VAE Matches Stable Diffusion at 10x Resolution

DeepMind’s new VAE produces 1024x1024 images with quality comparable to Stable Diffusion’s 256x256 output, potentially replacing the standard VAE in generative pipelines. This cuts the token count by 10x, enabling faster generation and lower memory usage.

85% relevant

Paper Details Full-Stack MFM Acceleration: Quant, Spec Decode, HW Co-Design

A research paper details a full-stack approach for accelerating multimodal foundation models, combining hierarchy-aware mixed-precision quantization, structural pruning, speculative decoding, model cascading, and a specialized hardware accelerator. Demonstrated on medical and code generation tasks.

72% relevant

Agent Harnessing: The Infrastructure That Makes AI Agents Work

A detailed technical guide argues that the model is not the hard part of building AI agents. The six-component harness — context management, memory, tools, control flow, verification, and coordination — is what separates production-grade agents from those that fail silently.

88% relevant

DeepSeek V4-Pro: 1.6T parameters, open weights, undercuts rivals 10x

DeepSeek unveiled V4-Pro and V4-Flash, its largest open-weight models with up to 1.6 trillion parameters and a 1M-token context window. The new hybrid attention architecture cuts compute for long contexts by 73–90%, enabling prices far below OpenAI, Google, and Anthropic.

100% relevant

New AI Model Decomposes User Behavior into Multiple Spatiotemporal States

Researchers propose ADS-POI, which represents users with multiple parallel latent sub-states evolving at different spatiotemporal scales. This outperforms state-of-the-art on Foursquare and Gowalla benchmarks, offering more robust next-POI recommendations.

95% relevant

Layers on Layers — How You Can Improve Your Recommendation Systems

An IBM article critiques monolithic recommendation engines for trying to do too much with one score. It proposes a layered architecture—candidate generation, ranking, and business logic—to improve performance and adaptability. This is a direct, practical framework for engineering teams.

82% relevant

Catching Drift Before It Catches You

The author details implementing the open-source Evidently AI library to monitor a Kafka-powered movie recommender for data drift. This is a hands-on guide to a fundamental MLOps task for maintaining live AI systems.

96% relevant

Prefill-as-a-Service Paper Claims to Decouple LLM Inference Bottleneck

A research paper proposes a 'Prefill-as-a-Service' architecture to separate the heavy prefill computation from the lighter decoding phase in LLM inference. This could enable new deployment models where resource-constrained devices handle only the decoding step.

85% relevant

Akshay Pachaar Inverts LLM Agent Architecture with 'Harness' Design

AI engineer Akshay Pachaar outlined a novel 'harness' architecture for LLM agents that externalizes intelligence into memory, skills, and protocols. He is building a minimal, didactic open-source implementation of this design.

89% relevant

Claude Code Reverse-Engineered: 98.4% of Codebase is Operational Harness

A reverse-engineering analysis of Claude Code reveals only 1.6% of its codebase is AI decision logic, with the rest being operational infrastructure. This challenges current agent design paradigms by prioritizing a robust deterministic harness over complex model routing.

100% relevant

A Practical Guide to Building Real-Time Recommendation Systems

This article provides a practical overview of building real-time recommendation systems, covering core components like data ingestion, feature stores, and model serving. It matters because real-time personalization is becoming a baseline expectation in digital commerce.

78% relevant