Token optimization
30 articles about token optimization in AI news
arXiv Survey Maps KV Cache Optimization Landscape: 5 Strategies for Million-Token LLM Inference
A comprehensive arXiv review categorizes five principal KV cache optimization techniques—eviction, compression, hybrid memory, novel attention, and combinations—to address the linear memory scaling bottleneck in long-context LLM inference. The analysis finds no single dominant solution, with optimal strategy depending on context length, hardware, and workload.
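Of the five families the survey names, eviction is the simplest to illustrate. A common pattern (seen in sliding-window schemes with "attention sinks") keeps the first few cache entries plus a recent window and drops the middle; this is a generic sketch of that eviction family, not any single paper's method:

```python
def evict_kv_cache(cache, num_sinks=4, window=8):
    """Sliding-window eviction with attention sinks: keep the first
    `num_sinks` entries (the stabilizing 'sink' tokens) plus the most
    recent `window` entries, dropping everything in between so memory
    stays constant instead of growing linearly with context length."""
    if len(cache) <= num_sinks + window:
        return list(cache)
    return list(cache[:num_sinks]) + list(cache[-window:])

# Each integer stands in for the (key, value) pair at one position.
cache = list(range(20))
print(evict_kv_cache(cache, num_sinks=4, window=8))
# → [0, 1, 2, 3, 12, 13, 14, 15, 16, 17, 18, 19]
```

The survey's point is that this trades recall of mid-context tokens for bounded memory, which is why no one strategy wins across workloads.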
Meta's REFRAG: The Optimization Breakthrough That Could Revolutionize RAG Systems
Meta's REFRAG introduces a novel optimization layer for RAG architectures that dramatically reduces computational overhead by selectively expanding compressed embeddings instead of tokenizing all retrieved chunks. This approach could make large-scale RAG deployments significantly more efficient and cost-effective.
Google's Gemma 4B Model Runs on Nintendo Switch at 1.5 Tokens/Second
A developer successfully ran Google's 4-billion parameter Gemma language model on a Nintendo Switch, achieving 1.5 tokens/second inference. This demonstrates the increasing feasibility of running small LLMs on consumer-grade edge hardware.
Code-Review-Graph Cuts Claude Token Usage 8.2x with Local Knowledge Graph
A developer released 'code-review-graph,' an open-source tool that uses Tree-sitter to build a persistent structural map of a codebase. This allows Claude to read only relevant files, cutting average token usage by 8.2x across six real repositories.
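The core idea is a cheap per-file index of what each file defines and references, so the model can be handed only relevant files. code-review-graph builds this with Tree-sitter across languages; as a minimal single-language stand-in, the same map can be sketched with Python's stdlib `ast`:

```python
import ast

def symbol_map(source, filename):
    """Build a lightweight structural map of one Python file: which
    functions/classes it defines and which names it references.
    A tool like code-review-graph builds a richer, multi-language
    version of this with Tree-sitter, then uses it to pick only the
    files an LLM actually needs to read."""
    tree = ast.parse(source, filename)
    defs, refs = set(), set()
    for node in ast.walk(tree):
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef, ast.ClassDef)):
            defs.add(node.name)
        elif isinstance(node, ast.Name):
            refs.add(node.id)
    return {"file": filename, "defines": sorted(defs), "references": sorted(refs)}

src = "def parse(x):\n    return transform(x)\n"
print(symbol_map(src, "parser.py"))
```

Joining these maps across files (file A references what file B defines) yields the dependency graph that lets the agent skip irrelevant code entirely.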
Gemma 4 26B A4B Hits 45.7 tokens/sec Decode Speed on MacBook Air via MLX Community
A community benchmark shows the Gemma 4 26B A4B model running at 45.7 tokens/sec decode speed on a MacBook Air using the MLX framework. This highlights rapid progress in efficient local deployment of mid-size language models on consumer Apple Silicon.
CLAUDE.md Promises 63% Reduction in Claude Output Tokens with Drop-in Prompt File
A new prompt engineering file called CLAUDE.md claims to reduce Claude's output token usage by 63% without code changes. The drop-in file aims to make Claude's code generation more efficient by structuring its responses.
DACT: A New Framework for Drift-Aware Continual Tokenization in Generative Recommender Systems
Researchers propose DACT, a framework to adapt generative recommender systems to evolving user behavior and new items without costly full retraining. It identifies 'drifting' items and selectively updates token sequences, balancing stability with plasticity. This addresses a core operational challenge for real-world, dynamic recommendation engines.
Fireworks AI Launches 'Fire Pass' with Kimi K2.5 Turbo at 250 Tokens/Second
Fireworks AI has launched a new 'Fire Pass' subscription offering access to Kimi K2.5 Turbo at speeds up to 250 tokens/second. The service includes a free trial followed by a $7 weekly subscription.
ReDiPrune: Training-Free Token Pruning Before Projection Boosts MLLM Efficiency 6x, Gains 2% Accuracy
Researchers propose ReDiPrune, a plug-and-play method that prunes visual tokens before the vision-language projector in multimodal LLMs. On EgoSchema with LLaVA-NeXT-Video-7B, it achieves a +2.0% accuracy gain while reducing computation by over 6× in TFLOPs.
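The pre-projection pruning pattern itself is simple: rank visual tokens by some saliency estimate and keep only the top fraction before they ever reach the projector, so the LLM never pays attention cost for the rest. ReDiPrune's actual scoring rule is in the paper; this sketch just shows the shape of the operation with an assumed per-token score:

```python
def prune_visual_tokens(tokens, scores, keep_ratio=0.25):
    """Keep only the top-scoring fraction of visual tokens *before*
    the vision-language projector. `scores` is any per-token saliency
    estimate (an assumption here); the kept tokens retain their
    original spatial order."""
    k = max(1, int(len(tokens) * keep_ratio))
    ranked = sorted(range(len(tokens)), key=lambda i: scores[i], reverse=True)
    keep = sorted(ranked[:k])  # restore original ordering
    return [tokens[i] for i in keep]

tokens = ["t0", "t1", "t2", "t3", "t4", "t5", "t6", "t7"]
scores = [0.1, 0.9, 0.2, 0.8, 0.05, 0.7, 0.3, 0.6]
print(prune_visual_tokens(tokens, scores, keep_ratio=0.5))
# → ['t1', 't3', 't5', 't7']
```

Because pruning happens before projection, the savings compound through every downstream layer, which is how a 4x token cut can become a 6x TFLOPs cut.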
Tamp Compression Proxy Cuts Claude Code Token Usage 52% — Zero Code Changes
Run a local proxy that automatically compresses Claude Code's API calls, cutting token usage in half without modifying your workflow.
Stop Claude Code's Web Fetches from Burning 700K Tokens on HTML Junk
A new MCP server, token-enhancer, strips scripts, nav bars, and ads from web pages before they hit Claude's context, cutting token waste by 90%+.
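The mechanism is plain HTML filtering before the text reaches the model. As a rough stand-in for what such a content-stripping server does (token-enhancer's internals aren't shown here), Python's stdlib `html.parser` can drop script, style, and navigation chrome while keeping visible text:

```python
from html.parser import HTMLParser

SKIP = {"script", "style", "nav", "header", "footer", "aside"}

class TextExtractor(HTMLParser):
    """Keep visible text; drop tags that are almost never useful to an
    LLM (scripts, styles, navigation chrome). Tracks nesting depth so
    text inside skipped tags is discarded too."""
    def __init__(self):
        super().__init__()
        self.depth = 0          # > 0 while inside a skipped tag
        self.chunks = []
    def handle_starttag(self, tag, attrs):
        if tag in SKIP:
            self.depth += 1
    def handle_endtag(self, tag):
        if tag in SKIP and self.depth:
            self.depth -= 1
    def handle_data(self, data):
        if self.depth == 0 and data.strip():
            self.chunks.append(data.strip())

def strip_page(html):
    p = TextExtractor()
    p.feed(html)
    return " ".join(p.chunks)

page = "<html><nav>Home | About</nav><script>var x=1;</script><p>Actual article text.</p></html>"
print(strip_page(page))  # → Actual article text.
```

On a real page, the markup, scripts, and chrome routinely outweigh the article body by 10x or more, which is where the claimed 90%+ savings come from.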
Stop Wasting Tokens in Your CLAUDE.md: The Layered Configuration System
Separate global, project, and file-type rules into different CLAUDE.md files to cut token waste and make Claude Code more effective.
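One plausible layout for this layering (paths are illustrative; Claude Code reads a user-level CLAUDE.md plus per-directory ones, so each scope only pays for the rules it needs):

```
~/.claude/CLAUDE.md           # global: personal style rules, applies everywhere
myproject/CLAUDE.md           # project: build commands, architecture notes
myproject/src/api/CLAUDE.md   # area-specific: API conventions only
```

Rules that would otherwise sit in one monolithic file (and be loaded into every conversation) are instead pulled in only when work touches the relevant directory.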
Graph Tokenization: A New Method to Apply Transformers to Graph Data
Researchers propose a framework that converts graph-structured data into sequences using reversible serialization and BPE tokenization. This enables standard Transformers like BERT to achieve state-of-the-art results on graph benchmarks, outperforming specialized graph models.
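"Reversible serialization" means the flat token sequence can be decoded back into the exact original graph, so no structure is lost before BPE sees it. A toy scheme makes the idea concrete (this is an illustration, not the paper's encoding, and it assumes node labels never equal the separator token):

```python
def serialize_graph(edges):
    """Reversibly flatten an edge list into a token sequence that a
    standard sequence tokenizer (e.g. BPE) could consume. '|' acts as
    an edge separator."""
    tokens = []
    for u, v in edges:
        tokens += [str(u), str(v), "|"]
    return tokens

def deserialize_graph(tokens):
    """Invert serialize_graph, recovering the original edge list."""
    edges, pair = [], []
    for t in tokens:
        if t == "|":
            edges.append((pair[0], pair[1]))
            pair = []
        else:
            pair.append(t)
    return edges

edges = [("a", "b"), ("b", "c")]
seq = serialize_graph(edges)
assert deserialize_graph(seq) == edges  # round-trip is lossless
```

Once the graph is a lossless token sequence, BPE can merge frequent substructures into single tokens, which is what lets an off-the-shelf Transformer like BERT operate on graph data at all.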
Decoding the First Token Fixation: How LLMs Develop Structural Attention Biases
New research reveals how large language models develop 'attention sinks'—disproportionate focus on the first input token—through a simple circuit mechanism that emerges early in training. This structural bias has significant implications for model interpretability and performance.
HyperTokens Break the Forgetting Cycle: A New Architecture for Continual Multimodal AI Learning
Researchers introduce HyperTokens, a transformer-based system that generates task-specific tokens on demand for continual video-language learning. This approach dramatically reduces catastrophic forgetting while maintaining fixed memory costs, enabling AI models to learn sequentially without losing previous knowledge.
CompACT AI Tokenizer Revolutionizes Robotic Planning with 8-Token Compression
Researchers have developed CompACT, a novel AI tokenizer that compresses visual observations into just 8 tokens for robotic planning systems. This breakthrough enables 40x faster planning while maintaining competitive accuracy, potentially transforming real-time robotic control applications.
Headroom AI: The Open-Source Context Optimization Layer That Could Revolutionize Agent Efficiency
Headroom AI introduces a zero-code context optimization layer that compresses LLM inputs by 60-90% while preserving critical information. This open-source proxy solution could dramatically reduce costs and improve performance for AI agents.
OpenAI's GPT-5.4: The Million-Token Context Window That Changes Everything
OpenAI's upcoming GPT-5.4 will feature a groundbreaking 1 million token context window, matching competitors like Gemini and Claude. The model introduces an 'Extreme reasoning mode' for complex tasks and represents a shift toward monthly updates.
Support Tokens: The Hidden Mathematical Structure Making LLMs More Robust
Researchers have discovered a surprising mathematical constraint in transformer attention mechanisms that reveals a 'support token' structure similar to support vector machines. This insight enables a simple but powerful training modification that improves LLM robustness without sacrificing performance.
Diffusion Architecture Breaks Speed Barrier: Inception's Mercury 2 Hits 1,000 Tokens/Second
Inception's Mercury 2 achieves unprecedented text generation speeds of 1,000 tokens per second using diffusion architecture borrowed from image AI. This represents a 10x speed advantage over leading models like Claude 4.5 Haiku and GPT-5 Mini without requiring custom hardware.
Beyond the Token Limit: How Claude Opus 4.6's Architectural Breakthrough Enables True Long-Context Reasoning
Anthropic's Claude Opus 4.6 represents a fundamental shift in large language model architecture, moving beyond simple token expansion to create genuinely autonomous reasoning systems. The breakthrough enables practical use of million-token contexts through novel memory management and hierarchical processing.
GR4AD: Kuaishou's Production-Ready Generative Recommender for Ads Delivers 4.2% Revenue Lift
Researchers from Kuaishou present GR4AD, a generative recommendation system designed for high-throughput ad serving. It introduces innovations in tokenization (UA-SID), decoding (LazyAR), and optimization (RSPO) to balance performance with cost. Online A/B tests on 400M users show a 4.2% ad revenue improvement.
Meta-Harness Framework Automates AI Agent Engineering, Achieves 6x Performance Gap on Same Model
A new framework called Meta-Harness automates the optimization of AI agent harnesses—the system prompts, tools, and logic that wrap a model. By analyzing raw failure logs at scale, it improved text classification by 7.7 points while using 4x fewer tokens, demonstrating that harness engineering is a major leverage point as model capabilities converge.
The Hidden Cost Crisis: How Developers Are Slashing LLM Expenses by 80%
A developer's $847 monthly OpenAI bill sparked a cost-optimization journey that reduced LLM spending by 81% without sacrificing quality. This reveals widespread inefficiencies in AI implementation and practical strategies for smarter token management.
Anthropic Tests Sonnet-to-Opus 'Phone a Friend' for Cost-Effective AI
Anthropic is experimenting with a system where its Claude 3.5 Sonnet model can automatically invoke the more capable Claude 3 Opus for difficult tasks. This 'phone a friend' approach aims to improve final output quality while reducing overall token consumption and cost.
MARS Method Boosts LLM Throughput 1.7x With No Architecture Changes
Researchers introduced MARS, a training-free method that allows autoregressive LLMs to generate multiple tokens per forward pass, boosting throughput by 1.5-1.7x without architectural modifications or accuracy loss.
Atomic Chat's TurboQuant Enables Gemma 4 Local Inference on 16GB MacBook Air
Atomic Chat's new TurboQuant algorithm aggressively compresses the KV cache, allowing models requiring 32GB+ RAM to run on 16GB MacBook Airs at 25 tokens/sec, advancing local AI deployment.
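TurboQuant's exact algorithm isn't public here, but the general lever is quantization: storing each cached key/value element in fewer bits. A minimal symmetric int8 sketch shows why this halves-and-then-some the memory footprint (one float scale plus one byte per element instead of a 4-byte float):

```python
def quantize_int8(values):
    """Symmetric int8 quantization of one KV-cache tensor slice:
    keep a single float scale and one small integer per element,
    roughly a 4x memory cut versus float32. Aggressive schemes go
    to 4 bits or below for larger savings."""
    scale = max(abs(v) for v in values) / 127 or 1.0
    q = [round(v / scale) for v in values]
    return scale, q

def dequantize(scale, q):
    """Recover approximate float values; error is bounded by scale/2."""
    return [scale * x for x in q]

scale, q = quantize_int8([0.5, -1.0, 0.25])
approx = dequantize(scale, q)
print(approx)
```

The trade-off is a small reconstruction error per attention lookup, which is why such schemes are benchmarked on output quality, not just memory.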
Claude Code's Hidden /compact Command: How to Use It for Faster, Cheaper Iteration
Claude Code's /compact command condenses the accumulated conversation into a summary, shrinking the context and dramatically reducing token usage for faster, cheaper development iterations.
Image Prompt Packaging Cuts Multimodal Inference Costs Up to 91%
A new method called Image Prompt Packaging (IPPg) embeds structured text directly into images, reducing token-based inference costs by 35.8–91% across GPT-4.1, GPT-4o, and Claude 3.5 Sonnet. Performance outcomes are highly model-dependent, with GPT-4.1 showing simultaneous accuracy and cost gains on some tasks.
DeepSeek's HISA: Hierarchical Sparse Attention Cuts 64K Context Indexing Cost
DeepSeek researchers introduced HISA, a hierarchical sparse attention method that replaces flat token scanning. It removes a computational bottleneck at 64K context lengths without requiring any model retraining.
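The "hierarchical" part replaces one flat pass over all 64K positions with two cheap passes: score coarse blocks first, then look at tokens only inside the winning blocks. This sketch illustrates that two-level indexing idea under an assumed per-token relevance score; it is not HISA's actual mechanism:

```python
def select_tokens_hierarchically(scores, block_size=4, top_blocks=2):
    """Two-level sparse attention indexing: rank fixed-size blocks by
    their mean token score, then return only the token indices inside
    the top blocks. Cost scales with (num_blocks + selected tokens)
    instead of total context length."""
    blocks = [scores[i:i + block_size] for i in range(0, len(scores), block_size)]
    ranked = sorted(range(len(blocks)),
                    key=lambda b: sum(blocks[b]) / len(blocks[b]),
                    reverse=True)
    chosen = sorted(ranked[:top_blocks])
    return [i for b in chosen
              for i in range(b * block_size,
                             min((b + 1) * block_size, len(scores)))]

scores = ([0.1, 0.1, 0.1, 0.1] + [0.9, 0.8, 0.9, 0.7] +
          [0.2, 0.2, 0.3, 0.1] + [0.6, 0.5, 0.7, 0.6])
print(select_tokens_hierarchically(scores))
# → [4, 5, 6, 7, 12, 13, 14, 15]
```

At 64K context the flat scan is the bottleneck, so pushing the per-token work down to only the selected blocks is where the savings come from, with no retraining required.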