huggingface
14 articles about huggingface in AI news
HuggingFace Launches Daily Papers SKILL.md for AI Agents to Read, Search, and Fetch Research Papers
HuggingFace released Daily Papers SKILL.md, a tool enabling AI agents to read paper content as markdown, search papers, find linked models/datasets, and fetch papers via API.
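The fetch-via-API workflow can be sketched in a few lines. Note this is a minimal sketch, not code from the SKILL.md itself: the `/api/daily_papers` endpoint path and the JSON field names (`paper`, `title`) are assumptions based on Hugging Face's public daily-papers feed.

```python
import json

# Assumed public endpoint for the daily-papers feed (not taken from SKILL.md):
API_URL = "https://huggingface.co/api/daily_papers"

def paper_titles(entries):
    """Pull paper titles out of a daily-papers JSON payload.

    Field names ("paper", "title") are assumptions about the response
    schema; adjust to the actual API documentation.
    """
    return [e.get("paper", {}).get("title", "") for e in entries]

# Offline demonstration with a mocked payload shaped like the assumed response;
# a live call would be `json.load(urllib.request.urlopen(API_URL))`.
sample = json.loads('[{"paper": {"id": "2401.00001", "title": "Example Paper"}}]')
print(paper_titles(sample))  # ['Example Paper']
```

An agent skill would wrap calls like this so the agent can list the day's papers, then follow each paper's ID to its linked models and datasets.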
How AI Agents Are Learning to Scrape the Web and Fine-Tune Models in One Go
A developer has integrated web-scraping capabilities into HuggingFace's fine-tuning skill, enabling AI agents to collect data from protected platforms and automatically train custom models, addressing a major bottleneck in AI development workflows.
Steal the 'Long-Running Claude' Scaffolding: CLAUDE.md, CHANGELOG.md, and the Ralph Loop
Anthropic's research reveals a four-part scaffolding—CLAUDE.md, CHANGELOG.md, a test oracle, and the Ralph loop—that lets you give Claude a multi-day task and walk away. Here’s how to apply it.
OpenCSF: A 1.5TB Free Computer Science Library Emerges from Unstructured Web Data
A new open-source dataset called OpenCSF has been compiled, containing 1.5TB of computer science materials scraped from public web sources. It provides a massive, free corpus for AI training and research in software engineering and CS education.
OmniForcing Enables Real-Time Joint Audio-Visual Generation at 25 FPS with 0.7s Latency
Researchers introduced OmniForcing, a method that distills a bidirectional LTX-2 model into a causal streaming generator for joint audio-visual synthesis. It achieves ~25 FPS with 0.7s latency, a 35× speedup over offline diffusion models while maintaining multi-modal fidelity.
NVIDIA NeMo Retriever Achieves #1 on ViDoRe v3 with New Agentic Pipeline
NVIDIA's NeMo Retriever team has developed a generalizable agentic retrieval pipeline that topped the ViDoRe v3 leaderboard and placed second on BRIGHT. The system moves beyond semantic similarity to dynamically adapt search strategies for complex, multi-domain data.
AI Learns to Use Tools Without Expensive Training: The Rise of In-Context Reinforcement Learning
Researchers have developed In-Context Reinforcement Learning (ICRL), a method that teaches large language models to use external tools through demonstration examples supplied during reinforcement learning. The approach eliminates costly supervised fine-tuning while letting models transition gradually from few-shot to zero-shot tool use.
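The few-shot-to-zero-shot transition can be illustrated with a toy sketch. This is not the paper's implementation; the prompt format and linear annealing schedule below are illustrative assumptions about how demonstration count might be reduced over training.

```python
def build_prompt(demos, query, k):
    """Prepend up to k tool-use demonstrations to the query.

    Toy prompt format (an assumption, not the paper's): demos and the
    query are joined as plain text blocks.
    """
    return "\n\n".join(demos[:k] + [query])

def anneal_demos(k_start, step, period):
    """Linearly reduce the demo count: drop one demo every `period` steps,
    bottoming out at zero (zero-shot)."""
    return max(0, k_start - step // period)

demos = ["Q: weather in Paris?\nA: call get_weather('Paris')",
         "Q: 17*23?\nA: call calculator('17*23')"]
# Early in training: few-shot prompt with both demonstrations.
k = anneal_demos(2, step=0, period=100)
print(k)  # 2
# Late in training: zero demonstrations remain, i.e. zero-shot tool use.
k = anneal_demos(2, step=500, period=100)
print(k)  # 0
print(build_prompt(demos, "Q: time in Tokyo?", k))  # Q: time in Tokyo?
```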
NVIDIA's Nemotron-Terminal: A Systematic Pipeline for Scaling Terminal-Based AI Agents
NVIDIA researchers introduce Nemotron-Terminal, a comprehensive data engineering pipeline designed to scale terminal-based large language model agents. The system bridges the gap between raw terminal data and high-quality training datasets, addressing key challenges in agent reliability and generalization.
Meshcraft Democratizes 3D Creation: Multi-Engine AI Platform Bridges Text-to-3D Gap
Meshcraft emerges as a web-based platform offering text-to-3D and image-to-3D generation with selectable AI engines. The tool provides both free and premium options, addressing quality bottlenecks in 3D generation through engine optimization rather than image model refinement.
From Prototype to Production: Streamlining LLM Evaluation for Luxury Clienteling & Chatbots
NVIDIA's new NeMo Evaluator Agent Skills dramatically simplify testing and monitoring of conversational AI agents. For luxury retail, this means faster, more reliable deployment of high-quality clienteling assistants and customer service chatbots.
Utonia AI Breakthrough: A Single Transformer Model Unifies All 3D Point Cloud Data
Researchers have developed Utonia, a single self-supervised transformer that learns unified 3D representations across diverse point cloud data types including LiDAR, CAD models, indoor scans, and video-lifted data. This breakthrough enables unprecedented cross-domain transfer and emergent behaviors in 3D AI.
DeepVision-103K: The Math Dataset That Could Revolutionize AI's Visual Reasoning
Researchers have introduced DeepVision-103K, a comprehensive mathematical dataset with 103,000 verifiable visual instances designed to train multimodal AI models. Covering K-12 topics from geometry to statistics, this dataset addresses critical gaps in AI's visual reasoning capabilities.
DualPath Architecture Shatters KV-Cache Bottleneck, Doubling LLM Throughput for AI Agents
Researchers have developed DualPath, a novel architecture that eliminates the KV-cache storage bottleneck in agentic LLM inference. By implementing dual-path loading with RDMA transfers, the system achieves nearly 2× throughput improvements for both offline and online scenarios.
NVIDIA's Memory Compression Breakthrough: How Forgetting Makes LLMs Smarter
NVIDIA researchers have developed Dynamic Memory Sparsification, a technique that compresses LLM working memory by 8× while improving reasoning capabilities. This counterintuitive approach addresses the critical KV cache bottleneck in long-context AI applications.
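The general idea behind KV-cache sparsification can be shown with a toy sketch. This is not NVIDIA's Dynamic Memory Sparsification algorithm, just the basic pattern it builds on: evict low-importance cache entries, keeping only a fraction of them (an 8x compression corresponds to a keep ratio of 1/8).

```python
def compress_kv(cache, importance, keep_ratio=0.125):
    """Keep only the top-`keep_ratio` fraction of cached entries by importance.

    Toy illustration of KV-cache sparsification: `cache` stands in for
    cached key/value entries and `importance` for per-entry scores
    (e.g. accumulated attention weight). The scoring and eviction policy
    here are placeholder assumptions, not DMS itself.
    """
    n_keep = max(1, int(len(cache) * keep_ratio))
    ranked = sorted(range(len(cache)), key=lambda i: importance[i], reverse=True)
    kept = sorted(ranked[:n_keep])  # preserve positional order of survivors
    return [cache[i] for i in kept]

# 8 entries, keep_ratio=0.25 -> retain the 2 highest-scoring entries.
print(compress_kv(list(range(8)), [0, 5, 1, 9, 2, 3, 4, 8], keep_ratio=0.25))  # [3, 7]
```

DMS's claimed twist is that this kind of forgetting, learned dynamically rather than applied with a fixed heuristic, can improve reasoning while shrinking memory.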