large language models
30 articles about large language models in AI news
Nebius AI's LK Losses: A Breakthrough in Making Large Language Models Faster and More Efficient
Nebius AI has introduced LK Losses, a novel training objective that directly optimizes acceptance rates in speculative decoding. This approach achieves 8-10% efficiency gains over traditional methods, potentially revolutionizing how large language models are deployed.
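For context, the quantity LK Losses reportedly optimizes is the acceptance rate of standard speculative decoding, where a small draft model proposes tokens and the target model accepts each with probability min(1, p_target/p_draft). The sketch below illustrates that standard acceptance rule and a simplified expected-acceptance count; it is not Nebius AI's actual training objective, and all names are illustrative.

```python
def accept_prob(p_target: float, p_draft: float) -> float:
    """Probability that a drafted token is accepted by the target model
    (the standard speculative-decoding acceptance rule)."""
    return min(1.0, p_target / p_draft)

def expected_accepted(draft_probs, target_probs):
    """Expected number of accepted tokens in one speculative step,
    assuming each token survives only if all earlier ones did
    (a simplification of the full rejection-sampling scheme)."""
    expected = 0.0
    keep = 1.0  # probability that all earlier draft tokens were accepted
    for pd, pt in zip(draft_probs, target_probs):
        keep *= accept_prob(pt, pd)
        expected += keep
    return expected
```

A training loss that pushes the draft distribution toward the target's raises accept_prob directly, which is the lever the reported 8-10% efficiency gain would pull on.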
How Large Language Models 'Counter Poisoning': A Self-Purification Battle Involving RAG
New research explores how LLMs can defend against data poisoning attacks through self-purification mechanisms integrated with Retrieval-Augmented Generation (RAG). This addresses critical security vulnerabilities in enterprise AI systems.
AI Breakthrough: Large Language Models Now Solving Complex Mathematical Proofs
Researchers have developed a neuro-symbolic system that combines LLMs with traditional constraint solvers to tackle inductive definitions—a notoriously difficult class of mathematical problems. Their approach improves solver performance by approximately 25% on proof tasks involving abstract data types and recurrence relations.
LeCun's Critique: Why Large Language Models Fall Short of True Intelligence
Meta's Chief AI Scientist Yann LeCun argues that LLMs lack real-world understanding despite massive training data. He highlights fundamental architectural limitations that prevent true reasoning and proposes alternative approaches to artificial intelligence.
Beyond One-Size-Fits-All AI: New Method Aligns Language Models with Diverse Human Preferences
Researchers have developed Personalized GRPO, a novel reinforcement learning framework that enables large language models to align with heterogeneous human preferences rather than optimizing for a single global objective. The approach addresses systematic bias toward dominant preferences in current alignment methods.
When AI Gets Stumped: Study Reveals Language Models' 'Brain Activity' Collapses Under Pressure
New research shows that when large language models encounter difficult questions, their internal representations dramatically shrink and simplify. This 'activity collapse' reveals fundamental limitations in how current AI processes complex reasoning tasks.
AI's Hidden Capabilities: How Simple Prompts Unlock Advanced Reasoning in Language Models
New research reveals that large language models possess latent reasoning abilities that can be activated through specific prompting techniques, fundamentally changing how we understand AI capabilities and their potential applications.
Breaking the AI Hivemind: How PRISM Creates Diverse Thinking in Language Models
Researchers propose PRISM, a new system that combats the growing uniformity in large language models by creating individualized reasoning pathways. The approach significantly improves creative exploration and can uncover rare diagnoses that standard AI misses.
Unitree Robotics Releases UnifoLM-WBT-Dataset: A Large-Scale, Real-World Robotics Dataset for Embodied AI
Chinese robotics firm Unitree Robotics has open-sourced the UnifoLM-WBT-Dataset, a high-quality dataset derived from real-world robot operations. The release aims to accelerate training for embodied AI and large language models applied to physical systems.
Open-Source Web UI 'LLM Studio' Enables Local Fine-Tuning of 500+ Models, Including GGUF and Multimodal
LLM Studio, a free and open-source web interface, allows users to fine-tune over 500 large language models locally on their own hardware. It supports GGUF-quantized models, vision, audio, and embedding models across Mac, Windows, and Linux.
VLM4Rec: A New Approach to Multimodal Recommendation Using Vision-Language Models for Semantic Alignment
A new research paper proposes VLM4Rec, a framework that uses large vision-language models to convert product images into rich, semantic descriptions, then encodes them for recommendation. It argues semantic alignment matters more than complex feature fusion, showing consistent performance gains.
Recommendation System Evolution: From Static Models to LLM-Powered Personalization
This article traces the technological evolution of recommendation systems through multiple transformative stages, culminating in the current LLM-powered era. It provides a conceptual framework for understanding how large language models are reshaping personalization.
The Next Platform Shift: How Persistent 3D World Models Are Becoming the New Programmable Interface
A new collaboration between Baseten and World Labs signals a paradigm shift where persistent 3D world models become programmable platforms, potentially rivaling the transformative impact of large language models through accessible developer APIs.
Stanford, Google, MIT Paper Claims LLMs Can Self-Improve Prompts
A collaborative paper from Stanford, Google, and MIT researchers indicates large language models can self-improve their prompts via iterative refinement. This could automate a core task currently performed by human prompt engineers.
Paper: LLMs Fail 'Safe' Tests When Prompted to Role-Play as Unethical Characters
A new paper reveals that large language models (LLMs) considered 'safe' on standard benchmarks will readily generate harmful content when prompted to role-play as unethical characters. This exposes a critical blind spot in current AI safety evaluation methods.
MIT and Anthropic Release New Benchmark Revealing AI Coding Limitations
Researchers from MIT and Anthropic have developed a new benchmark that systematically identifies significant limitations in current AI coding assistants. The benchmark reveals specific categories of coding tasks where large language models consistently fail, providing concrete data on their weaknesses.
New Research: Fine-Tuned LLMs Outperform GPT-5 for Probabilistic Supply Chain Forecasting
Researchers introduced an end-to-end framework that fine-tunes large language models (LLMs) to produce calibrated probabilistic forecasts of supply chain disruptions. The model, trained on realized outcomes, significantly outperforms strong baselines like GPT-5 on accuracy, calibration, and precision. This suggests a pathway for creating domain-specific forecasting models that generate actionable, decision-ready signals.
A Practitioner's Hands-On Comparison: Fine-Tuning LLMs on Snowflake Cortex vs. Databricks
An engineer provides a documented, practical test of fine-tuning large language models on two major cloud data platforms: Snowflake Cortex and Databricks. This matters as fine-tuning is a critical path to customizing AI for proprietary business use cases, and platform choice significantly impacts developer experience and operational complexity.
Ollama Now Supports Apple MLX Backend for Local LLM Inference on macOS
Ollama, the popular framework for running large language models locally, has added support for Apple's MLX framework as a backend. This enables more efficient execution of models like Llama 3.2 and Mistral on Apple Silicon Macs.
Apple Silicon Achieves Near-Lossless LLM Compression at 3.5 Bits-Per-Weight, Claims Independent Tester
Independent AI researcher Matthew Weinbach reports achieving near-lossless compression of large language models on Apple Silicon, storing models at 3.5 bits-per-weight while maintaining quality within 1-2% of bf16 precision.
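The memory implications of that figure are easy to check: bf16 stores 16 bits per weight, so 3.5 bits-per-weight is roughly a 4.6x reduction. A quick back-of-the-envelope calculation (the 70B parameter count below is an illustrative example, not from the report):

```python
def model_bytes(n_params: float, bits_per_weight: float) -> float:
    """Storage footprint of a model's weights in bytes."""
    return n_params * bits_per_weight / 8

# Hypothetical 70B-parameter model:
bf16_size = model_bytes(70e9, 16)   # 140 GB in bf16
q35_size = model_bytes(70e9, 3.5)   # ~30.6 GB at 3.5 bits-per-weight
```

At that size, models that previously required server-class memory fit comfortably in the unified memory of consumer Apple Silicon machines.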
Moonshot AI CEO Yang Zhilin Advocates for Attention Residuals in LLM Architecture
Yang Zhilin, founder of Moonshot AI, argues for the architectural value of attention residuals in large language models. This technical perspective comes from the creator of the popular Kimi Chat model.
A Comparative Guide to LLM Customization Strategies: Prompt Engineering, RAG, and Fine-Tuning
An overview of the three primary methods for customizing Large Language Models—Prompt Engineering, Retrieval-Augmented Generation (RAG), and Fine-Tuning—detailing their respective strengths, costs, and ideal use cases. This framework is essential for AI teams deciding how to tailor foundational models to specific business needs.
SELLER: A New Sequence-Aware LLM Framework for Explainable Recommendations
Researchers propose SELLER, a framework that uses Large Language Models to generate explanations for recommendations by modeling user behavior sequences. It outperforms prior methods by integrating explanation quality with real-world utility metrics.
Google DeepMind's 'Learning Through Conversation' Paper Shows LLMs Can Improve with Real-Time Feedback
Google DeepMind researchers have published a paper demonstrating that large language models can be trained to learn and improve their responses during a conversation by incorporating user feedback, moving beyond static pre-training.
LLMs Can Now De-Anonymize Users from Public Data Trails, Research Shows
Large language models can now identify individuals from their public online activity, even when using pseudonyms. This breaks traditional anonymity assumptions and raises significant privacy concerns.
LLM-Based System Achieves 68% Recall at 90% Precision for Online User Deanonymization
Researchers demonstrate that large language models can effectively deanonymize online users by analyzing their writing style and content across platforms. Their system matches 68% of true user pairs with 90% precision, significantly outperforming traditional methods.
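At its core, this kind of cross-platform matching compares stylometric fingerprints of text and links accounts whose similarity clears a threshold tuned for the desired precision. The toy sketch below uses character n-gram counts and cosine similarity; the paper's actual features and LLM-based pipeline are far richer, and these function names are illustrative.

```python
import math
from collections import Counter

def style_vector(text: str, n: int = 3) -> Counter:
    """Character n-gram counts as a crude stylometric fingerprint
    (illustrative only; real systems use much richer features)."""
    t = text.lower()
    return Counter(t[i:i + n] for i in range(len(t) - n + 1))

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[k] * b[k] for k in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0
```

Raising the match threshold trades recall for precision, which is how a system arrives at an operating point like 68% recall at 90% precision.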
TTQ: A New Framework for On-the-Fly Quantization of LLMs at Inference Time
Researchers propose TTQ, a test-time quantization method that compresses large language models dynamically during inference. It uses efficient online calibration to adapt to any prompt, aiming to solve domain-shift issues and accelerate inference without retraining.
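The general idea of calibrating quantization parameters at inference time, rather than from a fixed offline calibration set, can be sketched with simple symmetric per-tensor quantization whose scale is computed from the live tensor itself. This is a generic illustration under that assumption, not TTQ's actual algorithm.

```python
import numpy as np

def quantize_online(x: np.ndarray, bits: int = 8):
    """Symmetric per-tensor quantization with a scale calibrated on the
    fly from the tensor being quantized (generic sketch, not TTQ)."""
    qmax = 2 ** (bits - 1) - 1
    peak = float(np.abs(x).max())
    scale = peak / qmax if peak > 0 else 1.0
    q = np.clip(np.round(x / scale), -qmax - 1, qmax).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover an approximate float tensor from quantized values."""
    return q.astype(np.float32) * scale
```

Because the scale tracks whatever distribution the current prompt produces, this style of scheme adapts to domain shift without any retraining or stored calibration data.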
LLM Fine-Tuning Explained: A Technical Primer on LoRA, QLoRA, and When to Use Them
A technical guide explains the fundamentals of fine-tuning large language models, detailing when it's necessary, how the parameter-efficient LoRA method works, and why the QLoRA innovation made the process dramatically more accessible.
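The core of LoRA is small enough to state directly: the pretrained weight W stays frozen, and training updates only a low-rank pair of matrices A and B whose product, scaled by alpha/r, is added to the base projection. A minimal sketch (dimensions here are arbitrary examples):

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_out, r, alpha = 64, 64, 8, 16  # r is the LoRA rank

W = rng.normal(size=(d_out, d_in))      # frozen pretrained weight
A = rng.normal(size=(r, d_in)) * 0.01   # trainable down-projection
B = np.zeros((d_out, r))                # trainable up-projection, init to zero

def lora_forward(x: np.ndarray) -> np.ndarray:
    # Base path plus low-rank update scaled by alpha / r.
    # With B initialized to zero, this starts identical to the base model.
    return W @ x + (alpha / r) * (B @ (A @ x))
```

Only A and B receive gradients, so the trainable parameter count drops from d_out*d_in to r*(d_in + d_out); QLoRA goes further by holding the frozen W in 4-bit precision.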
NVIDIA Nemotron Ultra: Details Emerge on Upcoming Open-Source LLM Series
NVIDIA is developing the Nemotron Ultra series of open-source large language models. Early commentary describing the project as 'insane' and 'underrated' is generating anticipation among AI researchers.
Prompting vs RAG vs Fine-Tuning: A Practical Guide to LLM Integration Strategies
A clear breakdown of three core approaches for customizing large language models—prompting, retrieval-augmented generation (RAG), and fine-tuning—with real-world examples. Essential reading for technical leaders deciding how to implement AI capabilities.