LLMs

30 articles about LLMs in AI news

Anthropic Paper: 'Emotion Concepts and their Function in LLMs' Published

Anthropic has released a new research paper titled 'Emotion Concepts and their Function in LLMs.' The work investigates the role and representation of emotional concepts within large language model architectures.

95% relevant

Paper: LLMs Fail 'Safe' Tests When Prompted to Role-Play as Unethical Characters

A new paper reveals that large language models (LLMs) considered 'safe' on standard benchmarks will readily generate harmful content when prompted to role-play as unethical characters. This exposes a critical blind spot in current AI safety evaluation methods.

85% relevant

New Research: Fine-Tuned LLMs Outperform GPT-5 for Probabilistic Supply Chain Forecasting

Researchers introduced an end-to-end framework that fine-tunes large language models (LLMs) to produce calibrated probabilistic forecasts of supply chain disruptions. The model, trained on realized outcomes, significantly outperforms strong baselines like GPT-5 on accuracy, calibration, and precision. This suggests a pathway for creating domain-specific forecasting models that generate actionable, decision-ready signals.

80% relevant

LLMs Show Weak Agreement with Human Essay Graders, Overvalue Short Essays and Penalize Minor Errors

A new arXiv study finds that LLMs such as GPT and Llama show only weak agreement with human essay scores. They systematically over-score short, underdeveloped essays and under-score longer essays that contain minor grammatical errors.
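Rater agreement on ordinal essay scores is commonly measured with quadratic weighted kappa; the study's exact metric is an assumption here, but a minimal sketch shows how the reported "weak agreement" would surface numerically:

```python
import numpy as np

def quadratic_weighted_kappa(human, model, k):
    """Agreement between two raters on an ordinal 0..k-1 scale.
    1.0 = perfect agreement; 0.0 = chance level; negative = worse than chance."""
    human, model = np.asarray(human), np.asarray(model)
    # Observed confusion matrix of (human score, model score) pairs
    O = np.zeros((k, k))
    for h, m in zip(human, model):
        O[h, m] += 1
    # Expected matrix under rater independence, scaled to the same total
    E = np.outer(np.bincount(human, minlength=k),
                 np.bincount(model, minlength=k)) / len(human)
    # Quadratic disagreement weights: distant scores are penalized more
    i, j = np.indices((k, k))
    W = (i - j) ** 2 / (k - 1) ** 2
    return 1 - (W * O).sum() / (W * E).sum()

# Perfect agreement scores 1.0; the biases described above (over-scoring
# short essays, penalizing minor errors) would drag kappa well below 1.
print(round(quadratic_weighted_kappa([0, 1, 2, 3], [0, 1, 2, 3], 4), 3))  # → 1.0
```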

77% relevant

QuatRoPE: New Positional Embedding Enables Linear-Scale 3D Spatial Reasoning in LLMs, Outperforming Quadratic Methods

Researchers propose QuatRoPE, a novel positional embedding method that encodes 3D object relations with linear input scaling. Paired with IGRE, it improves spatial reasoning in LLMs while preserving their original language capabilities.

79% relevant

Learning to Disprove: LLMs Fine-Tuned for Formal Counterexample Generation in Lean 4

Researchers propose a method to train LLMs for formal counterexample generation, a neglected skill in mathematical AI. Their symbolic mutation strategy and multi-reward framework improve performance on three new benchmarks.

77% relevant

ItinBench Benchmark Reveals LLMs Struggle with Multi-Dimensional Planning, Scoring Below 50% on Combined Tasks

Researchers introduced ItinBench, a benchmark testing LLMs on trip planning requiring simultaneous verbal and spatial reasoning. Models like GPT-4o and Gemini 1.5 Pro showed inconsistent performance, highlighting a gap in integrated cognitive capabilities.

100% relevant

From Token to Item: New Research Proposes Item-Aware Attention to Enhance LLMs for Recommendation

Researchers propose an Item-Aware Attention Mechanism (IAM) that restructures how LLMs process product data for recommendations. It separates attention into intra-item (content) and inter-item (collaborative) layers to better model item-level relationships. This addresses a key limitation in current LLM-based recommenders.
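The paper's implementation is not shown; a minimal numpy sketch of the masking idea, assuming one item id per token (`intra_inter_masks` is an illustrative name, not from the paper):

```python
import numpy as np

def intra_inter_masks(item_ids):
    """Given one item id per token, build boolean attention masks:
    intra: a token attends only within its own item (content modeling);
    inter: a token attends only across item boundaries (collaborative signal)."""
    ids = np.asarray(item_ids)
    same_item = ids[:, None] == ids[None, :]   # block-diagonal structure
    return same_item, ~same_item

# Three tokens of item 0 followed by two tokens of item 1 (hypothetical)
intra, inter = intra_inter_masks([0, 0, 0, 1, 1])
print(intra.astype(int))
```

Applying these two masks in separate attention layers is one plausible way to keep item-internal content modeling distinct from cross-item relationship modeling.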

76% relevant

llmfit Tool Scans System Specs to Match 497 LLMs from 133 Providers to Local Hardware

llmfit analyzes RAM, CPU, and GPU to recommend which of 497 LLMs will run locally without OOM crashes. It scores models on quality, speed, fit, and context, and pulls them directly via Ollama.
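llmfit's internals are not detailed here; as a rough illustration of the core "fit" check (not llmfit's actual code), a quantized model's weight footprint plus runtime overhead must fit in available memory:

```python
def fits_in_memory(params_b, bits_per_weight, ram_gb, overhead=1.2):
    """Illustrative local-fit heuristic: a model with params_b billion
    parameters at a given quantization needs roughly params_b * bits/8 GB
    of weights, plus KV-cache and runtime overhead (factor is assumed)."""
    weights_gb = params_b * bits_per_weight / 8
    return weights_gb * overhead <= ram_gb

# A 7B model at 4-bit quantization (~3.5 GB of weights) on a 16 GB machine
print(fits_in_memory(7, 4, 16))   # True
# A 70B model at 4-bit (~35 GB of weights) would trigger an OOM
print(fits_in_memory(70, 4, 16))  # False
```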

85% relevant

How to Run Claude Code with Local LLMs Using This Open-Source Script

A new open-source script lets you connect Claude Code to local LLMs via llama.cpp, giving you full privacy and offline access.

100% relevant

DEAF Benchmark Reveals Audio MLLMs Rely on Text, Not Sound, Scoring Below 50% on Acoustic Faithfulness

Researchers introduce DEAF, a 2,700-stimulus benchmark testing Audio MLLMs' acoustic processing. Evaluation of seven models shows a consistent pattern of text dominance, with models scoring below 50% on acoustic faithfulness metrics.

99% relevant

New Research Proposes Lightweight Framework for Adapting LLMs to Complex Service Domains

A new arXiv paper introduces a three-part framework to efficiently adapt LLMs for technical service agents. It addresses latent decision logic, response ambiguity, and high training costs, validated on cloud service tasks. This matters for any domain needing robust, specialized AI agents.

72% relevant

Fine-Tuning Isn’t a Winning Move Anymore — Data-First LLMs Win

A new perspective argues that fine-tuning LLMs is becoming a secondary tactic. The primary competitive advantage now lies in a 'data-first' strategy: curating, generating, and structuring proprietary data to build superior models from the ground up.

72% relevant

Why I Skipped LLMs to Extract Data From 100,000 Wills: A System Design Story

An engineer details a deterministic, high-accuracy document processing pipeline for legal wills using Azure's Content Understanding model, rejecting LLMs due to hallucination risk and cost. A masterclass in pragmatic AI system design.

85% relevant

The Pareto Set of Metrics for Production LLMs: What Separates Signal from Instrumentation

A framework for identifying the essential 20% of metrics that deliver 80% of the value when monitoring LLMs in production. Focuses on practical observability using tools like Langfuse and OpenTelemetry to move beyond raw instrumentation.
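Tail latency is a typical member of that essential 20%; a vendor-neutral sketch of the nearest-rank percentile computation such dashboards rely on (the sample latencies are invented):

```python
import math

def percentile(values, p):
    """Nearest-rank percentile: the smallest observed value such that
    at least p% of observations fall at or below it."""
    ordered = sorted(values)
    rank = max(1, math.ceil(p / 100 * len(ordered)))
    return ordered[rank - 1]

# Per-request LLM latencies in milliseconds (hypothetical sample)
latencies_ms = [120, 135, 150, 180, 210, 240, 300, 450, 900, 2100]
print(percentile(latencies_ms, 50), percentile(latencies_ms, 95))  # → 210 2100
```

The gap between p50 and p95 is often the signal worth alerting on: median latency can look healthy while the tail, which users actually feel, degrades.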

72% relevant

CRYSTAL Benchmark Reveals Universal Step-Disorder in MLLMs: No Model Preserves >60% of Reasoning Steps in Correct Order

Researchers introduce CRYSTAL, a 6,372-instance benchmark evaluating multimodal reasoning through verifiable steps. It reveals systematic failures in 20 tested MLLMs, including universal cherry-picking and disordered reasoning chains.

100% relevant

New Research Diagnoses LLMs' Struggle with Multiple Knowledge Updates in Context

A new arXiv paper reveals a persistent bias in LLMs when facts are updated multiple times within a long context. Models increasingly favor the earliest version, failing to track the latest state—a critical flaw for dynamic knowledge tasks.

78% relevant

Algorithmic Bridging: How Multimodal LLMs Can Enhance Existing Recommendation Systems

A new approach called 'Algorithmic Bridging' proposes combining multimodal conversational LLMs with conventional recommendation systems to boost performance while reusing existing infrastructure. This hybrid method aims to leverage the natural language understanding of LLMs without requiring full system replacement.

100% relevant

A Systematic Study of Pseudo-Relevance Feedback with LLMs: Key Design Choices for Search

New research systematically analyzes how to best use LLMs for pseudo-relevance feedback in search, finding that the method for using feedback is critical and that LLM-generated text can be a cost-effective feedback source. This provides clear guidance for improving retrieval systems.

84% relevant

New Research Shows How LLMs and Graph Attention Can Build Lightweight Strategic AI

A new arXiv paper proposes a hybrid AI framework for the Game of the Amazons that integrates LLMs with graph attention networks. It achieves strong performance in resource-constrained settings by using the LLM as a noisy supervisor and the graph network as a structural filter.

98% relevant

Understanding the Interplay between LLMs' Utilisation of Parametric and Contextual Knowledge: A keynote at ECIR 2025

A keynote at ECIR 2025 will present research on how Large Language Models (LLMs) balance their internal, parametric knowledge with external, contextual information. This is critical for deploying reliable AI in knowledge-intensive tasks where models must correctly use provided context, not just their training data.

70% relevant

RF-Mem: A Dual-Path Memory Retrieval System for Personalized LLMs

Researchers propose RF-Mem, a memory retrieval system for LLMs that mimics human cognitive processes. It adaptively switches between fast 'familiarity' and deep 'recollection' paths to personalize responses efficiently, outperforming existing methods under constrained budgets.

77% relevant

Open-Source Hack Enables Free Claude Code Execution with Local LLMs

Developers have discovered a method to run Anthropic's Claude Code using local LLMs without API costs or data leaving their machines. By redirecting API calls through environment variables, users can leverage open-source models like Qwen3.5 for private, cost-free coding assistance.
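The general shape of the redirect looks roughly like the following shell sketch; the model path and port are illustrative, and the referenced method presumably bridges Anthropic's API format to the local server's endpoint (details may differ from the actual setup):

```shell
# Serve a local model with llama.cpp (model path is hypothetical)
llama-server -m ./models/local-coder.gguf --port 8080 &

# Point Claude Code at the local endpoint instead of Anthropic's API.
# The auth token is unused locally but must be non-empty.
export ANTHROPIC_BASE_URL="http://localhost:8080"
export ANTHROPIC_AUTH_TOKEN="local"

claude   # launches Claude Code against the local server
```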

85% relevant

Tsinghua Breakthrough: LLMs with Search Freedom Outperform Expensive Fine-Tuning for Temporal Data

Tsinghua University researchers demonstrate that giving standard LLMs autonomous search capabilities for temporal data achieves 88.7% accuracy, surpassing specialized fine-tuned models by 10.7%. This challenges costly training approaches for time-sensitive tasks.

95% relevant

New AI Benchmark Exposes Critical Gap in Causal Reasoning: Why LLMs Struggle with Real-World Research Design

Researchers have introduced CausalReasoningBenchmark, a novel evaluation framework that separates causal identification from estimation. The benchmark reveals that while LLMs can identify high-level strategies 84% of the time, they correctly specify full research designs only 30% of the time, highlighting a critical bottleneck in automated causal inference.

70% relevant

Stanford, Google, MIT Paper Claims LLMs Can Self-Improve Prompts

A collaborative paper from Stanford, Google, and MIT researchers indicates large language models can self-improve their prompts via iterative refinement. This could automate a core task currently performed by human prompt engineers.

87% relevant

Nature Astronomy Paper Argues LLMs Threaten Scientific Authorship, Sparking AI Ethics Debate

A paper in Nature Astronomy posits a novel criterion for scientific contribution: if an LLM can easily replicate it, it may not be sufficiently novel. This directly challenges the perceived value of incremental, LLM-augmented research.

85% relevant

A Practitioner's Hands-On Comparison: Fine-Tuning LLMs on Snowflake Cortex vs. Databricks

An engineer provides a documented, practical test of fine-tuning large language models on two major cloud data platforms: Snowflake Cortex and Databricks. This matters as fine-tuning is a critical path to customizing AI for proprietary business use cases, and platform choice significantly impacts developer experience and operational complexity.

84% relevant

Fine-Tuning LLMs While You Sleep: How Autoresearch and Red Hat Training Hub Outperformed the HINT3 Benchmark

Automated fine-tuning tools now let you run hundreds of training experiments overnight for under $50. Here's how Autoresearch and Red Hat's Training Hub beat baselines on the HINT3 intent-detection benchmark, and the tools you can use today.

100% relevant

DeepMind Veteran David Silver Launches Ineffable Intelligence with $1B Seed at $4B Valuation, Betting on RL Over LLMs for Superintelligence

David Silver, a foundational figure behind DeepMind's AlphaGo and AlphaZero, has launched a new London AI lab, Ineffable Intelligence. The startup raised a $1 billion seed round at a $4 billion valuation to pursue superintelligence through novel reinforcement learning, explicitly rejecting the LLM paradigm.

100% relevant