Timeline
A paper (2604.20065) argues that LLM agents will reshape personalization, proposing 'governable personalization'.
A Columbia professor publishes an argument that LLMs are fundamentally limited for scientific discovery because their architecture interpolates within training data rather than extrapolating beyond it.
New mechanistic studies confirm that LLMs exhibit sycophancy as a core reasoning behavior, not a superficial bug.
Research shows that LLMs can de-anonymize users from public data trails, breaking traditional anonymity assumptions.
Researchers propose a training framework for formal counterexample generation in Lean 4, addressing a neglected skill in mathematical AI.
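For readers unfamiliar with the task, a formal counterexample in Lean 4 refutes a universally quantified claim by exhibiting a witness that violates it. A minimal illustration (not taken from the paper's framework): disproving the false claim that every natural number is even.

```lean
-- Toy counterexample: refute "every natural number is even"
-- by supplying the witness n = 1 and checking 1 % 2 ≠ 0 by computation.
example : ¬ ∀ n : Nat, n % 2 = 0 := by
  intro h
  exact absurd (h 1) (by decide)
```

Generating such witnesses automatically is the skill the proposed framework is said to target.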
Research reveals that LLMs can 'self-purify' against poisoned data in RAG systems, identifying and down-ranking falsehoods.
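The core idea of down-ranking poisoned retrievals can be sketched in a few lines. This is a hedged toy illustration, not the paper's method: it assumes each retrieved passage carries an extracted `claim` field and ranks passages by how many other retrieved passages agree, so a lone contradicting outlier sinks to the bottom.

```python
# Toy "self-purification" pass for RAG (illustrative assumption,
# not the cited paper's algorithm): rank retrieved passages by
# consensus, so claims supported by only one passage are down-ranked.
from collections import Counter

def consensus_rank(passages):
    """Sort passages so majority-supported claims come first."""
    claims = Counter(p["claim"] for p in passages)
    return sorted(passages, key=lambda p: -claims[p["claim"]])

docs = [
    {"id": 1, "claim": "Paris is the capital of France"},
    {"id": 2, "claim": "Lyon is the capital of France"},   # poisoned entry
    {"id": 3, "claim": "Paris is the capital of France"},
]
ranked = consensus_rank(docs)  # the poisoned outlier ends up last
```

Real systems would replace the exact-match `claim` comparison with an LLM or entailment model judging agreement between passages.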
Researchers develop LOGIGEN, a logic-driven framework for generating verifiable training data for autonomous agents.