information science
30 articles about information science in AI news
Anthropic Launches Dedicated Science Blog to Chronicle AI Research and Applications
Anthropic has launched a new Science Blog to publish its research and case studies on using AI to accelerate scientific discovery, aligning with its mission to increase the pace of scientific progress.
Fine-Tune Phi-3 Mini with Unsloth: A Practical Guide for Product Information Extraction
A technical tutorial demonstrates how to fine-tune Microsoft's compact Phi-3 Mini model using the Unsloth library for structured information extraction from product descriptions, all within a free Google Colab notebook.
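The point of such a fine-tune is to get the model to emit a fixed, machine-parseable schema instead of free text. As a minimal, library-free sketch of that target (the schema, field names, and sample description below are illustrative assumptions, not taken from the tutorial), a rule-based stand-in for the fine-tuned model might look like:

```python
import json
import re

def extract_product_info(description: str) -> dict:
    """Toy rule-based extractor illustrating the structured JSON output
    a fine-tuned Phi-3 Mini would be trained to emit.
    (Hypothetical schema; a real model replaces these regexes.)"""
    price = re.search(r"\$(\d+(?:\.\d{2})?)", description)
    color = re.search(r"\b(black|white|red|blue|silver)\b", description, re.I)
    return {
        "name": description.split(",")[0].strip(),
        "price_usd": float(price.group(1)) if price else None,
        "color": color.group(1).lower() if color else None,
    }

record = extract_product_info("Acme Wireless Mouse, $29.99, matte black finish")
print(json.dumps(record))
```

In the tutorial's setup, the training pairs would map raw descriptions to exactly this kind of JSON, so downstream code can parse model output without post-hoc cleanup.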
New System Recovers Hidden Information to Reproduce Academic Code
Researchers have developed a system that recovers the hidden information required for computers to successfully reproduce academic code. The work addresses the reproducibility crisis in computational research.
Google's TITANS Architecture: A Neuroscience-Inspired Revolution in AI Memory
Google's TITANS architecture moves beyond key transformer limitations by implementing cognitive-neuroscience principles for adaptive memory. This breakthrough enables test-time learning and addresses the quadratic scaling problem that has constrained AI development.
How a 50-Year-Old Computer Science Concept Just Outperformed Anthropic's Claude Code
A small startup has outperformed Anthropic's flagship Claude Code using a novel architecture based on persistent memory systems. This breakthrough demonstrates how classic computer science principles can solve modern AI limitations in context retention and reasoning.
ML-Master 2.0 Hits 56.44% on MLE-Bench in 24-Hour Agentic Science Run
Researchers from Shanghai Jiao Tong University demonstrated ML-Master 2.0, an autonomous research agent that operated continuously for 24 hours on MLE-Bench, achieving a 56.44% medal rate. The advance centers on Hierarchical Cognitive Caching, a state-management technique rather than a reasoning improvement, which enables long-horizon scientific workflows.
Neuroscience Visualization: Time-Lapse Video Shows Lab-Cultured Neurons Forming Connections
A researcher shared a time-lapse video of actual neurons in a lab dish forming new connections. This raw visualization provides a direct, non-AI view of biological computation.
BioBridge AI Merges Protein Science with Language Models for Breakthrough Biological Reasoning
Researchers introduce BioBridge, a novel AI framework that combines protein language models with general-purpose LLMs to enable enhanced biological reasoning. The system achieves state-of-the-art performance on protein benchmarks while maintaining general language understanding capabilities.
The AI Trap: How Professors Are Fighting Back Against Student Over-Reliance on Language Models
University professors are deploying 'trap words' in digital assignments to catch students who blindly use AI for complex cognitive tasks. While science departments embrace these tools, literature professors report a collapse in students' ability to synthesize information independently.
DOE Seeks Input on AI Infrastructure for Federal Lands
The U.S. Department of Energy has published a Request for Information (RFI) to solicit input on developing AI and high-performance computing infrastructure on DOE-owned lands. This marks a significant step in the federal government's strategy to directly address the national AI compute shortage.
OpenAI Readies Next-Gen Model Launch, Claims 'Significant Step Forward'
OpenAI is in final preparations to launch its next generation of AI models, which the company claims represents a 'very significant step forward' with revolutionary potential for science and the economy. The launch could happen imminently, possibly within the week.
Mirendil: Ex-Anthropic Scientists Launch $1B Venture to Build AI That Thinks Like a Scientist
Former Anthropic researchers are raising $175M at a $1B valuation for Mirendil, a startup aiming to build AI systems for long-term scientific reasoning. The goal is to accelerate breakthroughs in biology and materials science, aligning with a broader industry push toward autonomous AI researchers.
The Unlearning Illusion: New Research Exposes Critical Flaws in AI Memory Removal
Researchers reveal that current methods for making AI models 'forget' information are surprisingly fragile. A new dynamic testing framework shows that simple query modifications can recover supposedly erased knowledge, exposing significant safety and compliance risks.
Temporal Freedom: How Unrestricted Data Access Could Revolutionize LLM Performance
Researchers at Tsinghua University have discovered that allowing Large Language Models to freely search through temporal data significantly outperforms traditional rigid pipeline approaches and costly retrieval methods. This breakthrough suggests a paradigm shift in how we structure AI information access.
DishBrain Breakthrough: Lab-Grown Neurons Master Classic Video Game Doom
Scientists have successfully trained in vitro brain cells to play the classic video game Doom, marking a significant advancement in biological computing and neural interface technology. This breakthrough demonstrates how living neurons can process information and adapt to perform complex tasks.
Neural Paging: The Memory Management Breakthrough for Next-Gen AI Agents
Researchers propose Neural Paging, a hierarchical architecture that decouples symbolic reasoning from information management in AI agents. This approach dramatically reduces computational complexity for long-horizon reasoning tasks, moving from quadratic to linear scaling with context window size.
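The linear-scaling claim follows the same logic as OS virtual memory: the reasoner attends only to a fixed-size resident working set, while the rest of the context lives in a backing store and is loaded on demand. A minimal sketch of that working-set mechanism (the class, capacity, and LRU eviction policy are assumptions for illustration, not details from the paper):

```python
from collections import OrderedDict

class PagedMemory:
    """Fixed-size LRU working set over an unbounded backing store.
    Per-step cost depends on `capacity`, not on total context size."""

    def __init__(self, capacity: int):
        self.capacity = capacity
        self.resident = OrderedDict()  # page_id -> content, LRU order
        self.backing = {}              # full context, never attended directly

    def write(self, pid, content):
        self.backing[pid] = content
        self._touch(pid)

    def read(self, pid):
        if pid not in self.resident:
            self._touch(pid)                  # page fault: load from backing store
        else:
            self.resident.move_to_end(pid)    # mark as recently used
        return self.resident[pid]

    def _touch(self, pid):
        self.resident[pid] = self.backing[pid]
        self.resident.move_to_end(pid)
        while len(self.resident) > self.capacity:
            self.resident.popitem(last=False)  # evict least recently used

pm = PagedMemory(capacity=4)
for i in range(100):
    pm.write(i, f"page-{i}")
print(len(pm.resident), pm.read(0))  # working set stays at 4 pages
```

Because the attention-equivalent operation only ever touches the resident set, growing the total context grows the backing store but not the per-step cost, which is the quadratic-to-linear shift the summary describes.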
AI Context Files: The Hidden Blueprint of Modern Software Development
Researchers have conducted the first empirical study analyzing how developers create AI context files in open-source projects. The study reveals emerging patterns in how programmers structure information for AI assistants, offering insights into the evolving relationship between developers and AI tools.
AI Engineer Henry Ndubuaku Releases Open-Source 'Maths, CS & AI Compendium' Textbook
AI engineer Henry Ndubuaku has published a free, open-source textbook compiling mathematics, computer science, and AI concepts. The resource emphasizes intuitive understanding over notation and has reportedly helped users land roles at DeepMind, OpenAI, and Nvidia.
Hinton Rebrands AI Hallucinations as 'Confabulations'
Geoffrey Hinton redefines AI hallucinations as 'confabulations,' arguing that intelligence reconstructs reality into plausible stories rather than storing facts like a database.
Claude Now Tutors Kids for Free, Matching $100/hr Private Lessons
Claude can now tutor kids in any school subject, delivering the kind of instruction a $100-per-hour private tutor or a Khan Academy session provides, for free. This brings high-quality, personalized AI tutoring to anyone with internet access.
ItemRAG: A New RAG Approach for LLM-Based Recommendation That Retrieves Items, Not User Histories
ItemRAG shifts RAG for LLM-based recommenders from user-history retrieval to fine-grained item-level retrieval, using co-purchase and semantic data to prioritize informative items. Experiments show consistent outperformance over existing methods, especially for cold-start items.
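The described retrieval blends two item-level signals: how often a candidate is co-purchased with the query item, and how semantically similar it is. A toy scorer in that spirit (the weighting, Jaccard token overlap as the semantic stand-in, and the sample catalog are illustrative assumptions, not the paper's method):

```python
def jaccard(a: str, b: str) -> float:
    """Cheap semantic-similarity stand-in: token overlap of descriptions."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / len(ta | tb) if ta | tb else 0.0

def retrieve_items(query_id, items, co_purchase, k=2, alpha=0.5):
    """Rank candidates by alpha * normalized co-purchase count
    plus (1 - alpha) * description similarity to the query item."""
    max_co = max(co_purchase.get((query_id, i), 0)
                 for i in items if i != query_id) or 1
    scored = []
    for i, desc in items.items():
        if i == query_id:
            continue
        co = co_purchase.get((query_id, i), 0) / max_co
        scored.append((alpha * co + (1 - alpha) * jaccard(items[query_id], desc), i))
    return [i for _, i in sorted(scored, reverse=True)[:k]]

items = {
    "cam": "mirrorless camera body 24mp",
    "lens": "prime lens for mirrorless camera",
    "bag": "padded camera bag",
    "sock": "wool hiking sock",
}
co_purchase = {("cam", "lens"): 40, ("cam", "bag"): 15}
print(retrieve_items("cam", items, co_purchase))  # ['lens', 'bag']
```

The cold-start benefit the summary reports falls out naturally here: a brand-new item has no co-purchase counts, but the semantic term still lets it be retrieved, whereas a purely history-based retriever would never surface it.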
LLM Agents Will Reshape Personalization
Researchers propose that LLM-based assistants are reconfiguring how user representations are produced and exposed, requiring a shift toward inspectable, portable, and revisable user models across services. They identify five research fronts for the future of recommender systems.
Chief AI & Technology Officer Role Gains Traction in Luxury Sector
The luxury sector is formalizing AI leadership by establishing Chief AI and Technology Officer positions. This move reflects the industry's transition from ad-hoc AI initiatives to integrated, strategic technology governance at the highest level.
LLMs Can De-Anonymize Users from Public Data, Study Warns
Large Language Models can now piece together a person's identity from their public online trail, rendering pseudonyms ineffective. This raises significant privacy and security concerns for internet users.
PRL-Bench: LLMs Score Below 50% on End-to-End Physics Research Tasks
Researchers introduced PRL-Bench, a benchmark built from 100 recent Physical Review Letters papers, testing LLMs on end-to-end physics research. Top models scored below 50%, exposing a significant capability gap for autonomous scientific discovery.
Claude AI Generates Weekly Meal Plans with Nutrition Goals
A prompt library demonstrates Claude's ability to create personalized weekly meal plans that meet specific nutrition targets, potentially saving users hundreds on groceries and dietitian fees.
Four Seasons Kuala Lumpur Deploys AI to Personalize Luxury Event Experiences
The Four Seasons Kuala Lumpur is introducing AI to create personalized event experiences, from tailored menus to dynamic ambiance. This is part of a broader trend where luxury hotels are testing AI as a tool for deeper guest engagement and service differentiation.
Google DeepMind Researcher: LLMs Can Never Achieve Consciousness
A Google DeepMind researcher has publicly argued that large language models, by their algorithmic nature, can never become conscious, regardless of scale or time. This stance challenges a core speculative narrative in AI discourse.
Paper Proposes 'Artificial Scientist' as New AGI Definition
A new paper defines AGI as an 'artificial scientist'—a system that adapts as generally as a human scientist under computational limits. This reframes the goal from passing benchmarks to autonomous planning, causal learning, and exploration.
GeoAgentBench: New Dynamic Benchmark Tests LLM Agents on 117 GIS Tools
A new benchmark, GeoAgentBench, evaluates LLM-based GIS agents in a dynamic sandbox with 117 tools. It introduces a novel Plan-and-React agent architecture that outperforms existing frameworks in multi-step spatial tasks.