gentic.news — AI News Intelligence Platform

scaling laws

30 articles about scaling laws in AI news

Roman Yampolskiy: 'AGI is a Question of Cost, Not Time' as Scaling Laws Hold

AI safety researcher Roman Yampolskiy argues that achieving AGI is now a matter of computational and financial resources, not theoretical possibility, citing the continued validity of scaling laws and early signs of recursive self-improvement.

87% relevant

UniMixer: A Unified Architecture for Scaling Laws in Recommendation Systems

A new arXiv paper introduces UniMixer, a unified scaling architecture for recommender systems. It bridges attention-based, TokenMixer-based, and factorization-machine-based methods into a single theoretical framework, aiming to improve parameter efficiency and scaling return on investment (ROI).

96% relevant

Beyond the Benchmark: New Model Separates AI Hype from True Capability

A new 'structured capabilities model' addresses a critical flaw in AI evaluation: benchmarks often confuse model size with genuine skill. By combining scaling laws with latent factor analysis, it offers the first method to extract interpretable, generalizable capabilities from LLM test results.

72% relevant

Kimi Team's 'Attention Residuals' Replace Fixed Summation with Softmax Attention, Boosts GPQA-Diamond by +7.5%

Researchers propose Attention Residuals, a content-dependent alternative to standard residual connections in Transformers. The method improves scaling laws, matches a baseline trained with 1.25x more compute, and adds under 2% inference overhead.

97% relevant
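The summary above describes replacing the Transformer's fixed residual sum with a content-dependent, softmax-weighted mix of the two branches. The paper's exact formulation is not given here, so the following is only a minimal NumPy sketch of that general idea; the branch-scoring scheme and the `w_q` projection are illustrative assumptions, not the Kimi team's method.

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def fixed_residual(x, fx):
    # Standard Transformer residual: a fixed, content-independent 1:1 sum.
    return x + fx

def attention_residual(x, fx, w_q):
    # Hypothetical content-dependent residual: score the two branches
    # (identity path and sublayer output) against a projection of the
    # input, then mix them with softmax weights instead of a fixed sum.
    branches = np.stack([x, fx], axis=0)   # (2, d)
    scores = branches @ (w_q @ x)          # (2,) similarity per branch
    weights = softmax(scores, axis=0)      # content-dependent mixing weights
    return weights[0] * x + weights[1] * fx

rng = np.random.default_rng(0)
d = 8
x, fx = rng.standard_normal(d), rng.standard_normal(d)
w_q = rng.standard_normal((d, d)) / np.sqrt(d)
y = attention_residual(x, fx, w_q)
print(y.shape)  # (8,)
```

With `w_q` set to zero the softmax weights collapse to 0.5/0.5 and the layer reduces to a scaled version of the fixed sum, which shows how the fixed residual is a special case of the content-dependent one.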

Lloyds Banking Group Details 'Atlas' ML Platform for Scaling AI in a Regulated Industry

A technical blog post details how Lloyds Banking Group rebuilt its internal Machine Learning platform, Atlas, on a cloud-native architecture to overcome scaling limits and meet stringent regulatory requirements. This is a blueprint for operationalizing AI in high-stakes, governed industries.

88% relevant

Scaling Law Plateau Not Universal: More Tokens Boost Reasoning AI Performance

Empirical evidence indicates the 'second scaling law'—performance gains from increased computation—does not fully plateau for many reasoning tasks. Benchmark results may be artificially limited by token budgets, not model capability.

85% relevant

Robotics' Scaling Breakthrough: How SONIC's 42M-Parameter Model Achieves Perfect Real-World Transfer

Researchers have demonstrated that robotics can scale like language models, with SONIC training a 42M-parameter model on 100M human motion frames. The system achieved 100% success transferring to real robots without fine-tuning, marking a paradigm shift in robotic learning.

95% relevant

Anthropic Warns Upcoming LLMs Could Cause 'Serious Damage'

Anthropic has issued a stark warning that its upcoming large language models could cause 'serious damage.' The company states there is 'no end in sight' to capability scaling and proliferation risks.

85% relevant

US Data Center Power Demand Hits 15 GW, Grid Constraints Emerge

US data center power demand reached 15 gigawatts in 2023, up from 11 GW in 2022. This rapid growth highlights a widening bottleneck: compute infrastructure is scaling faster than power delivery systems can support.

75% relevant
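The figures reported above imply the year-over-year growth rate directly. A quick check:

```python
# Year-over-year growth implied by the reported US data center
# power-demand figures: 11 GW (2022) -> 15 GW (2023).
demand_2022_gw = 11
demand_2023_gw = 15
growth = (demand_2023_gw - demand_2022_gw) / demand_2022_gw
print(f"{growth:.1%}")  # 36.4%
```

A roughly 36% annual increase in power demand is well beyond typical utility planning horizons, which is the bottleneck the article points to.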

daVinci-LLM 3B Model Matches 7B Performance, Fully Open-Sourced

The daVinci-LLM team has open-sourced a 3-billion-parameter model trained on 8 trillion tokens. Its performance matches typical 7B models, challenging the scaling-law emphasis on parameter count alone.

95% relevant
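Taking the headline figures at face value, the training run sits far past the compute-optimal token budget (roughly 20 tokens per parameter is the figure commonly cited from the Chinchilla scaling analysis), which is consistent with the summary's point that parameter count alone does not determine quality:

```python
# Tokens-per-parameter ratio for the reported daVinci-LLM run,
# versus the ~20 tokens/param often cited as compute-optimal.
params = 3e9    # 3B parameters
tokens = 8e12   # 8T training tokens
ratio = tokens / params
print(round(ratio))  # 2667 tokens per parameter
```

Training a small model more than two orders of magnitude past that optimum trades extra training compute for a cheaper-to-serve model, a common strategy when inference cost dominates.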

Gamma 31B Model Reportedly Outperforms Qwen 3.5 397B, Highlighting Efficiency Leap

A developer's social media post claims the Gamma 31B model outperforms the much larger Qwen 3.5 397B. If verified, this would represent a dramatic efficiency gain in large language model scaling.

85% relevant

QuatRoPE: New Positional Embedding Enables Linear-Scale 3D Spatial Reasoning in LLMs, Outperforming Quadratic Methods

Researchers propose QuatRoPE, a novel positional embedding method that encodes 3D object relations with linear input scaling. Paired with IGRE, it improves spatial reasoning in LLMs while preserving their original language capabilities.

79% relevant

The Two-Year AI Leap: How Model Efficiency Is Accelerating Beyond Moore's Law

A viral comparison reveals AI models achieving dramatically better results with identical parameter counts in just two years, suggesting efficiency improvements are outpacing hardware scaling. This development challenges assumptions about AI progress and has significant implications for deployment costs and capabilities.

85% relevant

Anthropic's RSP v3.0: From Hard Commitments to Adaptive Governance in AI Safety

Anthropic has released Responsible Scaling Policy 3.0, shifting from rigid safety commitments to a more flexible, adaptive framework. The update introduces risk reports, external review mechanisms, and unwinds previous requirements the company says were distorting safety efforts.

80% relevant

The Billion-Dollar Blind Spot: Why AI's Evaluation Crisis Threatens Progress

AI researcher Ethan Mollick highlights a critical imbalance: while billions fund model training, only thousands support independent benchmarking. This evaluation gap risks creating powerful but poorly understood AI systems with potentially dangerous flaws.

85% relevant

NHN Deploys 7,656-GPU AI Cluster in Seoul

NHN launched a 7,656-GPU cluster in Seoul, South Korea, for domestic enterprise AI workloads. The cluster targets inference and training, competing with Naver and Kakao.

90% relevant

Billionaire Sues Tiny Michigan Township to Force OpenAI Data Center Through

Billionaire Steven Roth's Related Digital sued Saline Township, Michigan, after it rejected a 21M sq ft OpenAI data center, seeking to force approval through an 'exclusionary zoning' claim.

60% relevant

Recursive Multi-Agent Systems Top Hugging Papers; Eywa Bridges LLMs and Scientific Models

Recursive Multi-Agent Systems leads Hugging Papers with 242 upvotes. Eywa and OneManCompany signal a move from chat-based to structural agent collaboration.

89% relevant

GPT-5.5 Pro Leapfrogs on Epoch Benchmark; Base Model Beats Prior Pro

A tweet from @kimmonismus reports significant Epoch benchmark gains for GPT-5.5 Pro, and notes that the non-Pro GPT-5.5 surpasses GPT-5.4 Pro, suggesting major efficiency improvements at OpenAI.

99% relevant

Talkie: Vintage LLM Trained on 260B Pre-1931 English Tokens

Talkie is a new 'vintage language model' trained on 260 billion tokens of historical English text from before 1931, developed by a team including Alec Radford, co-author of the original GPT paper. It offers a unique linguistic artifact for NLP research.

85% relevant

Google Hits 75% AI-Generated Code, Up From 50% in Fall 2025

Google reports 75% of all new code is now AI-generated and engineer-approved, a sharp increase from 50% last fall. This indicates a massive, accelerating shift in software development practices at the tech giant.

85% relevant

Anthropic Hiring Data Center Leasing Principals in Europe & Australia

Anthropic is actively hiring for data center leasing roles in Europe and Australia, revealing a strategic push to build out its own compute infrastructure as it scales its AI models.

100% relevant

Anthropic CEO Dario Amodei: China Will Match Mythos AI Within a Year

Anthropic CEO Dario Amodei stated China will replicate the capabilities of Anthropic's advanced 'Mythos' AI project within 12 months. He also sees no near-term slowdown in AI progress.

85% relevant

Paper Proposes 'Artificial Scientist' as New AGI Definition

A new paper defines AGI as an 'artificial scientist'—a system that adapts as generally as a human scientist under computational limits. This reframes the goal from passing benchmarks to autonomous planning, causal learning, and exploration.

85% relevant

Ethan Mollick: AI Agent Discontinuity in 2026 Resets Work Impact Studies

Ethan Mollick states that the rise of practical, agentic AI systems in 2026 created a genuine discontinuity in AI ability, invalidating earlier studies on AI's work impact that were based solely on chatbot capabilities.

85% relevant

Canada's AI Compute Gap: Google Cloud Montreal Offers 2017-Era Chips

A technical developer's attempt to rent modern AI compute in Canada revealed a stark infrastructure gap, with major providers offering chips as old as 2017, undermining national AI ambitions.

85% relevant

Claude Mythos Scores 73% on Expert CTF, Completes Full 32-Step Network Attack

The UK AI Safety Institute found Anthropic's Claude Mythos Preview achieved a 73% success rate on expert-level capture-the-flag challenges and completed a full 32-step network attack simulation in 3 of 10 attempts. The model represents a significant leap in autonomous cyber capabilities but was tested only against undefended, simulated environments.

98% relevant

OpenAI Forecasts $121B in AI Hardware Costs for 2028

OpenAI is forecasting its own AI research hardware costs will reach $121 billion in 2028, according to a WSJ report. This figure highlights the extreme capital intensity required to compete at the frontier of AI.

85% relevant

MiniMax M2.7 Model Deploys on NVIDIA NIM Endpoints with OpenClaw Support

Chinese AI firm MiniMax has made its M2.7 model available through NVIDIA's GPU-accelerated NIM endpoints. This deployment includes support for the OpenClaw and NemoClaw frameworks, integrating it into a major AI development ecosystem.

93% relevant

xAI's Grok 4.2 at 0.5T Params, Colossus 2 Training Models up to 10T

A tweet from AI researcher Rohan Paul states xAI's current Grok 4.2 model uses 0.5 trillion parameters. In parallel, the Colossus 2 project is training a suite of seven models ranging from 1 trillion to 10 trillion parameters.

85% relevant