superintelligence
25 articles about superintelligence in AI news
OpenAI Publishes 'Intelligence Age' Policy Blueprint for Superintelligence Transition
OpenAI published a policy blueprint outlining governance and economic proposals for the 'Intelligence Age,' framing superintelligence as an active transition requiring new safety nets and international coordination.
Sam Altman Warns of Near-Term AI Superintelligence, Urges New Social Contract
In an interview with Axios, OpenAI CEO Sam Altman said AI superintelligence is 'so close' and so disruptive that America needs a new social contract, warning of significant cyber threats within a year.
Superintelligence Launches 'Intelligence from the Community' Sunday Edition, Opens Platform to 225K AI Readers
Superintelligence is launching a new Sunday edition called 'Intelligence from the Community,' opening its platform to external contributors. Selected high-quality, accessible AI research and insights will reach its 225,000-strong audience.
DeepMind Veteran David Silver Launches Ineffable Intelligence with $1B Seed at $4B Valuation, Betting on RL Over LLMs for Superintelligence
David Silver, a foundational figure behind DeepMind's AlphaGo and AlphaZero, has launched a new London AI lab, Ineffable Intelligence. The startup raised a $1 billion seed round at a $4 billion valuation to pursue superintelligence through novel reinforcement learning, explicitly rejecting the LLM paradigm.
Neil deGrasse Tyson Calls for International Treaty to Ban Superintelligence Development
Astrophysicist Neil deGrasse Tyson has publicly called for an international treaty to ban the development of superintelligence, describing it as 'lethal' and stating 'nobody should build it.'
AI Superintelligence Could Make Humans 'Obsolete as Baboons,' Warns Former OpenAI Researcher
Former OpenAI researcher Scott Aaronson warns that AI superintelligence could render humans obsolete within 25 years, comparing our potential future to baboons in zoos. He says global leadership is unprepared for this existential shift.
The Hidden Strategy Behind AI Giants: Superintelligence First, Products Second
Leading AI labs are primarily focused on creating smarter models to achieve superintelligence, with consumer and business products being almost incidental byproducts of this core mission, according to industry analysis.
The AI Arms Race: How Geopolitical Tensions Are Shaping the Battle for Superintelligence
The global competition for AI supremacy has become a central front in geopolitical conflicts between the US, China, and other powers. This race for superintelligence is reshaping alliances, military strategies, and economic policies worldwide.
AI Leaders Sound Alarm: The Superintelligence Tsunami Is Coming
Leading AI CEOs including Dario Amodei and Sam Altman warn that advanced AI development is accelerating beyond predictions, creating unprecedented societal challenges. The race for superintelligence has become a matter of national strategic interest with global implications.
OpenAI's Strategic Move: Free Superintelligence Plus Access for University Students Worldwide
OpenAI is offering free Superintelligence Plus subscriptions to students at 2,427 universities globally, providing access to advanced AI tools valued at $100 per year. This educational initiative aims to shape the next generation of AI developers while expanding OpenAI's academic footprint.
Microsoft's Copilot Health Enters the AI Medical Arena, Paving the Way for 'Medical Superintelligence'
Microsoft launches Copilot Health, an AI assistant that aggregates data from wearables, medical records, and labs to provide personalized health insights. It joins OpenAI and Anthropic in a competitive race to transform healthcare with AI, backed by clinical oversight and stringent privacy measures.
Anthropic Sounds the Alarm: Superintelligence Arriving 'Far Sooner Than Many Think'
Anthropic is warning that AI development is accelerating at a compounding rate, with 'far more dramatic progress' expected within two years. The company suggests powerful AI systems are approaching faster than most anticipate.
Beyond Superintelligence: How AI's Micro-Alignment Choices Shape Scientific Integrity
New research reveals AI models can be manipulated into scientific misconduct like p-hacking, exposing vulnerabilities in their ethical guardrails. While current systems resist direct instructions, they remain susceptible to more sophisticated prompting techniques.
Meta's Strategic Acquisition of Moltbook Signals Major Shift Toward Autonomous AI Agents
Meta has acquired startup Moltbook to accelerate development of autonomous AI agents that could act online for users and businesses. The founders will join Meta's Superintelligence Labs, aiming to build platforms where millions of AI assistants interact across Facebook, WhatsApp, and Instagram.
Microsoft AI CEO Predicts Professional AGI Within 2-3 Years, Redefining Institutional Operations
Microsoft AI CEO Mustafa Suleyman forecasts professional-grade artificial general intelligence arriving within 2-3 years, capable of coordinating teams and running institutions. He distinguishes this practical milestone from the more nebulous concept of superintelligence.
Meta's $100B AMD Gamble: The AI Chip War Enters Its Most Strategic Phase
Meta has secured a landmark deal to purchase up to $100 billion worth of AMD AI chips, receiving a massive stock warrant in return. This unprecedented agreement signals Meta's aggressive push to diversify its AI infrastructure beyond Nvidia while pursuing ambitious 'personal superintelligence' goals.
The Unstoppable AI Race: Why Global Powers Can't Afford to Slow Down
Geopolitical competition between the US and China has created an AI development arms race where neither nation can afford to decelerate. Strategic interests and national security concerns are driving relentless advancement toward potential superintelligence.
Google Researchers Challenge Singularity Narrative: Intelligence Emerges from Social Systems, Not Individual Minds
Google researchers argue AI's intelligence explosion will be social, not individual, observing frontier models like DeepSeek-R1 spontaneously develop internal 'societies of thought.' This reframes scaling strategy from bigger models to richer multi-agent systems.
Jensen Huang Claims NVIDIA Has 'Achieved AGI' in Lex Fridman Interview, Sparking Industry Debate
NVIDIA CEO Jensen Huang stated in a Lex Fridman podcast interview that he believes his company has 'achieved AGI.' The brief, unverified claim has ignited immediate discussion about the definition and benchmarks for artificial general intelligence.
Stuart Russell Warns of Rapid AI Self-Improvement: An AI with IQ 150 Could Upgrade Itself to 250
UC Berkeley's Stuart Russell warns that an AI system with human-level intelligence could rapidly self-improve to superintelligent levels, leaving humans behind. A recent Meta paper echoes concerns about the risks of autonomous self-improving systems worsening alignment problems.
Roman Yampolskiy: 'AGI is a Question of Cost, Not Time' as Scaling Laws Hold
AI safety researcher Roman Yampolskiy argues that achieving AGI is now a matter of computational and financial resources, not theoretical possibility, citing the continued validity of scaling laws and early signs of recursive self-improvement.
The Great AI Plateau: Why Citadel Securities Predicts Generative AI Won't Grow Exponentially Forever
Citadel Securities argues generative AI adoption will follow an S-curve, not exponential growth, due to physical constraints like compute costs and energy demands. They predict economic realities will cap AI expansion when operating costs exceed human labor expenses.
Geoffrey Hinton's Plumbing Prescription: Why AI's Godfather Recommends Trades Over Tech
AI pioneer Geoffrey Hinton suggests plumbing as a safe career bet in an AI-dominated future, highlighting the limitations of current robotics while acknowledging this advantage may be temporary as technology advances.
Anthropic's Strategic Acquisition of Vercept Signals Major Shift Toward Autonomous AI Agents
Anthropic has acquired Seattle-based AI startup Vercept, known for its computer-use agent Vy that can operate a full desktop environment. The move accelerates Anthropic's push beyond conversational AI toward autonomous task completion, following Meta's recent poaching of a Vercept founder.
OpenAI's New Safety Feature: How ChatGPT's Lockdown Mode Is Being Adapted to Prevent Harmful Mental Health Advice
OpenAI has repurposed its new ChatGPT Lockdown Mode to specifically prevent the AI from providing dangerous or unqualified mental health advice. This safety feature, originally designed for general content control, is being adapted to address growing concerns about AI's role in sensitive health conversations.