gentic.news — AI News Intelligence Platform

UK Policy

22 articles about UK policy in AI news

Hassabis: UK Talent, Less Competition Key to DeepMind's London Base

Demis Hassabis stated DeepMind remained in London because the UK offered world-class AI talent with less intense competition for hiring than Silicon Valley. This strategic choice highlights a key factor in the early AI talent wars.

75% relevant

Nscale's $2 Billion Bet: How a UK AI Infrastructure Startup Became Europe's New Tech Titan

UK-based AI infrastructure company Nscale has secured a massive $2 billion Series C round, valuing it at $14.6 billion. The funding will accelerate global deployment of vertically integrated AI data centers, with former Meta executives Sheryl Sandberg and Nick Clegg joining the board.

75% relevant

The AI Policy Gap: Why Governments Are Struggling to Keep Pace with Rapid Technological Change

AI expert Ethan Mollick warns that rapid AI advancements combined with knowledge gaps and uncertain futures are leading to reactive, scattered policy responses rather than coherent governance frameworks.

85% relevant

OpenAI Proposes 4-Day Week, Robot Tax Amid Rising Anti-AI Violence

Following violent attacks on CEO Sam Altman, OpenAI has published a policy paper proposing a new social contract, including a four-day workweek and AI dividends, to address rising public anxiety over AI's societal impact.

95% relevant

Claude Mythos Scores 73% on Expert CTF, Completes Full 32-Step Network Attack

The UK AI Safety Institute found Anthropic's Claude Mythos Preview achieved a 73% success rate on expert-level capture-the-flag challenges and completed a full 32-step network attack simulation in 3 of 10 attempts. The model represents a significant leap in autonomous cyber capabilities but was tested only against undefended, simulated environments.

98% relevant

Anthropic Abandons Core Safety Commitment Amid Intensifying AI Race

Anthropic has quietly removed a key safety pledge from its Responsible Scaling Policy, no longer committing to pause AI training without guaranteed safety protections. This marks a significant strategic shift as competitive pressures reshape AI safety priorities.

95% relevant

AI-Generated Street View Imagery Sparks New Privacy Concerns

AI models can now generate photorealistic street views of private homes, making them publicly visible on mapping platforms. This forces a re-evaluation of privacy controls in the age of synthetic media.

85% relevant

German Media's AI 'Stupidity' Cover Sparks Debate on National Tech Pessimism

A DER SPIEGEL magazine cover asking 'How much is AI making us all stupid?' has drawn criticism for exemplifying Germany's pessimistic 'Angst'-driven narrative around technology, contrasting with calls for a more opportunity-focused discourse.

75% relevant

Stanford 2026 AI Index: Models Beat Human Baselines, U.S.-China Gap Narrows

The 423-page Stanford 2026 AI Index Report reveals frontier AI models now match or exceed human baselines on hard coding, science, and math tests. Global AI adoption has hit ~53% in just three years, while the U.S.-China capability gap shrinks.

97% relevant

Anthropic Withholds 'Mythos' AI Model Citing Unspecified Risk Concerns

Anthropic has reportedly chosen to withhold a new AI model, internally called 'Mythos', from public release. The decision is based on an internal assessment of potential risks, though specific capabilities or benchmarks were not disclosed.

89% relevant

Anthropic Signs AI Safety MOU with Australian Government, Aligning with National AI Plan

Anthropic has signed a Memorandum of Understanding with the Australian Government to collaborate on AI safety research. The partnership aims to support the implementation of Australia's National AI Plan.

85% relevant

China's First Fully Automated Humanoid Robot Factory Goes Live in Foshan, Targets 10,000+ Units Annually

China's first fully automated humanoid robot production line has launched in Foshan, capable of building one complete robot every ~30 minutes. The facility aims for over 10,000 units per year, with five more sites planned.

97% relevant

Anthropic Seeks Chemical Weapons Expert for AI Safety Team, Signaling Focus on CBRN Risks

Anthropic is hiring a Chemical, Biological, Radiological, and Nuclear (CBRN) weapons expert for its AI safety team. The role focuses on assessing and mitigating catastrophic risks from frontier AI models.

87% relevant

Von der Leyen's Nuclear Stance Exposes Europe's Deep Energy Divide

European Commission President Ursula von der Leyen, a German politician, has publicly declared nuclear energy essential for Europe's electricity supply while her own country completed its nuclear phase-out just last year. This contradiction highlights the fragmented energy policies across EU member states as Europe struggles to balance decarbonization goals with energy security.

85% relevant

AI Now Surpasses Human Experts in Technical Domains, Study Finds

New research mapping AI capabilities to human expertise reveals frontier models have already surpassed domain experts in technical and scientific benchmarks. The study forecasts AI will reach top-performer human levels by late 2027.

75% relevant

Global TV Liberation: How Open Source Collaboration Is Disrupting Streaming

An open-source project called Free-TV/IPTV has compiled free live TV channels from over 60 countries into a single M3U playlist. With 88 contributors maintaining the repository, this GitHub project offers HD streams from major platforms without subscriptions.

85% relevant

The Silent Data Harvest: Stanford Exposes How AI Giants Use Your Private Conversations

Stanford researchers reveal that all major AI companies—OpenAI, Google, Meta, Anthropic, Microsoft, and Amazon—train their models on user chat data by default, with minimal transparency, unclear opt-out mechanisms, and concerning practices around data retention and child privacy.

95% relevant

Anthropic CEO Warns of Military AI Risks: The Accountability Crisis in Autonomous Warfare

Anthropic CEO Dario Amodei raises alarms about selling unreliable AI technology for military use, warning of civilian harm and accountability gaps in concentrated drone fleets. He calls for urgent oversight conversations.

85% relevant

The AI Ethics Double Standard: Why Anthropic's Principles Cost Them While OpenAI's Didn't

Reports suggest the Department of Defense scuttled a deal with Anthropic over ethical principles, while OpenAI secured a similar agreement. This apparent contradiction raises questions about consistency in government AI procurement and the real-world cost of ethical stances.

85% relevant

Trump's AI Energy Summit: Tech Giants Pledge to Self-Generate Power Amid Grid Concerns

Former President Donald Trump is convening Amazon, Google, Meta, Microsoft, xAI, Oracle, and OpenAI at the White House to sign a 'Rate Payer Protection Pledge,' committing them to generate or purchase their own electricity for new AI data centers. The pledge signals a major shift in how tech's energy demands are addressed.

85% relevant

Sam Altman's Warning: The World Is Unprepared for What's Coming in AI

OpenAI CEO Sam Altman has issued a stark warning that the world is unprepared for the AI developments emerging from leading companies. His comments highlight the growing gap between internal industry knowledge and public readiness for transformative technologies.

85% relevant

India's AI Ambition Takes Center Stage at Global Summit with Tech Titans

India hosts the AI Impact Summit in New Delhi, gathering CEOs from OpenAI, Google, Anthropic, and Reliance to discuss AI's future. The event positions India as a critical player in global AI governance and market expansion.

75% relevant