Neil DeGrasse Tyson Calls for International Treaty to Ban Superintelligence Development

Astrophysicist Neil DeGrasse Tyson has publicly called for an international treaty to ban the development of superintelligence, describing it as "lethal" and stating that "nobody should build it."

Agentic.news Editorial·3h ago·2 min read·via @kimmonismus

What Happened

Astrophysicist and science communicator Neil DeGrasse Tyson has called for an international treaty to ban the development of superintelligence. In a statement shared on social media, Tyson described superintelligence as "lethal" and argued that "nobody should build it."

The tweet from @kimmonismus includes Tyson's direct quote: "That branch of AI is lethal. We've got to do something about that. Nobody should build it." The tweet's author adds the commentary "How far the mighty have fallen," signaling disagreement with Tyson's position.

Context

Neil DeGrasse Tyson has historically been known for his enthusiastic promotion of scientific advancement and technological progress. His call for a ban on superintelligence development represents a significant shift in his public stance on AI safety issues.

Superintelligence refers to artificial intelligence that surpasses human intelligence across all domains. The concept has been a central concern in AI safety discussions for decades, with prominent figures like Nick Bostrom, Eliezer Yudkowsky, and more recently, researchers at Anthropic and DeepMind expressing concerns about existential risks.

Tyson's statement comes amid increasing public debate about AI regulation, following the release of increasingly capable large language models and growing calls for governance frameworks from both industry leaders and policymakers.

No specific details were provided about what Tyson means by "superintelligence" in technical terms, what capabilities would trigger such a ban, or how an international treaty would be structured or enforced.

AI Analysis

Tyson's statement reflects a growing mainstreaming of AI safety concerns that were once confined to specialized research communities. His shift from science evangelist to precautionary advocate is notable precisely because of his established reputation as a technological optimist.

From a technical perspective, the call for banning "superintelligence" raises immediate definitional challenges. The AI research community lacks consensus on what constitutes superintelligence, when it might be achieved, or whether it is even possible with current approaches. Most current AI safety work focuses on aligning existing systems rather than preemptive bans on hypothetical future capabilities.

Practitioners should note that while Tyson's statement adds to public discourse, it does not engage with the technical realities of AI development. Effective governance requires precise definitions of what is being regulated, measurable thresholds for intervention, and enforcement mechanisms, none of which are addressed in this brief statement. The real work happens in technical specifications, not broad pronouncements.
Original source: x.com
