The AI Policy Gap: Why Governments Are Struggling to Keep Pace with Rapid Technological Change

AI expert Ethan Mollick warns that rapid AI advancements combined with knowledge gaps and uncertain futures are leading to reactive, scattered policy responses rather than coherent governance frameworks.

In a recent social media post, Wharton professor and AI researcher Ethan Mollick highlighted a critical challenge facing policymakers worldwide: the growing disconnect between the breakneck pace of artificial intelligence development and society's ability to govern it effectively. Mollick's observation that we should "expect mostly reactive, ad hoc & scattered policy responses" points to a fundamental structural problem in how we approach AI governance.

The Perfect Storm of AI Governance Challenges

Mollick identifies five key factors creating this governance crisis:

1. Exponential Improvement Trajectory
AI capabilities aren't improving linearly; they're advancing at an accelerating rate that defies traditional forecasting models. The jump from GPT-3 to GPT-4 demonstrated capabilities that few predicted, and subsequent models have continued the pattern. This creates what researchers call a "moving target" problem for regulators: by the time policies are drafted, debated, and implemented, the technology has already evolved beyond the scope of the original concerns (the sketch after this list makes the forecasting gap concrete).

2. The Knowledge Deficit
There exists a profound asymmetry between what AI developers know about their systems' capabilities and what policymakers understand. This isn't merely a technical knowledge gap—it's a structural information problem. AI companies often treat their most advanced capabilities as proprietary secrets, while policymakers lack the technical staff and testing infrastructure to independently evaluate claims about AI safety, capabilities, and limitations.

3. Radical Uncertainty About Future Impacts
Unlike previous technological revolutions, where the general trajectory was somewhat predictable, AI presents what economists call "Knightian uncertainty": risk that cannot be meaningfully quantified because the space of possible outcomes is itself unknown. Will AI create mass unemployment or unprecedented productivity gains? Will it concentrate power or democratize capabilities? The lack of consensus on even these basic questions makes coherent policy planning exceptionally difficult.

4. Private Sector Control Over Guardrails
As Mollick notes, "guardrails are decided by AI labs." This represents a fundamental shift in technological governance. Historically, safety standards for transformative technologies—from automobiles to pharmaceuticals—were developed through public processes involving government agencies, academic experts, and industry stakeholders. With AI, the most important safety decisions are being made internally by private companies, often with minimal transparency or public accountability.

5. Pervasive Impact Across Society
AI isn't a niche technology affecting specific sectors—it touches everything from healthcare and education to national security and creative industries. This breadth means that AI policy can't be siloed within traditional regulatory frameworks. It requires coordination across dozens of agencies and jurisdictions, creating bureaucratic complexity that slows response times.
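
To make the first factor concrete, here is a minimal Python sketch of why straight-line forecasts systematically lag an exponentially improving technology. The doubling period and capability units are hypothetical assumptions chosen for illustration, not measured AI benchmarks.

# Illustrative sketch: why linear forecasts lag exponential capability growth.
# All numbers are hypothetical assumptions for illustration, not measured benchmarks.

DOUBLING_MONTHS = 12  # assumed capability doubling period

def exponential_capability(months: float, start: float = 1.0) -> float:
    """Capability that doubles every DOUBLING_MONTHS."""
    return start * 2 ** (months / DOUBLING_MONTHS)

def linear_forecast(months: float, start: float = 1.0) -> float:
    """A straight-line extrapolation from the first year's observed growth."""
    growth_per_month = (exponential_capability(12, start) - start) / 12
    return start + growth_per_month * months

# Compare the straight-line forecast with actual exponential growth.
for horizon in (12, 24, 36, 48):
    actual = exponential_capability(horizon)
    forecast = linear_forecast(horizon)
    print(f"month {horizon:2d}: actual {actual:5.1f}x, "
          f"forecast {forecast:4.1f}x, gap {actual / forecast:4.2f}x")

Under these toy assumptions, a forecast calibrated to the first year's growth undershoots actual capability by 25% at month 24 and by 50% at month 36. That compounding gap is the "moving target" problem in miniature: any rule scoped to the forecast ends up regulating a weaker system than the one that eventually ships.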

The Consequences of Reactive Governance

The combination of these factors creates what governance scholars call a "wicked problem"—one that's difficult to define, has multiple interdependencies, and resists conventional solutions. The result, as Mollick predicts, is policy that's:

Reactive: Governments respond to crises rather than anticipating them. We see this pattern in the EU's scramble to update the AI Act after ChatGPT's release, and in various countries' belated attempts to regulate deepfakes after they had already influenced elections.

Ad Hoc: Instead of comprehensive frameworks, we get piecemeal regulations addressing specific symptoms rather than underlying causes. One agency regulates AI in hiring, another in healthcare, with little coordination between them.

Scattered: Different jurisdictions adopt wildly different approaches with minimal international coordination. The EU's risk-based approach contrasts sharply with the U.S.'s sector-specific guidance, while China's focus on ideological control creates yet another model.

Case Studies in Policy Lag

Several recent developments illustrate Mollick's thesis:

The Generative AI Surprise: When ChatGPT launched in November 2022, it caught nearly every regulator off guard. Existing AI governance frameworks, including the EU's then-in-progress AI Act, were designed primarily around traditional machine learning systems, not generative models with emergent capabilities. The result was a frantic rewriting of regulations that many experts believe still doesn't adequately address the technology's unique risks.

The Frontier Model Dilemma: Leading AI labs are now developing what they term "frontier models," systems at the cutting edge of capability. These models present novel safety challenges, from potential misuse to unpredictable emergent behaviors. Yet there is no agreed-upon framework for evaluating or governing them before deployment. The voluntary commitments the White House secured from leading labs in 2023 represent progress, but they lack enforcement mechanisms and don't bind new market entrants.

International Coordination Gaps: While UNESCO and the OECD have developed AI principles, and the UN has established an AI advisory body, binding international agreements remain elusive. The Bletchley Declaration from the UK's AI Safety Summit in November 2023 was a symbolic step forward, but concrete governance mechanisms are still lacking.

Toward More Proactive AI Governance

Breaking the cycle of reactive policymaking requires addressing the root causes Mollick identifies:

Building Institutional Capacity: Governments need dedicated AI expertise within regulatory agencies. The UK's establishment of an AI Safety Institute represents one model, but similar capacity is needed across all sectors AI affects.

Improving Transparency: Mechanisms for independent evaluation of AI systems are crucial. Some researchers advocate for "AI FDA" models where systems undergo third-party safety testing before deployment in high-risk contexts.

Adaptive Regulation: Rather than attempting to write comprehensive rules for a rapidly evolving technology, regulators might adopt more flexible approaches like standards-based regulation or sandbox environments where innovations can be tested under supervision.

International Cooperation: Given AI's global nature, fragmented national approaches create compliance headaches for companies and enforcement challenges for regulators. Enhanced cooperation, whether through existing forums like the Global Partnership on AI (GPAI) or through new multilateral initiatives, is essential.

Public Participation: AI governance has been dominated by technical experts and industry voices. Broader societal dialogue about AI's values and priorities could help ground policy in democratic principles rather than purely technical considerations.

The Path Forward

Mollick's warning comes at a critical juncture. We're still in the early stages of AI's integration into society, which means there's still time to build more robust governance structures. However, the window for proactive policy is closing as AI becomes more embedded in critical infrastructure.

The challenge isn't merely technical or regulatory—it's fundamentally about how societies make collective decisions about transformative technologies. Do we continue with the current pattern of reacting to each new AI development after it causes problems? Or can we develop anticipatory governance that guides development toward beneficial outcomes while mitigating risks?

The answer will determine not just how we govern AI, but what kind of future AI helps create.

Source: Ethan Mollick (@emollick) on social media, May 2024

AI Analysis

Mollick's observation captures a fundamental structural challenge in technology governance that extends beyond AI to other rapidly advancing fields like biotechnology and quantum computing. The core insight is that our traditional governance mechanisms—built around predictable technological trajectories, clear expertise boundaries, and manageable timeframes for policy development—are ill-suited for technologies that evolve exponentially, have opaque capabilities, and affect every sector simultaneously.

What makes this analysis particularly significant is its timing. We're at an inflection point where early voluntary frameworks are proving inadequate, but comprehensive regulation remains elusive. The gap between technological capability and governance capacity creates what risk analysts call an "overshoot" scenario, where technology advances faster than our ability to understand or control its impacts. This isn't merely an academic concern; it has practical implications for everything from market stability to national security.

The most troubling implication is that reactive governance tends to favor incumbent players who can navigate complex regulatory landscapes, potentially stifling innovation while failing to address systemic risks. It also creates regulatory arbitrage opportunities, where companies deploy technologies in jurisdictions with the weakest oversight. Ultimately, the pattern Mollick identifies suggests we may be missing our best opportunity to shape AI's development trajectory, settling instead for managing its consequences after the fact.