
Ethan Mollick: No Major GenAI Work Impact in Large Firms During 2025

Wharton professor Ethan Mollick argues that studies showing no generative AI productivity impact in 2025 are misleading, as adoption was experimental and agentic tools were unavailable. The real impact will be measurable in 2027.

Gala Smith & AI Research Desk · 5 min read · AI-Generated
In a recent social media post, Wharton professor and leading AI researcher Ethan Mollick made a stark assessment of generative AI's real-world impact to date: "There were likely no major work impacts of GenAI in any large firm throughout 2025."

This statement cuts against the grain of relentless hype surrounding AI's transformative potential. Mollick provides three clear reasons for this lack of measurable impact:

  1. No Agentic Tools: Truly autonomous AI agents that can execute multi-step workflows were not yet widely deployed or mature enough for enterprise use.
  2. Adoption Takes Time: Integrating new technology into complex, legacy corporate systems and workflows is a slow process measured in years, not quarters.
  3. The Experimentation Phase: Throughout 2025, organizations were primarily in a discovery and piloting mode, testing use cases rather than deploying AI at scale for core operations.

Mollick's key conclusion is that studies measuring AI's productivity impact in 2025 are largely irrelevant for forecasting the future. "Studies that show no impact in 2025 don't tell us much about 2027," he writes, indicating the lag between technological capability and organizational absorption.

The Current Inflection Point

While dismissing 2025's impact, Mollick suggests the situation is now changing. The implication is that the foundational experimentation of the past two years is setting the stage for the integrated, agentic systems now beginning to emerge. The delay highlights a critical gap between AI research breakthroughs and their translation into reliable business tools.

This perspective aligns with a growing body of evidence on technology diffusion. Historically, transformative technologies like electricity or personal computers took decades to show significant productivity gains at the macroeconomic level, a phenomenon known as the "productivity paradox." Generative AI appears to be following a similar, albeit accelerated, trajectory.

What This Means for Practitioners

For AI engineers and technical leaders, Mollick's assessment is a call for realism and patience. It underscores that:

  • Deployment is the Hard Part: Building a model is one challenge; embedding it safely and effectively into a human-centric workflow is another.
  • The Toolchain is Still Immature: The lack of "agentic tools" points to a missing middleware layer—reliable platforms for planning, tool use, and verification that enterprises can trust.
  • Impact Will Be Lagged: The ROI on current AI investments may not materialize in annual reports until 2026-2027, as scaled implementations go live.
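The "missing middleware" gap named above can be made concrete. The sketch below is purely illustrative and not any real product's API: a hypothetical `AuditedToolbox` wrapper that enforces an allow-list and records an audit trail around every tool call an agent makes, the kind of verification layer risk-averse enterprises would require before trusting agentic systems.

```python
import json
import time
from typing import Any, Callable

class AuditedToolbox:
    """Hypothetical middleware sketch: every tool call an agent makes is
    checked against an allow-list and logged to an append-only audit
    trail before the result is returned."""

    def __init__(self, allowed: dict[str, Callable[..., Any]]):
        self.allowed = allowed           # tool name -> implementation
        self.audit_log: list[dict] = []  # append-only record of calls

    def call(self, name: str, **kwargs: Any) -> Any:
        if name not in self.allowed:     # policy check: unknown tools are refused
            raise PermissionError(f"tool {name!r} is not on the allow-list")
        result = self.allowed[name](**kwargs)
        self.audit_log.append({
            "ts": time.time(),
            "tool": name,
            "args": kwargs,
            "result_preview": json.dumps(result)[:200],
        })
        return result

# Usage: the agent can only reach tools the enterprise has approved.
toolbox = AuditedToolbox(allowed={"add": lambda a, b: a + b})
print(toolbox.call("add", a=2, b=3))  # 5, and one audit entry is recorded
```

Real platforms would add authentication, schema validation of arguments, and tamper-evident log storage, but the shape of the layer is the same: policy in front of the tool, evidence behind it.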

gentic.news Analysis

Mollick's sobering take provides necessary context against a backdrop of extreme optimism. This aligns with our previous reporting on the slow enterprise adoption of AI coding assistants (gentic.news, Feb 2026), which found that while developers experimented with tools like GitHub Copilot, few companies had standardized workflows or measured clear productivity lifts at the team level.

His focus on the absence of "agentic tools" is particularly salient. Throughout 2025, research labs like OpenAI, Anthropic, and Google DeepMind raced to develop reasoning and planning capabilities (see our coverage on OpenAI's "Strawberry" project, Oct 2025). However, as Mollick implies, these advanced research prototypes had not yet hardened into the robust, auditable, and secure platforms that risk-averse large firms require. The transition from research demo to enterprise-grade tool is a non-trivial engineering challenge that accounts for much of the delay.

Furthermore, this analysis connects to the broader investment trend we've tracked. Venture capital flooded into AI infrastructure and foundational models in 2024-2025, but a significant portion of that capital is now shifting toward application-layer and integration companies (gentic.news, Jan 2026). The market is recognizing that the next wave of value creation lies in bridging the last mile between powerful models and usable business processes—exactly the gap Mollick identifies.

In essence, Mollick is describing the end of generative AI's first phase (speculation and experimentation) and the start of its second (implementation and integration). The major work impacts, when they arrive, will accrue to the organizations that navigate this transition well.

Frequently Asked Questions

Why did generative AI have no major work impact in large firms in 2025?

According to Ethan Mollick, three primary factors prevented major impact: a lack of mature, autonomous "agentic" AI tools capable of multi-step work; the inherently slow process of adopting new technology into complex corporate systems; and the fact that most large firms were still in an experimental, piloting phase rather than deploying AI at scale in core business processes.

What are "agentic tools" in AI?

Agentic tools refer to AI systems that can autonomously perform complex, multi-step tasks by planning, using external tools (like APIs or software), and iterating based on results. Unlike chatbots that respond to single prompts, agentic AI can execute a workflow—for example, analyzing a dataset, creating a report, and emailing it to a list of recipients—with minimal human intervention. Their maturity and reliability are seen as a prerequisite for significant productivity gains.
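The plan/act/observe workflow described above can be sketched as a toy loop. Everything here is illustrative: the planner is a hard-coded stub standing in for a language model, and the tools (`analyze`, `write_report`) are plain Python functions invented for the example.

```python
from dataclasses import dataclass, field
from typing import Callable

def analyze(data: list[int]) -> dict:
    """Toy 'tool': compute simple statistics over a dataset."""
    return {"count": len(data), "mean": sum(data) / len(data)}

def write_report(stats: dict) -> str:
    """Toy 'tool': render the statistics as a one-line report."""
    return f"Report: {stats['count']} rows, mean={stats['mean']:.1f}"

@dataclass
class MiniAgent:
    tools: dict[str, Callable]
    memory: list = field(default_factory=list)  # observations so far

    def plan(self, goal: str) -> list[str]:
        # A real agent would ask an LLM to decompose the goal;
        # this stub returns a fixed two-step plan.
        return ["analyze", "write_report"]

    def run(self, goal: str, data: list[int]) -> str:
        result = data
        for step in self.plan(goal):           # act on each planned step
            result = self.tools[step](result)  # feed each observation forward
            self.memory.append((step, result)) # record what happened
        return result

agent = MiniAgent(tools={"analyze": analyze, "write_report": write_report})
print(agent.run("summarize the dataset", [2, 4, 6]))
# Report: 3 rows, mean=4.0
```

The hard enterprise problems Mollick alludes to live outside this loop: what happens when a step fails, how results are verified, and how the agent is prevented from taking unauthorized actions.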

When will we see the real productivity impact of generative AI?

Ethan Mollick suggests that 2027 is a more realistic timeframe to expect measurable, major work impacts from generative AI in large firms. This accounts for the typical lag between technology experimentation, integration into reliable platforms, and finally, scaled deployment within business workflows where its effects can be systematically measured.

Does this mean the hype around AI is wrong?

Not necessarily. Mollick's point is about timing and measurement, not potential. He argues that the lack of measurable impact in 2025 does not mean the technology is ineffective, but rather that its adoption cycle is longer than the popular narrative suggests. The hype may be early, not incorrect. The real test will be whether the integration and tool-building efforts of 2025-2026 lead to the tangible impacts predicted for 2027 and beyond.

AI Analysis

Mollick's assessment is a crucial reality check for the industry. It shifts the conversation from speculative potential to practical deployment timelines. The key insight for practitioners is that the bottleneck has moved from model capability to integration readiness. The most valuable work in the near term won't be in creating larger models, but in building the middleware—the agentic frameworks, evaluation suites, and security layers—that allow existing models to operate reliably within enterprise constraints.

This directly impacts technical strategy. Teams should prioritize projects that solve specific integration pain points (e.g., data grounding, audit trails, policy compliance) over chasing the latest benchmark leaderboard. The market will reward robust engineering over pure research prowess in this phase. Furthermore, Mollick's 2027 forecast creates a tangible timeline: solutions that mature and prove themselves in 2026 will be poised to capture the value of scaled adoption in 2027.
