Ethan Mollick's 'AI Weirdness Axiom': Why Treating AI Like Standard IT Products Reduces Reliability
In a recent social media post, Wharton professor and AI researcher Ethan Mollick articulated a core principle about implementing artificial intelligence systems:
"Axiom: The form of AI that we ended up with is deeply weird in ways that we don't fully get. Attempts to pretend AI is less weird & apply it like a standard IT product will inevitably result in less useful & far less reliable AI implementations than those that embrace weirdness."
What Mollick Is Arguing
Mollick's statement isn't about a specific technical breakthrough or product launch, but rather a fundamental observation about how organizations should approach AI deployment. His argument contains two key components:
- AI is fundamentally different from traditional software in ways we don't yet fully understand
- Implementation approaches matter—treating AI like conventional IT leads to worse outcomes
The Context of 'AI Weirdness'
Mollick's observation builds on several documented characteristics of modern AI systems, particularly large language models:
- Non-deterministic behavior: Unlike traditional software, where the same input always produces the same output, AI systems can return different results for identical prompts (for example, because output tokens are sampled from a probability distribution)
- Emergent capabilities: Abilities that weren't explicitly programmed or trained for can appear at certain scale thresholds
- Unpredictable failure modes: AI systems can fail in ways that don't follow traditional software bug patterns
- Human-like but not human: Systems that appear to understand but operate on fundamentally different principles
Practical Implications for Implementation
Mollick suggests that organizations face a choice in how they implement AI:
Standard IT Approach (which he argues against):
- Treat AI as just another software component
- Apply traditional testing and validation methods
- Expect predictable, repeatable behavior
- Minimize or hide the system's unpredictable aspects
'Embrace Weirdness' Approach:
- Acknowledge and design for AI's unpredictable nature
- Implement different testing and monitoring strategies
- Build human oversight into the workflow
- Create systems that can handle unexpected outputs gracefully
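The last point, handling unexpected outputs gracefully, often takes the form of a validate-retry-fallback wrapper around every model call. The sketch below is one hypothetical shape for that pattern; the model stub, the expected JSON schema, the retry count, and the fallback value are all illustrative assumptions, not a prescribed API:

```python
import json

def call_model(prompt):
    """Stand-in for a real model call; a deployed system would
    hit an inference API here and could get back anything."""
    return '{"sentiment": "positive"}'

def classify_with_fallback(prompt, retries=2, fallback=None):
    """Call the model, validate its output against the expected
    schema, retry on malformed responses, and degrade to a safe
    default instead of crashing."""
    if fallback is None:
        fallback = {"sentiment": "unknown"}
    for _ in range(retries + 1):
        raw = call_model(prompt)
        try:
            parsed = json.loads(raw)
        except json.JSONDecodeError:
            continue  # malformed output: try again
        if parsed.get("sentiment") in {"positive", "negative", "neutral"}:
            return parsed
        # Valid JSON but an unexpected value: also retry.
    # Every attempt produced unexpected output: return the safe
    # default and, in a real system, queue the case for human review.
    return fallback

print(classify_with_fallback("Review: great product!"))
```

The design choice worth noting is that failure is treated as a normal, expected branch of the control flow rather than an exception path, which is the opposite of how most traditional IT integrations are written.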
Why This Matters for Practitioners
Mollick's axiom has direct implications for AI engineers and technical leaders:
- Testing strategies need to evolve beyond unit tests to include probabilistic testing and continuous monitoring
- System design should incorporate fallback mechanisms and human review loops
- User expectations must be managed differently than with traditional software
- Reliability metrics may need to be redefined for AI systems
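The shift from exact-match unit tests to probabilistic testing can be sketched briefly. Instead of asserting that a single call returns the right answer (which would flake on a non-deterministic system), the test runs many trials and asserts that the pass *rate* clears a threshold. The model stub, trial count, and 85% threshold below are illustrative assumptions:

```python
import random

def model_answer(question, rng):
    """Stub for a nondeterministic model: correct ~95% of the time."""
    return "4" if rng.random() < 0.95 else "5"

def probabilistic_test(fn, question, expected,
                       trials=200, min_pass_rate=0.85, seed=0):
    """Run many trials and assert on the aggregate pass rate,
    not on any single response."""
    rng = random.Random(seed)  # seeded so the test itself is repeatable
    passes = sum(fn(question, rng) == expected for _ in range(trials))
    rate = passes / trials
    assert rate >= min_pass_rate, (
        f"pass rate {rate:.0%} below threshold {min_pass_rate:.0%}")
    return rate

rate = probabilistic_test(model_answer, "What is 2 + 2?", "4")
print(f"pass rate: {rate:.0%}")
```

Choosing the threshold is itself a product decision, which is part of Mollick's point: "reliable" stops being a binary property and becomes a measured rate that the organization has to decide is acceptable.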
The Broader Conversation
Mollick's post taps into an ongoing discussion in the AI community about how to responsibly deploy systems we don't fully understand. Other researchers have noted similar challenges:
- The difficulty of creating comprehensive test suites for unpredictable systems
- The tension between wanting reliable software and working with inherently probabilistic models
- The organizational challenge of getting traditional IT departments to adapt to AI's different requirements
Mollick's position suggests that attempts to 'tame' AI into behaving like conventional software are not just futile but counterproductive—they result in systems that are both less capable and less reliable than those designed with AI's unique characteristics in mind.


