Excel Agent Showdown: ChatGPT Builds Working Strategy Game with 'Smart' Enemy, Claude Creates Board, Copilot Fails

When prompted to create a working strategy game in Excel with graphics, ChatGPT built a functional game with formulas and a 'smart' enemy AI, Claude created a board but acted as game master, and Microsoft Copilot failed to produce a game.

14h ago · 2 min read · via @emollick

What Happened

AI researcher Ethan Mollick conducted an informal test of three major AI coding assistants—Claude (Anthropic), ChatGPT (OpenAI), and Microsoft Copilot—by giving them the same prompt: "make me a working strategy game in excel, it should have some form of graphics."

The results revealed significant differences in how each AI agent approached the task:

  • ChatGPT successfully built a working strategy game with formulas and implemented a "smart" enemy AI opponent
  • Claude created a game board but didn't build a complete game, instead positioning itself as a game master that would respond to player moves
  • Microsoft Copilot created only a board with no functional game mechanics

Context

This test highlights the varying capabilities of current AI coding assistants when faced with complex, multi-step creative tasks that require both programming logic and visual design elements within a constrained environment like Microsoft Excel.

Excel represents a particularly challenging platform for game development due to its spreadsheet-based architecture, requiring creative use of formulas, conditional formatting, and potentially VBA (Visual Basic for Applications) to create interactive experiences.

The fact that ChatGPT implemented a "smart" enemy suggests it went beyond basic game mechanics to include opponent AI logic, which would require more sophisticated programming than simply creating a static game board.
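No formulas from the actual test were shared, but the kind of opponent logic involved can be sketched. As a purely hypothetical illustration, a greedy chase heuristic (the sort of decision rule that can also be encoded in a spreadsheet with ABS, SIGN, MIN, and MAX) might look like this; all names and rules here are assumptions, not details from the test:

```python
# Hypothetical sketch of a "smart" enemy decision rule for a grid-based
# strategy game: step one cell toward the player along the dominant axis.
# This is illustrative only; it is not taken from the actual Excel game.

def enemy_move(enemy, player, board_size=8):
    """Return the enemy's next (x, y) position, chasing the player greedily."""
    ex, ey = enemy
    px, py = player
    dx, dy = px - ex, py - ey
    if abs(dx) >= abs(dy):
        ex += (dx > 0) - (dx < 0)  # equivalent to SIGN(dx)
    else:
        ey += (dy > 0) - (dy < 0)  # equivalent to SIGN(dy)
    # Clamp to the board, mirroring what MIN/MAX formulas would do in a sheet.
    ex = max(0, min(board_size - 1, ex))
    ey = max(0, min(board_size - 1, ey))
    return ex, ey
```

Even a rule this simple requires the game to track positions as state and recompute the enemy's move each turn, which is the "state management and decision algorithms" work that distinguishes a playable game from a static board.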

This informal comparison follows similar benchmarking efforts by researchers and developers testing AI capabilities across different domains, though this particular test appears to be more qualitative than quantitative, focusing on functional outcomes rather than standardized metrics.

AI Analysis

This informal test reveals several important technical distinctions between current AI coding assistants.

ChatGPT's ability to create a functional game with enemy AI suggests stronger capabilities in multi-step reasoning and implementation of game logic within constrained environments. The 'smart' enemy implementation implies the model understood not just how to create game mechanics, but how to program opponent behavior, a more complex task requiring state management and decision algorithms.

Claude's approach of creating a board and positioning itself as game master represents a different architectural choice, perhaps prioritizing interactive guidance over complete automation. This could reflect differences in training data or reinforcement learning preferences, where Claude may be optimized for collaborative coding rather than end-to-end solution generation.

Microsoft Copilot's failure to produce a working game is notable given its deep integration with Microsoft's ecosystem. This suggests either limitations in its current capabilities for complex creative tasks or differences in how it interprets and executes multi-step instructions compared to standalone models.

The test highlights that despite similar marketing positioning, these AI assistants have meaningful differences in their practical capabilities for non-standard programming tasks.
Original source: x.com