What Happened
A recent research paper provides evidence that AI models can learn a form of "taste" or judgment about quality, not just execution. The study trained a relatively small model on academic citation graphs—essentially, records of which papers cite which others. The model was able to predict which papers would become "hits," meaning they would go on to receive significant future citations.
The finding, highlighted by researcher Ethan Mollick, suggests that signals like citations, upvotes, and shares—human behavioral data that reflects collective judgment—can teach AI systems about perceived quality. This moves beyond the typical focus on training AI for task execution (like coding or summarizing) and points toward models learning more subjective, human-like evaluative capabilities.
Context
The core idea challenges a common assumption: that AI's strength lies in pattern recognition on well-defined tasks, while human-like "taste" or qualitative judgment remains uniquely human. This research implies that by training on the outcomes of human collective judgment (like citation networks), an AI can internalize patterns of what the academic community deems valuable or influential.
The model's ability to predict hits from citation data alone is notable because it didn't require the paper's full text, author reputation, or journal prestige as inputs. It learned from the relational structure of how knowledge propagates through the citation network.
For practitioners, this points to a potentially underutilized training paradigm: using networks of human decisions and preferences as a rich signal for teaching models about quality, aesthetics, or impact in various domains beyond academia.
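To make the paradigm concrete, here is a deliberately tiny sketch of the general idea—not the paper's actual method, model, or data. Everything below is synthetic and illustrative: "papers" whose future citations grow noisily out of early citation counts, a toy "hit" label, and a hand-rolled logistic regression that learns to predict hits from structural signals alone.

```python
import math
import random

random.seed(0)

# Synthetic toy data (an assumption for illustration, not the study's dataset):
# each paper has early citation counts for years 1-3, and future citations
# that grow noisily out of that early signal.
def make_paper():
    base = random.random()                                        # latent "quality"
    early = [int(base * 10 * random.random()) for _ in range(3)]  # citations, years 1-3
    later = sum(early) * (2 + 4 * random.random())                # noisy future total
    return early, later

papers = [make_paper() for _ in range(500)]

# For illustration, label the top half of papers by future citations as "hits".
median_later = sorted(later for _, later in papers)[len(papers) // 2]
X = [[sum(early), early[-1] - early[0]] for early, _ in papers]   # total + velocity
y = [1.0 if later >= median_later else 0.0 for _, later in papers]

def sigmoid(z):
    # Numerically stable logistic function.
    if z >= 0:
        return 1.0 / (1.0 + math.exp(-z))
    ez = math.exp(z)
    return ez / (1.0 + ez)

# Tiny logistic regression trained by batch gradient descent (stdlib only).
w, b, lr = [0.0, 0.0], 0.0, 0.01
for _ in range(2000):
    gw, gb = [0.0, 0.0], 0.0
    for xi, yi in zip(X, y):
        err = sigmoid(w[0] * xi[0] + w[1] * xi[1] + b) - yi
        gw[0] += err * xi[0]
        gw[1] += err * xi[1]
        gb += err
    n = len(X)
    w = [w[0] - lr * gw[0] / n, w[1] - lr * gw[1] / n]
    b -= lr * gb / n

preds = [sigmoid(w[0] * xi[0] + w[1] * xi[1] + b) >= 0.5 for xi in X]
accuracy = sum(p == (yi == 1.0) for p, yi in zip(preds, y)) / len(y)
print(f"weight on early citations: {w[0]:.3f}, train accuracy: {accuracy:.2f}")
```

The point of the sketch is narrow: the classifier never sees a paper's content, only behavioral traces (early citation counts), and that structural signal alone carries predictive information about which papers become hits—the same shape of signal the research exploits at far larger scale.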
