Mirendil: Ex-Anthropic Scientists Launch $1B Venture to Build AI That Thinks Like a Scientist
A significant new player has entered the arena of AI-driven scientific discovery. According to a report by The Information, a team of former researchers from AI safety lab Anthropic is raising $175 million at a staggering $1 billion valuation for a new startup named Mirendil. The company's ambitious mission is to develop advanced artificial intelligence systems capable of "long-term scientific reasoning" to accelerate breakthroughs in complex fields like biology and materials science.
This move signals a major strategic shift within the AI industry, where capital and talent are increasingly flowing from foundational model development toward highly specialized, application-driven ventures aimed at solving humanity's most pressing challenges.
The Team and the Vision
Mirendil is being led by Behnam Neyshabur and Harsh Mehta, both former research scientists at Anthropic, a company renowned for its work on AI safety and the Claude language models. Their departure to tackle scientific discovery highlights a growing belief among top AI talent that the technology is now mature enough to move beyond chatbots and content generation into the rigorous domain of hypothesis-driven research.
While specific technical details remain under wraps, the company's stated goal is to build AI systems that can engage in "long-term scientific reasoning." This suggests a focus on moving beyond today's AI, which excels at pattern recognition and short-horizon tasks, toward systems that can plan, reason over extended timelines, and navigate the iterative, often failure-rich process of scientific experimentation. The initial target domains—biology and materials science—are fields ripe for acceleration, where discovering a new protein structure or a novel battery material can have transformative global impacts.
Part of a Broader Industry Wave
Mirendil's launch is not an isolated event. It is a high-profile manifestation of a trend that has been building for over a year. The concept of using AI as a tool for, or even as an autonomous agent of, scientific discovery has become a central theme for industry leaders.
Most notably, OpenAI CEO Sam Altman has publicly stated that a key goal for his company is to develop "autonomous AI researchers" by 2028. This vision imagines AI systems that can independently formulate research questions, design and run computational or even physical experiments, analyze results, and propose new directions—essentially automating the core loop of scientific inquiry.
Other initiatives, like Google DeepMind's work on AlphaFold for protein folding, its GNoME project for materials discovery, and the rise of "self-driving labs," have already demonstrated the value of AI in targeted scientific domains. Mirendil appears to be aiming for the next level: creating a general-purpose AI scientist that can be applied across multiple disciplines, not just a single, pre-defined task.
The $1 Billion Question: Why Now?
The reported $1 billion valuation for a company still in stealth is a powerful signal of investor confidence. It reflects a confluence of factors making AI-for-science a compelling bet:
- Technical Readiness: Large language models (LLMs) and other AI architectures have demonstrated remarkable reasoning capabilities, code generation skills, and the ability to synthesize vast scientific corpora. The foundational pieces for a research assistant—or researcher—are in place.
- Economic Imperative: The cost and time required for traditional scientific R&D, especially in wet-lab fields like drug discovery, are astronomical. Any technology that can significantly compress this timeline represents a potential multi-trillion-dollar opportunity.
- Grand Challenges: From climate change and sustainable energy to aging populations and pandemic preparedness, society faces problems that demand faster scientific innovation. AI is seen as a potential force multiplier for the global research community.
Implications and Challenges
The rise of ventures like Mirendil carries profound implications. Success could lead to an unprecedented acceleration in the pace of discovery, potentially delivering new medicines, clean energy solutions, and advanced materials within years instead of decades. It could democratize high-level research, giving smaller labs and institutions access to AI "co-pilots" that rival the cognitive firepower of large, well-funded teams.
However, the path is fraught with technical and philosophical challenges. Achieving true "long-term reasoning" requires overcoming well-known AI limitations in planning, maintaining logical consistency over long chains of thought, and integrating reliable knowledge bases. There are also critical questions about bias in training data, the reproducibility of AI-driven discoveries, and how to properly credit AI contributions in scientific work.
Furthermore, the concentration of such powerful technology in private, venture-backed companies raises questions about access, equity, and the alignment of research goals with public good versus shareholder return.
The Road Ahead
As Mirendil emerges from stealth, the AI and scientific communities will be watching closely. Its progress will be a key benchmark for the "AI for science" movement. Can a startup, even one flush with capital and talent, crack the code of autonomous scientific discovery? Or will this require the sustained, large-scale efforts of tech giants and governments?
The company's roots in Anthropic, with its strong culture of AI safety, also suggest that Mirendil will likely prioritize building reliable and interpretable systems—a crucial consideration when the outputs could inform real-world medical or engineering applications.
In launching Mirendil, Neyshabur, Mehta, and their team are placing a bold bet that the next frontier for artificial intelligence is not just in conversing or creating, but in comprehending and advancing the fundamental laws of nature. Their journey will test whether AI can truly learn to think like a scientist, and in doing so, redefine what is possible in human knowledge itself.
Source: Report via The Information, originally highlighted by @kimmonismus on X.