A new AI startup, Engramme, has entered the stealth development phase with an ambitious goal: to build what it calls "Large Memory Models" (LMMs). Founded by neuroscientist and AI researcher Gabriel Kreiman, the company's stated mission is to create AI systems designed to connect to a user's entire digital footprint—emails, messages, documents, calendar events, and call logs—and automatically surface the most relevant context for any given moment.
The core premise, as shared by Kreiman, is augmentation over replacement. Instead of building another chatbot or code assistant, Engramme is focusing on creating a persistent, personal memory layer for AI. The system would theoretically operate in the background, learning the patterns of a user's digital life and recalling necessary information proactively.
What Engramme Is Proposing
Based on the initial announcement, Engramme's LMM concept involves:
- Continuous Indexing: Connecting to and indexing data from a user's various digital services (e.g., Gmail, Slack, Google Calendar, Notion).
- Proactive Recall: The model would attempt to understand the user's current context—such as opening an email, joining a video call, or starting a new document—and surface relevant past information without requiring a search query or explicit prompt.
- User-Controlled Data: The founding philosophy emphasizes that the data belongs to the user, with the AI acting as a recall tool "at the right moment."
This approach contrasts with current retrieval-augmented generation (RAG) systems, which typically require a user to formulate a question. Engramme's vision is for the AI to anticipate the need.
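The distinction can be made concrete with a toy sketch. In classic RAG, the user supplies an explicit query; in the proactive model Engramme describes, the user's current activity itself acts as the query. Everything below is illustrative only: the names (`MemoryStore`, `ContextEvent`), the word-overlap scorer, and the sample data are assumptions for the sketch, not Engramme's actual design, which has not been published.

```python
from dataclasses import dataclass

@dataclass
class ContextEvent:
    """A signal from the user's current activity, e.g. an email being opened."""
    kind: str   # e.g. "email_opened", "call_joined", "doc_created"
    text: str   # visible content of the activity

class MemoryStore:
    """Hypothetical personal memory index over snippets from emails, docs, etc."""

    def __init__(self) -> None:
        self.items: list[str] = []

    def add(self, snippet: str) -> None:
        self.items.append(snippet)

    def _score(self, query: str, item: str) -> float:
        # Toy relevance: word overlap. A real system would use embeddings.
        q, i = set(query.lower().split()), set(item.lower().split())
        return len(q & i) / (len(q) or 1)

    def search(self, query: str, k: int = 1) -> list[str]:
        """Query-driven retrieval: the user asks, the system answers (classic RAG)."""
        return sorted(self.items, key=lambda it: self._score(query, it), reverse=True)[:k]

    def recall_for(self, event: ContextEvent, k: int = 1) -> list[str]:
        """Proactive recall: the event itself is the query; no user prompt needed."""
        return self.search(event.text, k)

store = MemoryStore()
store.add("Q3 budget spreadsheet shared by Dana on March 4")
store.add("Flight confirmation for Berlin conference, May 12")

# Classic RAG: the user must formulate a question.
rag_result = store.search("which Q3 budget spreadsheet did Dana share?")

# Proactive recall: the user merely opens an email; relevant memory surfaces.
event = ContextEvent("email_opened", "Re: Berlin conference agenda for May")
proactive_result = store.recall_for(event)
```

The key design difference is the entry point: `search` waits for a prompt, while `recall_for` is driven by an ambient context event, which is where the hard relevance problems discussed below arise.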
The Technical and Philosophical Foundation
Founder Gabriel Kreiman is a professor at Harvard Medical School and Boston Children's Hospital, where his lab studies the neural mechanisms of vision, memory, and cognition. His academic work focuses on how the brain recognizes objects, forms memories, and makes predictions. This neuroscience background directly informs Engramme's thesis: that the most valuable AI application may not be generating new content, but efficiently retrieving and recontextualizing existing personal information—a core function of biological memory.
The term "Large Memory Model" itself is a deliberate parallel to Large Language Models (LLMs). While LLMs are trained on vast, static public corpora to generate language, an LMM would be trained or fine-tuned on a continuous, private stream of personal data to optimize for relevance and recall.
The Immense Challenges Ahead
The vision, while compelling, faces significant technical, product, and privacy hurdles:
- Privacy & Security: Building a system that has access to a person's entire digital life is the ultimate privacy challenge. Engramme will need a flawless security model and likely an entirely on-device or strongly encrypted architecture to gain user trust.
- Relevance Engine: Determining what is "relevant" is an extraordinarily difficult AI problem. An error-prone system that surfaces irrelevant or distracting information would be worse than no system at all.
- Data Integration: Creating secure, reliable connectors to the myriad potential data sources (each with its own API and rate limits) is a massive engineering undertaking.
- Market Position: The space for AI-powered personal organization is becoming crowded. Engramme will need to clearly differentiate its "proactive memory" approach from existing AI assistants, note-taking apps with AI features (like Mem, Notion AI), and enterprise copilots that are adding personal context features.
gentic.news Analysis
Engramme's announcement taps into a growing but fraught trend in AI: the push for highly personalized, context-aware agents. This follows numerous efforts by larger companies to integrate personal data into AI. Microsoft's Copilot is increasingly context-aware within the Microsoft 365 ecosystem, and Google's Gemini models are deeply integrated with Google Workspace data. However, these are largely platform-locked. Engramme's bet appears to be on creating a platform-agnostic, user-centric memory layer, a technically ambitious and risky position.
This development aligns with a key theme we identified in our 2025 year-end review: The Shift from Generation to Orchestration. As LLM generation becomes commoditized, the next battleground is in systems that can reliably reason over and act upon personal and real-time data. Engramme's LMM concept is a pure expression of this trend.
Gabriel Kreiman's neuroscience pedigree is a significant asset, suggesting the product will be informed by fundamental research on how memory actually works, rather than just engineering convenience. However, the gap between understanding hippocampal function and building a reliable, scalable digital product is vast. The most immediate competitors may not be other startups, but the personal context features being baked into existing operating systems and productivity suites by Apple, Microsoft, and Google. For Engramme to succeed, it must execute on privacy and relevance at a level these giants have so far been unable or unwilling to achieve.
Frequently Asked Questions
What is a Large Memory Model (LMM)?
As proposed by Engramme, a Large Memory Model (LMM) is an AI system designed to continuously index a user's personal digital data (emails, messages, documents) and proactively surface relevant information based on the user's current context, aiming to function as an augmented external memory.
Who is the founder of Engramme?
Engramme was founded by Gabriel Kreiman, a neuroscientist and professor at Harvard Medical School and Boston Children's Hospital. His research focuses on the neural basis of vision, memory, and cognition, which directly informs the company's approach to building AI that mimics human memory functions.
How is this different from Google Assistant or Microsoft Copilot?
While existing assistants can access some personal data when explicitly asked, Engramme's proposed LMM is designed for proactive, automatic recall. The goal is to surface needed information without a prompt, by learning the patterns of a user's digital life. It also aims to be platform-agnostic, pulling context from across many services, not just a single ecosystem like Google Workspace or Microsoft 365.
What are the biggest challenges for Engramme?
The primary challenges are user privacy and security (requiring a potentially on-device architecture), building a reliable relevance engine that doesn't misfire, the engineering complexity of integrating with numerous data sources, and competing against context features being built into major platforms by tech giants.