In a case that demonstrates the democratizing potential of AI tools in medicine, a non-biologist has reportedly designed and implemented a custom mRNA cancer vaccine for his terminally ill dog using commercially available large language models (LLMs).
What Happened
Three years ago, Paul Conyngham noticed lumps on his dog Rosie. According to his account shared on social media, veterinary professionals dismissed his concerns for nearly a year before a terminal cancer diagnosis was finally made. Rather than accept the prognosis, Conyngham, who runs an AI consulting business, embarked on a DIY medical research project using AI tools.
Conyngham claims to have used ChatGPT, Gemini, and Grok to design a custom mRNA cancer vaccine for Rosie from scratch. The reported pipeline involved:
- Whole-genome sequencing at Australia's Garvan Institute
- AlphaFold 2 to model Rosie's mutated c-KIT protein
- A million-candidate ligand screen (which reportedly hit dead ends due to patent walls and regulatory timelines)
- A pivot to vaccine design when the initial approaches proved infeasible (a simplified sketch of the epitope-selection step follows this list)
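Conyngham's code and data are not public, so the following is only a minimal Python sketch of the step that links sequencing to vaccine design: given a protein sequence and the position of a tumor-specific missense mutation (both invented placeholders here, not Rosie's actual c-KIT variant), enumerate the short peptide windows spanning the mutated residue as candidate neoepitopes.

```python
# Minimal, illustrative sketch: enumerate candidate neoepitope peptides
# around a missense mutation. The sequence and mutation below are invented
# placeholders, not Rosie's actual c-KIT data.

WILD_TYPE = "MSTNPKPQRKTKRNTNRRPQDVKFPGGGQI"   # placeholder protein fragment
MUTATION_POS = 14        # 0-based index of the mutated residue (placeholder)
MUTANT_RESIDUE = "K"     # substituted amino acid (placeholder)

def candidate_neoepitopes(wild_type: str, pos: int, mutant: str, length: int = 9):
    """Return every peptide of `length` residues that spans the mutated position.

    9-mers are a common choice because MHC class I molecules typically
    present peptides of 8-11 residues.
    """
    mutated = wild_type[:pos] + mutant + wild_type[pos + 1:]
    first = max(0, pos - length + 1)
    last = min(pos, len(mutated) - length)
    return [mutated[s:s + length] for s in range(first, last + 1)]

if __name__ == "__main__":
    for peptide in candidate_neoepitopes(WILD_TYPE, MUTATION_POS, MUTANT_RESIDUE):
        print(peptide)
```

In a real pipeline, candidates like these would then be ranked with MHC-binding predictors and reviewed by domain experts before any construct is built.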
The AI-Assisted Protocol
According to Conyngham's account, the AI tools helped him:
- Architect a 7-epitope mRNA construct for the vaccine (see the construct-assembly sketch after this list)
- Write 100 pages of ethics approval documents
- Design a multimodal treatment protocol combining the vaccine with:
- A tyrosine kinase inhibitor
- A PD-1 checkpoint inhibitor
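Conyngham has not published the construct itself, so as a rough illustration of what a seven-epitope mRNA construct means in practice, here is a minimal Python sketch using placeholder epitopes: it joins the peptides with flexible linkers and back-translates the antigen into an mRNA coding sequence using one codon per amino acid. Real designs add codon optimization, UTRs, signal or targeting sequences, and manufacturing constraints that this ignores.

```python
# Illustrative only: assemble a multi-epitope antigen and back-translate it
# into an mRNA coding sequence. The epitopes below are invented placeholders.

# One commonly used codon per amino acid (RNA alphabet); real designs rely on
# full codon-optimization tools rather than a fixed table like this.
CODON = {
    "A": "GCC", "R": "CGC", "N": "AAC", "D": "GAC", "C": "UGC",
    "Q": "CAG", "E": "GAG", "G": "GGC", "H": "CAC", "I": "AUC",
    "L": "CUG", "K": "AAG", "M": "AUG", "F": "UUC", "P": "CCC",
    "S": "AGC", "T": "ACC", "W": "UGG", "Y": "UAC", "V": "GUG",
}
LINKER = "GGGGS"   # flexible glycine-serine linker between epitopes
STOP = "UGA"

def build_construct(epitopes):
    """Join epitopes with linkers, prepend a start Met, and back-translate."""
    antigen = "M" + LINKER.join(epitopes)
    coding = "".join(CODON[aa] for aa in antigen) + STOP
    return antigen, coding

if __name__ == "__main__":
    # Seven placeholder 9-mer epitopes (invented for illustration).
    epitopes = ["KVVEEINGN", "LTYQYLQNR", "FGKVVEATA", "YIDPTQLPY",
                "SLNPEYCNV", "RLIEKQISE", "MYEVQWKVV"]
    antigen, mrna = build_construct(epitopes)
    print(antigen)
    print(mrna)
```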
The reported outcome: tumors began shrinking within six weeks of treatment initiation, with Rosie showing near-normal condition after two months.
Context and Caveats
This case represents an extreme example of citizen science enabled by increasingly capable AI systems. While the results appear promising in this single-animal case, several important caveats apply:
- No peer-reviewed publication or formal clinical data is available
- The treatment was administered outside traditional regulatory pathways
- Single-animal outcomes don't establish efficacy or safety
- The exact role of AI versus human expertise remains unclear
Conyngham says he is now "convinced this can scale" and is reportedly building on this experience for future applications.
gentic.news Analysis
This case sits at the intersection of several trends we've been tracking. First, it demonstrates the continued erosion of domain expertise barriers as LLMs become more capable of synthesizing complex scientific information. This follows our coverage of similar citizen science applications in bioinformatics, though it is one of the most medically ambitious examples we've seen from a non-expert.
Second, the use of multiple AI systems (ChatGPT, Gemini, Grok) for different aspects of the project aligns with the emerging pattern of "AI ensemble" approaches we noted in our analysis of multimodal AI workflows. Practitioners are increasingly using different models for their specialized strengths rather than relying on a single system.
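Conyngham hasn't described his prompting workflow in detail, so the snippet below is only a minimal sketch of the general ensemble pattern, assuming OpenAI-compatible chat endpoints (which several vendors offer); the base URLs, model names, and environment-variable names are placeholders, not his setup.

```python
# Minimal sketch of the "AI ensemble" pattern: send the same research question
# to several chat models and collect the answers for manual cross-checking.
import os
from openai import OpenAI

PROVIDERS = [
    # (label, base_url, model, API-key env var) -- all placeholders
    ("openai", None, "gpt-4o", "OPENAI_API_KEY"),
    ("other-vendor", "https://api.example.com/v1", "example-model", "OTHER_API_KEY"),
]

QUESTION = (
    "Summarize the main design considerations for a multi-epitope mRNA "
    "vaccine construct, and list the points where expert review is essential."
)

def ask_all(question: str):
    """Query each configured provider and return a {label: answer} mapping."""
    answers = {}
    for label, base_url, model, key_env in PROVIDERS:
        client = OpenAI(api_key=os.environ[key_env], base_url=base_url)
        resp = client.chat.completions.create(
            model=model,
            messages=[{"role": "user", "content": question}],
        )
        answers[label] = resp.choices[0].message.content
    return answers

if __name__ == "__main__":
    for label, answer in ask_all(QUESTION).items():
        print(f"=== {label} ===\n{answer}\n")
```

In practice, the value of the ensemble comes from a human comparing and reconciling the answers, not from treating agreement between models as validation.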
Third, the pivot from ligand screening to mRNA vaccine design reflects the practical constraints facing DIY bio projects. While AI can generate candidate molecules, the path to actual synthesis and testing remains bottlenecked by intellectual property, regulatory requirements, and access to laboratory infrastructure. This limitation has been a consistent theme in our reporting on democratized drug discovery.
From a medical ethics perspective, this case raises familiar questions about patient safety, informed consent in veterinary medicine, and the appropriate boundaries of citizen science. However, it also suggests a potential future where AI-enabled diagnostic and treatment design tools could augment veterinary care in resource-constrained settings.
For AI practitioners, the key takeaway is the demonstration of LLMs as integrative tools for navigating complex scientific literature and drafting protocols. Generating roughly 100 pages of ethics-approval documentation with AI assistance alone represents a significant reduction in the administrative burden of medical research.
Frequently Asked Questions
Can I use ChatGPT to design treatments for my pet?
No. This case represents an extraordinary exception rather than a recommended approach. Designing medical treatments requires extensive expertise, regulatory oversight, and clinical validation. Attempting similar DIY treatments without proper training and oversight could cause serious harm. Always consult licensed veterinary professionals for animal healthcare.
How accurate are AI models for medical research?
Current LLMs can synthesize and explain existing medical literature with reasonable accuracy but cannot conduct original research or guarantee treatment efficacy. They are prone to "hallucinations" (generating plausible but incorrect information) and lack the clinical judgment of trained professionals. They should be used as research assistants, not as primary decision-makers.
What are the regulatory barriers to AI-designed treatments?
All medical treatments, whether AI-designed or not, must undergo rigorous clinical trials and regulatory approval processes (FDA in the US, EMA in Europe, etc.). These processes ensure safety and efficacy through controlled studies with appropriate statistical power. Bypassing these regulations is illegal and potentially dangerous.
Could this approach work for human patients?
The same technical approach—using AI to design personalized mRNA vaccines—is being explored by legitimate research institutions and pharmaceutical companies. However, these efforts follow established regulatory pathways with extensive safety monitoring. The timeline from design to human trials typically takes years, not weeks, due to necessary safety precautions.