gentic.news — AI News Intelligence Platform



Trojan Masquerading as Claude Code Tops Google Search, Infects Users

A Trojan impersonating Claude Code ranked #1 on Google. Windows Defender caught it as Trojan:Win32/Kepavll!rfn. The victim had 30 years of internet experience.

Published 22h ago · 3 min read · AI-Generated
Source: reddit.com (via reddit_claude) · Corroborated

TL;DR

Trojan posing as Claude Code topped Google search results · Windows Defender flagged it as Trojan:Win32/Kepavll!rfn · A victim with 30 years of internet experience fell for it

A Trojan disguised as Anthropic's Claude Code appeared as the first Google search result on May 11, 2026. Windows Defender flagged the malware as Trojan:Win32/Kepavll!rfn after a user downloaded it from a site mimicking the official Claude Code homepage.

Key facts

  • Trojan appeared as #1 Google result for 'claude code' on May 11, 2026
  • Malware flagged as Trojan:Win32/Kepavll!rfn by Windows Defender
  • Victim has been online since 1996, roughly 30 years of internet experience
  • Claude Code has direct file system and shell access
  • Anthropic has not yet commented on the incident

A Reddit user reported that a search for "claude code" on Google returned a malicious site as the top result, impersonating Anthropic's official Claude Code download page. The site replicated the design language of the real Anthropic homepage, including the same layout and color scheme, making it difficult to distinguish from the legitimate source.

The victim, who has been online since 1996 and works on a Mac, downloaded the installer on a rarely used Windows PC. Windows Defender immediately flagged the file as Trojan:Win32/Kepavll!rfn. The user noted they had previously installed Claude Code via a PowerShell command on another machine and assumed the Google result was safe.
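One cheap defense against this class of attack is to never run an installer obtained through a search result without verifying its checksum against one published on the vendor's official site, reached by typing the URL directly. Below is a minimal sketch in Python; the filename and checksum are illustrative placeholders, since the story does not describe Anthropic's actual distribution channels or published checksums:

```python
import hashlib

def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    """Compute the SHA-256 digest of a file, reading in chunks so a
    large installer never has to fit in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            digest.update(chunk)
    return digest.hexdigest()

# Hypothetical usage -- the filename and expected value below are
# placeholders, not values published by Anthropic:
#
#   expected = "<sha256 from the vendor's official site>"
#   if sha256_of("claude-code-installer.exe") != expected:
#       print("MISMATCH: do not run this file")
```

On Windows, PowerShell's built-in Get-FileHash cmdlet (which defaults to SHA-256) does the same job without any extra tooling.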

This is a supply-chain attack on the growing ecosystem of AI developer tools. Claude Code, released in 2025, is Anthropic's agentic coding tool with direct file system and shell access, meaning a trojanized version could exfiltrate source code, API keys, and credentials with the same permissions users grant the legitimate tool.

Google's Responsibility

That a malicious site ranked first for a branded search query suggests either a sophisticated SEO-poisoning campaign or a gap in Google's search quality controls. Google has invested heavily in AI safety, per its own blog posts, but has not yet commented on this specific incident. The company competes directly with Anthropic through its Gemini models and CodeWiki product, adding an awkward dimension to the story.

Broader Pattern

This incident follows a trend of attackers targeting AI developers. In April 2026, researchers at [MIT] demonstrated that anyone with a laptop could poison major AI models, including those from Anthropic and Google. The Claude Code trojan represents the exploitation side of that vulnerability — not poisoning the model but poisoning the distribution channel.

Anthropic has not released an official statement about the malicious site. The company's Claude Code product has been rapidly adopted, appearing in 670 articles on this publication alone, making it a high-value target for impersonation.

Key Takeaways

  • A Trojan impersonating Claude Code ranked #1 on Google.
  • Windows Defender caught it as Trojan:Win32/Kepavll!rfn.
  • The victim had 30 years of internet experience.

What to watch


Watch for Anthropic's official response and whether Google removes the malicious site from search results. If the trojan spreads further, expect a security advisory from Anthropic and possibly a broader investigation by Google Trust & Safety into SEO poisoning targeting AI tools.



AI-assisted reporting. Generated by gentic.news from 1 verified source, fact-checked against the Living Graph of 4,300+ entities. Edited by Ala SMITH.


AI Analysis

This incident reveals a critical vulnerability in the AI software supply chain that is often overlooked: distribution channel integrity. While the industry focuses on model poisoning and prompt injection, attackers are taking the simpler path of SEO poisoning and brand impersonation. The fact that a user with three decades of online experience fell for the attack underscores how convincing the impersonation was.

For Google, this is a particularly awkward moment. The company is both a search gatekeeper and a direct competitor to Anthropic through its Gemini models. If Google's search algorithm is serving malicious copies of a competitor's product, it raises questions about whether the company is doing enough to protect users, or whether it has an incentive to look the other way. The timing is also notable: Google launched CodeWiki on May 10, 2026, a direct competitor to Claude Code's documentation features.

Anthropic's silence on the matter is concerning. The company has been aggressive in pushing Claude Code as a secure, enterprise-ready tool, and a trojanized version circulating widely could damage trust in the entire product category. Expect Anthropic to issue a security advisory within 48 hours, and possibly partner with Google to take down the malicious site.

The broader lesson is that AI agent tools with file system and shell access are now prime targets for supply-chain attacks, and companies need to invest in distribution channel security, not just model security.


More in Open Source

Open Source · Breakthrough · Score: 100

Google Releases Gemma 4 Family Under Apache 2.0, Featuring 2B to 31B Models with MoE and Multimodal Capabilities

Google has released the Gemma 4 family of open-weight models, derived from Gemini 3 technology. The four models, ranging from 2B to 31B parameters and including a Mixture-of-Experts variant, are available under a permissive Apache 2.0 license and feature multimodal processing.

engadget.com · Apr 2, 2026 · 3 min read · Widely Reported
Tags: product launch · open source · google
Open Source · Score: 95

Cohere Transcribe: 2B-Parameter Open-Source ASR Model Achieves 5.42% WER, Topping Hugging Face Leaderboard

Cohere released Transcribe, a 2B-parameter open-source speech recognition model. It claims a 5.42% average word error rate, beating OpenAI Whisper v3 and topping the Hugging Face Open ASR Leaderboard.

the-decoder.com · Mar 27, 2026 · 3 min read · Widely Reported
Tags: open source · speech AI · benchmarks
Open Source · Score: 65

ENS Paris-Saclay Publishes Full-Stack LLM Course: 7 Sessions Cover torchtitan, TorchFT, vLLM, and Agentic AI

Edouard Oyallon released a comprehensive open-access graduate course on training and deploying large-scale models. It bridges theory and production engineering using Meta's torchtitan and torchft, GitHub-hosted labs, and covers the full stack from distributed training to agentic AI.

admin · Mar 27, 2026 · 3 min read
Tags: open source · LLMs · AI engineering