What Happened
Security researcher Hasan (@hasantoxr) has reported that a scan of the public internet revealed over 40,000 exposed OpenClaw servers accessible without authentication. Of these, more than 12,000 were found to be vulnerable, allowing potential attackers to easily steal API keys and personal data.
OpenClaw is an open-source project that provides a web UI for interacting with various large language models (LLMs), similar to tools like Open WebUI or Ollama WebUI. It is commonly self-hosted by developers and researchers to run local or private AI models.
The core vulnerability stems from default configurations or misconfigurations where the OpenClaw server is deployed without any access controls (like authentication or firewall rules) on a public-facing IP address. This leaves the administrative interface and its associated data—including potentially sensitive API keys for services like OpenAI, Anthropic, or Google Gemini that users may have configured—open to anyone who finds the IP address.
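Exposure of this kind is trivially discoverable: anyone who can complete a TCP connection to the service's port can reach the interface. A minimal, generic reachability sketch is below; the host and port are placeholders (OpenClaw's actual default port is not specified here, so treat it as an assumption to verify against your own deployment).

```python
import socket

def port_open(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        # Connection refused, timed out, or host unreachable.
        return False

if __name__ == "__main__":
    # Replace with your server's public IP and the port your UI listens on
    # (the port here is purely illustrative).
    print(port_open("203.0.113.10", 3000))
```

If this returns True from a machine outside your network and the UI loads without a login, the instance is exposed in exactly the way the scan describes.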
Context & Researcher's Analysis
In the accompanying thread, the researcher states they conducted this investigation to compare the security posture of self-hosted solutions versus hosted AI providers. The implication is that while self-hosting offers data privacy, a misconfigured deployment introduces severe risks that managed, hosted providers mitigate by default through their security infrastructure.
The researcher also compiled an analysis (linked in the tweet) rating the security of various hosted AI platforms, likely detailing which providers have stronger default security settings, authentication requirements, and data isolation practices. This serves as a practical guide for users concerned about API key and data leakage.
Immediate Implications
For individuals and organizations self-hosting OpenClaw or similar AI interfaces:
- Check Exposure: Immediately verify that any self-hosted AI service is not exposed to the public internet unless protected by strong authentication (e.g., a login portal, VPN, or IP allowlisting).
- Audit Configurations: Review deployment scripts and Docker configurations to ensure they do not default to binding on 0.0.0.0 (all interfaces) without accompanying access controls.
- Rotate API Keys: Assume any API keys stored in or accessible by an exposed instance are compromised, and rotate them immediately.
- Consider Hosted Alternatives: For users without the expertise to maintain secure self-hosted deployments, the researcher's comparison list may point to more secure, managed alternatives.
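The 0.0.0.0 warning in the audit step above comes down to which address a service binds: a loopback bind is reachable only from the machine itself, while an all-interfaces bind answers on every network the host is attached to, including the public internet. A minimal sketch of the distinction using plain sockets (generic, not OpenClaw-specific):

```python
import socket

def bind_server(host: str, port: int = 0) -> socket.socket:
    """Bind a listening TCP socket.

    host='127.0.0.1' restricts access to the local machine;
    host='0.0.0.0' accepts connections on every interface, which is
    the risky default this incident highlights. port=0 lets the OS
    pick a free port.
    """
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind((host, port))
    srv.listen()
    return srv

if __name__ == "__main__":
    # Loopback-only bind: unreachable from the public internet.
    safe = bind_server("127.0.0.1")
    print(safe.getsockname())
    safe.close()
```

The same principle applies to container port mappings: publishing a port as 127.0.0.1:PORT:PORT rather than PORT:PORT keeps it off public interfaces, and a reverse proxy with authentication or a VPN should front anything that must be remotely reachable.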
The scale of the exposure—tens of thousands of instances—highlights a common problem in the democratization of AI tools: ease of deployment often outpaces security awareness, leading to widespread, preventable vulnerabilities.