Developer and Datasette creator Simon Willison has released a new open-source command-line utility, scan-for-secrets, designed to help developers check folders and log files for accidentally leaked sensitive strings like API keys before sharing them.
The tool is available via uvx, the tool runner from the uv Python toolchain. Users can immediately run it with uvx scan-for-secrets --help to see usage instructions.
What the Tool Does
scan-for-secrets scans directories for files that may contain secret strings—primarily API keys, tokens, or passwords that developers might inadvertently commit to version control or include in log dumps. The primary use case Willison highlights is preparing log files for sharing: before sending a batch of logs to a colleague or posting them for debugging, you can run this tool to ensure no credentials are exposed.
While the initial announcement is brief, the tool appears to be a focused, single-purpose utility in Willison's style: solving a specific, practical problem for developers with minimal setup.
Technical Details & Usage
The tool is distributed as a Python package that uvx fetches and runs without requiring a permanent installation. This follows the pattern of Willison's other tools like ttok and strip-tags, which can also be run this way.
Basic usage involves pointing the tool at a directory:
uvx scan-for-secrets ./path/to/logs
It will recursively scan files in that directory, likely using pattern matching for common secret formats (like OpenAI API keys starting with sk- or GitHub tokens). The --help flag will show available options, which may include configuring scan patterns, excluding file types, or output formats.
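To illustrate how pattern-based secret scanning of this kind typically works, here is a minimal Python sketch. The patterns and function names below are assumptions for illustration only, not the tool's actual implementation; scan-for-secrets may use different patterns and logic.

```python
import re
from pathlib import Path

# Illustrative patterns only; the real tool's patterns are not documented
# in the announcement.
SECRET_PATTERNS = {
    "openai_key": re.compile(r"sk-[A-Za-z0-9]{20,}"),
    "github_token": re.compile(r"ghp_[A-Za-z0-9]{36}"),
}

def scan_text(text: str) -> list[str]:
    """Return the names of any secret patterns found in the text."""
    return [name for name, pattern in SECRET_PATTERNS.items()
            if pattern.search(text)]

def scan_directory(root: str) -> dict[str, list[str]]:
    """Recursively scan files under root, mapping each flagged path
    to the pattern names that matched."""
    findings = {}
    for path in Path(root).rglob("*"):
        if not path.is_file():
            continue
        try:
            hits = scan_text(path.read_text(errors="ignore"))
        except OSError:
            continue  # unreadable file; skip rather than crash the scan
        if hits:
            findings[str(path)] = hits
    return findings
```

A real scanner would add entropy checks and allow-lists to cut false positives, but the core idea is simply regular-expression matching over file contents.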
Context in Willison's Workflow
This tool fits into a broader trend of lightweight, composable CLI tools for AI development hygiene. As AI engineers increasingly work with APIs from OpenAI, Anthropic, Google, and others, managing credentials, and avoiding their accidental leakage, becomes a higher-stakes problem. Log files, especially from LLM applications, often contain full prompts and responses, and can include keys if error messages print configuration objects.
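The leak path described above is easy to reproduce. In this hypothetical sketch (the config object and key are invented for illustration), formatting a whole configuration object into an error message copies the key straight into the log:

```python
import io
import logging

# A hypothetical LLM app config holding an API key (the key here is fake).
config = {"model": "gpt-4", "api_key": "sk-" + "x" * 24}

# Capture log output in memory so we can inspect what gets written.
buffer = io.StringIO()
logging.basicConfig(stream=buffer, level=logging.ERROR, force=True)

try:
    raise TimeoutError("upstream API timed out")
except TimeoutError:
    # Dumping the whole config into the error log leaks the key.
    logging.error("request failed, config=%r", config)

log_text = buffer.getvalue()
# The secret now sits in the log text, ready to be shared by accident.
```

Share that log for debugging and the credential travels with it, which is exactly the moment a pre-share scan is meant to catch.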
Willison has previously built tools for token counting (ttok), HTML tag stripping (strip-tags), and SQLite utilities (sqlite-utils), often releasing them quickly in response to immediate needs in his own workflow. scan-for-secrets appears to be another iteration of this pattern—a utility extracted from a real problem encountered during development.
gentic.news Analysis
This release is a minor but indicative tool in the expanding ecosystem of developer utilities for AI safety and hygiene. While not a breakthrough in static analysis (tools like trufflehog, gitleaks, and GitHub's own secret scanning exist), its value is in simplicity and immediate accessibility via uvx. It lowers the barrier to performing a basic security check before sharing data.
For AI engineers, the tool addresses a specific vulnerability in the development loop: debugging. When an LLM application fails, developers often examine logs containing prompts, responses, and sometimes full error traces. If those logs are shared—with teammates, in community forums, or as part of issue tracking—credentials can leak. A quick, pre-share scan becomes a sensible habit.
Willison's choice to distribute via uvx is also notable. It reflects the growing adoption of uv (developed by Astral, the company behind Ruff) as a faster, modern Python package installer and project manager. By leveraging uvx, the tool requires zero installation overhead, aligning with the "just run it" philosophy that suits infrequent security tasks.
In the broader landscape, this tool sits at the intersection of two trends: the proliferation of API-based AI services (and thus API keys) and the push for better tooling around AI application development. While major cloud providers and platforms have built-in secret detection, a local, scriptable tool gives developers control and can be integrated into custom pipelines. For teams building with LLMs, adding a scan-for-secrets step before exporting logs or datasets could become a standard practice to prevent costly credential leaks.
Frequently Asked Questions
What is scan-for-secrets?
scan-for-secrets is a Python command-line tool created by Simon Willison that scans folders and files for accidentally exposed secret strings, such as API keys and passwords. It's designed to be run quickly before sharing log files or other data dumps to ensure no credentials are leaked.
How do I install and run scan-for-secrets?
You don't need to install it permanently. If you have uv installed, you can run it directly using its package runner: uvx scan-for-secrets --help. This will fetch the tool and show usage instructions. You can then scan a directory with uvx scan-for-secrets ./your-directory.
How is this different from tools like trufflehog or gitleaks?
Tools like trufflehog and gitleaks are more comprehensive, enterprise-focused secret scanners often integrated into CI/CD pipelines. scan-for-secrets appears to be a lighter, more focused utility for a specific manual task: checking a batch of files (like logs) before you share them. It's likely simpler to run for a one-off check.
Is this tool specifically for AI developers?
While useful for any developer, it has particular relevance for AI engineers who frequently work with multiple API keys (e.g., for OpenAI, Anthropic, etc.) and whose application logs might contain these keys in error messages or debug output. It helps prevent accidental leaks when sharing logs for debugging model behavior or API issues.