What It Does — Two Engines, One MCP Server
The Oxylabs MCP server (@oxylabs/mcp-server) exposes two distinct scraping engines through a single Claude Code interface. This dual-engine architecture is its key differentiator.
Web Scraper API (4 tools): This is a traditional proxy-based scraper. It uses Oxylabs' proxy network, which spans over 195 countries, to handle IP rotation, CAPTCHAs, and JavaScript rendering. Its tools (universal_scraper, google_search_scraper, amazon_search_scraper, amazon_product_scraper) are for raw HTML-to-Markdown extraction.
AI Studio (4 tools): This is an AI-powered extraction engine. Tools like ai_scraper and ai_crawler are designed for Retrieval-Augmented Generation (RAG) workflows, pulling structured data (JSON/Markdown) directly from pages. The ai_browser_agent enables remote browser automation.
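The split matters when deciding which tool Claude should invoke. A minimal sketch of the engine membership (tool names come from the lists above; the routing helper itself is illustrative, not part of the package, and the article names only three of AI Studio's four tools):

```python
# Engine membership, per the tool lists above. Only the three AI Studio
# tools named in this article are included.
WEB_SCRAPER_API = {
    "universal_scraper",
    "google_search_scraper",
    "amazon_search_scraper",
    "amazon_product_scraper",
}
AI_STUDIO = {"ai_scraper", "ai_crawler", "ai_browser_agent"}

def pick_engine(tool: str) -> str:
    """Return the engine a given Oxylabs MCP tool belongs to."""
    if tool in WEB_SCRAPER_API:
        return "Web Scraper API"  # proxy-based, raw HTML-to-Markdown
    if tool in AI_STUDIO:
        return "AI Studio"        # AI extraction, structured JSON/Markdown
    raise ValueError(f"unknown tool: {tool}")
```

This kind of mapping is useful when writing prompts or hooks that steer Claude toward the speed-first or structure-first engine.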
Setup — How to Install and Configure
Install the server globally, or let npx fetch it on demand (the config below does the latter, so the global install is optional):
# Optional: install globally
npm install -g @oxylabs/mcp-server
Then, add it to your Claude Desktop claude_desktop_config.json:
{
  "mcpServers": {
    "oxylabs": {
      "command": "npx",
      "args": ["-y", "@oxylabs/mcp-server"],
      "env": {
        "OXYLABS_USERNAME": "YOUR_USERNAME",
        "OXYLABS_PASSWORD": "YOUR_PASSWORD"
      }
    }
  }
}
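If you'd rather patch the config file from a script, a hedged Python sketch (the path shown is the macOS default and is an assumption; Windows and Linux locations differ):

```python
import json
from pathlib import Path

# macOS default location (assumption); adjust for your OS.
CONFIG = Path.home() / "Library/Application Support/Claude/claude_desktop_config.json"

def add_oxylabs_server(config_path: Path, username: str, password: str) -> dict:
    """Merge the oxylabs MCP server entry into an existing config file."""
    config = json.loads(config_path.read_text()) if config_path.exists() else {}
    config.setdefault("mcpServers", {})["oxylabs"] = {
        "command": "npx",
        "args": ["-y", "@oxylabs/mcp-server"],
        "env": {
            "OXYLABS_USERNAME": username,
            "OXYLABS_PASSWORD": password,
        },
    }
    config_path.write_text(json.dumps(config, indent=2))
    return config
```

Using setdefault preserves any MCP servers you already have configured rather than overwriting the file.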
You need an Oxylabs account. Crucially, the two engines have separate free trials: 2,000 results for the Web Scraper API and 1,000 credits for AI Studio.
When To Use It — The Specific Use Cases
Independent benchmarks from AIMultiple show Oxylabs has the fastest stress-test completion time (31.7s average), beating Bright Data (48.7s) and Nimble (182.3s). However, its accuracy scored 75%, placing it below competitors.
Use Oxylabs MCP when:
- Speed is critical over perfect accuracy: Scraping large volumes of public data for trend analysis or initial research where some noise is acceptable.
- You need both raw and AI-processed data: Start with the universal_scraper for broad collection, then use the ai_scraper to extract specific fields from the results.
- You're already an Oxylabs customer: The integration is seamless if you use their proxy infrastructure.
- Cost is a primary constraint: The AI Studio entry point is $12/month, lower than Firecrawl ($19) and far below Nimble ($2,500).
Avoid it for: Mission-critical data extraction where 100% accuracy is required (Bright Data scored 100%) or when you need a vast toolset (Bright Data offers 60+ tools).
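The raw-then-AI pattern from the list above can be sketched against a generic MCP client session. The tool names come from this article; the argument names ("url", "user_prompt") are illustrative guesses, not the server's documented schema:

```python
async def collect_then_extract(session, url: str, fields: list[str]):
    """Hypothetical two-step pipeline: broad raw grab, then AI extraction.

    `session` is any object with an MCP-style async call_tool(name, args)
    method. Argument names below are assumptions, not the documented schema.
    """
    # Step 1: proxy-based engine, raw HTML-to-Markdown.
    raw = await session.call_tool("universal_scraper", {"url": url})
    # Step 2: AI Studio engine, structured field extraction.
    extracted = await session.call_tool(
        "ai_scraper",
        {"url": url, "user_prompt": f"Extract these fields as JSON: {fields}"},
    )
    return raw, extracted
```

In practice Claude Code issues these tool calls itself from your prompt; the sketch just makes the sequencing explicit.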
The Bottom Line for Claude Code Users
This server is a specialized tool. Its dual-engine approach lets you prompt Claude to "scrape this product page and extract the price and specs into JSON" using the ai_scraper tool, or "get the raw HTML from these 100 URLs" using the universal_scraper. The free trials make it easy to test against your specific use case. For most developers building reliable data pipelines, Bright Data's MCP server (higher accuracy) or Firecrawl's (open source) remain the default recommendations. But for high-volume, speed-first tasks, Oxylabs has a unique niche.
gentic.news Analysis
This release is part of a surge in specialized MCP servers for Claude Code, following the recent availability of servers for major IaC tools like Terraform and Google's official Chrome DevTools MCP. The trend shows the Model Context Protocol ecosystem rapidly maturing beyond general utilities into vertical-specific, enterprise-grade tools. The mention of AI-powered extraction tools (ai_scraper, ai_crawler) directly connects to the week's strong trend in Retrieval-Augmented Generation (RAG) coverage, highlighting how MCP is becoming a primary conduit for RAG workflows within the IDE.
The dual-engine structure is a pragmatic acknowledgment of different scraping needs: brute-force collection versus intelligent extraction. For Claude Code users, this means more precise tool selection via prompt. Instead of a generic "scrape this," you can now direct the AI to use a specific engine, optimizing for speed or structure. However, the 75% benchmark accuracy serves as a crucial reminder: always validate the output of automated data extraction, especially when integrating it into your codebase. This aligns with a cautionary tale about RAG system failures at production scale we covered recently, underscoring the need for robust validation layers.