Search the web and get clean, structured content from every result in a single API call. Pass a query to /search and Firecrawl returns titles, descriptions, and URLs. Add scrapeOptions to also retrieve full-page markdown, HTML, links, or screenshots for each result. For the full parameter list, see the Search Endpoint API Reference.
Try it in the Playground
Test searching in the interactive playground — no code required.
SDKs will return the data object directly. cURL will return the complete payload.
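As a minimal sketch using the Python SDK (the query string is just an illustration), a basic search call looks like this; it returns a payload shaped like the JSON below:

```python
from firecrawl import Firecrawl

firecrawl = Firecrawl(api_key="fc-YOUR_API_KEY")

# Basic search: returns titles, descriptions, and URLs for each result
results = firecrawl.search("firecrawl web scraping", limit=5)
```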
```json
{
  "success": true,
  "data": {
    "web": [
      {
        "url": "https://www.firecrawl.dev/",
        "title": "Firecrawl - The Web Data API for AI",
        "description": "The web crawling, scraping, and search API for AI. Built for scale. Firecrawl delivers the entire internet to AI agents and builders.",
        "position": 1
      },
      {
        "url": "https://github.com/firecrawl/firecrawl",
        "title": "mendableai/firecrawl: Turn entire websites into LLM-ready ... - GitHub",
        "description": "Firecrawl is an API service that takes a URL, crawls it, and converts it into clean markdown or structured data.",
        "position": 2
      },
      ...
    ],
    "images": [
      {
        "title": "Quickstart | Firecrawl",
        "imageUrl": "https://mintlify.s3.us-west-1.amazonaws.com/firecrawl/logo/logo.png",
        "imageWidth": 5814,
        "imageHeight": 1200,
        "url": "https://docs.firecrawl.dev/",
        "position": 1
      },
      ...
    ],
    "news": [
      {
        "title": "Y Combinator startup Firecrawl is ready to pay $1M to hire three AI agents as employees",
        "url": "https://techcrunch.com/2025/05/17/y-combinator-startup-firecrawl-is-ready-to-pay-1m-to-hire-three-ai-agents-as-employees/",
        "snippet": "It's now placed three new ads on YC's job board for “AI agents only” and has set aside a $1 million budget total to make it happen.",
        "date": "3 months ago",
        "position": 1
      },
      ...
    ]
  }
}
```
In addition to regular web results, Search supports specialized result types via the sources parameter:
- web: standard web results (default)
- news: news-focused results
- images: image search results
You can request multiple sources in a single call (e.g., sources: ["web", "news"]). When you do, the limit parameter applies per source type — so limit: 5 with sources: ["web", "news"] returns up to 5 web results and up to 5 news results (10 total). If you need different parameters per source (for example, different limit values or different scrapeOptions), make separate calls instead.
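For example, a single call requesting both web and news results might look like the sketch below (the sources keyword mirrors the API parameter described above; check your SDK version for the exact name):

```python
from firecrawl import Firecrawl

firecrawl = Firecrawl(api_key="fc-YOUR_API_KEY")

# limit applies per source: up to 5 web results and up to 5 news results
results = firecrawl.search(
    "firecrawl",
    limit=5,
    sources=["web", "news"]
)
```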
Use includeDomains to restrict search results to specific domains, or excludeDomains to remove specific domains from the search. These fields add site: and -site: operators to the query internally, so pass domains only without a protocol or path.
includeDomains and excludeDomains are mutually exclusive. Use one or the other in a single request.
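As a sketch, restricting a search to a single domain could look like the following (the keyword name is assumed to follow the snake_case pattern of the other Python parameters and mirrors the API's includeDomains field):

```python
from firecrawl import Firecrawl

firecrawl = Firecrawl(api_key="fc-YOUR_API_KEY")

# Restrict results to docs.firecrawl.dev; internally this adds a site: operator
# (keyword name assumed; it mirrors the API's includeDomains parameter)
results = firecrawl.search(
    "search endpoint",
    limit=5,
    include_domains=["docs.firecrawl.dev"]
)
```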
{ "success": true, "data": [ { "title": "Firecrawl - The Ultimate Web Scraping API", "description": "Firecrawl is a powerful web scraping API that turns any website into clean, structured data for AI and analysis.", "url": "https://firecrawl.dev/", "markdown": "# Firecrawl\n\nThe Ultimate Web Scraping API\n\n## Turn any website into clean, structured data\n\nFirecrawl makes it easy to extract data from websites for AI applications, market research, content aggregation, and more...", "links": [ "https://firecrawl.dev/pricing", "https://firecrawl.dev/docs", "https://firecrawl.dev/guides" ], "metadata": { "title": "Firecrawl - The Ultimate Web Scraping API", "description": "Firecrawl is a powerful web scraping API that turns any website into clean, structured data for AI and analysis.", "sourceURL": "https://firecrawl.dev/", "statusCode": 200 } } ]}
Use the location parameter to get results tailored to a specific region:

```python
from firecrawl import Firecrawl

firecrawl = Firecrawl(api_key="fc-YOUR_API_KEY")

# Search with location settings (Germany)
search_result = firecrawl.search(
    "web scraping tools",
    limit=5,
    location="Germany"
)

# Process the results
for result in search_result.data:
    print(f"Title: {result['title']}")
    print(f"URL: {result['url']}")
```
Use the tbs parameter to filter results by time. Note that tbs only applies to web source results — it does not filter news or images results. If you need time-filtered news, consider using a web source with the site: operator to target specific news domains.
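For example, restricting web results to the past week with the quick date-range value qdr:w (referenced again below) could look like this sketch:

```python
from firecrawl import Firecrawl

firecrawl = Firecrawl(api_key="fc-YOUR_API_KEY")

# Only return web results from the past week
results = firecrawl.search(
    "firecrawl updates",
    limit=10,
    tbs="qdr:w"
)
```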
For more precise time filtering, you can specify exact date ranges using the custom date range format:
```python
from firecrawl import Firecrawl

# Initialize the client with your API key
firecrawl = Firecrawl(api_key="fc-YOUR_API_KEY")

# Search for results from December 2024
search_result = firecrawl.search(
    "firecrawl updates",
    limit=10,
    tbs="cdr:1,cd_min:12/1/2024,cd_max:12/31/2024"
)
```
You can combine sbd:1 with time filters to get date-sorted results within a time range. For example, sbd:1,qdr:w returns results from the past week sorted newest first, and sbd:1,cdr:1,cd_min:12/1/2024,cd_max:12/31/2024 returns results from December 2024 sorted by date.
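A sketch combining date sorting with a quick time range:

```python
from firecrawl import Firecrawl

firecrawl = Firecrawl(api_key="fc-YOUR_API_KEY")

# Past week's results, sorted newest first
results = firecrawl.search(
    "firecrawl updates",
    limit=10,
    tbs="sbd:1,qdr:w"
)
```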
Use the timeout parameter (in milliseconds) to cap how long a search request can run:

```python
from firecrawl import Firecrawl

# Initialize the client with your API key
firecrawl = Firecrawl(api_key="fc-YOUR_API_KEY")

# Set a 30-second timeout
search_result = firecrawl.search(
    "complex search query",
    limit=10,
    timeout=30000  # 30 seconds in milliseconds
)
```
For teams with strict data handling requirements, Firecrawl offers Zero Data Retention (ZDR) options for the /search endpoint via the enterprise parameter. ZDR search is available on Enterprise plans — visit firecrawl.dev/enterprise to get started.
This is separate from the zeroDataRetention scrape option, which controls ZDR for scraping operations. See Scrape ZDR for details. The enterprise parameter only applies to the search portion of the request.
With end-to-end ZDR, both Firecrawl and our upstream search provider enforce zero data retention. No query or result data is stored at any point in the pipeline.
With anonymized ZDR, Firecrawl enforces full zero data retention on our side. Our search provider may cache the query, but it is fully anonymized — no identifying information is attached.
If you are using search with content scraping (scrapeOptions), the enterprise parameter covers the search portion while zeroDataRetention in scrapeOptions covers the scraping portion. To get full ZDR across both, set both:
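A sketch of such a request is shown below. The value accepted by the enterprise parameter is not listed in this section, so "zdr" is only a placeholder; confirm the supported values in the Search Endpoint API Reference before relying on it.

```python
from firecrawl import Firecrawl

firecrawl = Firecrawl(api_key="fc-YOUR_API_KEY")

# "zdr" is a placeholder value for the enterprise parameter; check the API
# reference for the accepted values. The scrape options dict shape may also
# differ between SDK versions.
results = firecrawl.search(
    "sensitive query",
    limit=5,
    enterprise="zdr",                            # ZDR for the search portion
    scrape_options={"zeroDataRetention": True},  # ZDR for the scraping portion
)
```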
The cost of a search is 2 credits per 10 results, rounded up (1–10 results = 2 credits, 11–20 = 4 credits, and so on). If scraping options are enabled, the standard scraping costs apply to each search result:
- Basic scrape: 1 credit per webpage
- PDF parsing: 1 credit per PDF page
- Enhanced proxy mode: 4 additional credits per webpage
- JSON mode: 4 additional credits per webpage
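For example, a search with limit: 15 and basic scraping enabled costs 4 credits for the search (the 11–20 result bracket) plus 1 credit per scraped webpage, so up to 15 more, for roughly 19 credits in total.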
To help control costs (a combined example follows this list):
- Set parsers: [] if PDF parsing isn’t required
- Use proxy: "basic" instead of "enhanced" when possible, or set it to "auto"
- Limit the number of search results with the limit parameter
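A sketch combining these settings (the scrape options are passed as a plain dict here; your SDK version may expect a typed options object):

```python
from firecrawl import Firecrawl

firecrawl = Firecrawl(api_key="fc-YOUR_API_KEY")

# Keep costs down: few results, no PDF parsing, basic proxy
results = firecrawl.search(
    "web scraping tools",
    limit=3,
    scrape_options={
        "formats": ["markdown"],
        "parsers": [],     # skip PDF parsing
        "proxy": "basic",  # avoid the enhanced proxy surcharge
    },
)
```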
For more details about the scraping options, refer to the Scrape Feature documentation. Everything except the FIRE-1 Agent and Change-Tracking features is supported by this Search endpoint.