The official Ruby SDK is maintained in apps/ruby-sdk of the Firecrawl monorepo.

To install the Firecrawl Ruby SDK, add it to your project. In your Gemfile:

```ruby
gem "firecrawl-sdk", "~> 1.0"
```

Then run:

```shell
bundle install
```
- Get an API key at firecrawl.dev
- Set the API key as an environment variable named FIRECRAWL_API_KEY, or pass it directly via Firecrawl::Client.new(api_key: ...)
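For example, in a POSIX shell the variable can be set for the current session (the key below is a placeholder, not a real credential):

```shell
# Placeholder key; replace with your actual key from firecrawl.dev
export FIRECRAWL_API_KEY="fc-your-api-key"
```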
Here is a short example based on the current SDK API:

```ruby
require "firecrawl"

client = Firecrawl::Client.from_env

doc = client.scrape(
  "https://firecrawl.dev",
  Firecrawl::Models::ScrapeOptions.new(formats: ["markdown"])
)

job = client.crawl(
  "https://firecrawl.dev",
  Firecrawl::Models::CrawlOptions.new(limit: 5)
)

puts doc.markdown
puts "Crawled pages: #{job.data&.size || 0}"
```
To scrape a single URL, use the scrape method.

```ruby
doc = client.scrape(
  "https://firecrawl.dev",
  Firecrawl::Models::ScrapeOptions.new(
    formats: ["markdown", "html"],
    only_main_content: true,
    wait_for: 5000
  )
)

puts doc.markdown
puts doc.metadata["title"]
```
Add the json format with a prompt and schema to the scrape endpoint to extract structured JSON:

```ruby
doc = client.scrape(
  "https://example.com/product",
  Firecrawl::Models::ScrapeOptions.new(
    formats: [
      {
        "type" => "json",
        "prompt" => "Extract the product name and price",
        "schema" => {
          "type" => "object",
          "properties" => {
            "name" => { "type" => "string" },
            "price" => { "type" => "number" }
          }
        }
      }
    ]
  )
)

puts doc.json
```
To crawl a website and wait for completion, use crawl. It polls automatically until the job finishes.

```ruby
job = client.crawl(
  "https://firecrawl.dev",
  Firecrawl::Models::CrawlOptions.new(
    limit: 50,
    max_discovery_depth: 3,
    scrape_options: Firecrawl::Models::ScrapeOptions.new(
      formats: ["markdown"]
    )
  )
)

puts "Status: #{job.status}"
puts "Progress: #{job.completed}/#{job.total}"
job.data&.each do |page|
  puts page.metadata["sourceURL"]
end
```
Use start_crawl to start a job without waiting for it.

```ruby
response = client.start_crawl(
  "https://firecrawl.dev",
  Firecrawl::Models::CrawlOptions.new(limit: 100)
)

puts "Job ID: #{response.id}"
```
Use get_crawl_status to check crawl progress.

```ruby
status = client.get_crawl_status(response.id)

puts "Status: #{status.status}"
puts "Progress: #{status.completed}/#{status.total}"
```
Use cancel_crawl to cancel a crawl job that is in progress.

```ruby
result = client.cancel_crawl(response.id)
puts result
```
Use map to discover the links on a website.

```ruby
data = client.map(
  "https://firecrawl.dev",
  Firecrawl::Models::MapOptions.new(
    limit: 100,
    search: "blog"
  )
)

data.links&.each do |link|
  puts "#{link["url"]} - #{link["title"]}"
end
```
Use search to run a search query, with optional search options.

```ruby
results = client.search(
  "firecrawl web scraping",
  Firecrawl::Models::SearchOptions.new(limit: 10)
)

results.web&.each do |result|
  puts "#{result["title"]} - #{result["url"]}"
end
```
Scrape multiple URLs in parallel with batch_scrape.

```ruby
job = client.batch_scrape(
  ["https://firecrawl.dev", "https://firecrawl.dev/blog"],
  Firecrawl::Models::BatchScrapeOptions.new(
    options: Firecrawl::Models::ScrapeOptions.new(
      formats: ["markdown"]
    )
  )
)

job.data&.each do |doc|
  puts doc.markdown
end
```
Use agent to run an AI agent.

```ruby
result = client.agent(
  Firecrawl::Models::AgentOptions.new(
    prompt: "Find the pricing plans for Firecrawl and compare them"
  )
)

puts result.data
```
Use a JSON schema for structured output:

```ruby
result = client.agent(
  Firecrawl::Models::AgentOptions.new(
    prompt: "Extract pricing plan details",
    urls: ["https://firecrawl.dev"],
    schema: {
      "type" => "object",
      "properties" => {
        "plans" => {
          "type" => "array",
          "items" => {
            "type" => "object",
            "properties" => {
              "name" => { "type" => "string" },
              "price" => { "type" => "string" }
            }
          }
        }
      }
    }
  )
)

puts result.data
```
Check your concurrency and remaining credits:

```ruby
concurrency = client.get_concurrency
puts "Concurrency: #{concurrency.concurrency}/#{concurrency.max_concurrency}"

credits = client.get_credit_usage
puts "Remaining credits: #{credits.remaining_credits}"
```
The Ruby SDK provides browser sandbox helpers.

Use a scrape job ID to run follow-up browser code in the same replay context:

- interact(...) runs code in a browser session bound to the scrape job (the session is initialized on first use).
- stop_interactive_browser(...) explicitly stops the interactive session when you are done.

```ruby
scrape_job_id = "550e8400-e29b-41d4-a716-446655440000"

run = client.interact(
  scrape_job_id,
  "console.log(page.url());",
  language: "node",
  timeout: 60
)
puts run["stdout"]

deleted = client.stop_interactive_browser(scrape_job_id)
puts "Deleted: #{deleted["success"]}"
```
Firecrawl::Client.new accepts the following options:

| Option | Type | Default | Description |
|---|---|---|---|
| api_key | String | FIRECRAWL_API_KEY environment variable | Your Firecrawl API key |
| api_url | String | https://api.firecrawl.dev (or FIRECRAWL_API_URL) | API base URL |
| timeout | Integer | 300 | HTTP request timeout (seconds) |
| max_retries | Integer | 3 | Automatic retries on transient failures |
| backoff_factor | Float | 0.5 | Exponential backoff factor (seconds) |

```ruby
client = Firecrawl::Client.new(
  api_key: "fc-your-api-key",
  api_url: "https://api.firecrawl.dev",
  timeout: 300,
  max_retries: 3,
  backoff_factor: 0.5
)
```
The SDK raises exceptions under the Firecrawl module.

```ruby
begin
  doc = client.scrape("https://example.com")
rescue Firecrawl::AuthenticationError => e
  puts "Auth failed: #{e.message}"
rescue Firecrawl::RateLimitError => e
  puts "Rate limited: #{e.message}"
rescue Firecrawl::JobTimeoutError => e
  puts "Job #{e.job_id} timed out after #{e.timeout_seconds}s"
rescue Firecrawl::FirecrawlError => e
  puts "Error (#{e.status_code}): #{e.message}"
end
```
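Beyond rescuing individual errors, these exception classes can drive an application-level retry loop on top of the client's built-in max_retries. A minimal sketch — `with_retries` is an illustrative helper, not part of the SDK:

```ruby
# Hypothetical helper (not part of the SDK): retries a block when the given
# transient error class is raised, with exponential backoff that mirrors the
# client's max_retries / backoff_factor options.
def with_retries(error_class, max_retries: 3, backoff_factor: 0.5)
  attempts = 0
  begin
    yield
  rescue error_class
    attempts += 1
    raise if attempts > max_retries
    sleep(backoff_factor * (2**(attempts - 1)))  # 0.5s, 1s, 2s, ...
    retry
  end
end

# Usage (assuming a configured client):
#   doc = with_retries(Firecrawl::RateLimitError) do
#     client.scrape("https://example.com")
#   end
```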