OpenClaw Skill

scraper

Structured extraction and cleanup for public, user-authorized web pages. Use when the user wants to collect, clean, summarize, or transform content from accessible pages into reusable text or data. Do not use to bypass logins, paywalls, captchas, robots restrictions, or access controls. Local-only output.

Install

$ npx clawhub@latest install scraper

Scraper

Turn messy public pages into clean, reusable data.

Core Purpose

Scraper is a safe extraction skill for public, user-authorized pages. It helps the agent:

  • fetch page content from a URL
  • extract readable text
  • strip boilerplate where possible
  • save clean output locally
  • prepare content for later summarization or analysis

Safety Boundaries

  • Only use on public or user-authorized pages
  • Do not bypass logins, paywalls, captchas, robots restrictions, or rate limits
  • Do not request or store credentials
  • Do not perform stealth scraping, account creation, or identity evasion
  • Save outputs locally only
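One way to honor the robots restriction above is to consult robots.txt before any fetch. The skill's scripts may do this differently; the following is a minimal stdlib-only sketch, with `is_fetch_allowed` being an illustrative helper name rather than part of the skill.

```python
# Minimal robots.txt check before fetching a public page.
# Uses only the Python standard library; conservative on errors.
from urllib import robotparser
from urllib.parse import urlparse

def is_fetch_allowed(url: str, user_agent: str = "*") -> bool:
    """Return True if the site's robots.txt permits fetching this URL."""
    parts = urlparse(url)
    rp = robotparser.RobotFileParser()
    rp.set_url(f"{parts.scheme}://{parts.netloc}/robots.txt")
    try:
        rp.read()  # fetches and parses robots.txt over the network
    except OSError:
        return False  # unreachable robots.txt: decline rather than guess
    return rp.can_fetch(user_agent, url)
```

A page is skipped entirely when this returns False, which keeps the skill inside its stated safety boundary.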

Runtime Requirements

  • Python 3 must be available as python3
  • No external packages required

Local Storage

All outputs are stored locally under:

  • ~/.openclaw/workspace/memory/scraper/jobs.json
  • ~/.openclaw/workspace/memory/scraper/output/
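The exact schema of jobs.json is not documented here; a plausible sketch of how save_output.py might register a job looks like this, where the field names (`url`, `title`, `output_file`, `saved_at`) are assumptions for illustration:

```python
# Hypothetical job registration against the local jobs.json registry.
# Field names are illustrative, not the skill's documented schema.
import json
import time
from pathlib import Path

JOBS_FILE = Path.home() / ".openclaw/workspace/memory/scraper/jobs.json"

def register_job(url: str, title: str, output_file: str,
                 jobs_path: Path = JOBS_FILE) -> dict:
    """Append one job record to the registry file and return it."""
    jobs_path.parent.mkdir(parents=True, exist_ok=True)
    jobs = json.loads(jobs_path.read_text()) if jobs_path.exists() else []
    record = {
        "url": url,
        "title": title,
        "output_file": output_file,
        "saved_at": time.strftime("%Y-%m-%dT%H:%M:%S"),
    }
    jobs.append(record)
    jobs_path.write_text(json.dumps(jobs, indent=2))
    return record
```

Keeping the registry as a flat JSON list keeps list_jobs.py trivial: read the file, print the records.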

Key Workflows

  • Capture a page: fetch_page.py --url "https://example.com"
  • Extract readable text: extract_text.py --url "https://example.com"
  • Save cleaned content: save_output.py --url "https://example.com" --title "Example"
  • List prior jobs: list_jobs.py
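Since the runtime requires only Python 3 with no external packages, the "extract readable text" step can be done with the stdlib html.parser module. The actual internals of extract_text.py may differ; this is a sketch of the technique:

```python
# Stdlib-only readable-text extraction: keep visible text, drop
# script/style boilerplate. Illustrative of extract_text.py's job.
from html.parser import HTMLParser

class TextExtractor(HTMLParser):
    """Collect visible text fragments, skipping non-content tags."""
    SKIP = {"script", "style", "noscript"}

    def __init__(self):
        super().__init__()
        self.parts = []
        self._skip_depth = 0  # >0 while inside a skipped tag

    def handle_starttag(self, tag, attrs):
        if tag in self.SKIP:
            self._skip_depth += 1

    def handle_endtag(self, tag):
        if tag in self.SKIP and self._skip_depth:
            self._skip_depth -= 1

    def handle_data(self, data):
        if not self._skip_depth and data.strip():
            self.parts.append(data.strip())

def html_to_text(html: str) -> str:
    """Convert an HTML string into cleaned, newline-joined plain text."""
    parser = TextExtractor()
    parser.feed(html)
    return "\n".join(parser.parts)
```

The output is plain text with one fragment per line, ready to be handed to save_output.py or summarized later.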

Scripts

Script            Purpose
init_storage.py   Initialize scraper storage
fetch_page.py     Download a page with standard headers
extract_text.py   Convert HTML into cleaned plain text
save_output.py    Save extracted output and register a job
list_jobs.py      Show past scraping jobs
