---
name: web-reference-fetcher
description: Fetch web content from URLs, extract specific topics using subagents, and save structured summaries as markdown. This skill should be used when other skills or workflows need to retrieve and analyze web documentation. Input is URL(s) and topic names, output is detailed markdown summaries saved to specified paths.
---
This skill is designed to be called by other skills or workflows. It provides a three-step pipeline:

1. **Fetch**: retrieve raw web content from the URL via the fetch script
2. **Extract**: pass the fetched content to a subagent with extraction requirements
3. **Save**: write the subagent's structured markdown summary to the specified path
Use this skill when:

- Another skill or workflow needs to retrieve and analyze web documentation
- Web content must be distilled into a topic-focused markdown summary saved to a known path
This skill expects:

- One or more URLs to fetch
- Topic names and extraction requirements for each summary
- Output paths following the convention `workdir/<filename>_<entry_id>/references/ref_<N>.md`

Directory naming convention:
For example, for the filename `public_test` (from `data/public_test.jsonl`) and entry ID `public_test_1`, the directory is `public_test_public_test_1`, and references are saved as `workdir/public_test_public_test_1/references/ref_1.md`, `ref_2.md`, etc.

Use the provided script to fetch raw content from URL(s).
Execute:

    python3 .claude/skills/web-reference-fetcher/scripts/fetch_url.py <url> \
      --output workdir/<filename>_<entry_id>/references/ref_<N>.md
What it does:

- Fetches `https://r.jina.ai/<url>` (the target URL prefixed with the Jina Reader endpoint), which returns a markdown rendering of the page

Script options:

- `--output <path>`: Save fetched content to a file (required for the standardized workflow)
- `--silent`: Suppress progress messages

Standardized output location:

- Pattern: `workdir/<filename>_<entry_id>/references/ref_<N>.md`
- Example: `workdir/public_test_public_test_1/references/ref_1.md`

Output: The script saves the fetched markdown content to the specified path.
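The script's core behavior described above can be sketched in Python. This is a minimal illustration only; the actual `fetch_url.py` may handle errors, retries, and options differently, and the helper names here are not part of the skill.

```python
import urllib.request

JINA_READER = "https://r.jina.ai/"

def reader_url(url: str) -> str:
    """Prefix the target URL with the Jina Reader endpoint."""
    return JINA_READER + url

def fetch_markdown(url: str, timeout: int = 30) -> str:
    """Fetch the markdown rendering of a page via the reader endpoint."""
    with urllib.request.urlopen(reader_url(url), timeout=timeout) as resp:
        return resp.read().decode("utf-8")
```

The reader endpoint simply takes the full target URL appended to its base, so `reader_url("https://example.com/docs")` yields `https://r.jina.ai/https://example.com/docs`.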
Pass the fetched content to a subagent along with extraction requirements.
Subagent invocation pattern:
Use the Task tool to launch a general-purpose subagent:
Prompt template:

    Extract detailed information about {topic name} from the following web content.

    Content:
    ---
    {fetched_markdown_content}
    ---

    Extraction requirements:
    {extraction_requirements}

    Output the result as markdown in the following format:

    # {Topic name}

    ## {Section 1}
    [Detailed explanation, tables, specifications, etc.]

    ## {Section 2}
    [Detailed explanation, procedures, code examples, etc.]

    ...
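Filling this template programmatically can be sketched as follows; the variable and parameter names are illustrative, not part of the skill's scripts.

```python
PROMPT_TEMPLATE = """Extract detailed information about {topic} from the following web content.

Content:
---
{content}
---

Extraction requirements:
{requirements}

Output the result as markdown with a top-level '# {topic}' heading."""

def build_prompt(topic: str, content: str, requirements: str) -> str:
    """Assemble the subagent extraction prompt from its three parts."""
    return PROMPT_TEMPLATE.format(topic=topic, content=content, requirements=requirements)
```

The assembled string is then passed as the prompt of a Task-tool invocation.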
Extraction requirements should specify:

- Which topics and aspects of the content to extract
- The desired output structure (tables, numbered steps, code examples)
- The required level of detail
Save the subagent's output to the specified path.
File operations:

    # 1. Fetch content
    CONTENT=$(python3 .claude/skills/web-reference-fetcher/scripts/fetch_url.py \
      "https://example.com/docs")

    # 2. Pass to subagent via Task tool with:
    #    - Fetched content
    #    - Topic: "API Authentication Methods"
    #    - Extraction: "Extract all authentication methods, parameters, and examples"

    # 3. Save subagent output
    #    Path: workdir/references/api_auth.md

    # Fetch URL 1
    CONTENT1=$(python3 .claude/skills/web-reference-fetcher/scripts/fetch_url.py \
      "https://example.com/spec")

    # Analyze with subagent for Topic 1
    # Save to: workdir/references/specification.md

    # Fetch URL 2
    CONTENT2=$(python3 .claude/skills/web-reference-fetcher/scripts/fetch_url.py \
      "https://example.com/tutorial")

    # Analyze with subagent for Topic 2
    # Save to: workdir/references/tutorial.md
This skill is designed to be called by other skills. Example integration:

    # In another skill (e.g., test-case-handler skill):

    ## Step 3: Fetch Reference Documentation

    For each reference URL in the test case:

    1. Invoke web-reference-fetcher skill
    2. Pass URL and topic extracted from test case instruction
    3. Specify output path: `workdir/references/task_{N}/reference_{M}.md`
    4. Use the saved markdown for further processing
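The output-path pattern in step 3 can be built with a small helper. This is a hypothetical sketch; `task_ref_path` is not part of the skill's scripts.

```python
from pathlib import Path

def task_ref_path(task_n: int, ref_m: int, base: str = "workdir/references") -> Path:
    """Build the integration path workdir/references/task_<N>/reference_<M>.md."""
    return Path(base) / f"task_{task_n}" / f"reference_{ref_m}.md"

print(task_ref_path(1, 2))  # workdir/references/task_1/reference_2.md
```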
Extraction requirements:

- Extract all technical parameters in table form
- Describe the measurement procedure step by step
- Include calculation formulas and examples
- Specify quality criteria and tolerance ranges

Extraction requirements:

- List all endpoints
- Show request/response formats
- Explain authentication methods in detail
- Include error codes and how to handle them

Extraction requirements:

- Extract the procedure as a numbered list
- Include a detailed explanation of each step
- Specify required tools and prerequisites
- Add troubleshooting information
- Use `--output` to save raw content, `--silent` for quiet mode

For multiple URLs:

    for url in "${urls[@]}"; do
      # Fetch each URL
      # Launch subagents in parallel if possible
      # Save to respective paths
    done
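The same loop can be sketched in Python, with sequential `ref_<N>.md` numbering handled automatically. Here `fetch_page` is a stand-in for invoking `fetch_url.py`, and the subagent analysis step is elided.

```python
from pathlib import Path
from typing import Callable, Iterable, List

def fetch_all(urls: Iterable[str], out_dir: str,
              fetch_page: Callable[[str], str]) -> List[Path]:
    """Fetch each URL and save its raw content as ref_<N>.md under out_dir.

    fetch_page is any callable mapping a URL to markdown text.
    Returns the list of saved paths in fetch order.
    """
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    paths = []
    for n, url in enumerate(urls, start=1):
        path = out / f"ref_{n}.md"
        path.write_text(fetch_page(url), encoding="utf-8")
        paths.append(path)
    return paths
```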
Provide detailed formatting instructions to subagents:

    Format requirements:
    - Heading levels: use H2 as the top level
    - Code blocks: explicitly state the language
    - Tables: Markdown format, including a header row
    - Lists: preserve the hierarchical structure
To avoid re-fetching:

    # Save raw content first
    python3 .claude/skills/web-reference-fetcher/scripts/fetch_url.py \
      "https://example.com/docs" --output /tmp/cached_content.md

    # Use cached content for multiple analyses with different extraction requirements
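A minimal cache wrapper for this pattern could look like the following. It is an illustration only: `fetch_cached` is not part of the skill, and `fetch_page` again stands in for the actual fetch.

```python
from pathlib import Path
from typing import Callable

def fetch_cached(url: str, cache_path: str,
                 fetch_page: Callable[[str], str]) -> str:
    """Return cached content if present; otherwise fetch once and save."""
    cache = Path(cache_path)
    if cache.exists():
        return cache.read_text(encoding="utf-8")
    content = fetch_page(url)
    cache.parent.mkdir(parents=True, exist_ok=True)
    cache.write_text(content, encoding="utf-8")
    return content
```

Repeated analyses with different extraction requirements can then reuse the cached file instead of re-fetching the URL.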
- `curl` command-line tool

Before completing:

- Verify that each output file was saved to its specified path
- Confirm the saved markdown follows the requested structure
When called by other skills:

- Accept URL(s), topic names, and extraction requirements from the caller
- Save each summary to the caller-specified output path

This skill is self-contained and can be invoked without knowledge of:

- How content is fetched internally (the Jina Reader endpoint is an implementation detail)
- How subagents perform the extraction

It only requires:

- URL(s) to fetch
- Topic names and extraction requirements
- Output paths for the markdown summaries