| name | wiki-auto-documenter |
| description | Multi-agent orchestration system for automatically generating comprehensive Azure DevOps wiki documentation from Python codebase. Creates hierarchical wiki pages matching repository structure with bidirectional linkages between documentation and source files. Use when needing to document entire directories or maintain wiki docs synchronized with code. |
Automated Azure DevOps wiki documentation generation using multi-agent orchestration. This skill coordinates specialized agents to analyze Python code, generate comprehensive documentation, and publish to Azure DevOps wiki with proper hierarchy and cross-references.
Use this skill when:
- Documenting entire directories (e.g., `python_files/`)
- Keeping wiki documentation synchronized with code changes

This skill implements a three-tier multi-agent orchestration pattern:
Context Efficiency:
```shell
export AZURE_DEVOPS_PAT="your-personal-access-token"
export AZURE_DEVOPS_ORGANIZATION="emstas"
export AZURE_DEVOPS_PROJECT="Program Unify"
```
The PAT requires the `vso.wiki_write` scope.

All generated documentation MUST follow these markdown formatting conventions for consistency and clarity:
- Use `#` for headers following proper hierarchy (H1 → H2 → H3 → H4)
- Use `**bold**` for field labels (e.g., Purpose, Parameters, Returns)
- Use `*italic*` for emphasis within text
- Use backticks for code elements: `variable_name`, `function()`, `ClassName`
- Use `-` (hyphen) for unordered lists
- Use `1.` for ordered lists with proper numbering
- Use `[text](url)` format for inline links and `[file](./file_name)` for relative file links
- Use `|` with proper alignment and `| --- |` separator rows for tables
- Use `---` (horizontal rule) only for major section separators

Field Labels (use bold):
Code Elements (use backticks):
- `function_name()`
- `ClassName`
- `variable_name`
- `module.submodule`
- `path/to/file.py`

Structure Requirements:
Agent Type: general-purpose
Model: sonnet (or opus for large codebases >300 files)
Responsibilities:
Decomposition Strategy:
Example Orchestrator Prompt:
```
You are an ORCHESTRATOR AGENT for automated wiki documentation generation.

PROJECT CONTEXT:
- Repository: unify_2_1_dm_synapse_env_d10 (Azure Synapse PySpark ETL pipelines)
- Medallion architecture: Bronze → Silver → Gold layers
- Python files use: SparkOptimiser, TableUtilities, NotebookLogger
- Follow: .claude/CLAUDE.md and .claude/rules/python_rules.md

YOUR TASK:
Generate comprehensive Azure DevOps wiki documentation for: {target_path}

STEPS:
1. Analyze directory structure and count Python files
2. Decompose into 2-8 optimal subtasks (balance workload)
3. Launch code-documenter agents IN PARALLEL (single message, multiple Task calls)
4. Collect JSON responses from all agents
5. Aggregate documentation and build cross-reference map
6. Launch wiki-publisher agent with complete dataset
7. Produce final report with all created wiki URLs

AGENT LAUNCH:
Use Task tool with subagent_type="general-purpose" for all agents.
Launch all code-documenter agents in a SINGLE MESSAGE for parallelization.

EXPECTED OUTPUT:
Comprehensive JSON report with:
- Total files documented
- All wiki page URLs
- Cross-reference map
- Execution metrics
- Any errors or warnings
```
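The decomposition step (2-8 balanced subtasks) can be sketched as a plain chunking helper. `plan_subtasks` and its `target_per_agent` threshold are illustrative assumptions, not part of the shipped scripts:

```python
import math

def plan_subtasks(file_paths, min_agents=2, max_agents=8, target_per_agent=30):
    """Split files into balanced chunks for parallel code-documenter agents.

    Hypothetical helper illustrating the 2-8 agent decomposition rule:
    small inputs go to a single agent; larger inputs are split into
    roughly equal chunks, capped at max_agents.
    """
    if len(file_paths) <= target_per_agent:
        return [file_paths]  # small enough for a single agent
    n = min(max_agents, max(min_agents, math.ceil(len(file_paths) / target_per_agent)))
    chunk = math.ceil(len(file_paths) / n)
    return [file_paths[i:i + chunk] for i in range(0, len(file_paths), chunk)]
```

With these thresholds, 50 files yields 2 agents of 25 files each and 209 files yields 7 agents, matching the worked examples later in this document.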
Agent Type: general-purpose with code analysis specialization
Count: 2-8 (determined by orchestrator based on file distribution)
Model: haiku (faster for code analysis tasks)
Responsibilities:
- Document decorators (e.g., `@synapse_error_print_handler`)

Documentation Template Per File:
CRITICAL: Follow markdown formatting standards strictly:
- `**bold**` for field labels
- `-` for unordered lists

# {FileName}
**File Path**: `{relative_path}`
**Repository Link**: [View Source]({azure_devops_blob_url})
**Layer**: {Bronze|Silver|Gold|Utility|Testing|Pipeline}
## Overview
{Brief 2-3 sentence description of file purpose}
## Key Components
### Classes
{For each class found:}
#### `{ClassName}`
**Purpose**: {Extract from class docstring or infer from code - 1-2 sentence description of what this class does}
**Pattern**: {ETL|Utility|Test|Other}
**Description**:
{Extract full class docstring if available, or generate detailed description from code analysis:}
- What problem does this class solve?
- What are its main responsibilities?
- How does it fit into the overall system?
- Any important design patterns or architectural considerations
**Attributes**:
- `{attr_name}`: {type} - {description from docstring or inferred from usage}
**Methods**:
##### `{method_name}({params}) -> {return_type}`
{Extract from method docstring or describe what the method does}
**Parameters**:
- `{param_name}`: {type} - {description from docstring}
**Returns**: {description of return value from docstring}
**Decorators**: {decorator_list}
**Behavior**: {Key behavior or side effects if relevant}
---
### Functions
{For each standalone function:}
#### `{function_name}({params}) -> {return_type}`
**Purpose**: {Extract from function docstring or infer - 1-2 sentence description}
**Description**:
{Extract full function docstring if available, or generate description:}
- What does this function do?
- What are the key inputs and outputs?
- Any important side effects or state changes?
- When should this function be used?
**Parameters**:
- `{param_name}`: {type} - {description from docstring or inferred}
**Returns**: {description of return value from docstring or inferred}
**Decorators**: {decorator_list}
**Behavior**: {Detailed behavior description from docstring or code analysis}
---
## Dependencies
### External Imports
- `{package}`: {purpose}
### Internal Imports
- [`{module}`]({wiki_link_to_module}): {purpose}
### Related Files
- [{related_file_name}]({wiki_link}): {relationship_description}
## Usage Examples
{If ETL class pattern:}
```python
# Bronze to Silver transformation
from {module_path} import {ClassName}
table = {ClassName}(bronze_table_name="bronze_db.b_table_name")
# Extract, Transform, Load executed in __init__
```

{If utility/function:}

```python
from {module_path} import {function_name}
result = {function_name}(param1, param2)
```

{If applicable - only for silver/gold layer files:}
Input Layer: {Bronze|Silver}
Output Layer: {Silver|Gold}
Source Table: {source_db}.{source_table}
Target Table: {target_db}.{target_table}
Transformation Logic:
---
*Auto-generated documentation*
*Last updated: {timestamp}*
*Generated by: wiki-auto-documenter skill*
**JSON Response Format**:
```json
{
  "agent_id": "code-documenter-{n}",
  "status": "completed",
  "files_documented": [
    {
      "file_path": "python_files/gold/g_cms_address.py",
      "wiki_path": "/gold/g_cms_address",
      "markdown_content": "# g_cms_address\n\n...",
      "repo_link": "https://dev.azure.com/emstas/Program%20Unify/_git/unify_2_1_dm_synapse_env_d10?path=/python_files/gold/g_cms_address.py&version=GB{branch}",
      "metadata": {
        "layer": "gold",
        "has_etl_pattern": true,
        "class_count": 1,
        "function_count": 3,
        "decorator_count": 3,
        "import_count": 7
      },
      "dependencies": [
        "utilities/session_optimiser",
        "utilities/table_utilities"
      ],
      "related_files": [
        {
          "file_path": "python_files/silver/silver_cms/s_cms_address.py",
          "relationship": "silver_input",
          "wiki_path": "/silver/silver_cms/s_cms_address"
        }
      ]
    }
  ],
  "summary": {
    "files_processed": 52,
    "files_successful": 52,
    "files_failed": 0,
    "total_classes": 48,
    "total_functions": 156
  },
  "errors": [],
  "warnings": ["File X has no docstrings"],
  "execution_time_seconds": 45
}
```

Code Analyzer Helper Script:
The scripts/code_analyzer.py provides reusable functions for Python AST analysis:
The `scripts/code_analyzer.py` script provides reusable functions for Python AST analysis:

- `analyze_python_file(file_path)` - Complete AST analysis with docstring extraction
- `extract_classes(ast_tree)` - Parse class definitions with docstrings
- `extract_functions(ast_tree)` - Parse function signatures with docstrings
- `extract_docstring(node)` - Extract docstring from AST node
- `parse_docstring_sections(docstring)` - Parse Google/NumPy/reStructuredText docstring formats
- `infer_purpose_from_code(node)` - Generate description when docstring is missing
- `extract_imports(ast_tree)` - Parse import statements
- `detect_etl_pattern(ast_tree)` - Identify Extract/Transform/Load methods
- `extract_parameter_descriptions(docstring)` - Parse parameter docs from docstring
- `extract_return_description(docstring)` - Parse return value docs from docstring
- `generate_markdown(analysis_result)` - Convert analysis to markdown with descriptions

Agent Type: general-purpose
Model: sonnet (needs robust error handling and API management)
Responsibilities:
- Use the `mcp-code-execution` pattern for context-efficient wiki updates

Wiki Hierarchy Creation Strategy:
1. Analyze aggregated docs to build directory tree
2. Create pages depth-first (root → leaf):
a. Create parent directory index page
b. Create child directory index pages
c. Create file documentation pages
3. Add navigation links:
a. Breadcrumbs: /unify_2_1_dm_synapse_env_d10 > /gold > g_cms_address
b. Directory index: "Files in this directory: [file1], [file2]..."
c. Related files: Links to upstream/downstream files
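The depth-first (root → leaf) ordering above can be expressed as a pre-order walk, so every parent index page exists before its children are created. The hierarchy dict shape here is an assumption based on the `wiki_hierarchy_builder` output shown later:

```python
def pages_in_creation_order(hierarchy):
    """Yield wiki paths root-first so each parent index page is created
    before its child pages (hypothetical hierarchy dict with keys
    "wiki_path", "files", "subdirs")."""
    yield hierarchy["wiki_path"]
    for name in hierarchy.get("files", []):
        yield f'{hierarchy["wiki_path"]}/{name}'
    for sub in hierarchy.get("subdirs", []):
        yield from pages_in_creation_order(sub)
```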
Directory Index Page Template:
CRITICAL: Follow markdown formatting standards strictly:
# {Directory Name}
**Path**: `{directory_path}`
**Total Files**: {file_count}
**Subdirectories**: {subdir_count}
**Repository Folder**: [View in Repository]({repo_folder_link})
## Overview
{Description of what this directory contains}
## Files in This Directory
| File | Description | Key Components |
| ---- | ----------- | -------------- |
| [{file1_name}]({file1_wiki_link}) | {brief_description} | {class_count} classes, {func_count} functions |
| [{file2_name}]({file2_wiki_link}) | {brief_description} | {class_count} classes, {func_count} functions |
## Subdirectories
- [{subdir1}]({subdir1_wiki_link}) - {subdir1_description}
- [{subdir2}]({subdir2_wiki_link}) - {subdir2_description}
## Quick Links
- [Parent Directory]({parent_wiki_link})
- [Repository Folder]({repo_folder_link})
---
*Auto-generated directory index*
*Last updated: {timestamp}*
MCP Code Execution Pattern for Wiki Updates:
```shell
# Context-efficient wiki updates using mcp-code-execution pattern
python3 /home/vscode/.claude/skills/mcp-code-execution/scripts/ado_wiki_updater.py
```
Key Benefits of MCP Pattern:
Script Features:
```python
# Context-efficient operations:
# 1. Lightweight AST analysis (summary only)
# 2. Markdown generation in subprocess
# 3. GET with ETag from headers: response.headers.get('ETag')
# 4. PUT for updates with If-Match header
# 5. PUT for creates (same endpoint)
# 6. Return minimal summary (not full responses)

# Example usage in agent:
updater = WikiUpdater()
summary = updater.update_utilities_wiki()
# Returns: {pages_updated: 20, pages_created: 0, pages_failed: 0, ...}
# NOT the full markdown content or API responses
```
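Steps 3-5 (GET the ETag, then PUT with `If-Match` for updates, plain PUT for creates) can be sketched as below. `upsert_wiki_page` is hypothetical; the endpoint shape follows the Azure DevOps Wiki REST API, and the injected `session` is any object with requests-style `get`/`put` methods:

```python
def upsert_wiki_page(session, base_url, path, content):
    """Create or update a wiki page, echoing the current ETag via If-Match
    on updates (Azure DevOps rejects updates without it). Sketch only."""
    params = {"path": path, "api-version": "7.1"}
    headers = {}
    existing = session.get(base_url, params=params)
    if existing.status_code == 200:
        # Page exists: updates must carry the current version in If-Match
        headers["If-Match"] = existing.headers.get("ETag", "")
    resp = session.put(base_url, params=params, headers=headers,
                       json={"content": content})
    return {"status": resp.status_code, "created": existing.status_code != 200}
```

In the real script, `session` would be a `requests.Session` authenticated with the PAT via basic auth.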
**JSON Response Format**:

```json
{
  "agent_id": "wiki-publisher",
  "status": "completed",
  "pages_created": 52,
  "pages_updated": 7,
  "index_pages_created": 8,
  "wiki_urls": [
    "https://dev.azure.com/emstas/Program%20Unify/_wiki/wikis/Program-Unify.wiki/274/unify_2_1_dm_synapse_env_d10",
    "https://dev.azure.com/emstas/Program%20Unify/_wiki/wikis/Program-Unify.wiki/275/unify_2_1_dm_synapse_env_d10/gold",
    "https://dev.azure.com/emstas/Program%20Unify/_wiki/wikis/Program-Unify.wiki/276/unify_2_1_dm_synapse_env_d10/gold/g_cms_address"
  ],
  "hierarchy_map": {
    "/unify_2_1_dm_synapse_env_d10": {
      "type": "index",
      "children": ["gold", "silver", "utilities", "testing", "pipeline_operations"]
    },
    "/unify_2_1_dm_synapse_env_d10/gold": {
      "type": "index",
      "files": ["g_cms_address", "g_cms_business", ...]
    }
  },
  "api_metrics": {
    "total_requests": 67,
    "successful_requests": 67,
    "failed_requests": 0,
    "retries": 2,
    "rate_limit_hits": 0,
    "avg_response_time_ms": 340
  },
  "errors": [],
  "warnings": ["Page X already exists, updated instead of created"],
  "execution_time_seconds": 120
}
```
```
USER REQUEST: "Document python_files/gold/"
        ↓
[ORCHESTRATOR AGENT LAUNCH]
├─ Read directory tree: python_files/gold/
├─ Count Python files: 50 files
├─ Decide: 2 agents (25 files each) OR 1 agent (manageable size)
└─ Extract current git branch: "staging"
        ↓
[LAUNCH CODE-DOCUMENTER AGENTS IN PARALLEL]
Single message with multiple Task calls:
├─ Task(subagent_type="general-purpose", ..., agent_id="code-doc-1")
│  └─ Assigned: python_files/gold/g_*.py (files 1-25)
└─ Task(subagent_type="general-purpose", ..., agent_id="code-doc-2")
   └─ Assigned: python_files/gold/g_*.py (files 26-50)
        ↓
[AGENTS WORK IN PARALLEL]
Code-Doc-1:                     Code-Doc-2:
├─ Read file 1                  ├─ Read file 26
├─ AST analysis                 ├─ AST analysis
├─ Generate markdown            ├─ Generate markdown
├─ Identify dependencies        ├─ Identify dependencies
├─ Create repo links            ├─ Create repo links
├─ ...                          ├─ ...
└─ Return JSON (25 docs)        └─ Return JSON (25 docs)
        ↓                               ↓
[ORCHESTRATOR AGGREGATES RESULTS]
├─ Collect JSON from agent 1
├─ Collect JSON from agent 2
├─ Merge documentation arrays
├─ Build cross-reference map (gold → silver relationships)
└─ Generate directory hierarchy structure
        ↓
[LAUNCH WIKI-PUBLISHER AGENT]
Task(subagent_type="general-purpose", ...)
├─ Receives: Complete documentation set (50 files)
└─ Loads: azure-devops skill (wiki API operations)
        ↓
[WIKI-PUBLISHER CREATES PAGES]
├─ Create /gold/ index page
├─ Create /gold/g_cms_address page
├─ Create /gold/g_cms_business page
├─ ... (50 file pages)
├─ Add breadcrumb navigation
├─ Add cross-reference links
├─ Update parent /unify_2_1_dm_synapse_env_d10 index
└─ Return JSON with all URLs
        ↓
[ORCHESTRATOR FINAL REPORT]
└─ Consolidated JSON:
   - 50 files documented
   - 51 wiki pages created (1 index + 50 files)
   - All wiki URLs
   - Total execution time: 8 minutes
   - Quality metrics
```
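The "merge documentation arrays" step above can be sketched as a plain merge over the agents' JSON reports. `aggregate_agent_results` is illustrative; the field names follow the code-documenter JSON response format shown earlier:

```python
def aggregate_agent_results(agent_reports):
    """Merge code-documenter JSON reports into one dataset for the
    wiki-publisher (sketch; field names from the response format above)."""
    merged = {"files_documented": [], "errors": [], "warnings": []}
    for report in agent_reports:
        merged["files_documented"].extend(report.get("files_documented", []))
        merged["errors"].extend(report.get("errors", []))
        merged["warnings"].extend(report.get("warnings", []))
    merged["total_files"] = len(merged["files_documented"])
    return merged
```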
```shell
# User command (via custom slash command or direct)
# /wiki-docs python_files/

# Orchestrator analyzes:
# - 209 Python files total
# - Optimal: 7 agents by directory:
#   - Agent 1: gold/ (50 files)
#   - Agent 2: silver/silver_cms/ (48 files)
#   - Agent 3: silver/silver_fvms/ (32 files)
#   - Agent 4: silver/silver_nicheRMS/ (5 files)
#   - Agent 5: utilities/ (35 files)
#   - Agent 6: testing/ (12 files)
#   - Agent 7: pipeline_operations/ (27 files)

# Expected completion: ~25-30 minutes
# Output: 209 file pages + ~10 index pages = 219 total wiki pages
```
```shell
# User command
# /wiki-docs python_files/gold/

# Orchestrator analyzes:
# - 50 Python files in gold/
# - Optimal: 2 agents (25 files each)

# Expected completion: ~7-10 minutes
# Output: 50 file pages + 1 index page = 51 total wiki pages
```
```shell
# User command
# /wiki-docs python_files/utilities/ --update

# Orchestrator:
# - Generates new documentation
# - Wiki-Publisher checks existing pages
# - Updates pages (uses PUT with If-Match header)
# - Preserves page URLs

# Expected completion: ~5-7 minutes
# Output: 35 pages updated + 1 index updated
```
```shell
# User command
# /wiki-docs python_files/gold/ --wiki-base="/documentation/code/"

# Wiki pages created at:
# /documentation/code/gold/ (instead of default /unify_2_1_dm_synapse_env_d10/gold/)
# Useful for organizing multiple projects in same wiki
```
/wiki-docs <path> [options]
Parameters:
path (required) - Relative path from repo root
Examples: "python_files/", "python_files/gold/", "python_files/utilities/session_optimiser.py"
Options:
--wiki-base=<path> - Base path in wiki (default: "/unify_2_1_dm_synapse_env_d10/")
--branch=<name> - Git branch for repo links (default: current branch)
--recursive=<bool> - Process subdirectories (default: true)
--update - Update existing pages instead of creating new
--dry-run - Generate docs but don't publish to wiki
All `.py` files are processed recursively (unless `--recursive=false`), and repository paths map to wiki paths, e.g. `python_files/gold/g_cms_address.py` → `/unify_2_1_dm_synapse_env_d10/gold/g_cms_address`.

```python
from wiki_hierarchy_builder import WikiHierarchyBuilder
import os

builder = WikiHierarchyBuilder(
    repo_root=os.getcwd(),
    wiki_base=f"/{os.path.basename(os.getcwd())}"
)

# Build hierarchy from directory
hierarchy = builder.build_from_path("python_files/gold/")

# Outputs:
# {
#   "wiki_path": "/unify_2_1_dm_synapse_env_d10/gold",
#   "files": [list of .py files],
#   "subdirs": [list of subdirectories],
#   "parent_path": "/unify_2_1_dm_synapse_env_d10"
# }
```
Direct Source Links: Link to file in Azure DevOps repo
[View Source](https://dev.azure.com/emstas/Program%20Unify/_git/unify_2_1_dm_synapse_env_d10?path=/python_files/gold/g_cms_address.py&version=GBstaging)
Cross-Reference Links: Links between related wiki pages
## Related Documentation
- [s_cms_address (Silver Layer)](../silver/silver_cms/s_cms_address) - Upstream source
- [TableUtilities](../utilities/session_optimiser#tableutilities) - Dependency
Breadcrumb Navigation: Hierarchical path links
[Home](/) > [unify_2_1_dm_synapse_env_d10](/unify_2_1_dm_synapse_env_d10) > [gold](/unify_2_1_dm_synapse_env_d10/gold) > g_cms_address
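The breadcrumb line above can be generated from a wiki path alone; `breadcrumb` is an illustrative helper, not part of the shipped scripts:

```python
def breadcrumb(wiki_path):
    """Render hierarchical breadcrumb markdown for a wiki path: every
    ancestor becomes a link, the current page stays plain text."""
    parts = wiki_path.strip("/").split("/")
    crumbs = ["[Home](/)"]
    for i, part in enumerate(parts[:-1]):
        crumbs.append(f"[{part}](/{'/'.join(parts[:i + 1])})")
    crumbs.append(parts[-1])  # current page is plain text
    return " > ".join(crumbs)
```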
Directory Index Links: Parent directory link to all files
## Files in gold/
- [g_cms_address](/unify_2_1_dm_synapse_env_d10/gold/g_cms_address)
- [g_cms_business](/unify_2_1_dm_synapse_env_d10/gold/g_cms_business)
Code analyzer automatically detects gold-to-silver lineage from file naming conventions (e.g., `g_cms_address` → `s_cms_address`).

Code-documenter agents should extract and parse docstrings using a multi-tier approach:
```python
import ast

def extract_docstring(node):
    """Extract docstring from AST node (class, function, or method)."""
    return ast.get_docstring(node) or ""

def analyze_class(class_node):
    class_info = {
        "name": class_node.name,
        "docstring": extract_docstring(class_node),
        "methods": []
    }
    for item in class_node.body:
        if isinstance(item, ast.FunctionDef):
            method_info = {
                "name": item.name,
                "docstring": extract_docstring(item)
            }
            class_info["methods"].append(method_info)
    return class_info
```
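A quick standalone check of the `ast.get_docstring` call these helpers rely on; the sample class is made up for illustration:

```python
import ast

source = '''
class AddressLoader:
    """Loads gold-layer address records."""
    def load(self):
        """Write the transformed frame."""
'''
tree = ast.parse(source)
cls = tree.body[0]  # the ClassDef node
print(ast.get_docstring(cls))          # Loads gold-layer address records.
print(ast.get_docstring(cls.body[1]))  # Write the transformed frame.
```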
Support multiple docstring conventions:
Google Style:
```python
def function(param1: str, param2: int) -> bool:
    """Brief description.

    Detailed description of what this function does.

    Args:
        param1: Description of param1
        param2: Description of param2

    Returns:
        Description of return value

    Raises:
        ValueError: When validation fails
    """
```
NumPy Style:
```python
def function(param1: str, param2: int) -> bool:
    """
    Brief description.

    Detailed description.

    Parameters
    ----------
    param1 : str
        Description of param1
    param2 : int
        Description of param2

    Returns
    -------
    bool
        Description of return value
    """
```
reStructuredText Style:
```python
def function(param1: str, param2: int) -> bool:
    """Brief description.

    Detailed description.

    :param param1: Description of param1
    :type param1: str
    :param param2: Description of param2
    :type param2: int
    :return: Description of return value
    :rtype: bool
    """
```
```python
import re

def parse_google_docstring(docstring):
    """Parse Google-style docstring into structured sections."""
    sections = {
        "description": "",
        "parameters": [],
        "returns": "",
        "raises": []
    }
    # Extract description (everything before first section)
    desc_match = re.match(r'^(.*?)(?:Args:|Parameters:|Returns:|Raises:|$)',
                          docstring, re.DOTALL)
    if desc_match:
        sections["description"] = desc_match.group(1).strip()
    # Extract Args/Parameters section
    args_match = re.search(r'(?:Args:|Parameters:)\s*(.*?)(?:Returns:|Raises:|$)',
                           docstring, re.DOTALL)
    if args_match:
        for line in args_match.group(1).strip().split('\n'):
            param_match = re.match(r'\s*(\w+):\s*(.*)', line)
            if param_match:
                sections["parameters"].append({
                    "name": param_match.group(1),
                    "description": param_match.group(2)
                })
    # Extract Returns section
    returns_match = re.search(r'Returns:\s*(.*?)(?:Raises:|$)',
                              docstring, re.DOTALL)
    if returns_match:
        sections["returns"] = returns_match.group(1).strip()
    return sections
```
When docstrings are not present, infer purpose from:

- Class naming patterns: `*Manager`, `*Builder`, `*Factory`, `*Handler`
- Method naming patterns: `get_*`, `set_*`, `create_*`, `delete_*`, `process_*`
- Decorators: `@property`, `@staticmethod`, `@classmethod`, `@synapse_error_print_handler`
- ETL method names: `extract`, `transform`, `load`

```python
def infer_class_purpose(class_node):
    """Infer class purpose from name and structure."""
    name = class_node.name
    # Check naming patterns
    if name.endswith('Manager'):
        return f"Manages {name[:-7].lower()} operations and state"
    elif name.endswith('Builder'):
        return f"Builds and constructs {name[:-7].lower()} instances"
    elif name.endswith('Factory'):
        return f"Creates and configures {name[:-7].lower()} objects"
    elif name.endswith('Handler'):
        return f"Handles {name[:-7].lower()} events and processing"
    # Check for ETL pattern
    method_names = [m.name for m in class_node.body
                    if isinstance(m, ast.FunctionDef)]
    if {'extract', 'transform', 'load'}.issubset(method_names):
        return "ETL class for data transformation between medallion layers"
    # Generic fallback
    return f"Implements {name} functionality"
```
```python
def infer_method_purpose(method_node):
    """Infer method purpose from name and signature."""
    name = method_node.name
    # Special methods
    if name == '__init__':
        return "Initialize instance with configuration and dependencies"
    elif name.startswith('get_'):
        return f"Retrieve {name[4:].replace('_', ' ')}"
    elif name.startswith('set_'):
        return f"Set {name[4:].replace('_', ' ')}"
    elif name.startswith('create_'):
        return f"Create new {name[7:].replace('_', ' ')}"
    elif name.startswith('delete_'):
        return f"Delete {name[7:].replace('_', ' ')}"
    elif name.startswith('process_'):
        return f"Process {name[8:].replace('_', ' ')}"
    elif name.startswith('validate_'):
        return f"Validate {name[9:].replace('_', ' ')}"
    elif name.startswith(('is_', 'has_')):
        # Split on the first underscore so both prefixes strip cleanly
        return f"Check if {name.split('_', 1)[1].replace('_', ' ')}"
    return f"Perform {name.replace('_', ' ')} operation"
```
Combine extracted docstrings with inferred information:
```python
def generate_description(node, docstring, inferred_purpose):
    """Generate complete description from docstring and inference."""
    if docstring:
        # Use docstring as primary source
        parsed = parse_google_docstring(docstring)
        return {
            "purpose": parsed["description"].split('\n')[0],  # First line
            "description": parsed["description"],
            "parameters": parsed["parameters"],
            "returns": parsed["returns"]
        }
    else:
        # Use inferred purpose
        return {
            "purpose": inferred_purpose,
            "description": f"{inferred_purpose}. (No docstring available - inferred from code structure)",
            "parameters": [],  # Can infer from type hints
            "returns": ""
        }
```
The code-documenter agents should validate inputs and degrade gracefully:

File Validation:
- Confirm each target parses with `ast.parse()` before documenting it

Documentation Quality:
- Use backticks for code elements and `-` for unordered lists, per the markdown standards

Error Handling:
```python
try:
    analysis = analyze_python_file(file_path)
except SyntaxError as e:
    warnings.append(f"Syntax error in {file_path}: {e}")
    # Generate partial documentation with error notice
except Exception as e:
    errors.append(f"Failed to analyze {file_path}: {e}")
    # Skip file, continue with others
```
API Response Validation:
Retry Logic:
```python
max_retries = 3
for attempt in range(max_retries):
    try:
        response = wiki.create_or_update_page(path, content)
        if response.status_code in (200, 201):
            break
    except requests.exceptions.RequestException:
        pass  # Treat request failures like non-2xx responses below
    if attempt == max_retries - 1:
        errors.append(f"Failed to create page {path} after {max_retries} attempts")
    else:
        time.sleep(2 ** attempt)  # Exponential backoff before retrying
```
Rate Limiting:
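A minimal client-side throttle is enough to stay under the wiki API's request rate; `SimpleThrottle` and its default of 5 requests/second are assumptions, not documented Azure DevOps limits:

```python
import time

class SimpleThrottle:
    """Space out API calls to at most max_per_second requests (sketch)."""
    def __init__(self, max_per_second=5):
        self.min_interval = 1.0 / max_per_second
        self._last = 0.0

    def wait(self):
        """Sleep just long enough to honor the minimum interval."""
        now = time.monotonic()
        delay = self.min_interval - (now - self._last)
        if delay > 0:
            time.sleep(delay)
        self._last = time.monotonic()
```

Call `throttle.wait()` immediately before each wiki API request.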
Cross-Link Verification:
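Cross-link verification reduces to checking every `related_files` entry against the set of pages actually created; `verify_cross_links` is an illustrative helper reusing the code-documenter JSON shape shown earlier:

```python
def verify_cross_links(created_paths, docs):
    """Return (source_file, wiki_path) pairs whose related-file link points
    at a page that was never created (sketch over the JSON format above)."""
    created = set(created_paths)
    broken = []
    for doc in docs:
        for rel in doc.get("related_files", []):
            if rel["wiki_path"] not in created:
                broken.append((doc["file_path"], rel["wiki_path"]))
    return broken
```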
Agent Completion Check:
Data Aggregation Validation:
Final Report Requirements:
The wiki-publisher agent loads the azure-devops skill for REST API operations:
Load the azure-devops skill for wiki API operations.
Use scripts/ado_wiki_client.py to interact with Azure DevOps Wiki REST API.
The skill provides context-efficient REST API helpers without loading 50+ MCP tools.
See .claude/skills/azure-devops/skill.md for complete documentation.
This skill follows the orchestration patterns from multi-agent-orchestration:
Reference: .claude/skills/multi-agent-orchestration/skill.md
Azure DevOps Wiki REST API client with authentication and error handling.
Key Methods:
- `create_or_update_page(path, content, comment)` - Creates new or updates existing page
- `get_page(path)` - Retrieves page content and metadata (including eTag version)
- `delete_page(path)` - Removes wiki page
- `list_pages(path)` - Lists all pages under path
- `get_page_stats(path)` - Returns page views and edit history

Usage:
```python
from scripts.ado_wiki_client import WikiClient

client = WikiClient.from_env()  # Reads from environment variables
response = client.create_or_update_page(
    path="/unify_2_1_dm_synapse_env_d10/gold/g_cms_address",
    content=markdown_content,
    comment="Auto-generated documentation"
)
print(response["remoteUrl"])  # Wiki page URL
```
Python AST analysis for comprehensive code documentation.
Key Functions:
- `analyze_file(file_path) -> dict` - Complete analysis of Python file
- `extract_classes(ast_tree) -> list` - Parse class definitions with methods
- `extract_functions(ast_tree) -> list` - Parse function signatures and decorators
- `extract_imports(ast_tree) -> dict` - Parse import statements (external + internal)
- `detect_etl_pattern(class_node) -> bool` - Check for Extract/Transform/Load methods
- `resolve_internal_imports(import_path, repo_root) -> str` - Convert import to file path
- `generate_markdown(analysis) -> str` - Convert analysis dict to markdown

Usage:
```python
from scripts.code_analyzer import analyze_file, generate_markdown, extract_docstring

analysis = analyze_file("python_files/gold/g_cms_address.py")
markdown = generate_markdown(analysis)

# analysis contains:
# {
#   "classes": [
#     {
#       "name": "ClassName",
#       "docstring": "Full class docstring...",
#       "purpose": "Brief extracted purpose",
#       "description": "Detailed description from docstring",
#       "methods": [
#         {
#           "name": "method_name",
#           "signature": "method_name(param1: str, param2: int) -> bool",
#           "docstring": "Full method docstring...",
#           "purpose": "What this method does",
#           "parameters": [
#             {"name": "param1", "type": "str", "description": "..."},
#             {"name": "param2", "type": "int", "description": "..."}
#           ],
#           "returns": {"type": "bool", "description": "..."},
#           "decorators": ["@synapse_error_print_handler"]
#         }
#       ]
#     }
#   ],
#   "functions": [
#     {
#       "name": "function_name",
#       "signature": "function_name(arg1: str) -> dict",
#       "docstring": "Full function docstring...",
#       "purpose": "What this function does",
#       "parameters": [...],
#       "returns": {...},
#       "decorators": [...]
#     }
#   ],
#   "imports": {"external": [...], "internal": [...]},
#   "has_etl_pattern": True,
#   "file_path": "...",
#   "line_count": 145
# }
```
Directory tree to wiki hierarchy conversion.
Key Methods:
- `build_hierarchy(path) -> dict` - Build complete hierarchy from path
- `get_python_files(directory) -> list` - Find all `.py` files recursively
- `map_repo_to_wiki(repo_path, wiki_base) -> str` - Convert repo path to wiki path
- `generate_index_content(directory_info) -> str` - Create directory index markdown
- `detect_relationships(file_list) -> dict` - Find related files (gold → silver)

Usage:
```python
from scripts.wiki_hierarchy_builder import WikiHierarchyBuilder
import os

builder = WikiHierarchyBuilder(
    repo_root=os.getcwd(),
    wiki_base=f"/{os.path.basename(os.getcwd())}"
)
hierarchy = builder.build_hierarchy("python_files/gold/")
# Returns complete tree structure with file counts, wiki paths, relationships
```
- `**bold**` for field labels
- `-` for unordered lists
- Code elements in backticks

Symptoms: Orchestrator reports agent status "failed" or no response
Solutions:
Symptoms: Wiki-publisher reports 4xx or 5xx errors
Solutions:
- Verify the PAT includes the `vso.wiki_write` scope

Symptoms: Wiki page links return 404 errors
Solutions:
Symptoms: Import statements not showing related file links
Solutions:
- Extend `resolve_internal_imports()` for new import patterns

Expected Execution Times (209 files, 7 agents):
Optimization Tips:
- Use the `haiku` model for code-documenter agents (faster, cheaper)

Skill Version: 2.1
Created: 2025-11-12
Last Updated: 2025-11-13
Maintainer: AI Agent Team
Status: Production Ready
Major Enhancement: Markdown Formatting Standards & Quality Control
Impact: All generated documentation now follows consistent markdown standards, improving readability and maintainability across the entire wiki. Ensures professional appearance and compatibility with Azure DevOps wiki rendering.
Major Enhancement: Comprehensive Docstring Extraction & Description Generation
Impact: Documentation now includes meaningful descriptions for every class, method, and function, dramatically improving readability and developer understanding.