---
name: create-skill
description: Use when creating new skills, editing existing skills, or planning skill architecture - provides comprehensive guidance on skill structure, discoverability, token efficiency, and best practices for writing skills that AI can find and use effectively
---
# Creating Skills

## Overview
Skills are reference guides for proven techniques, patterns, or tools. Good skills are concise, well-structured, discoverable, and help AI instances find and apply effective approaches.
Core principle: Only add context AI doesn't already have. Challenge every token - assume Claude is smart and knows standard practices.
## When to Create a Skill

Create when:
- Technique wasn't intuitively obvious to you
- You'd reference this again across projects
- Pattern applies broadly (not project-specific)
- Others would benefit from this knowledge

Don't create for:
- One-off solutions
- Standard practices well documented elsewhere
- Project-specific conventions (put in CLAUDE.md instead)
- Obvious or trivial information
## Skill Types

### Technique
A concrete method with steps to follow.
Examples: condition-based-waiting, root-cause-tracing, defensive-programming
Test with: application scenarios, variation scenarios, missing-information tests

### Pattern
A way of thinking about problems.
Examples: flatten-with-flags, reducing-complexity, information-hiding
Test with: recognition scenarios, application scenarios, counter-examples

### Reference
API docs, syntax guides, tool documentation.
Examples: API documentation, command references, library guides
Test with: retrieval scenarios, application scenarios, gap testing
## Skill Structure Requirements

### Directory Layout
```
skill-name/
├── SKILL.md           # Required: Main skill file with frontmatter
├── scripts/           # Optional: Executable code
├── references/        # Optional: Documentation to load as needed
└── assets/            # Optional: Files used in output
```
### Naming Conventions
- Directory name: lowercase with hyphens only (e.g., `my-skill`)
- Frontmatter name: must exactly match directory name
- Tool name: auto-generated as `skills_{directory_name}` with underscores (see the sketch below)
- Use gerund form (verb + -ing): `processing-pdfs`, `analyzing-data`, `creating-skills`
- Avoid vague names: "Helper", "Utils", "Tools"
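
The exact generation logic belongs to the host tooling, but the convention above implies a hyphen-to-underscore mapping; a tiny illustrative sketch (an assumption, not the actual implementation):

```python
# Assumed mapping, inferred from the skills_{directory_name} convention above:
# hyphens in the directory name become underscores in the generated tool name.
def tool_name(directory_name: str) -> str:
    return "skills_" + directory_name.replace("-", "_")

print(tool_name("creating-skills"))  # skills_creating_skills
print(tool_name("processing-pdfs"))  # skills_processing_pdfs
```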
 
## Frontmatter Requirements

### Required Fields
```yaml
---
name: skill-name
description: Use when [specific triggers/symptoms] - [what it does and how it helps]
---
```
### Constraints
- Max 1024 characters total for frontmatter
- Only `name` and `description` fields are supported
- Name: letters, numbers, hyphens only (no parentheses or special chars)
- Description target: under 500 characters if possible
 
## Writing Effective Descriptions

Critical for discovery: AI reads the description to decide which skills to load.

Format: Start with "Use when..." to focus on triggering conditions.

Include:
- Concrete triggers, symptoms, and situations that signal this skill applies
- Technology-agnostic triggers that describe the problem, not language-specific symptoms (unless the skill is technology-specific)
- What the skill does and how it helps
Always write in third person (injected into system prompt):
Good examples:
```yaml
description: Use when tests have race conditions, timing dependencies, or pass/fail inconsistently - replaces arbitrary timeouts with condition polling for reliable async tests

description: Use when using React Router and handling authentication redirects - provides patterns for protected routes and auth state management

description: Extract text and tables from PDF files, fill forms, merge documents. Use when working with PDF files or when the user mentions PDFs, forms, or document extraction.
```

Bad examples:
```yaml
# Too abstract, no triggers
description: For async testing

# First person
description: I can help you with async tests when they're flaky

# Vague, no specifics
description: Helps with documents
```
## Core Principles

### Concise is Key

Context window is shared with everything else. Only add what AI doesn't already know.

Challenge each piece of information:
- "Does Claude really need this explanation?"
- "Can I assume Claude knows this?"
- "Does this paragraph justify its token cost?"
Good (concise - ~50 tokens):
## Extract PDF text
Use pdfplumber for text extraction:
```python
import pdfplumber
with pdfplumber.open("file.pdf") as pdf:
    text = pdf.pages[0].extract_text()
```
Bad (verbose - ~150 tokens):
## Extract PDF text
PDF (Portable Document Format) files are a common file format that contains text, images, and other content. To extract text from a PDF, you'll need to use a library. There are many libraries available for PDF processing, but we recommend pdfplumber because it's easy to use and handles most cases well. First, you'll need to install it using pip. Then you can use the code below...
### Set Appropriate Degrees of Freedom

Match specificity to task fragility and variability.

Analogy: Think of AI as a robot exploring a path:
- Narrow bridge with cliffs: provide specific guardrails and exact instructions (low freedom)
- Open field with no hazards: give general direction and trust AI to find the best route (high freedom)
High freedom (text-based instructions):
Use when multiple approaches are valid, decisions depend on context, heuristics guide approach.
## Code review process
1. Analyze the code structure and organization
2. Check for potential bugs or edge cases
3. Suggest improvements for readability and maintainability
4. Verify adherence to project conventions
Medium freedom (pseudocode or scripts with parameters):
Use when a preferred pattern exists, some variation is acceptable, configuration affects behavior.
## Generate report
Use this template and customize as needed:
```python
def generate_report(data, format="markdown", include_charts=True):
    # Process data
    # Generate output in specified format
    # Optionally include visualizations
```
Low freedom (specific scripts, few or no parameters):
Use when operations are fragile and error-prone, consistency is critical, specific sequence must be followed.
## Database migration
Run exactly this script:
```bash
python scripts/migrate.py --verify --backup
```
Do not modify the command or add additional flags.
## Content Structure

### Recommended Template

```markdown
# Skill Title

Brief overview of the skill's purpose (1-2 sentences with core principle).

## When to Use This Skill
List specific symptoms and use cases:
- Use case 1
- Use case 2

**When NOT to use:**
- Counter-example 1
- Counter-example 2

## Core Pattern (for techniques/patterns)
Before/after code comparison OR quick reference table for scanning

## Quick Reference
Table or bullets for common operations

## Implementation
Step-by-step guidance (inline for simple, link to files for complex)

## Common Mistakes
What goes wrong + fixes

## Real-World Impact (optional)
Concrete results showing effectiveness
```
### Progressive Disclosure

SKILL.md serves as an overview that points to detailed materials as needed. Keep SKILL.md under 500 lines for optimal performance.

**Pattern 1: High-level guide with references**
---
name: pdf-processing
description: Extracts text and tables from PDF files, fills forms, merges documents. Use when working with PDF files or when the user mentions PDFs, forms, or document extraction.
---
# PDF Processing
## Quick start
Extract text with pdfplumber:
```python
import pdfplumber
with pdfplumber.open("file.pdf") as pdf:
    text = pdf.pages[0].extract_text()
```
## Advanced features
**Form filling**: See `references/forms.md` for complete guide
**API reference**: See `references/api.md` for all methods
**Examples**: See `references/examples.md` for common patterns
**Pattern 2: Domain-specific organization**

Keep token usage low by organizing content so AI loads only relevant domains.

```
bigquery-skill/
├── SKILL.md (overview and navigation)
└── references/
    ├── finance.md (revenue, billing metrics)
    ├── sales.md (opportunities, pipeline)
    ├── product.md (API usage, features)
    └── marketing.md (campaigns, attribution)
```
**Pattern 3: Conditional details**
Show basic content inline, link to advanced content:
# DOCX Processing
## Creating documents
Use docx-js for new documents. See `references/docx-js.md`.
## Editing documents
For simple edits, modify the XML directly.
**For tracked changes**: See `references/redlining.md`
**For OOXML details**: See `references/ooxml.md`
Avoid deeply nested references: keep all reference files one level deep from SKILL.md. AI may only partially read nested files, resulting in incomplete information.

Add a table of contents to long reference files: files over 100 lines need a TOC at the top so their scope can be previewed.
## Skill Discovery Optimization

Future AI needs to FIND your skill. Optimize for discovery.

### Keyword Coverage

Use words AI would search for:
- Error messages: "Hook timed out", "ENOTEMPTY", "race condition"
- Symptoms: "flaky", "hanging", "zombie", "pollution"
- Synonyms: "timeout/hang/freeze", "cleanup/teardown/afterEach"
- Tools: actual commands, library names, file types

### Descriptive Naming

Use active voice, verb-first (gerund form):
- ✅ `creating-skills` not `skill-creation`
- ✅ `testing-async-code` not `async-test-helpers`
- ✅ `processing-pdfs` not `pdf-processor`
### Token Efficiency

Target word counts:
- Getting-started workflows: <150 words each
- Frequently-loaded skills: <200 words total
- Other skills: <500 words (still be concise)

Techniques:

Move details to tool help:
```
# ❌ BAD: Document all flags in SKILL.md
search-conversations supports --text, --both, --after DATE, --before DATE, --limit N

# ✅ GOOD: Reference --help
search-conversations supports multiple modes and filters. Run --help for details.
```

Use cross-references:
```
# ❌ BAD: Repeat workflow details
When searching, dispatch subagent with template...
[20 lines of repeated instructions]

# ✅ GOOD: Reference other skill
Always use subagents (50-100x context savings). Use skill-name for workflow.
```

Compress examples:
```
# ❌ BAD: Verbose (42 words)
Your human partner: "How did we handle authentication errors in React Router before?"
You: I'll search past conversations for React Router authentication patterns.
[Dispatch subagent with search query: "React Router authentication error handling 401"]

# ✅ GOOD: Minimal (20 words)
Partner: "How did we handle auth errors in React Router?"
You: Searching...
[Dispatch subagent → synthesis]
```

Verification:
```bash
wc -w skills/path/SKILL.md
```
## Cross-Referencing Other Skills

Use the skill name only, with explicit requirement markers:
- ✅ Good: `**REQUIRED:** Use skill-name-here`
- ✅ Good: `**REQUIRED BACKGROUND:** You MUST understand skill-name-here`
- ❌ Bad: `See skills/testing/test-driven-development` (unclear if required)

Why no @ links: an @ link force-loads the file immediately, consuming context before it's needed.
## Discovery Workflow

How AI finds and uses your skill:
1. Encounters problem ("tests are flaky")
2. Searches descriptions (keyword matching)
3. Finds SKILL (description matches symptoms)
4. Scans overview (is this relevant?)
5. Reads patterns (quick reference table)
6. Loads example (only when implementing)

Optimize for this flow - put searchable terms early and often.
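
As a toy illustration of why those searchable terms matter (the host tool's real matching logic is not specified here), a description is only discoverable if it shares words with the problem statement:

```python
# Toy keyword-overlap check - not the real discovery mechanism, just an
# illustration of why symptom words belong in the description.
def shares_keywords(description: str, problem: str) -> bool:
    desc_words = set(description.lower().replace(",", " ").split())
    return bool(desc_words & set(problem.lower().split()))

good = "Use when tests have race conditions, timing dependencies, or pass/fail inconsistently"
print(shares_keywords(good, "our tests fail inconsistently"))                 # True
print(shares_keywords("For async testing", "our tests fail inconsistently"))  # False
```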
## Workflows and Feedback Loops

### Use Workflows for Complex Tasks
Break complex operations into clear, sequential steps. Provide checklists AI can copy and check off.
Example 1: Research synthesis workflow (no code):
## Research synthesis workflow
Copy this checklist and track your progress:
```
Research Progress:
- [ ] Step 1: Read all source documents
- [ ] Step 2: Identify key themes
- [ ] Step 3: Cross-reference claims
- [ ] Step 4: Create structured summary
- [ ] Step 5: Verify citations
```
**Step 1: Read all source documents**
Review each document in the `sources/` directory. Note the main arguments and supporting evidence.
**Step 2: Identify key themes**
Look for patterns across sources. What themes appear repeatedly? Where do sources agree or disagree?
[Continue with detailed steps...]
Example 2: PDF form filling workflow (with code):
## PDF form filling workflow
Copy this checklist and check off items as you complete them:
```
Task Progress:
- [ ] Step 1: Analyze the form (run analyze_form.py)
- [ ] Step 2: Create field mapping (edit fields.json)
- [ ] Step 3: Validate mapping (run validate_fields.py)
- [ ] Step 4: Fill the form (run fill_form.py)
- [ ] Step 5: Verify output (run verify_output.py)
```
**Step 1: Analyze the form**
Run: `python scripts/analyze_form.py input.pdf`
[Continue with detailed steps...]
### Implement Feedback Loops
Common pattern: Run validator → fix errors → repeat
Example: Document editing process
## Document editing process
1. Make your edits to `word/document.xml`
2. **Validate immediately**: `python ooxml/scripts/validate.py unpacked_dir/`
3. If validation fails:
   - Review the error message carefully
   - Fix the issues in the XML
   - Run validation again
4. **Only proceed when validation passes**
5. Rebuild: `python ooxml/scripts/pack.py unpacked_dir/ output.docx`
6. Test the output document
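
Below is a minimal sketch of that loop in script form, assuming `ooxml/scripts/validate.py` exits non-zero on failure (an assumption; the scripts referenced above define the real behavior):

```python
# Sketch of the validate -> fix -> repeat loop from the example above.
# Assumes validate.py signals failure via a non-zero exit code.
import subprocess
import sys

def validation_passes(unpacked_dir: str) -> bool:
    result = subprocess.run(
        [sys.executable, "ooxml/scripts/validate.py", unpacked_dir],
        capture_output=True, text=True,
    )
    if result.returncode != 0:
        # Surface the validator's message so the next editing pass can fix it.
        print(result.stdout, result.stderr, sep="\n")
    return result.returncode == 0

if validation_passes("unpacked_dir/"):
    # Only rebuild once validation passes (step 4 above).
    subprocess.run(
        [sys.executable, "ooxml/scripts/pack.py", "unpacked_dir/", "output.docx"],
        check=True,
    )
```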
## Code Examples

One excellent example beats many mediocre ones.

Choose the most relevant language:
- Testing techniques → TypeScript/JavaScript
- System debugging → Shell/Python
- Data processing → Python

Good example characteristics:
- Complete and runnable
- Well-commented, explaining WHY
- From a real scenario
- Shows the pattern clearly
- Ready to adapt (not a generic template)

Don't:
- Implement in 5+ languages
- Create fill-in-the-blank templates
- Write contrived examples
## Common Patterns

### Template Pattern
Provide templates for output format. Match strictness to needs.
For strict requirements:
## Report structure
ALWAYS use this exact template structure:
```markdown
# [Analysis Title]
## Executive summary
[One-paragraph overview of key findings]
## Key findings
- Finding 1 with supporting data
- Finding 2 with supporting data
## Recommendations
1. Specific actionable recommendation
2. Specific actionable recommendation
```
For flexible guidance:
## Report structure
Here is a sensible default format, but use your best judgment:
```markdown
# [Analysis Title]
## Executive summary
[Overview]
## Key findings
[Adapt sections based on what you discover]
```
Adjust sections as needed for the specific analysis type.
### Examples Pattern
For skills where output quality depends on seeing examples:
## Commit message format
Generate commit messages following these examples:
**Example 1:**
Input: Added user authentication with JWT tokens
Output:
```
feat(auth): implement JWT-based authentication
Add login endpoint and token validation middleware
```
**Example 2:**
Input: Fixed bug where dates displayed incorrectly in reports
Output:
```
fix(reports): correct date formatting in timezone conversion
Use UTC timestamps consistently across report generation
```
Follow this style: type(scope): brief description, then detailed explanation.
### Conditional Workflow Pattern
Guide through decision points:
## Document modification workflow
1. Determine the modification type:
   **Creating new content?** → Follow "Creation workflow" below
   **Editing existing content?** → Follow "Editing workflow" below
2. Creation workflow:
   - Use docx-js library
   - Build document from scratch
   - Export to .docx format
3. Editing workflow:
   - Unpack existing document
   - Modify XML directly
   - Validate after each change
   - Repack when complete
### Flowchart Usage

Use flowcharts ONLY for:
- Non-obvious decision points
- Process loops where you might stop too early
- "When to use A vs B" decisions

Never use flowcharts for:
- Reference material → use tables, lists
- Code examples → use markdown blocks
- Linear instructions → use numbered lists

See `references/graphviz-conventions.dot` for graphviz style rules.
## Content Guidelines

### Avoid Time-Sensitive Information
Don't include information that will become outdated.
Bad (time-sensitive):
If you're doing this before August 2025, use the old API.
After August 2025, use the new API.
Good (old patterns section):
## Current method
Use the v2 API endpoint: `api.example.com/v2/messages`
## Old patterns
<details>
<summary>Legacy v1 API (deprecated 2025-08)</summary>
The v1 API used: `api.example.com/v1/messages`
This endpoint is no longer supported.
</details>
### Use Consistent Terminology

Choose one term and use it throughout:

Good - Consistent:
- Always "API endpoint"
- Always "field"
- Always "extract"

Bad - Inconsistent:
- Mix "API endpoint", "URL", "API route", "path"
- Mix "field", "box", "element", "control"
## File Organization

### Self-Contained Skill
```
defense-in-depth/
  SKILL.md    # Everything inline
```
When: All content fits, no heavy reference needed

### Skill with Reusable Tool
```
condition-based-waiting/
  SKILL.md    # Overview + patterns
  example.ts  # Working helpers to adapt
```
When: Tool is reusable code, not just narrative

### Skill with Heavy Reference
```
pptx/
  SKILL.md       # Overview + workflows
  references/
    pptxgenjs.md   # 600 lines API reference
    ooxml.md       # 500 lines XML structure
  scripts/       # Executable tools
```
When: Reference material too large for inline
## Anti-Patterns

### ❌ Narrative Example
"In session 2025-10-03, we found empty projectDir caused..."
Why bad: Too specific, not reusable

### ❌ Multi-Language Dilution
`example-js.js`, `example-py.py`, `example-go.go`
Why bad: Mediocre quality, maintenance burden

### ❌ Code in Flowcharts
```
step1 [label="import fs"];
step2 [label="read file"];
```
Why bad: Can't copy-paste, hard to read

### ❌ Generic Labels
`helper1`, `helper2`, `step3`, `pattern4`
Why bad: Labels should have semantic meaning
## Evaluation and Iteration

### Build Evaluations First

Create evaluations BEFORE writing extensive documentation.

Evaluation-driven development:
1. Identify gaps: Run tasks without the skill, document failures
2. Create evaluations: Build 3+ scenarios testing these gaps (see the sketch below)
3. Establish baseline: Measure performance without the skill
4. Write minimal instructions: Create just enough to pass the evaluations
5. Iterate: Execute evaluations, compare against baseline, refine
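
One lightweight way to record such scenarios is sketched below; the structure and field names are illustrative assumptions, not a required format:

```python
# Illustrative scenario records for evaluation-driven skill development.
# Field names are assumptions; use whatever structure fits your workflow.
scenarios = [
    {
        "name": "flaky-async-test",
        "prompt": "Fix this test that intermittently fails on CI.",
        "expected": "Replaces arbitrary sleeps with condition polling.",
        "baseline": None,    # observed behavior without the skill
        "with_skill": None,  # observed behavior with the skill loaded
    },
    # Build at least three scenarios that target the gaps you identified.
]
```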
 
### Develop Skills Iteratively

Creating a new skill:
1. Complete task without skill: Work through the problem, notice what context you repeatedly provide
2. Identify reusable pattern: What context would be useful for similar tasks?
3. Ask AI to create skill: "Create a skill that captures this pattern we just used"
4. Review for conciseness: Challenge unnecessary explanations
5. Improve information architecture: Organize content effectively
6. Test on similar tasks: Use skill with a fresh AI instance
7. Iterate based on observation: Refine based on what worked/didn't

Iterating on an existing skill:
1. Use skill in real workflows: Give AI actual tasks
2. Observe behavior: Note struggles, successes, unexpected choices
3. Request improvements: Share observations with AI helper
4. Review suggestions: Consider reorganization, stronger language, restructuring
5. Apply and test: Update skill, test again
6. Repeat based on usage: Continue observe → refine cycle
## Creating a New Skill

### Step 1: Choose Location
- Project-specific: `.opencode/skills/skill-name/`
- Global: `~/.opencode/skills/skill-name/`

### Step 2: Create Directory Structure
```bash
mkdir -p .opencode/skills/skill-name
mkdir -p .opencode/skills/skill-name/references  # if needed
mkdir -p .opencode/skills/skill-name/scripts     # if needed
```
### Step 3: Create SKILL.md with Frontmatter
Follow the requirements in the Frontmatter Requirements section above.

### Step 4: Write Skill Content
Structure the content following the Content Structure section above.

### Step 5: Add Supporting Files
Organize by type:
- `scripts/`: Executable code the skill might run
- `references/`: Documentation to reference
- `assets/`: Templates, configs, or output files
### Step 6: Validate
Check that:
- Directory name matches the frontmatter `name` field
- Description is at least 20 characters
- Name uses only lowercase letters, numbers, and hyphens
- YAML frontmatter is valid
- Supporting file paths are relative, not absolute
- Word count appropriate for skill type
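
A minimal, hypothetical checker for a few of these items (not part of OpenCode's tooling; the limits and naming rules are taken from the sections above):

```python
# Hypothetical validation sketch for Step 6 - adapt paths and rules as needed.
import re
import sys
from pathlib import Path

def validate_skill(skill_dir: str) -> list[str]:
    problems = []
    skill_path = Path(skill_dir)
    text = (skill_path / "SKILL.md").read_text(encoding="utf-8")
    match = re.match(r"^---\n(.*?)\n---\n", text, re.DOTALL)
    if not match:
        return ["missing or unparseable YAML frontmatter"]
    # Naive single-line field parsing; multi-line YAML values need a real parser.
    fields = dict(
        line.split(":", 1) for line in match.group(1).splitlines() if ":" in line
    )
    name = fields.get("name", "").strip()
    description = fields.get("description", "").strip()
    if name != skill_path.name:
        problems.append("frontmatter name does not match directory name")
    if not re.fullmatch(r"[a-z0-9-]+", name):
        problems.append("name must use only lowercase letters, numbers, and hyphens")
    if len(description) < 20:
        problems.append("description is shorter than 20 characters")
    if len(match.group(0)) > 1024:
        problems.append("frontmatter exceeds 1024 characters")
    return problems

if __name__ == "__main__":
    for problem in validate_skill(sys.argv[1]):
        print(f"WARN: {problem}")
```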
 
### Step 7: Restart OpenCode
Skills are loaded at startup. Restart OpenCode to register your new skill.

## Path Resolution
When referencing files in SKILL.md, use relative paths:
Read the API docs in `references/api.md`
Run `scripts/deploy.sh` for deployment
The Agent will resolve these relative to the skill directory automatically.
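
For illustration only (this is not the agent's actual implementation), a relative reference behaves as if joined to the skill's own directory:

```python
# Illustration of the relative-path rule, not the agent's implementation.
from pathlib import Path

def resolve_reference(skill_dir: str, relative_ref: str) -> Path:
    return Path(skill_dir).expanduser() / relative_ref

print(resolve_reference("~/.opencode/skills/pdf-processing", "references/api.md"))
# e.g. /home/you/.opencode/skills/pdf-processing/references/api.md
```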
## Skill Creation Checklist

Planning:
- [ ] Identified gaps or patterns worth capturing
- [ ] Determined skill type (Technique, Pattern, or Reference)
- [ ] Created evaluation scenarios
- [ ] Established baseline without skill

Structure:
- [ ] Directory created in correct location
- [ ] Directory name is lowercase with hyphens
- [ ] Name uses gerund form (verb + -ing) if applicable
- [ ] SKILL.md file created
- [ ] Frontmatter includes required fields (name, description)
- [ ] Name in frontmatter matches directory name exactly
- [ ] Description starts with "Use when..." and includes triggers
- [ ] Description written in third person
- [ ] Description under 500 characters

Content:
- [ ] Overview with core principle (1-2 sentences)
- [ ] "When to Use" section with symptoms and counter-examples
- [ ] Quick reference table or bullets
- [ ] Clear, actionable steps
- [ ] Common mistakes section
- [ ] One excellent code example (not multi-language)
- [ ] Keywords throughout for search
- [ ] Consistent terminology
- [ ] No time-sensitive information
- [ ] Appropriate degree of freedom

Progressive Disclosure:
- [ ] SKILL.md under 500 lines
- [ ] Supporting files in subdirectories if needed
- [ ] References one level deep (not nested)
- [ ] Table of contents for files >100 lines
- [ ] File references use relative paths

Token Efficiency:
- [ ] Challenged every paragraph for necessity
- [ ] Word count appropriate for skill type
- [ ] Compressed examples where possible
- [ ] Cross-references instead of repetition
- [ ] No generic or obvious explanations

Testing:
- [ ] Evaluations pass with skill present
- [ ] Tested on similar tasks with fresh AI instance
- [ ] Observed and refined based on usage
- [ ] Skill appears in tool list as `skills_{name}`

Deployment:
- [ ] OpenCode restarted to load new skill
- [ ] Verified skill is discoverable
- [ ] Documented in project if applicable
 
## Reference Files

- `references/graphviz-conventions.dot`: Flowchart style guide and conventions
- `references/persuasion-principles.md`: Psychology for effective skill design