Imagine typing one short command into your terminal and watching Claude Code automatically run your test suite, build your application, deploy it to staging, verify the health checks, and report back with a summary — all without you lifting another finger. No copy-pasting scripts. No remembering flags. No context-switching between documentation tabs. Just /deploy staging and you are done.
That is exactly what Skills in Claude Code make possible. If you have been using Claude Code for a while, you have probably noticed slash commands like /commit and /review-pr that seem almost magical in how much they accomplish with a single invocation. Those are Skills — and they represent one of the most powerful, yet least understood, extension points in the entire Claude Code ecosystem.
Here is the thing most developers miss: Skills are not just fancy shortcuts. They are markdown-based instruction sets that fundamentally change how Claude Code behaves when invoked. They inject specialized context, define structured workflows, and can accept arguments — turning Claude Code from a general-purpose AI assistant into a purpose-built tool for your exact workflow. And the best part? You can build your own in about five minutes.
In this guide, we are going to take Skills apart piece by piece. You will learn what they are conceptually, how they work under the hood, what built-in Skills ship with Claude Code, and — most importantly — how to build your own. We will walk through six complete, practical skill examples that you can copy-paste into your project today. By the end, you will have the knowledge to create a library of custom Skills that makes your team dramatically more productive.
What Are Skills in Claude Code?
At their core, Skills are specialized capabilities that extend Claude Code’s functionality through markdown-based instruction sets. When you invoke a Skill via a slash command — say, /commit — Claude Code loads the corresponding markdown file into its context window. That markdown file contains detailed instructions that Claude follows to complete the task. Think of Skills as expert playbooks: each one teaches Claude Code how to be a specialist at a particular job.
This is fundamentally different from just asking Claude Code to “make a commit.” When you type a freeform request, Claude Code uses its general knowledge to figure out what to do. When you invoke a Skill, Claude Code receives a carefully crafted set of instructions — written by someone who has thought deeply about the best way to accomplish that specific task. The Skill might specify which git commands to run, how to format the commit message, what checks to perform before committing, and how to handle edge cases.
Skills vs Custom Commands — What Is the Difference?
If you are already familiar with Claude Code’s custom commands (the markdown files in .claude/commands/), you might be wondering: how are Skills different? The distinction matters, and understanding it will help you decide which mechanism to use for what purpose.
Custom commands are project-specific markdown files that live in your repository’s .claude/commands/ directory. They are straightforward: you write a markdown file, and when someone types the corresponding slash command, Claude Code loads those instructions. They are great for project-specific workflows.
Skills are a more structured, powerful system. They have frontmatter metadata (name, description, argument schemas), support typed arguments, can be composed with other Skills, and exist at multiple levels — built-in, user-level, and project-level. Skills are invoked internally through the Skill tool, which provides a standardized interface for loading and executing them.
| Feature | Skills | Custom Commands | CLAUDE.md Instructions |
|---|---|---|---|
| Location | ~/.claude/skills/ or .claude/skills/ | .claude/commands/ | CLAUDE.md in project root |
| Invocation | Slash command (/skill-name) | Slash command (/command-name) | Always loaded automatically |
| Arguments | Typed arguments with schema | Free-text $ARGUMENTS | Not applicable |
| Metadata | Frontmatter (name, description, args) | Filename only | None |
| Composability | Can call other Skills | Limited | Not applicable |
| Scope | Built-in, user, or project | Project only | Project only |
| Best For | Reusable, structured workflows | Simple project-specific tasks | Persistent context and rules |
How Skills Work Internally
Understanding the internals is not just academic — it helps you write better Skills. So let us trace exactly what happens from the moment you type a slash command to the moment Claude Code starts executing instructions.
The Invocation Flow
When you type /deploy staging in Claude Code, here is the sequence of events:
Step 1: Command Parsing. Claude Code recognizes the slash prefix and parses the input into a skill name (deploy) and arguments (staging). It searches for a matching skill across all registered locations — built-in skills first, then user skills in ~/.claude/skills/, then project skills in .claude/skills/.
Step 2: Skill Loading. The matching markdown file is read from disk. The frontmatter is parsed to extract metadata — the skill’s name, description, and argument schema. The body of the markdown file contains the actual instructions.
Step 3: Argument Injection. If the skill defines arguments, the user’s input is matched against the schema. The $ARGUMENTS placeholder in the skill body is replaced with the actual argument value (in this case, staging).
Step 4: Context Injection. The processed markdown content is injected into Claude’s context as instructions. This is the critical step — Claude Code now has a detailed playbook for what to do next. The Skill tool handles this injection internally.
Step 5: Execution. Claude Code follows the injected instructions, using its available tools (Bash, Read, Write, Edit, Grep, etc.) to carry out each step. The instructions might tell it to read files, run commands, make edits, or even invoke other Skills.
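Conceptually, Steps 2 through 4 amount to parsing the frontmatter and substituting the placeholder. Here is a minimal Python sketch of that pipeline — an illustrative model, not Claude Code's actual implementation:

```python
import re

def load_skill(markdown_text: str, user_args: str) -> dict:
    """Illustrative model: parse a skill file and inject arguments."""
    # Split the YAML frontmatter (between the two '---' markers) from the body
    match = re.match(r"^---\n(.*?)\n---\n(.*)$", markdown_text, re.DOTALL)
    frontmatter, body = match.group(1), match.group(2)

    # Minimal metadata extraction: name and description lines only
    meta = dict(
        re.findall(r"^(name|description):\s*(.+)$", frontmatter, re.MULTILINE)
    )

    # Argument injection: every $ARGUMENTS placeholder becomes the user input
    instructions = body.replace("$ARGUMENTS", user_args)
    return {"name": meta.get("name"), "instructions": instructions}

skill_file = """---
name: deploy
description: Deploy application
---
Deploy to the $ARGUMENTS environment."""

loaded = load_skill(skill_file, "staging")
print(loaded["instructions"])  # Deploy to the staging environment.
```

The real system does more (schema validation, tool registration), but the core transformation — frontmatter out, arguments in, instructions into context — is this simple.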
Skill Resolution Order
When multiple skills share the same name, Claude Code uses a priority order to decide which one to load:
- Built-in skills — shipped with Claude Code itself. These take highest priority.
- User skills — located in ~/.claude/skills/. These are personal to the user and apply across all projects.
- Project skills — located in .claude/skills/ within the repository. These are specific to the project and shared with all team members who clone the repo.

If you create a custom skill with the same name as a built-in skill (for example, commit), the built-in version will take precedence. Choose unique names for your custom skills to avoid conflicts.
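The priority order can be modeled as a first-match lookup across the three locations. A Python sketch for illustration — the built-in directory path here is purely hypothetical, since built-ins actually ship inside Claude Code itself:

```python
from pathlib import Path

# Search locations in priority order: built-in, then user, then project
SEARCH_PATHS = [
    Path("/opt/claude-code/skills"),     # hypothetical built-in location
    Path.home() / ".claude" / "skills",  # user skills (all projects)
    Path(".claude") / "skills",          # project skills (this repo only)
]

def resolve_skill(name: str):
    """Return the first matching skill file, honoring priority order."""
    for directory in SEARCH_PATHS:
        candidate = directory / f"{name}.md"
        if candidate.exists():
            return candidate
    return None
```

First match wins, which is exactly why a project skill named commit can never shadow the built-in /commit.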
The Skill Tool
Under the hood, Skills are invoked through a dedicated Skill tool. This is part of Claude Code’s internal tool system — the same system that includes the Bash tool, Read tool, Edit tool, and others. When the system detects a slash command that matches a skill, it invokes the Skill tool with the skill name and any arguments. The Skill tool then handles loading, parsing, and context injection.
This architecture matters because it means Skills are first-class citizens in Claude Code’s tool ecosystem. They are not a hack or a workaround — they are a core extension mechanism designed to be reliable, composable, and consistent.
Built-in Skills You Can Use Right Now
Claude Code ships with several built-in Skills that handle common development workflows. You have probably already used some of them without even realizing they were Skills. Let us look at the most important ones.
The /commit Skill
This is arguably the most-used built-in skill. When you type /commit, Claude Code does not just run git commit. It follows a detailed workflow:
- Runs git status to see what has changed
- Runs git diff to understand the actual changes
- Reads recent commit messages to match the repository's style
- Analyzes the changes and drafts a meaningful commit message
- Stages relevant files (avoiding sensitive files like .env)
- Creates the commit with a properly formatted message
- Verifies success with a final git status
The skill even handles pre-commit hook failures gracefully — if a hook fails, it fixes the issue and creates a new commit rather than amending the previous one (which could destroy work).
The /review-pr Skill
Type /review-pr 123 and Claude Code will pull up the pull request, read through every changed file, analyze the code quality, check for bugs and security issues, and provide a detailed review. It uses the gh CLI to interact with GitHub, reading diffs, comments, and PR metadata to give you a comprehensive review.
The /pr Skill
The /pr skill automates pull request creation. It examines all commits on your branch since it diverged from the base branch, analyzes the full set of changes (not just the latest commit), drafts a PR title and description, pushes to the remote if needed, and creates the PR using gh pr create. The resulting PR description includes a summary, test plan, and proper formatting.
Discovering Available Skills
Want to see every skill available to you? Simply type / in Claude Code and pause. The autocomplete will show you all registered skills — built-in, user-level, and project-level. This is the fastest way to discover what is available in your current context.
Typing / followed by a partial name filters the list. For example, /re would show skills starting with “re” — like /review-pr, /refactor, or any custom skills you have created with that prefix.
Anatomy of a Skill File
Before we start building custom Skills, you need to understand the structure of a skill file. Every skill is a markdown file with two parts: frontmatter (metadata) and body (instructions).
The Frontmatter
The frontmatter is a YAML block at the top of the file, enclosed in triple dashes. It tells Claude Code what the skill is called, what it does, and what arguments it accepts.
---
name: deploy
description: Deploy application to staging or production environment
arguments:
- name: environment
description: Target environment (staging or production)
required: true
---
The frontmatter fields are:
- name — The skill's identifier, used for the slash command. A skill named deploy is invoked with /deploy.
- description — A human-readable description shown in the skill listing and autocomplete.
- arguments — An array of argument definitions, each with a name, description, and required flag.
The Body
Below the frontmatter is the markdown body — the actual instructions that Claude Code will follow. This is where you define the workflow, specify commands to run, set expectations for output, and handle edge cases.
The body can use the $ARGUMENTS placeholder, which gets replaced with whatever the user types after the slash command. For a skill invoked as /deploy staging, every instance of $ARGUMENTS in the body becomes staging.
A Complete Skill File
Here is a minimal but complete skill file to illustrate the structure:
---
name: greet
description: Generate a greeting message for a team member
arguments:
- name: person
description: Name of the person to greet
required: true
---
Generate a warm, professional greeting message for $ARGUMENTS.
## Instructions
1. Use the person's name in the greeting
2. Reference the current project if possible
3. Keep it under 3 sentences
4. Output the greeting directly — do not save to a file
File Naming and Directory Structure
Skill files follow a simple naming convention: the filename (without extension) becomes the command name. A file named deploy.md creates the /deploy command.
# Project skills (shared with team via git)
.claude/
skills/
deploy.md # /deploy
write-tests.md # /write-tests
db-migrate.md # /db-migrate
# User skills (personal, not shared)
~/.claude/
skills/
my-snippet.md # /my-snippet
quick-review.md # /quick-review
For example, write-tests.md becomes the command /write-tests. Avoid underscores and spaces — hyphens are the convention.
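Because the mapping from filename to command is purely mechanical, it is easy to express in code. A small Python sketch — the error raised on underscores and spaces reflects the naming convention described here, not an enforced rule:

```python
import re

def command_for(filename: str) -> str:
    """Derive the slash command from a skill filename."""
    stem = filename.removesuffix(".md")
    if re.search(r"[ _]", stem):
        # Convention check (assumption): prefer hyphens over spaces/underscores
        raise ValueError(f"use hyphens instead of spaces/underscores: {stem!r}")
    return f"/{stem}"

print(command_for("write-tests.md"))  # /write-tests
print(command_for("deploy.md"))       # /deploy
```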
Building Custom Skills — Step by Step
Now for the good part. Let us build six practical, production-ready skills that you can drop into your project today. Each one solves a real problem that developers face daily, and each one demonstrates different skill-building techniques.
Skill 1: /deploy — Deploy to Staging or Production
This skill automates the full deployment pipeline. It accepts an environment argument, runs pre-deployment checks, executes the deployment, and verifies that everything is healthy afterward.
---
name: deploy
description: Deploy application to staging or production with safety checks
arguments:
- name: environment
description: Target environment — staging or production
required: true
---
You are deploying the application to the **$ARGUMENTS** environment.
Follow every step carefully. Do NOT skip safety checks.
## Step 1: Validate Environment
Confirm that "$ARGUMENTS" is either "staging" or "production".
If it is neither, stop immediately and tell the user:
"Invalid environment. Use: /deploy staging or /deploy production"
## Step 2: Pre-Deployment Checks
Run the following checks in parallel where possible:
1. **Git status check**: Run `git status` to ensure the working
directory is clean. If there are uncommitted changes, warn the
user and ask if they want to continue.
2. **Branch check**: Run `git branch --show-current`. If deploying
to production, verify we are on the `main` branch. If not, warn
the user.
3. **Test suite**: Run `npm test` (or the project's test command).
If any tests fail, STOP and report the failures. Do NOT deploy
with failing tests.
4. **Build check**: Run `npm run build` (or the project's build
command). If the build fails, STOP and report the error.
## Step 3: Deploy
For **staging**:
```bash
git push origin HEAD:staging
# or: npm run deploy:staging
# or: kubectl apply -f k8s/staging/
```
For **production**:
```bash
git push origin main:production
# or: npm run deploy:production
# or: kubectl apply -f k8s/production/
```
Adapt the deploy command to whatever deployment mechanism the
project uses. Check for deploy scripts in package.json, Makefile,
or deploy/ directory.
## Step 4: Post-Deployment Verification
1. Wait 30 seconds for the deployment to propagate
2. Run a health check against the deployed environment:
   - Staging: `curl -s -o /dev/null -w "%{http_code}" https://staging.example.com/health`
   - Production: `curl -s -o /dev/null -w "%{http_code}" https://example.com/health`
3. Check that the command prints a 200 status code
## Step 5: Report
Provide a summary:
- Environment deployed to
- Git commit SHA that was deployed
- Test results (pass/fail counts)
- Health check status
- Timestamp of deployment
How to use it:
/deploy staging
/deploy production
Notice how the skill validates the argument, runs safety checks before deploying, and verifies health after deploying. This is significantly more robust than a bare git push — and it is the same workflow every time, whether you run it or your teammate does.
Skill 2: /write-tests — Generate Comprehensive Tests
This skill analyzes a source file and generates a complete test suite for it. It automatically detects the project’s testing framework and follows existing test patterns.
---
name: write-tests
description: Generate comprehensive tests for a given source file
arguments:
- name: file_path
description: Path to the source file to test
required: true
---
Generate a comprehensive test suite for the file at: $ARGUMENTS
## Step 1: Analyze the Source File
Read the file at `$ARGUMENTS` completely. Identify:
- All exported functions, classes, and methods
- Input parameters and their types
- Return values and their types
- Side effects (API calls, file I/O, database queries)
- Edge cases (null inputs, empty arrays, boundary values)
- Error conditions and exception handling
## Step 2: Detect Testing Framework
Check the project for testing configuration:
- Look at `package.json` for jest, vitest, mocha
- Look at `pyproject.toml` or `setup.cfg` for pytest
- Look at `go.mod` for Go testing
- Look at existing test files to match patterns and conventions
Use whatever framework the project already uses. If none is
configured, recommend and use the most common one for the language.
## Step 3: Study Existing Test Patterns
Find existing test files in the project:
- Search for files matching `*.test.*`, `*.spec.*`, `test_*.*`
- Read 2-3 existing test files to understand:
- Import patterns
- Describe/it block structure
- Mocking patterns
- Assertion style
- Setup/teardown patterns
Match the existing style exactly.
## Step 4: Write the Tests
Create a test file following the project's naming convention
(e.g., `foo.test.ts` for `foo.ts`, `test_foo.py` for `foo.py`).
Include tests for:
- **Happy path**: Normal inputs producing expected outputs
- **Edge cases**: Empty inputs, null/undefined, boundary values
- **Error cases**: Invalid inputs, missing required parameters
- **Integration points**: Mock external dependencies
- **Regression targets**: Any complex logic that could break
Each test should:
- Have a clear, descriptive name
- Test exactly one behavior
- Follow the Arrange-Act-Assert pattern
- Include inline comments explaining WHY the test exists
## Step 5: Verify
Run the test suite to ensure all tests pass:
```bash
npm test -- --testPathPattern="<file>"  # JS/TS
pytest -v                               # Python
go test -v ./...                        # Go
```
If any test fails, fix it. All tests MUST pass before finishing.
## Step 6: Report
Tell the user:
- How many tests were written
- What categories they cover (happy path, edge cases, etc.)
- Any areas that could use additional testing
- The command to run just these tests
How to use it:
/write-tests src/utils/parser.ts
/write-tests lib/models/user.py
The beauty of this skill is that it adapts to whatever project it is in. It detects the testing framework, matches existing patterns, and produces tests that feel like they were written by a team member — because the instructions explicitly tell Claude Code to study and mirror the project’s conventions.
Skill 3: /refactor — Guided Code Refactoring
Refactoring is risky. This skill adds safety rails by requiring tests to pass before and after changes, producing a detailed plan before touching any code, and making changes incrementally.
---
name: refactor
description: Guided code refactoring with safety checks
arguments:
- name: description
description: What to refactor and why
required: true
---
You are performing a guided code refactoring based on this request:
"$ARGUMENTS"
Follow this process carefully to ensure the refactoring is safe.
## Step 1: Understand the Request
Parse the user's refactoring request. Identify:
- Which files or modules are involved
- What the current code does
- What the desired outcome is
- Why the refactoring is needed
Read all relevant source files completely before proceeding.
## Step 2: Run Existing Tests
Run the project's full test suite BEFORE making any changes.
Record the results. If tests are already failing, note which
ones and tell the user — those failures are pre-existing.
```bash
npm test 2>&1 | tail -20 # JS/TS
pytest -v 2>&1 | tail -20 # Python
go test ./... 2>&1 | tail -20 # Go
```
## Step 3: Create a Refactoring Plan
BEFORE making any code changes, present a detailed plan:
- List every file that will be modified
- For each file, describe what will change and why
- Identify potential risks (breaking changes, API changes)
- Note any files that import/depend on modified code
- Estimate the scope: small (1-2 files), medium (3-5), large (6+)
Present the plan to the user first, then proceed unless they object.
## Step 4: Implement Changes
Make changes incrementally:
1. Modify one logical unit at a time
2. After each modification, check that the file is syntactically
valid (no broken imports, no undefined references)
3. Keep a mental changelog of every change made
Important rules:
- Do NOT change public API signatures without updating all callers
- Do NOT delete code that might be used elsewhere — search first
- Preserve all existing comments unless they are now incorrect
- Update comments and docstrings that reference changed behavior
## Step 5: Run Tests Again
Run the full test suite after all changes:
```bash
npm test
pytest -v
go test ./...
```
If any test that was previously passing now fails:
1. Analyze the failure
2. Fix the issue (either in the refactored code or the test)
3. Run tests again until all previously-passing tests still pass
## Step 6: Summary Report
Provide:
- List of all files modified with a one-line description of each
- Before/after comparison for key changes
- Test results: all passing, or note any changes
- Any follow-up refactoring that would be beneficial
How to use it:
/refactor Extract the validation logic from UserController into a separate ValidationService class
/refactor Convert all callback-based functions in src/api/ to async/await
Skill 4: /db-migrate — Create Database Migrations
Database migrations are one of those tasks where getting the details wrong can be catastrophic. This skill generates migration files that match your project’s ORM and conventions.
---
name: db-migrate
description: Create a database migration for a schema change
arguments:
- name: description
description: Description of the schema change needed
required: true
---
Create a database migration for the following schema change:
"$ARGUMENTS"
## Step 1: Detect ORM and Migration Framework
Search the project for:
- `prisma/schema.prisma` → Prisma
- `alembic/` or `alembic.ini` → SQLAlchemy + Alembic
- `migrations/` + Django patterns → Django ORM
- `db/migrate/` → Rails ActiveRecord
- `drizzle.config.*` → Drizzle ORM
- `knexfile.*` → Knex.js
- `sequelize` in package.json → Sequelize
- `typeorm` in package.json → TypeORM
Read the existing migration files to understand patterns and
naming conventions.
## Step 2: Analyze Existing Schema
Read the current schema definition:
- Prisma: Read `prisma/schema.prisma`
- Alembic: Read the latest migration and models
- Django: Read `models.py` files
- TypeORM: Read entity files
Identify what tables, columns, and relationships already exist
that are relevant to the requested change.
## Step 3: Generate the Migration
Create the migration file using the framework's conventions:
**For Prisma:**
1. Update `prisma/schema.prisma` with the schema changes
2. Run `npx prisma migrate dev --name <migration-name>`
**For Alembic:**
1. Generate: `alembic revision --autogenerate -m "$ARGUMENTS"`
2. Review and edit the generated migration file
3. Ensure both upgrade() and downgrade() are correct
**For Django:**
1. Update the model in `models.py`
2. Run `python manage.py makemigrations`
3. Review the generated migration
**For Knex/TypeORM/Drizzle:**
Generate the appropriate migration file with both up and down
methods.
## Step 4: Safety Checks
Every migration MUST have:
- A **rollback/down migration** — never create an irreversible
migration without explicit user approval
- **Null safety** — new NOT NULL columns need defaults or a
data migration step
- **Index considerations** — add indexes for new foreign keys
and frequently-queried columns
- **No data loss** — column renames and type changes should
preserve existing data
## Step 5: Verify
Run the migration against the development database:
```bash
npx prisma migrate dev # Prisma
alembic upgrade head # Alembic
python manage.py migrate # Django
npx knex migrate:latest # Knex
```
Then verify by checking the schema matches expectations.
## Step 6: Report
Provide:
- Migration file path and name
- Summary of schema changes
- Whether a rollback migration exists
- Any manual steps needed (data backfill, etc.)
- The command to apply the migration
How to use it:
/db-migrate Add a "last_login_at" timestamp column to the users table
/db-migrate Create a many-to-many relationship between posts and tags
Skill 5: /api-doc — Generate API Documentation
Keeping API documentation in sync with code is a perennial struggle. This skill scans your codebase for route definitions and generates comprehensive, OpenAPI-compatible documentation.
---
name: api-doc
description: Generate API documentation by scanning route definitions
arguments:
- name: scope
description: Optional — specific file or directory to document (defaults to all routes)
required: false
---
Generate comprehensive API documentation for this project.
Scope: $ARGUMENTS (if empty, document all routes).
## Step 1: Discover Route Definitions
Search the codebase for route/endpoint definitions:
- **Express.js**: `app.get(`, `app.post(`, `router.get(`, etc.
- **FastAPI**: `@app.get(`, `@app.post(`, `@router.get(`
- **Django**: `urlpatterns`, `path(`, `@api_view`
- **Flask**: `@app.route(`, `@blueprint.route(`
- **Rails**: `routes.rb`, `resources :`, `get '/'`
- **Go**: `http.HandleFunc(`, `r.GET(`, `e.GET(`
- **Spring**: `@GetMapping`, `@PostMapping`, `@RequestMapping`
List all discovered endpoints.
## Step 2: Analyze Each Endpoint
For every endpoint, determine:
- HTTP method (GET, POST, PUT, DELETE, PATCH)
- URL path and path parameters
- Query parameters
- Request body schema (read the handler to see what fields
it expects)
- Response schema (read the handler to see what it returns)
- Authentication requirements (middleware, decorators)
- Error responses (what status codes and error formats)
## Step 3: Generate Documentation
Create a markdown file at `docs/api-reference.md` with the
following structure:
````markdown
# API Reference
## Authentication
[Describe auth mechanism]
## Endpoints
### [Resource Name]
#### GET /api/resource
Description of what this endpoint does.
**Parameters:**
| Name | In | Type | Required | Description |
|------|-----|------|----------|-------------|
| id | path | string | Yes | Resource ID |
**Response 200:**
```json
{ "id": "...", "name": "..." }
```
**Response 404:**
```json
{ "error": "Resource not found" }
```
````
Also generate an OpenAPI 3.0 YAML file at `docs/openapi.yaml`
if the project does not already have one.
## Step 4: Cross-Reference
- Verify every route in code has documentation
- Verify every documented route exists in code
- Flag any discrepancies
## Step 5: Report
Provide:
- Total number of endpoints documented
- Breakdown by HTTP method
- Any endpoints that could not be fully documented (and why)
- File paths for generated documentation
How to use it:
/api-doc
/api-doc src/routes/users.ts
Skill 6: /security-audit — Check for Security Vulnerabilities
This is the skill that could save your company from a breach. It systematically checks for OWASP Top 10 vulnerabilities, dependency issues, and accidental secret exposure.
---
name: security-audit
description: Scan codebase for security vulnerabilities and secrets
arguments:
- name: scope
description: Optional — specific file or directory to audit (defaults to full project)
required: false
---
Perform a comprehensive security audit of this codebase.
Scope: $ARGUMENTS (if empty, audit the entire project).
## Step 1: Secrets Detection
Search the entire codebase for accidentally committed secrets:
1. Search for patterns matching:
- API keys: strings matching `[A-Za-z0-9_-]{20,}` near
keywords like "key", "token", "secret", "password"
- AWS credentials: `AKIA[0-9A-Z]{16}`
- Private keys: `-----BEGIN.*PRIVATE KEY-----`
- Connection strings with passwords
- Hardcoded passwords in configuration files
- JWT secrets
2. Check that `.gitignore` includes:
- `.env` and `.env.*`
- `*.pem`, `*.key`
- `credentials.json`, `secrets.yaml`
3. Check for `.env.example` that accidentally contains real values
## Step 2: OWASP Top 10 Check
Scan for common vulnerabilities:
**Injection (SQL, NoSQL, Command):**
- Search for string concatenation in database queries
- Search for unsanitized input in shell commands
- Search for `eval()`, `exec()`, or equivalent
**Broken Authentication:**
- Check password hashing (bcrypt/argon2 vs MD5/SHA1)
- Check session management
- Check for hardcoded credentials
**Sensitive Data Exposure:**
- Check for sensitive data in logs
- Check HTTPS enforcement
- Check for sensitive data in error messages
**XML External Entities (XXE):**
- Check XML parser configurations
**Broken Access Control:**
- Check for missing authorization middleware
- Check for IDOR vulnerabilities (direct object references)
**Security Misconfiguration:**
- Check CORS configuration
- Check for debug mode in production configs
- Check default credentials
**Cross-Site Scripting (XSS):**
- Check for unsanitized user input in HTML output
- Check for dangerouslySetInnerHTML (React)
**Insecure Deserialization:**
- Check for unsafe deserialization of user input
**Using Components with Known Vulnerabilities:**
- Run `npm audit`, `pip-audit`, or equivalent
- Check for outdated dependencies
**Insufficient Logging:**
- Check that authentication events are logged
- Check that authorization failures are logged
## Step 3: Dependency Audit
Run the appropriate dependency audit:
```bash
npm audit             # Node.js
pip-audit             # Python
govulncheck ./...     # Go
bundle audit          # Ruby
```
## Step 4: Generate Report
Create a security report with severity ratings:
| Finding | Severity | Location | Recommendation |
|---------|----------|----------|----------------|
| ... | CRITICAL/HIGH/MEDIUM/LOW | file:line | Fix description |
Sort by severity (CRITICAL first).
For each finding:
- Describe the vulnerability
- Show the specific code involved
- Explain the potential impact
- Provide a concrete fix (code snippet)
## Step 5: Summary
Provide:
- Total findings by severity
- Top 3 most critical issues to fix immediately
- Overall security posture assessment
- Recommended next steps
How to use it:
/security-audit
/security-audit src/auth/
This skill is particularly valuable because it codifies security knowledge that many developers do not have memorized. Every team member can now run a thorough security audit just by typing a single command.
Advanced Skill Techniques
Once you have the basics down, there are several advanced patterns that can make your Skills even more powerful.
Skills That Call Other Skills
One of the most powerful features of Skills is that they can invoke other Skills. This lets you build complex workflows from simpler building blocks. For example, a /release skill might internally call /write-tests, then /security-audit, then /deploy:
---
name: release
description: Full release workflow — test, audit, deploy
arguments:
- name: version
description: Version number for this release
required: true
---
Execute the full release workflow for version $ARGUMENTS.
## Step 1: Run Tests
Invoke the /write-tests skill for any files changed since the
last release. Ensure full coverage on modified code.
## Step 2: Security Audit
Invoke the /security-audit skill on the entire project.
If any CRITICAL findings exist, STOP and report them.
## Step 3: Deploy
If all checks pass, invoke /deploy production.
## Step 4: Tag Release
```bash
git tag -a v$ARGUMENTS -m "Release $ARGUMENTS"
git push origin v$ARGUMENTS
```
Composition means you do not have to duplicate logic across skills. Write each capability once, then combine them into higher-level workflows.
Skills That Read Project Configuration
Smart Skills adapt to the project they are running in. Instead of hardcoding tool names or paths, have your Skills read the project’s configuration files:
## Step 1: Detect Project Type
Read the project root to determine the technology stack:
- If `package.json` exists → Node.js project
- Read it to find the test command, build command, and linter
- If `pyproject.toml` exists → Python project
- Read it to find the test runner and build system
- If `go.mod` exists → Go project
- If `Cargo.toml` exists → Rust project
Use the detected commands throughout this skill instead of
hardcoded values.
This pattern makes your skills portable across different project types. The same /deploy skill can work in a Node.js project, a Python project, or a Go project because it detects the stack first.
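The detection logic above is a straightforward first-match scan over well-known marker files. A minimal Python sketch, assuming only the markers listed in the skill:

```python
from pathlib import Path

# Marker file -> stack, checked in order; first hit wins
MARKERS = [
    ("package.json", "node"),
    ("pyproject.toml", "python"),
    ("go.mod", "go"),
    ("Cargo.toml", "rust"),
]

def detect_stack(root: str = ".") -> str:
    """Identify the project's technology stack from well-known config files."""
    for marker, stack in MARKERS:
        if (Path(root) / marker).exists():
            return stack
    return "unknown"
```

In a skill, you would then branch on the result — run `npm test` for a Node.js project, `pytest` for Python, and so on — rather than hardcoding any one toolchain.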
Skills with Complex Argument Handling
While the $ARGUMENTS placeholder gives you the raw user input, you can write instructions that parse complex arguments:
```markdown
---
name: scaffold
description: Scaffold a new component with options
arguments:
  - name: spec
    description: "Format: component-name --type=page|component --with-tests --with-styles"
    required: true
---

Parse the following specification: $ARGUMENTS

Extract:
- **Component name**: The first word
- **Type**: Value after --type= (default: component)
- **Include tests**: Whether --with-tests is present
- **Include styles**: Whether --with-styles is present

Example valid invocations:
- /scaffold UserProfile --type=page --with-tests --with-styles
- /scaffold Button --type=component --with-tests
- /scaffold Header
```
Since Claude Code is parsing the instructions (not a shell), you can define any argument format you want — even natural language arguments work fine.
Skills That Use Environment Variables
Skills can reference environment variables for configuration that should not be hardcoded:
````markdown
## Deployment Configuration
Read the deployment target from environment variables:
```bash
echo $DEPLOY_HOST
echo $DEPLOY_USER
echo $DEPLOY_PATH
```
If any of these are not set, ask the user to configure them
in their .env file before proceeding.
````
Skills That Interact with MCP Servers
Model Context Protocol (MCP) servers extend Claude Code with additional capabilities — database access, API integrations, custom tools. Skills can leverage MCP servers by referencing their tools in instructions:
```markdown
## Step 3: Query the Database
Use the database MCP server to check the current schema:
- List all tables
- Show the columns for the affected table
- Check for existing indexes

This information will guide the migration generation.
```
If your team has MCP servers configured for Slack, Jira, or internal APIs, your Skills can orchestrate interactions across all of those systems — sending deployment notifications to Slack, creating Jira tickets for follow-up work, or querying internal services.
Error Handling in Skills
Robust Skills anticipate failure and provide clear guidance for recovery:
```markdown
## Error Handling
If any step fails:
1. **Command not found**: The required tool may not be installed.
   Tell the user what to install and how.
2. **Permission denied**: Suggest running with appropriate
   permissions or checking file ownership.
3. **Network error**: Check if the target host is reachable.
   Suggest checking VPN connection if applicable.
4. **Test failure**: Do NOT proceed with deployment. Show the
   failing tests and ask the user how to proceed.
5. **Build failure**: Show the full error output and suggest
   common fixes based on the error type.

In ALL error cases: provide the exact error message, the command
that failed, and a suggested fix. Never silently skip a failed step.
```
Testing Skills Before Sharing
Before committing a skill to your project’s repository, test it thoroughly:
- Start with user-level: Put the skill in `~/.claude/skills/` first so only you can see it.
- Test with dry runs: Add a `--dry-run` mode to your skill that prints what would happen without actually doing it.
- Test edge cases: Try invoking the skill with no arguments, wrong arguments, and unusual inputs.
- Test in a clean environment: Clone a fresh copy of your repo and test the skill there to ensure it does not depend on local state.
- Get a teammate to try it: Fresh eyes catch unclear instructions and missing steps.
Sharing Skills With Your Team and the Community
Skills are only as valuable as their reach. A brilliant deployment skill that lives on one developer’s laptop helps one person. The same skill committed to the project repository helps the entire team. Let us look at the different sharing mechanisms.
Project Skills — Team-Wide via Git
Place your skills in .claude/skills/ within your repository and commit them to git. Every team member who clones the repo gets access to the same skills. This is the recommended approach for project-specific workflows.
```bash
# Add skills to your project
mkdir -p .claude/skills
cp deploy.md .claude/skills/
cp write-tests.md .claude/skills/

# Commit and push
git add .claude/skills/
git commit -m "Add team skills: deploy, write-tests"
git push
```
Benefits of project skills:
- Version controlled — you can see when skills changed and why
- Code review — skill changes go through the same PR process as code
- Consistency — everyone uses the same workflows
- Onboarding — new team members immediately have access to all workflows
User Skills — Personal Productivity
Skills in ~/.claude/skills/ are personal. They apply to every project you work on but are not shared with anyone. Use these for:
- Personal coding style preferences
- Workflows specific to your role (not everyone needs a `/deploy-to-my-dev-server` skill)
- Experimental skills you are still refining
- Skills that reference personal configuration (your SSH keys, your servers)
Community Skill Repositories
As the Claude Code ecosystem grows, community repositories of skills are emerging. These are collections of battle-tested skills that you can browse, copy, and adapt for your own projects. When using community skills, always:
- Read the skill file completely before installing it — you are giving it instructions that Claude Code will follow
- Adapt paths, commands, and conventions to your project
- Test in a safe environment first
- Keep attribution if the skill has a license
Best Practices for Team Skill Libraries
| Practice | Why It Matters |
|---|---|
| Prefix skill names with your team or project name | Avoids conflicts with built-in skills and other teams’ skills |
| Include a comment header in each skill with author and date | Makes it easy to find the right person to ask about a skill |
| Write a README in `.claude/skills/` listing all available skills | New team members can discover skills without guessing names |
| Review skill changes in PRs just like code | A bad skill instruction can cause Claude Code to make mistakes |
| Keep skills focused — one skill, one job | Composable skills are more reusable than monolithic ones |
| Use composition for complex workflows | Avoids duplicating logic across multiple skills |
Skills in the Broader Claude Code Ecosystem
Skills do not exist in isolation. They are one piece of a larger extension architecture that includes CLAUDE.md files, hooks, and MCP servers. Understanding how these pieces fit together helps you make better design decisions about where to put your logic.
Skills and CLAUDE.md
CLAUDE.md files provide persistent, always-on context. Every time Claude Code starts a session in your project, it reads the CLAUDE.md file and follows its instructions throughout the conversation. This is the right place for:
- Project-wide coding standards (“always use single quotes”)
- Architectural decisions (“we use the repository pattern for data access”)
- File organization rules (“tests go in `__tests__/` directories”)
- Forbidden patterns (“never use `any` type in TypeScript”)
Skills, by contrast, are loaded on-demand. They are the right place for workflows that have a clear start and end — “deploy this,” “write tests for that,” “audit this code.” The distinction is: CLAUDE.md is “always remember this” and Skills are “when I ask you to do this specific thing, do it this way.”
Skills and Hooks
Hooks are automated behaviors that trigger on specific events — before a commit, after a file save, when a new file is created. They are configured in settings.json and run without user invocation. The key difference: Skills are user-initiated (you type the slash command), while hooks are event-initiated (they trigger automatically when something happens).
A common pattern is to use Skills for the manual workflow and hooks for the automated enforcement. For example, your /security-audit skill lets developers run manual audits, while a pre-commit hook automatically runs a lightweight secret scan on every commit.
Skills and MCP Servers
MCP servers provide tools — discrete capabilities like “query a database” or “send a Slack message.” Skills provide workflows — sequences of steps that might use multiple tools. The relationship is complementary: Skills orchestrate, MCP servers provide the building blocks.
Think of it this way: an MCP server for your database gives Claude Code the ability to run queries. A Skill tells Claude Code when to run queries, what to query for, and what to do with the results — all in the context of a specific workflow like generating a migration or auditing data integrity.
The Complete Extension Architecture
| Extension | When It Runs | What It Does | Best For |
|---|---|---|---|
| CLAUDE.md | Always (every session) | Provides persistent context and rules | Coding standards, project knowledge |
| Skills | On-demand (slash command) | Injects workflow instructions | Complex, multi-step workflows |
| Custom Commands | On-demand (slash command) | Injects simpler instructions | Project-specific quick tasks |
| Hooks | Automatically (on events) | Runs scripts on triggers | Enforcement, automation |
| MCP Servers | When tools are called | Provides external capabilities | Database, APIs, integrations |
Common Mistakes and How to Fix Them
After building and reviewing dozens of custom Skills, these are the patterns that trip people up most frequently.
| Mistake | What Happens | Fix |
|---|---|---|
| Instructions are too vague | Claude Code interprets the task differently each time, producing inconsistent results | Be specific: name exact commands, file paths, and expected outputs |
| No error handling | Skill silently fails or continues after an error, causing cascading problems | Add explicit “if this fails, do X” instructions for each critical step |
| Hardcoded paths and tools | Skill only works on the original author’s machine or project | Detect the project stack and adapt commands dynamically |
| Missing output format specification | Claude Code produces output in a random format each time | Specify exactly how output should be formatted (file, console, table) |
| No safety checks before destructive actions | Skill deploys broken code, drops a database table, or overwrites files | Always run tests, verify state, and confirm before destructive operations |
| Trying to do too much in one skill | Skill is fragile, hard to maintain, and confusing to use | Break it into smaller skills and use composition |
| Not testing with different argument values | Skill works with one input but breaks with others | Test with empty, minimal, and unusual arguments before sharing |
| Naming conflicts with built-in skills | Your custom skill is never invoked because the built-in takes precedence | Use unique, descriptive names — prefix with project or team name |
| Forgetting the frontmatter | Skill may not be recognized or arguments may not be parsed correctly | Always include the YAML frontmatter block with name, description, and arguments |
| No final report or summary | User has no idea what the skill did or whether it succeeded | End every skill with a “Report” step summarizing what was done |
The single highest-leverage fix on this list is specificity. Instead of “run the tests,” write: “Run `npm test` and check that the exit code is 0. If any test fails, show the first 30 lines of output and stop.”
Conclusion
Skills are one of those features that separate casual Claude Code users from power users. They transform Claude Code from a chatbot that happens to have terminal access into a purpose-built automation platform that understands your team’s exact workflows. And unlike traditional automation tools, Skills are written in plain English — no DSL to learn, no YAML schemas to memorize, no build systems to configure.
Let us recap the key points. Skills are markdown-based instruction sets loaded into Claude Code’s context on-demand via slash commands. They have frontmatter for metadata and arguments, and a body of detailed instructions. They exist at three levels — built-in, user, and project — with built-in taking precedence. The built-in skills like /commit, /review-pr, and /pr handle common git workflows, while custom skills can automate literally any workflow you can describe in English.
The six skill examples we built — /deploy, /write-tests, /refactor, /db-migrate, /api-doc, and /security-audit — represent the kinds of high-value automations that save teams hours every week. But they are just starting points. The real power comes when you identify the repetitive, error-prone workflows in your own development process and encode them as Skills.
Here is what I recommend as your next step: pick one thing you did manually this week that took more than five minutes and involved multiple steps. Write a Skill for it. Put it in ~/.claude/skills/ and test it. Refine the instructions until the output is exactly what you want. Then move it to .claude/skills/ and share it with your team. In a month, you will have a library of Skills that makes your entire team measurably faster — and you will wonder how you ever worked without them.