Three Claude Code skills repos sit in GitHub’s top 10 trending simultaneously. Combined stars: north of 146,000. Everything Claude Code alone crossed 100K. Superpowers hit 42K+ and landed in the official Anthropic marketplace. A third collection, claude-skills by Alireza Rezvani, passed 5,200 stars with 192 skills covering everything from compliance to C-level advisory prompts.
That kind of concentration around a single tool’s extension ecosystem is unusual. It tells you something about where developer attention is right now. It also tells you the hype machine is running hot.
Here is what actually works, what is noise, and how to build something useful yourself.
Why The Star Count Matters
GitHub stars are a vanity metric. Everyone knows that. But three repos trending at once for the same niche is not vanity. It is signal.
Claude Code went from a terminal-based coding assistant to the centre of an emerging plugin economy in under a year. Developers are not just using it. They are extending it, packaging those extensions, and sharing them. The speed at which this happened caught even Anthropic off guard. They responded by launching an official plugin marketplace and a built-in Skill Creator tool that walks you through building skills interactively.
The comparison that keeps coming up is VS Code extensions circa 2017. Small ecosystem, big enthusiasm, lots of low-quality entries, a few genuinely useful tools that became essential. The difference is that Claude Code skills are not compiled binaries or complex TypeScript plugins. They are markdown files with YAML frontmatter. The barrier to entry is a text editor and twenty minutes.
That low barrier is both the opportunity and the problem.
The Three Tiers: Superpowers vs Native Skills vs Community
Not all skills are equal. The ecosystem has settled into three rough tiers, and understanding the differences saves you from installing 119 skills when you need four.
Native (built-in) skills
Claude Code ships with a handful of bundled skills: /simplify, /review, /loop, /debug, /claude-api, and a few others. These are prompt-based, not hardcoded. They give Claude a detailed playbook and let it orchestrate using its existing tools. The /simplify skill, for instance, reviews your changed code for reuse opportunities and quality issues, then fixes what it finds. The /loop skill runs a command on an interval, useful for polling CI or watching a deploy.
Built-in skills are reliable, well-tested, and limited in scope. They do not try to reshape how you work. They just make specific tasks faster.
Superpowers (framework-level)
Superpowers is the most opinionated player in this space. Created by Jesse Vincent, it went from a few thousand stars to over 42,000 by March 2026 and earned a spot in the official Anthropic marketplace.
My take: Superpowers is not a skill. It is a development methodology encoded as a skills framework. It forces test-driven development cycles. It mandates a four-phase debugging methodology that requires root cause investigation before any fix. It runs Socratic brainstorming sessions that refine requirements before coding starts. The key word is “force.” These are not suggestions. Claude will not let you skip the red-green-refactor cycle once Superpowers is active.
This works well if your team already values TDD and structured debugging. It works badly if you want quick prototyping or exploratory coding. The 42K stars reflect genuine utility for a specific workflow, but they also reflect the GitHub trending algorithm rewarding early momentum.
Community collections
Everything Claude Code ships 28 agents, 119 skills, and 60 commands. It works across Claude Code, Cursor, Codex, and OpenCode from a single repo. The 100K star count is real. The question is how many of those skills you will actually use.
I have seen developers install the full ECC suite, use three or four skills regularly, and forget the rest exist. The selective-install architecture in v1.9.0 (released March 2026) acknowledges this by letting you pick what you need instead of loading everything.
Other collections like awesome-claude-skills from ComposioHQ curate rather than create. They are starting points for discovery, not frameworks to adopt wholesale.
Build A Minimal Useful Skill
Enough theory. Here is how to build a skill that does something practical: a pre-commit reviewer that checks your staged changes before you commit.
Step 1: Create the directory
mkdir -p .claude/skills/pre-commit-review
Step 2: Write the SKILL.md file
---
name: pre-commit-review
description: Review staged git changes for bugs, security issues, and style problems before committing
trigger: manual
---
## Instructions
When invoked, do the following:
1. Run `git diff --cached` to get all staged changes
2. Review each changed file for:
- Obvious bugs (null references, off-by-one errors, unclosed resources)
- Security issues (hardcoded secrets, SQL injection, XSS vectors)
- Style violations specific to this project's conventions
3. Report findings grouped by severity: blocking, warning, note
4. If no blocking issues found, say "Clear to commit" with a one-line summary of what changed
5. If blocking issues exist, list them and suggest fixes
That is the entire skill. No build step. No dependencies. Invoke it with /pre-commit-review from the Claude Code terminal.
Step 3: Make it smarter (optional)
Add a context.md file in the same directory with your project’s specific conventions. The skill will pick it up automatically. You could include things like “we use snake_case for database columns” or “never commit console.log statements.” The more specific your context, the more useful the review.
The official docs at code.claude.com/docs/en/skills cover the full SKILL.md spec. Verify frontmatter fields against current docs, as the schema has evolved since the initial release.
A Real-World Reference: Maud
Skills are individual tools. The more interesting question is what happens when you compose them into an autonomous system.
Maud is an open-source personal AI assistant built on Claude Code’s tooling layer. It runs as a single runtime with MCP (Model Context Protocol) servers for RSS feeds, Twitter, Reddit, and Substack. It has a memory system split into context, working, and archive tiers. It has a self-evolve capability that writes its own improvement specs, runs them through Claude, tests the result, and commits passing changes automatically.
Maud uses specialist agents, each with their own skill files and working memory. A journalist agent writes articles. A researcher agent investigates topics. A backend developer agent handles code changes. Each agent’s skill file defines its personality, capabilities, and constraints.
The architecture shows where skills are heading: not standalone slash commands, but composable building blocks inside larger autonomous systems. MCP servers give Claude access to external data. Skills define how it processes that data. Memory systems let it learn from previous runs. Self-evolve closes the loop by letting the system improve its own skills over time.
This is not theoretical. Maud runs scheduled tasks, generates reports, and manages its own codebase improvements in production. The skills are the atoms. The orchestration layer is what makes them useful.
Where This Is Heading
Three trends worth watching.
Consolidation is coming. 192 skills in one repo sounds impressive until you realise 150 of them overlap. The ecosystem will consolidate around fewer, higher-quality skills. The Anthropic marketplace will accelerate this by giving visibility to curated entries and burying the rest.
Security is the unsolved problem. Skills are markdown files that instruct an AI agent with access to your filesystem, terminal, and git history. A malicious skill could exfiltrate code, inject backdoors, or delete files. The ECC project includes an “AgentShield” security scanning layer, which is a start. But the broader ecosystem has no code signing, no sandboxing, and no review process beyond “read the SKILL.md before installing.” This will matter more as skills move from developer tools to production agent systems.
Skill marketplaces will fragment. Anthropic has their official marketplace. GitHub has trending repos. Individual developers sell premium skills on Gumroad. Enterprise teams build internal skill libraries. There is no npm-for-skills yet, and whoever builds it will capture significant value. The infrastructure for versioning, dependency management, and compatibility testing does not exist. Someone will build it.
The skills gold rush is real. The demand is real. The best skills, the ones that encode a specific, opinionated workflow like Superpowers or solve a concrete problem like a pre-commit reviewer, deliver genuine value.
The hype is in the star counts, the “119 skills” headlines, and the assumption that more skills equals more productivity. Most developers will use fewer than ten skills regularly. The winners in this ecosystem will not be the biggest collections. They will be the most focused ones.
Build skills that solve problems you actually have. Start with one. Make it good. That is worth more than installing everything.


