The Skill-Sharing Ecosystem — Field Report

Methods, hubs, startups, strategies — and where it's heading

Compiled May 9, 2026 — Flint / Coleman Research

The Verdict

A 1996-Yahoo moment. SKILL.md won the spec war; everything else is up for grabs.
Six months ago there were three competing distribution paradigms (git-clone hubs, npx-style installers, MCP-server registries) and no agreed-upon manifest format. Today there's one manifest format (SKILL.md, with YAML frontmatter)[1] and three coexisting distribution paths.

Supply has overshot by an order of magnitude — 46.3% of skills on skills.sh are duplicates[2], 9% are critical-risk[2], and the marketplace's most-installed skill is literally "find-skills"[3]. The moat has shifted from "have skills" to "find the right one and trust it."

Three plays look durable: (1) curated personal toolkits with a strong author voice (gstack, superpowers), (2) signed-bundle org-internal vaults (AutoVault), and (3) MCP-as-skill-runtime hosted infra (Composio, Smithery). Pure open hubs collapse into commodity directories. Indie path-of-most-leverage: specialize hard in an under-supplied vertical, sign your bundles, and publish to two surfaces (your own GitHub repo + npx skills add) — you don't need a marketplace.
  • ~1.2M skills indexed (SkillsMP)[4]
  • 90k+ skills tracked on skills.sh[3]
  • 46.3% duplicate / near-dup rate[2]
  • 9% critical-risk skills (L3)[2]
  • ~100 tokens per skill at idle[5]
  • 18.5x skill supply growth, Jan→Feb 2026[2]

[Charts omitted: Hub Size (Skills Indexed) · Distribution Mix · Funding vs. Skill-Adjacency · Where Trust Comes From]

The Three Sharing Methods, Side By Side

Git-clone Hubs

The original. git clone a repo, drop a folder into ~/.claude/skills/. Examples: anthropics/skills, garrytan/gstack, obra/superpowers, flintfromthebasement/skills.

Pros: Zero infra, full transparency, auditable. Forks-as-monetization (write a book, ship a free repo, sell consulting).

Cons: No update mechanism. No telemetry. Trust = "do you know the author?" Drift detection is on the user.

npx-Style Installers

Vercel's npx skills add <owner/repo>; Anthropic's /plugin install. Skills as packages, indexed on a leaderboard, pulled on demand.[6]

Pros: Cross-runtime by default (works on Claude Code, Codex, Cursor, Cline, Roo, etc.). Discoverable. Versioned-ish.

Cons: Telemetry-driven leaderboards reward virality, not quality. Top skill is "find-skills" (recursive!). Same security model as git-clone underneath.

MCP-Server Registries

A skill, but it's a running process. Smithery (~7k servers), Composio's MCP gateway (250+ apps), Cline's MCP Marketplace, the official MCP registry (Anthropic + GitHub + Microsoft + PulseMCP).[7]

Pros: Real auth (OAuth). Real state. Real APIs. Hosted runtime.

Cons: Heavy: a 5-server setup eats ~55k tokens before your first prompt.[5] Wrong primitive for procedural knowledge ("how to write a PR description"), right primitive for connectivity ("read my Salesforce").

The clarifying decision rule — from morphllm's 2026 guide: "If your use case contains the words 'query', 'fetch', or 'current state', you need an MCP server, not a skill. Does Claude need to know how to do something repeatably? → Skill."[5] Most workflows want both: MCP for connectivity, skills for methodology.
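The rule of thumb above can be sketched as a toy router (a hypothetical helper; only the keyword list comes from the quote):

```python
def route(use_case: str) -> str:
    """Toy router for the morphllm-style rule of thumb: connectivity-shaped
    requests go to an MCP server, repeatable procedures become skills."""
    connectivity_markers = ("query", "fetch", "current state")
    text = use_case.lower()
    if any(marker in text for marker in connectivity_markers):
        return "mcp-server"
    return "skill"

print(route("fetch my open Salesforce opportunities"))  # mcp-server
print(route("how to write a PR description"))           # skill
```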

Skill Hubs — The Real Inventory

Twelve hubs that actually matter, plus a note on what's getting overshadowed. Sizes are as of May 2026; expect drift weekly.

Each entry reads: hub (source), then Distribution · Size · Trust Model · Monetization, then the standout.

anthropics/skills (github.com/anthropics/skills)
Git-clone · ~25 official, 87k stars[1] · Anthropic-blessed · Free
The reference implementation. Doc-skills (DOCX, PDF, PPTX, XLSX) ship pre-built in Claude.ai.

obra/superpowers (Jesse Vincent)
Git-clone + plugin · ~30 skills, 121k stars[8] · Creator-led (Jesse Vincent) · Free, MIT
Methodology-first. RED/GREEN TDD enforcement, "Cialdini-as-LLM-persuasion" framing. Currently #2 trending repo on GitHub.

garrytan/gstack (Y Combinator's Garry Tan)
Git-clone · 23 skills, 92.2k stars[9] · Creator-led (YC president) · Free, MIT
"Virtual eng team" framing — CEO, Designer, Eng Mgr, QA, Release Mgr as personae. Telemetry opt-in, defaults off.

Vercel skills.sh (vercel-labs/agent-skills)
npx + Git-clone · 90k+ tracked[3] · Install-count leaderboard · Free
Cross-runtime default. npx skills add <repo> works on 17+ agents (Claude, Codex, Cursor, Cline, Roo, Windsurf, Gemini CLI...).[6]

VoltAgent awesome-agent-skills
Curated list · 1,100+ skills[10] · Editorial + brand verification · Sponsorships (logo placement)
Org-anchored. Indexes official skills from Anthropic, Google, Microsoft, Vercel, Stripe, Cloudflare, Sentry, Figma, etc.

ComposioHQ awesome-claude-skills
Curated list + MCP gateway · 1,000+ skills, 58.9k stars[11] · Platform-anchored · SaaS revenue (Composio platform)
The list is bait for the platform — 78 pre-built SaaS automations that route through Composio's MCP gateway.

SkillsMP (skillsmp.com)
Aggregator · 1.2M skills indexed[4] · 2-star min, REST API filters · Free
Largest by raw count. 23 occupation groups + 12 categories. Indie project, REST API for programmatic access.

claudemarketplaces.com (@mertduzgun)
Aggregator · 4,200 skills, 2,500 marketplaces[12] · Install count + GitHub stars + community votes · Free, indie-built
Meta-aggregator. Indexes other hubs. Built solo with Claude Code itself.

anthropics/claude-plugins-official
/plugin install · ~50 plugins, 19k stars[13] · Anthropic-curated, "Anthropic Verified" badge · Free
The official "App Store." Plugins bundle skills + MCP + commands + agents into one install.

Smithery (MCP-server marketplace)
MCP · ~7k servers[14] · Hosted infra + OAuth · Hosted infra fees (seed-funded by South Park Commons)
"Docker Hub for MCP." Hosts and runs servers. Pioneered the MCP marketplace pattern.

Cline MCP Marketplace
MCP · Hundreds of servers[15] · Curated, one-click install · Free + paid Cline tiers
5M+ developer base. Marketplace lives inside the IDE extension.

AutoVault (autoworks-ai/autovault)
Signed bundles + MCP · Local-first vault (your skills)[16] · Ed25519 signature verification · Open source
The only org-internal play with cryptographic provenance. Drift detection, sync across Claude Code + Codex.

Notable mentions (overshadowed but real)

Continue.dev Hub

Models, rules, prompts, and MCP tools, all browsable with single-click install across IDEs.[17] $5.6M total raised. Closest thing to a "Steam for IDE assistants," but the market tipped before it could.

Roo Code Mode Gallery

Community marketplace for "modes" (Architect, Code, Debug, Ask, Custom). Roo skills follow the SKILL.md packaging spec; modes are the role-router.[18]

OpenAI Agent Builder + Skills

OpenAI shipped agent-skills support in the Agents SDK in early 2026 — same SKILL.md format Anthropic uses.[19] Python Agents SDK at 14.7M PyPI downloads as of March.

GPT Store

The cautionary tale. Monetization exists; most creators hit $100–$500/mo, payouts ~$0.03/conversation.[20] The action moved to enterprise B2B. Custom GPTs themselves did not achieve developer-economy escape velocity.

agent.ai (Dharmesh Shah)

230k users, 280+ agents, "LinkedIn for AI agents" framing.[21] Less skill-sharing, more agent-as-coworker hiring. Different layer, but adjacent.

LangChain / LangGraph Templates

Templates over a hub. Open-source reference apps, one-click deploy to LangGraph Cloud.[22] Notable for what they didn't do: build a marketplace.

What's missing from the table: Replit Bounties (deprecated 2026[23]), the original ChatGPT Plugin Store (defunct), smol-developer plugins (subsumed by simple.ai). LangChain Hub never quite became the npm-for-prompts that was promised — the energy moved to LangGraph Templates and OTel-native observability.

Startups Building This Layer

Two questions cut through the hype: (1) is skills the product, or a feature? (2) who pays?

Skills-as-Product (the bet is on the layer itself)

  • Composio — $29M Series A April 2025, led by Lightspeed.[24] Pitch: "skills that evolve with your agents," collective learning across the platform. MCP gateway to 250+ apps is the wedge.
  • Toolhouse — MCP integration platform with versioning + execution mgmt. Reports $1M+ TSV through builder ecosystem.[25]
  • Arcade.dev — $12M, Okta-backed.[26] Auth-for-agent-tools is the wedge. Tightly aligned with the supply-chain-security panic.
  • Smithery — seed from South Park Commons, founded 2025 by Anirudh Kamath + Henry Mao.[14] "Docker Hub for MCP." Whoever wins MCP hosting could compound.
  • Letta (formerly MemGPT) — $10M out of stealth, UC Berkeley Sky lab.[27] "Skill Learning" feature lets agents extract skills from past sessions and re-use them. Memory + skill in one runtime.
  • Agensi — the only skill marketplace with native paid skills + Stripe payouts. Creators take 80%.[28] Tiny but the only one trying creator-economy monetization.

Skills-as-Feature (the bet is on the IDE)

  • Cursor (Anysphere) — $50B+ valuation talks April 2026, $2B ARR, 30%+ of internal PRs are agent-authored.[29] Rules & .mdc files are the skill primitive. Skills aren't the product; the agent loop is.
  • Cline — 5M+ developer base.[15] MCP marketplace is in-IDE. Open-source, BYOK leverage on Anthropic/OpenAI keys.
  • Roo Code — Cline fork that productized "modes" + Mode Gallery + cloud-agent runtime. SKILL.md compatible.[18]
  • Continue — $5.6M raised, IDE-extension-first, broadest IDE coverage.[17] Less skill-marketplace energy, more "team-shared assistant config."
  • Replit — pivoted from Bounties (deprecated) to Mobile Agents. ~$9B valuation. Skills aren't the surface; generated apps are.[23]

Hidden Gems (the under-covered)

The obvious names are noise. The interesting bets are at the edges — people solving sub-problems that the giants haven't gotten to yet.

Letta — Skill Learning + Git-backed Memory ("MemFS")

The only player whose agents write their own skills from experience.

Most skills are authored by humans. Letta's wager is that agents should extract them. After working through a complex task, you can trigger the agent to "learn a skill" — the system codifies the procedure, stores it as a SKILL.md-shaped artifact in a git-backed memory repo (their "MemFS"), and makes it available to future sessions or other agents.[27]

Why it matters: if this works at scale, the human-authored skill marketplace becomes a cold-start mechanism. The long-tail comes from agents recording what worked. This is the same shape as Composio's "evolve with your agents" pitch but with a memory-first runtime to back it up.

The risk: agent-authored skills are skill-equivalents of "AI-generated content slop." Quality control is unsolved.

obra/superpowers — Methodology-as-skill, viral via blog post

Jesse Vincent shipped a methodology, not a tool. 121k stars later, he's right.

Most skill repos are tool collections. Superpowers is a methodology: brainstorm-plan-implement, RED/GREEN TDD, four-phase debugging, code-review subagents, "feelings journal." It went viral because Jesse's blog post articulated a worldview, not because the skills were uniquely powerful.[30]

Why it matters: the highest-leverage skills are the ones that change how Claude thinks, not what it can do. Procedural framing > tool exposure. The path Jesse demonstrates: write seriously about your method → package it → ship it free → let it spread.

The risk: creator-led repos collapse if the creator stops. No succession plan, no governance.

AutoVault — autoworks-ai (formerly verygoodplugins/skills)

The only skill-distribution play with real cryptographic provenance.

Local-first vault. Skills get Ed25519-signed at install; manifest signature is verified on every load.[16] Sync across Claude Code + Codex. Drift detection via lockfile. Optional remote-mode HTTP MCP with OAuth role checks.

Why it matters: the security research is screaming about this gap. Mobb.ai found 140k issues across 22.5k skills; Snyk's ToxicSkills found prompt injection in 36% of tested skills.[31] The mainstream answer right now is "trust the publisher's name." That's not a model that survives 2027. AutoVault's signed-bundle model is one of the few real answers.

The risk: small org, niche awareness. Could get lapped by an Anthropic-blessed signing standard if/when one ships. (Disclosure: Jason ships skills through this.)

Letta MemFS + git-backed context — memory and skills converge

Skills used to be markdown files. They're becoming entries in a writable filesystem the agent edits.

Letta's March 2026 architectural shift moved memory from "specialized DB-edit tools" to "generalized computer-use over a git-backed filesystem (MemFS)."[27] A skill, in this model, is just a file in a repo the agent can both read and rewrite. Git becomes the version-control layer for skills and memory and learned patterns.

Why it matters: this collapses three primitives (skill, memory, scratchpad) into one filesystem. Cleaner mental model, way easier to ship. AutoVault is sympathetic in spirit (file-based, signed, synced).

skill-kit + skill telemetry — local analytics for agent skills

The next step is "which of my skills actually fire?" Almost no one knows.

"Skill Kit" is a local-first CLI that tracks which skills your agent actually invokes vs. which sit there eating context.[32] No network, no cloud. The premise: once you cross ~30 skills, the bottleneck stops being model capability and starts being skill discovery. You can't optimize what you can't measure.

Why it matters: matches what skills.sh's leaderboard implies: the most-installed skill is "find-skills." Discovery is the actual problem now. Anyone who builds the "Datadog for skills" wins.
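A minimal sketch of the idea, in the spirit of Skill Kit (the class name and API here are invented; the real CLI's internals aren't described in this report):

```python
from collections import Counter

class SkillTelemetry:
    """Local-only invocation counter: which installed skills actually fire."""

    def __init__(self, installed):
        self.installed = set(installed)
        self.fired = Counter()

    def record(self, skill_name):
        # Count an invocation only for skills we know are installed.
        if skill_name in self.installed:
            self.fired[skill_name] += 1

    def dead_weight(self):
        """Skills that sit in context but never fire."""
        return self.installed - set(self.fired)

t = SkillTelemetry(["find-skills", "pr-review", "site-archive"])
t.record("find-skills")
t.record("find-skills")
print(sorted(t.dead_weight()))  # ['pr-review', 'site-archive']
```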

The Official MCP Registry — Anthropic + GitHub + Microsoft + PulseMCP

A neutral metaregistry. The boring play that makes everything else interoperable.

Launched September 2025, hit API freeze (v0.1) October 2025.[7] Hosts metadata only — not code. Smithery, Cline, Composio all federate from it. This is the closest thing the agent ecosystem has to PyPI's index.

Why it matters: historically the platform with the registry wins (npm, PyPI, Maven Central). The MCP Registry being multi-org-backed is a tell — nobody wants Anthropic to "own" the registry the way npm owns JS packages. Equilibrium-stable.

Hidden Gems — Summary Table

Each entry reads: player · layer · why it's hidden · what to watch.

  • Letta · memory + skills · "memory startup" framing hides the skill-learning angle · watch: "Skill Learning" GA, MemFS adoption
  • obra/superpowers · methodology · looks like a skill repo, is actually a worldview · watch: whether Jesse turns it into a company
  • AutoVault · signed distribution · org-internal positioning · watch: whether a public signed-skills standard emerges
  • Skill Kit / skill telemetry · discovery / ops · tooling for the bottleneck nobody admits exists · watch: telemetry conventions in the SKILL.md spec
  • Agensi · paid creator marketplace · tiny vs. free hubs · watch: whether $50–500 vertical-niche skills find buyers
  • MCP Registry · neutral metadata · boring & federated — gets covered as plumbing · watch: whether SKILL.md gets a parallel registry
The pattern in the gems: none of them are trying to be "the marketplace." They're each owning one corner — provenance, methodology, learning, telemetry, paid niches, neutral metadata. The ones that try to own discovery are getting flattened by skills.sh and SkillsMP. The ones owning a layer are compounding.

What's Common Across (Almost) Every Hub

The convergence is real and fast. By May 2026, you can predict the structure of a skill repo without looking at it.

1. SKILL.md with YAML frontmatter

Anthropic shipped the spec in late 2025. OpenAI adopted it in early 2026 for Codex CLI and ChatGPT skills. Vercel's skills.sh, Roo Code, Cline, Cursor (.mdc), Continue, fast-agent — they all read the same file. The shape is:

  • name — identifier
  • description — the trigger sentence (load-bearing — the model uses this to decide when to fire)
  • capabilities — what the skill can do (network, filesystem, tools)
  • resources — bundled files
  • Optional: license, agents, tags, metadata.version, execution mode
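A minimal SKILL.md matching the fields above. The values are hypothetical, and the exact nesting of capabilities and resources is an assumption, since the spec itself isn't reproduced in this report:

```markdown
---
name: pr-description
description: >
  TRIGGER when the user asks to draft or improve a pull-request
  description. SKIP when the request is about commit messages.
capabilities:
  filesystem: read
resources:
  - templates/pr-template.md
license: MIT
metadata:
  version: 1.2.0
---

# PR Description

1. Read the diff summary.
2. Fill templates/pr-template.md with motivation, changes, and test notes.
```

Note how the description carries the trigger logic — that one sentence is what the model matches against at runtime.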

2. Progressive disclosure

Every modern hub assumes you can have 50+ skills installed and pay almost nothing. The metadata loads at startup (~100 tokens per skill); the body only loads when the description matches.[5] Without this, context-bloat alone would kill the model. Tool Search (Anthropic's defer-loading mechanism) cuts MCP tool definitions by ~85% on the same principle.[33]
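The loading model can be sketched in a few lines: keep only the frontmatter resident, read the body on demand (a toy illustration, not any runtime's actual loader):

```python
def split_frontmatter(skill_text):
    """Split a SKILL.md document into (frontmatter, body).
    Only the frontmatter stays in context at startup; the body is
    fetched lazily when the description matches the user's request."""
    # Frontmatter is delimited by '---' lines at the top of the file.
    _, frontmatter, body = skill_text.split("---", 2)
    return frontmatter.strip(), body.strip()

SKILL = """---
name: changelog-writer
description: TRIGGER when the user asks for a changelog entry
---
Full instructions go here; potentially thousands of tokens.
"""

meta, _body = split_frontmatter(SKILL)
# At idle, only `meta` (~100 tokens in practice) occupies the window.
print("changelog-writer" in meta)  # True
```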

3. Examples / triggers / templates folders

Almost every popular skill ships with: an examples/ folder (canonical invocations), explicit "TRIGGER when / SKIP when" language in the description, and a template/ for forking. The official skill-creator skill on Claude.com is a meta-tool that scaffolds this.[34]

4. Cross-runtime support

Repos that work only on Claude Code are getting passed over for ones that work everywhere. npx skills add currently lists 17 supported agents, including amp, antigravity, claude-code, codex, cursor, cline, droid, gemini, gemini-cli, github-copilot, goose, kiro-cli, opencode, roo, trae, windsurf.[6]

5. README + CATALOG separation

The pattern (flintfromthebasement/skills uses this; gstack and superpowers do too): README is for human visitors; CATALOG.md or per-skill SKILL.md is for agents. Mixing them confuses both audiences.

6. Versioning via metadata.version

Auto-update only fires when the version field bumps. Skills that don't version don't update.[35] AutoVault enforces this with lockfile-based drift detection; npx skills tracks version through the install graph; gstack uses git tags.
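The update check reduces to a version comparison — a sketch assuming plain dotted-integer versions (real installers also handle pre-release tags and lockfile pins):

```python
def needs_update(installed: str, published: str) -> bool:
    """Auto-update fires only when metadata.version bumps."""
    def parse(version):
        # "1.2.0" -> (1, 2, 0); tuples compare component-wise.
        return tuple(int(part) for part in version.split("."))
    return parse(published) > parse(installed)

print(needs_update("1.2.0", "1.3.0"))  # True: version bumped, update fires
print(needs_update("1.2.0", "1.2.0"))  # False: no bump, no update
```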

The convergence is the story. A year ago, "skill" was three different things across three vendors. Today the manifest is shared, the loading model is shared, the cross-runtime expectation is shared. SKILL.md is the package.json of agent capabilities. Now we're arguing about distribution, not format.

What Stands Out — Things Only One or Two Players Are Doing

Cryptographic signing of skills (AutoVault, only)

The whole rest of the field uses "trust the author's GitHub name" or "Anthropic Verified badge" as the trust model. AutoVault Ed25519-signs every skill at install and verifies on every load.[16] Threats found in production by audits this year: Mobb.ai's 140,963 issues across 22,511 skills, Snyk's ToxicSkills (1,467 malicious payloads), ClawHub's 341 malicious entries.[31] The math says signing has to win — we just don't know if it'll be a vendor (AutoVault), a foundation (Sigstore-for-skills), or Anthropic itself.
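The sign-at-install / verify-at-load flow looks roughly like this, here using the cryptography package's Ed25519 primitives (an assumption: AutoVault's actual key handling and manifest format are not public in this report):

```python
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

manifest = b"name: vuln-scan\nmetadata:\n  version: 1.0.0\n"

# At install time: sign the manifest bytes with the vault's key.
private_key = Ed25519PrivateKey.generate()
signature = private_key.sign(manifest)
public_key = private_key.public_key()

# At every load: verify before the skill enters context.
try:
    public_key.verify(signature, manifest)
    print("signature ok")
except InvalidSignature:
    print("tampered - refuse to load")

# A single changed byte fails verification.
try:
    public_key.verify(signature, manifest.replace(b"1.0.0", b"9.9.9"))
    print("signature ok")
except InvalidSignature:
    print("tampered - refuse to load")
```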

Sandboxed execution with egress controls

NVIDIA's red team published mandatory mitigations: block network egress to non-allowlisted destinations, block filesystem writes outside the workspace, block writes to config files / hooks / skills directories.[36] Almost no public hub does any of this by default. OpenAI added agent sandboxing to the Agents SDK in April 2026.[19] Anthropic's Tool Search reduces context but doesn't sandbox execution. Big gap, big opportunity.

Telemetry with informed consent (gstack)

Default-off opt-in analytics is rare. Most hubs either have no telemetry (gstack default-off, AutoVault local-only) or aggregate-only at the registry level (skills.sh leaderboard counts). Per-user fine-grained "which of my skills fire" lives only in Skill Kit and a couple of OTel-instrumented setups (Dash0, Elastic).[32]

Lockfile-based drift detection (AutoVault, npx skills partial)

Apps don't just install — they update. AutoVault's lockfile pinning + commit-SHA-checkpointing is the cleanest implementation. npx skills add tracks the source repo but not always the commit. git submodule is the rough fallback most repos use.
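Lockfile pinning reduces to hashing the installed files and comparing later (a sketch; the digest scheme here is an assumption, not AutoVault's documented format):

```python
import hashlib

def fingerprint(files):
    """Hash a skill's files (path -> bytes) into one digest for pinning."""
    digest = hashlib.sha256()
    for path in sorted(files):  # sort for a stable, order-independent result
        digest.update(path.encode())
        digest.update(files[path])
    return digest.hexdigest()

skill = {"SKILL.md": b"---\nname: site-archive\n---\n..."}
lockfile = {"site-archive": fingerprint(skill)}

# Later: drift is any mismatch between disk state and the pinned digest.
skill["SKILL.md"] += b"\n# silently appended line"
drifted = lockfile["site-archive"] != fingerprint(skill)
print(drifted)  # True
```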

Agent-to-agent skill installation (Letta, Composio)

Most installs are human-driven. Letta's vision is agents extracting and propagating skills laterally to other agents.[27] Composio's "evolve with your agents" framing is the same idea applied to platform-hosted skills.[24] If this works, the human marketplace becomes a cold-start — the long tail emerges from agent runs.

Paid creator economy (Agensi, only meaningfully)

Despite all the marketplace energy, almost no one actually monetizes. GPT Store payouts averaged $0.03 / conversation; most creators earn $100–$500/mo.[20] Agensi's 80% creator share + Stripe is the only mainstream paid-skill rail.[28] The action is in B2B and consulting — not in indie creator $$$ at the unit level.

Methodology framing (obra/superpowers, gstack)

Almost every other hub ships tools. Superpowers and gstack ship a way of working. That framing is a 10x discoverability multiplier — people install a worldview faster than they install a tool collection. 92k stars and 121k stars say so.[9][8]

What Flint Should Publish Next

Jason already has the pieces — flintfromthebasement/skills as the public surface, AutoVault as the signed-distribution rail, and a stable of internal skills (executive-coach, council-of-agents, research-report, hyperframes, site-archive, vuln-scan, pmpro-support-notes, etc.) that are battle-tested in real PMPro work. The question isn't "should I publish skills" — it's "which ones, where, and what's the through-line."

The strategic hole in the market: generic skill repos are oversupplied. The under-supplied lane is vertically specialized + signed + cross-runtime + opinionated about methodology. That's where Flint's actual edge is — running a real WordPress / membership business while building agents for it.

The Recommended Play, In Order

1. Publish "PMPro Support Operator" as a multi-skill bundle (highest leverage)

Rebrand pmpro-support-notes, freescout-notes, the docs DB query helpers, and the ticket-fetch helpers into a single themed bundle: "How a real WordPress shop runs AI-augmented support." Vertical-specific. Real customer in production. Three-letter answer to the "is anyone doing this?" question.

  • Distribute on flintfromthebasement/skills as a sub-folder + via autovault add-local for signed install.
  • Pair with a blog post on flint.fountain.network that's the methodology framing — how Flint approaches a ticket, not just what the skills do.
  • This is the move that nobody else can copy — it requires actually running a WordPress membership business.

2. Ship the AutoVault-Sympathetic Site Archive + Vuln Scan as a "WP Shop Pack"

site-archive and vuln-scan are already published. Bundle them with pmpro-screenshots, edit_handbook_page patterns, and the docs-DB query into a signed bundle aimed at WordPress agencies. This is the next adjacent vertical — thousands of agencies want this and there's nothing equivalent.

3. Write the Methodology Post (this is the multiplier)

The single biggest finding from this report: methodology framing is the 10x lever. Garry Tan's blog post made gstack viral. Jesse Vincent's blog post made superpowers viral. Neither tool was uniquely powerful — the worldview was. Flint already has one: "a basement agent running the boring half of a real software business." Write that essay. Pin it as the README to the public skills repo. The skills are evidence; the post is the artifact.

4. Stay out of the generic-skill arms race

Don't ship another commit-message or PR-review skill. There are 200 of them. Marginal cost to user is zero, marginal value is zero. Stick to the long tail of "things people are paying real money to figure out themselves."

5. Two surfaces, not five

Publish to:

  • flintfromthebasement/skills (your repo — canonical, full README, methodology link)
  • AutoVault (signed distribution — for anyone serious about provenance)

Skip submitting to skills.sh, SkillsMP, anthropics/skills, or any of the awesome-list aggregators. They'll pick you up automatically once the repo is public; manually maintaining listings on five surfaces is a tax with no return. Possible exception: VoltAgent/awesome-agent-skills if Jason wants the brand-anchor effect of being listed alongside Vercel/Cloudflare/Stripe. PR-able in 10 min.

6. Defer paid skills

Agensi exists, GPT Store payouts are pennies, B2B consulting beats unit-economics by 100x.[20][28] If a paid lane materializes, ship there later. Right now the value is in distribution and reputation, not direct revenue.

Concretely, the next 30 days

1. Write the methodology post (~1500 words). Title candidate: "How a basement agent runs support, dev, and ops for a real software business."
2. Publish PMPro Support Operator bundle to flintfromthebasement/skills.
3. Sign + add to AutoVault, document the install flow.
4. Submit to VoltAgent/awesome-agent-skills under "WordPress" / "Membership."
5. Cross-post the methodology post to Hacker News, with the repo as the demonstration link.

What NOT to do

Don't launch a Flint-branded skills marketplace. The market doesn't need another aggregator; the top eight are already commoditized and the top three (anthropics/skills, skills.sh, claudemarketplaces.com) are pulling all the traffic.
Don't chase the GPT Store / paid-skills monetization angle. The unit economics aren't there yet. Consulting and reputation compound faster.
Don't over-invest in generic dev-tool skills (commit messages, PR review, code formatting). 54.7% of the marketplace is already that.[2]

Anticipated Questions (and Honest Answers)

Q: Is anyone doing real skill telemetry / analytics?

Mostly no. Aggregate install counts at the registry level (skills.sh leaderboard, claudemarketplaces.com voting). Per-user "which skills actually fire" is rare — Skill Kit is the main local-only option.[32] Dash0 and Elastic publish OpenTelemetry-shaped agent-skill instrumentation kits, but those target observability platforms, not skill authors.[32] Big open lane: tell a skill author "your skill loaded 5,000 times this week and only fired in 12 of them — your description sucks."

Q: What's the security / sandboxing story?

Bad and getting worse. Skills run with full agent privileges — if Claude can read files, the skill can. Audits this year found 9% of marketplace skills are critical-risk; 36% of audited skills had prompt injection.[2][31] Mitigations exist but are not default: NVIDIA-style egress blocks, write-path containment, AutoVault-style signing.[36][16] Expect a major skill-supply-chain incident in the next 12 months that forces this.

Q: How are skills monetized today, really?

Three working models: (1) indirect — brand + consulting (gstack, superpowers, gbrain, anthropics/skills). Free skill, paid attention, paid expertise downstream. (2) platform-anchored (Composio, Toolhouse, Smithery) — free skills, paid hosted runtime. (3) direct paid (Agensi, GPT Store) — tiny revenues per creator, hard to reach escape velocity.[20][28] Path 1 is dominant for indies; Path 2 is where the VC money is.

Q: Are LLMs good at picking the right skill, or do we still need slash commands?

Better than a year ago, still not great past ~30 skills. Description quality dominates — specific descriptions fire reliably, vague ones get skipped.[35] Slash commands and explicit invocation still matter for discoverability and predictability. The honest take: good descriptions + slash commands + a "find-skills" meta-skill is what works in May 2026. Pure auto-discovery is still aspirational at scale.[33]

Q: What does "discovery problem" actually mean, concretely?

Once you cross ~30 skills, the bottleneck stops being model capability and becomes "can the agent find the right skill in time."[33] The most-installed skill on skills.sh is literally find-skills.[3] 46.3% of skills are duplicates — semantic dedup is unsolved. Symptoms: agent ignores a relevant installed skill; agent picks the wrong duplicate; description-match fails on natural-language variation. Fixes that work today: aggressive description engineering, install fewer skills, slash-command shortcuts.

Q: Is "skills-as-MCP-server" going to win?

No, not as a single primitive. They serve different needs. Skills are for procedural knowledge ("how to do X"), MCP is for connectivity ("get me data from Y"). Most production setups will run both.[5] What could happen: MCP servers that ship skills as resources — the server is the connector, but it also exposes a SKILL.md telling the agent how to use it. Composio's gateway already does a soft version of this.

Q: Will Anthropic's official marketplace flatten everyone else?

No. anthropics/claude-plugins-official at 19k stars is meaningful but small relative to community-curated repos.[13] Anthropic's review pipeline is intentionally narrow ("Anthropic Verified" badge for the elite tier). Most community skills will continue to live on GitHub repos and federated registries. Anthropic gets the high-trust enterprise lane; everyone else gets the long tail.

Q: How do I avoid the duplicate-skill swamp when picking what to install?

Three filters: (1) is it from a brand/person you can name? (2) does it have a methodology essay or just a tool list? (3) is the description specific enough that you can predict when it would fire? If yes to all three, it's probably fine. If any are no, pass — there's a non-trivial chance it's one of the 9% critical-risk skills or one of the 46% duplicates.[2]

Q: Should I expect Anthropic to ship signed skills as a first-party feature?

Probably, in some form, within 12–18 months. The security-research pressure is too high to ignore (the Microsoft Security Blog and NVIDIA AI Red Team guidance both pointed at this in early 2026[37]). What's unclear is whether it'll be Anthropic's own signing system, a Sigstore-equivalent built on existing supply-chain tooling, or a federation play across Anthropic + GitHub + Microsoft (the same coalition that backed the MCP Registry). AutoVault is positioned to interop with whichever lands; betting against signing happening at all would be a mistake.

Q: What about Cursor's rules vs. SKILL.md? Are they the same primitive?

Adjacent, not identical. Cursor's .cursor/rules/*.mdc files use frontmatter for scoping (which file paths the rule applies to), not for capability declaration.[38] Practically: rules are scope-targeted system prompts. Skills are loadable procedure packages. Cursor still doesn't have a real skill-equivalent for "load this whole methodology when X applies." The bet: Cursor will adopt SKILL.md as a layer on top of rules within 6 months, or get pressure from cross-runtime users who want their Claude skills to work in Cursor too.

Sources

Compiled from ~30 web fetches and searches conducted May 9, 2026. Only URLs that returned real content are listed; nothing fabricated.

Anthropic, OpenAI, and the SKILL.md Spec

Hubs & Marketplaces

MCP Layer (Smithery, Composio, Cline, registry)

Startups & Funding

Security Research & Sandboxing

Patterns, Discovery & Telemetry

AutoVault & Signed Distribution