Govern the skills your engineers' AI runs.
Across Claude Code, Cursor, internal copilots, and MCP servers. Every skill authored, approved, audited, revocable across the fleet — without slowing engineering down.
The problem your platform team is about to inherit
Engineering adopted Claude Code, Cursor, an internal copilot — maybe all three. Productivity is up. So is the surface area.
A senior backend engineer asks Claude to refactor the auth service. Clean code, solid tests. The PR goes up. Security finds it: wrong wrapper, missing audit logging, skipped the “any auth change requires security review” gate. Eleven hours of delay. The CTO sends another reminder about following the engineering handbook.
The handbook is 47 pages. It lives in Notion. The AI cannot read Notion.
This is the gap. Every standard your team has built — code review checklists, deployment policies, approval flows, framework conventions, security guardrails — was designed to be enforced through people. The entity now writing the code isn't a person.
If you train the human and not the AI, you've trained the interface. The AI underneath still defaults to public GitHub and Stack Overflow patterns. Your code review checklist might as well not exist.
What platform teams actually need
Skills your engineers' AI loads automatically. Pushed centrally. Versioned. Enforced.
Anthropic's Skills system gives you the mechanism. A skill is a piece of context — instructions, examples, references, conditions — that an AI loads when relevant. Skills live in two places: a personal folder the engineer manages, and an Enterprise layer that overrides it.
The Enterprise layer is where governance lives. When your platform team publishes a skill there, it sits on every engineer's machine, can't be turned off, and updates centrally. The difference is not subtle. “Everyone should follow the code review checklist” is a memo. “The checklist is loaded into the AI of every engineer who touches the codebase, every time” is enforcement.
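The override behavior can be sketched as a two-layer merge where the Enterprise layer always wins. This is a minimal illustration, not Invoked's implementation; the directory locations and the `resolve_skills` helper are assumptions for the sketch:

```python
from pathlib import Path

# Hypothetical locations -- the actual paths depend on the agent in use.
PERSONAL_DIR = Path.home() / ".claude" / "skills"   # engineer-managed layer
ENTERPRISE_DIR = Path("/etc/invoked/skills")        # assumed centrally managed layer

def resolve_skills(personal: Path, enterprise: Path) -> dict[str, Path]:
    """Merge the two layers by skill name; the Enterprise layer always wins."""
    skills: dict[str, Path] = {}
    if personal.is_dir():
        for entry in personal.iterdir():
            skills[entry.name] = entry      # engineer's own skills load first
    if enterprise.is_dir():
        for entry in enterprise.iterdir():
            skills[entry.name] = entry      # same name? Enterprise overrides it
    return skills
```

Because the Enterprise pass runs last, a centrally published `code-review` skill shadows any personal skill of the same name, which is what "can't be turned off" means mechanically.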
Bundles of skills become role plugins.
A Backend Engineer at $Company plugin contains your preferred web framework patterns, logging standard, database client conventions, the trigger for “any change to auth or billing requires security review,” the code review checklist as an active gate, and the on-call escalation matrix.
A Data Engineer plugin contains your warehouse conventions, dbt patterns, PII handling rules, the “no production table without a staging migration first” rule, and the metric definition standard.
A Platform Engineer plugin contains your IaC patterns, deployment promotion ladder, SLO templates, runbook structure, and change management flow.
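A role plugin is just a named bundle of skills provisioned together. A minimal sketch, using the backend example above; the `RolePlugin` class and the skill identifiers are illustrative, not a real manifest format:

```python
from dataclasses import dataclass, field

@dataclass
class RolePlugin:
    """A named bundle of skills that IT provisions as one unit."""
    role: str
    skills: list[str] = field(default_factory=list)

# Hypothetical bundle for the backend role described above.
BACKEND = RolePlugin(
    role="backend-engineer",
    skills=[
        "web-framework-patterns",
        "logging-standard",
        "db-client-conventions",
        "auth-billing-security-review-gate",
        "code-review-checklist",
        "oncall-escalation-matrix",
    ],
)
```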
A new hire shows up Monday. IT provisions their machine. The role plugin is already loaded. They open Claude Code. The AI knows how your company works before the engineer does.
What Invoked does for your platform team
Building the skills is the easy part. The hard part is the system around them — who can author, how they're approved, how they're distributed, how they're measured, how they're retired. Invoked operates that loop. Three layers.
Authoring
Your senior engineers shouldn't have to learn YAML to publish their judgment. The authoring studio captures methods directly from the people who own them, structures them as skills, and enforces your publishing standard at the moment of authorship — not after.
Governance
Three layers of enforcement on every skill before it reaches the Enterprise layer.
- Structural — the skill is well-formed.
- Evaluative — it passes the tests.
- Organizational — the right approvers signed off, the right scope is set, the audit trail is attached.
Nothing reaches your engineers' machines until all three clear.
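The three gates compose as an all-or-nothing check. A sketch under stated assumptions: the field names (`approvers`, `audit_trail`, `test_results`) and the `publishable` helper are invented for illustration, not Invoked's schema:

```python
from typing import Callable

Check = Callable[[dict], bool]

def structural(skill: dict) -> bool:
    # Well-formed: the required fields are present.
    return all(k in skill for k in ("name", "body", "version"))

def evaluative(skill: dict) -> bool:
    # Passes its tests: at least one test ran, and none failed (stubbed here).
    results = skill.get("test_results")
    return bool(results) and all(results)

def organizational(skill: dict) -> bool:
    # The right approvers signed off and the audit trail is attached.
    return bool(skill.get("approvers")) and bool(skill.get("audit_trail"))

GATES: list[Check] = [structural, evaluative, organizational]

def publishable(skill: dict) -> bool:
    """Nothing reaches the Enterprise layer until all three gates clear."""
    return all(gate(skill) for gate in GATES)
```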
Consumption
Skills are discovered online with no installation friction. They run offline with auto-sync. No runtime dependency on Invoked being up. Your engineers' AI works with or without a network connection.
What every skill carries
Your code review standard was a Confluence doc. Now it's a contract.
When you update the standard — say, you switch logging libraries — you don't send a Slack announcement and hope. You ship a new version of the skill. Tuesday morning, every engineer's AI follows the new standard. By Wednesday, the old pattern stops appearing in PRs. By Friday, an auditor can prove it happened.
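That rollout reduces to a version check each agent runs on sync: pull when the published version is ahead of the local one. A minimal sketch, assuming simple dotted version strings (the `needs_update` helper is illustrative):

```python
def needs_update(local_version: str, published_version: str) -> bool:
    """True when the central registry is ahead of this machine's copy."""
    parse = lambda v: tuple(int(part) for part in v.split("."))
    return parse(published_version) > parse(local_version)
```

Tuesday's publish bumps the registry version; every agent's next sync sees the gap and pulls, which is why the old pattern stops appearing fleet-wide within days.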
Start with a free exposure scan
Before you can govern it, you need to see it.
Every AI agent on your engineers' machines discovers skills from a standard path. Invoked reads those paths — nothing else. No source code, no repo permissions, no installation.
You get a map of every skill running across your engineering organisation: what it does, who built it, whether it's ever been reviewed, what risk surface it represents.
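In spirit, the scan is a read-only walk over the skill discovery paths. A sketch, not the product: the path list is an assumption (agents differ), and `SKILL.md` as the manifest name follows Claude Code's skill convention:

```python
from pathlib import Path

# Illustrative discovery paths -- the real set varies per agent.
SKILL_PATHS = [
    Path.home() / ".claude" / "skills",   # Claude Code personal skills
    Path.home() / ".cursor" / "skills",   # assumption: other agents follow suit
]

def scan(paths: list[Path] = SKILL_PATHS) -> list[dict]:
    """Inventory every skill folder on the machine -- reads nothing else."""
    found = []
    for root in paths:
        if not root.is_dir():
            continue
        for entry in sorted(root.iterdir()):
            if entry.is_dir():
                found.append({
                    "name": entry.name,
                    "source": str(root),
                    "has_manifest": (entry / "SKILL.md").is_file(),
                })
    return found
```

No source code, no repo permissions: the walk never descends past the skill folders themselves.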
Most platform teams are shocked by what they find.
What comes after the scan
The scan is the first step of the design partner path. If what we find together is meaningful, we run a 90-day paid pilot with one team — the one carrying the most agent risk.
Approval workflow. Federated authorship. Full invocation audit trail. Continuous monitoring. Live, with your data, in your environment.
Pilots shape the product. Your edge cases become our roadmap. Your compliance constraints become our defaults.