Frontier AI is a commodity. Company expertise is the moat.
Every AI agent in production right now is running on the same handful of frontier models. Sonnet, Opus, GPT, Gemini, Grok. Similar training data, the same general knowledge, the same internet baked into the weights. The model your agent uses is the model your competitor's agent uses.
That fact is doing more strategic work than most people building agents care to admit.
If the thinking power is the same, the differentiation isn't in the model. It's in what you put on top of the model. And the thing you put on top of the model — the thing that actually determines whether your agent triages a ticket the way your best support engineer would, or qualifies a deal the way your best account exec would, or reviews a contract the way your most senior counsel would — is your company's expertise.
That expertise is the moat. The agents are the distribution. The frontier model is plumbing.
This is what nobody on the demo circuit is saying out loud, so let me say it.
The model is not the product. Your judgment is.
Strip a typical enterprise AI agent down to its constituent parts and you get four things. A frontier model. A retrieval layer over your documents. A set of tools the agent can call. And a prompt that tells it what kind of work to do and how to do it.
Three of those four are commodities. The model is rented from a vendor. The retrieval layer is solved. The tools are exposed via APIs that have existed for a decade.
The fourth one is the only piece that's actually yours — the encoded judgment of how your company makes decisions. How your best sales operator qualifies. How your best engineer debugs. How your best CFO reads a board deck and finds what's missing in eleven seconds.
Today that judgment lives inside people's heads. It walks out at six. It leaves when they leave. The most valuable asset in your company has a two-week notice period.
The opportunity sitting in front of every operator right now is to extract that judgment from the heads it lives in and put it somewhere it can be invoked at scale by the agents you're already deploying. Not stored — invoked. Reproducibly. With audit. With governance.
That's what the next decade of enterprise AI is actually about. Not better models. Better encoded expertise.
The unit of encoded expertise already exists
Anthropic opened the agent skills standard in December 2025. Within weeks, OpenAI, Microsoft, Google, Atlassian, and Figma had shipped support. MCP won the runtime protocol war in parallel. Every major agent framework speaks both.
A skill, in this standard, is a small task-scoped instruction file. It tells an agent how to do one thing — the way your company wants it done. The way your best person would do it.
This sounds boring until you stop and think about what it actually is. It's the first time enterprise software has had a portable, vendor-neutral, machine-readable unit of human judgment. We've had documents. We've had wikis. We've had Notion pages titled "How we run discovery calls." None of those are invokable by an agent at runtime, in context, with reproducibility.
A skill is.
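Concretely, a skill in this standard is a directory containing a SKILL.md file: YAML frontmatter that tells the agent what the skill is for and when to load it, followed by the instructions themselves in plain markdown. The sketch below is illustrative only; the skill name and the qualification rules are invented, not taken from any real playbook.

```markdown
---
name: qualify-inbound-lead
description: Use when triaging a new inbound lead to decide whether it routes to sales or to nurture.
---

# Qualifying an inbound lead

1. Check the lead's domain against the existing customer list before anything else.
2. Route to nurture if headcount is under 50 or the contact has no budget authority.
3. If the lead names a competitor, escalate to the on-call AE the same day.
4. Log the decision and the reason in the CRM. Never qualify on title alone.
```

The frontmatter is what the agent reads to decide whether the skill applies; the body is what it follows once invoked. That split is what makes the file machine-loadable rather than just another wiki page.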
The implication — every company's expertise is about to become an asset class. Skills are the units. Companies that capture them early build the moat. Companies that don't end up running the same generic agents as their competitors, because that's what's left when the model is a commodity.
The chaos coming, and why it matters in 2026
The skill format being open is good news for adopters and a problem for the people who run the agents.
Snyk ran an open-source agent-skill scanner across major marketplaces last quarter. They found nearly 4,000 skills and uncovered credential theft, backdoor installation, and data exfiltration hidden in publicly available ones. Bitdefender published a deep dive showing malicious skills are being cloned and republished under slight name variations to look legitimate, then quietly executing obfuscated shell commands once installed. OWASP and NIST have both published guidance on the same surface area in the last 90 days.
Here's the structural picture. Every enterprise that takes agents seriously is about to be sitting on hundreds — eventually thousands — of skills. Some written in-house, some pulled from registries, some forked from open source. No version control beyond what individual teams improvise. No registry. No way to know which skills are in production. No way to know which one made which decision when an auditor or a regulator shows up to ask.
That's not a minor operational issue. That's the same governance gap enterprises faced with open-source code in 2010, before Sonatype and JFrog turned dependency management into a category. It's the same gap they faced with API access in 2015 before API gateways became table stakes. It's the same gap they faced with Kubernetes in 2018 before policy engines like OPA showed up.
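The mechanical core of the missing enforcement layer is small. One workable approach, sketched here in Python with entirely hypothetical names, is to pin every sanctioned skill to a content hash: a skill runs only if its exact reviewed text matches the registry, so a cloned or quietly edited copy fails the check.

```python
import hashlib

# Hypothetical registry: skill name -> sha256 digest of the reviewed SKILL.md text.
# In practice this would live in a signed, versioned store, not a dict.
SANCTIONED = {
    # Digest below is the sha256 of the empty string, used here as a stand-in.
    "qualify-inbound-lead": "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}

def skill_digest(text: str) -> str:
    """Content hash of a skill file; any edit, however small, changes the digest."""
    return hashlib.sha256(text.encode("utf-8")).hexdigest()

def is_sanctioned(name: str, text: str) -> bool:
    """A skill may run only if this exact content was reviewed and pinned."""
    return SANCTIONED.get(name) == skill_digest(text)
```

This is the same move dependency managers made with lockfiles: identity by content, not by name, which is precisely what defeats the look-alike republishing attacks described above.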
Every time this pattern has played out, the same thing has happened. A standard emerges. Adoption races ahead of governance. A 12-month chaos window opens. The companies that build the enforcement layer during the chaos window become category-defining infrastructure.
JFrog launched an Agent Skills Registry with NVIDIA at GTC on March 16, 2026. When public-market incumbents commit their roadmap to a category, the category is real.
We are inside the chaos window now.
What the next 18 months actually look like
Three things are going to happen in sequence, and you can already watch the first one starting.
One — the publishing surface. Every enterprise will have a place where domain experts publish skills. Not engineers. Sales operators publish how they qualify. Counsel publishes how contracts get reviewed. CFOs publish what to look for in a vendor MSA. The publishing surface becomes the front door for company expertise the way Confluence became the front door for company knowledge. The difference is that this front door is read by agents, not humans, and it has consequences when the skill is wrong.
Two — the governance layer. Once the publishing surface exists, the skill catalog goes from dozens to thousands inside 12 months. That's the point at which somebody — security, compliance, eng leadership — has to own the question of which skills are sanctioned, which are shadow, which one made the decision when the agent did the thing. The skill registry needs scoring, policy, audit, trajectory capture. This is the layer enterprises will buy, hard, in 2026 and 2027. Same urgency curve as Snyk in 2018, JFrog in 2014, OPA in 2019.
Three — the trajectory log becomes regulator currency. Once agents are making decisions on contracts, claims, money, every regulator on the planet is going to start asking the same question — show me how this decision was reached. Not "show me the model." Show me which skill got invoked, what context it had, what reasoning trajectory the agent followed, who authored that skill, when, and what version of it ran. The companies that own that trajectory log per customer, in their environment, on their data, own the moat that gets cited in the regulatory-compliance section of every enterprise RFP from 2027 onward.
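What a single entry in that trajectory log might contain can be sketched in a few lines. This is an assumption about shape, not a standard; every field name here is hypothetical, but the principle is the one above: record which skill ran, the exact content that ran, and what came out.

```python
import hashlib
from datetime import datetime, timezone

def trajectory_record(skill_name, skill_version, skill_text, agent_id, decision, context_refs):
    """One audit entry: which skill was invoked, exactly what it said, and the outcome."""
    return {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "agent": agent_id,
        "skill": skill_name,
        "skill_version": skill_version,
        # Content digest pins the entry to the exact text that ran, not just a version label.
        "skill_digest": hashlib.sha256(skill_text.encode("utf-8")).hexdigest(),
        "context": context_refs,  # pointers to retrieved documents, not copies of them
        "decision": decision,
    }
```

An entry like this answers the regulator's question directly: which skill, which version, which exact bytes, acting on which context, producing which decision.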
If you're building agents in production right now, the question to ask yourself isn't which model do I use. It's who in my company is publishing the skills, where, and who's accountable when one of them makes a $400k decision the wrong way. If you don't have an answer, you're inside the chaos window without a plan to get out of it.
What we're building, and what we're looking for
I'm building Invoked because the layer above is missing and someone has to ship it. We work with enterprises in the 200–2,000 employee range that already have agents in production making decisions that matter. The first design partner cohort opened this quarter.
If your company is hiring for "AI infrastructure," "agent platform," or "agent reliability," and you've started hitting the governance wall — published skills with no version control, no audit, no idea which one is in production — I want to talk to you. Not as a vendor. As someone working through the same problem from the other side.
The conversation comes first. The pitch comes later. We're shaping the product around the design partners we work with for the first 90 days.
The thesis I'm betting the next decade on — frontier AI is a commodity, and company expertise is the moat. The companies that figure out how to capture, govern, and invoke their own expertise are the ones that win the agent era. The ones that don't, end up with agents that look exactly like everybody else's.
The window to start is right now.