Manifesto
The model isn't the bottleneck.
The knowledge is.
For two years, the conversation was about model quality. Each release closed the gap on what AI could do in principle. Now anyone deploying agents into a real company workflow runs into the same wall, in the same week:
The model is brilliant, and it has no idea how your company works.
It doesn't know that the refund cap was lifted last month for Enterprise customers. It doesn't know the AE discount matrix allows 25% off deals under 250 seats. It doesn't know that restarting payment pods once double-charged forty customers and that's why we don't do it anymore. None of that is in the model. None of that is in any vector database. It's in two-year-old Slack threads, in a Notion doc that was last edited eighteen months ago, in a post-mortem nobody re-reads, and in one engineer's head.
Tribal knowledge is the operating system of every company. It runs silently in the background until you try to automate something against it. Then you find it everywhere, and it's incoherent.
What we're building
A company brain that pulls knowledge out of Slack, Notion, Google Drive, GitHub, Intercom, and Linear; structures it into machine-checkable facts with provenance; and synthesizes it into executable skills that AI agents actually run. Not embeddings. Not RAG over docs. Skills — the same primitive Anthropic ships with Claude — but generated from your company's own history.
A SKILL.md isn't a chatbot pretending to know your policies. It's a versioned, reviewable, auditable playbook that an agent loads when relevant and runs deterministically. It encodes the hard-won lessons that would otherwise live in someone's head until they leave.
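A minimal sketch of what such a playbook might look like, using the payment-pod story from above. The skill name, frontmatter fields, and step details are invented for illustration, not a real schema:

```markdown
---
name: payment-pod-incidents
description: Handle payment pod failures without triggering double charges
---

# Payment pod incidents

## Hard rule
Never restart payment pods directly. A past restart double-charged forty
customers (see linked post-mortem).

## Instead
1. Drain traffic from the affected pod.
2. Wait for in-flight charges to settle.
3. Recreate the pod only once the queue is empty.

Sources: #payments-incidents Slack thread, post-mortem doc.
```

Because it's a plain versioned file, it can go through code review like anything else an agent is trusted to run.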
What we believe
- Provenance, not vibes. Every fact in the brain is traceable to the Slack message, doc, or commit it came from. Hallucinations aren't a model problem — they're a sourcing problem.
- Hard rules survive humans. The reason “don't restart payment pods” is a hard rule is that someone learned it the hard way. That lesson should outlive that person's tenure. It should be a skill.
- Living, not snapshotted. The brain re-extracts when the underlying sources change. Stale policy is worse than no policy. We monitor the sources directly and republish skills.
- Action, not retrieval. A pile of documents an agent could quote isn't a brain. Skills tell agents what to do, not just what is true.
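The first and third beliefs can be sketched as one data shape: a fact that carries its source and knows when it has gone stale. This is a hypothetical illustration, not our actual schema; the field names and example values are invented.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

# Hypothetical sketch of one "fact" in the brain, carrying its provenance.
@dataclass(frozen=True)
class Fact:
    claim: str             # the machine-checkable statement
    source_url: str        # link back to the Slack message, doc, or commit
    source_type: str       # "slack" | "notion" | "github" | ...
    observed_at: datetime  # when we last saw the source in this state

    def is_stale(self, source_last_modified: datetime) -> bool:
        # If the underlying source changed after extraction, the fact
        # must be re-extracted before any skill relies on it.
        return source_last_modified > self.observed_at

fact = Fact(
    claim="Enterprise refund cap lifted",
    source_url="https://example.slack.com/archives/C123/p456",
    source_type="slack",
    observed_at=datetime(2025, 1, 10, tzinfo=timezone.utc),
)
print(fact.is_stale(datetime(2025, 3, 1, tzinfo=timezone.utc)))  # True: re-extract
```

The point of the shape is that a fact without a `source_url` simply cannot be constructed, and a stale fact is detectable rather than silently trusted.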
Why now
Three things became true in the same six months. Models got good enough to take real action. Anthropic shipped Skills as a real primitive. And every company that put an agent into production hit the knowledge wall and started looking for what we're building.
Tom Blomfield put it on YC's 2026 RFS:
“The biggest blocker to AI automation of companies is no longer the models. Now the blocker is the domain knowledge. We need a new primitive: a company brain. Every company in the world is going to need one.”
He's right. We're building it.