Introducing the Fabric Developer Platform

Jonathan Bree


The infrastructure layer for agents that actually remember.

Every knowledge worker has a desk. A filing cabinet. A notebook. A memory of what happened last week and why it matters today. They accumulate context over time, and that context is what makes them useful.

Agents don't have any of this. They show up, do a thing, and disappear. Next session, they start from zero. They don't know what they said yesterday, what the user cares about, or what they've already tried. Every interaction is a first interaction.

The current workaround is to shove everything into a vector database and hope the right chunks come back. Developers end up stitching together conversation logs, object storage, embedding pipelines, and retrieval stacks. It works, kind of, until you need your agent to actually understand that a user changed their mind, or find a document it created last Tuesday, or pick up a task where it left off.

This is a missing layer of the stack. Not a smarter model. Not a better prompt. Infrastructure.

Today we're launching the Fabric Developer Platform. Memory, storage, and search for agents. The foundation for AI that persists.


What we're building

Fabric is a knowledge platform that people love. Hundreds of thousands of users store their notes, files, bookmarks, and research in Fabric, and that number is growing fast. The infrastructure behind it handles semantic search, self-organizing memory, and context retrieval at scale.

Now we're opening that same infrastructure to developers.

The Developer Platform gives your agents what our users already have: a workspace with memory, files, and semantic search. A place to accumulate knowledge, store work, and build context over time. Except you create and control these workspaces programmatically, through an API.

• • •

We're launching with three primitives today:

Memory API. Store facts, preferences, events. Search by meaning. Retrieve context formatted and ready for your prompt. Fabric Memory doesn't just store and retrieve. It tracks relationships, resolves contradictions, and updates as new information comes in. If a user says "I love Adidas" in January and "I'm switching to Nike" in March, a vector database might still surface that first statement. Fabric knows the relationship changed.
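The Adidas/Nike example can be sketched as a toy supersession rule: key statements by subject, let the newest one win, and keep the older ones as history rather than letting them compete at retrieval time. This is an illustrative sketch of the idea, not the Fabric Memory implementation; the `ToyMemory` class and its methods are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class ToyMemory:
    """Toy sketch: the latest statement per subject wins; history is kept."""
    facts: dict = field(default_factory=dict)    # subject -> current statement
    history: dict = field(default_factory=dict)  # subject -> superseded statements

    def write(self, subject: str, statement: str) -> None:
        if subject in self.facts:
            # A newer statement supersedes the old one instead of coexisting with it.
            self.history.setdefault(subject, []).append(self.facts[subject])
        self.facts[subject] = statement

    def context(self, subject: str) -> str:
        return self.facts.get(subject, "")

mem = ToyMemory()
mem.write("shoes", "I love Adidas")          # January
mem.write("shoes", "I'm switching to Nike")  # March
print(mem.context("shoes"))  # only the current preference surfaces
```

A plain vector store would happily return both statements ranked by similarity; the point of the sketch is that a memory layer models the *relationship* between them.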

Resources API. Files, notes, bookmarks, documents. Full CRUD, semantic search, folders, tags. Everything your agent creates or references gets persisted, indexed, and made findable by description, not just filename. Think of it as a filesystem that understands what's in it.
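"Findable by description, not just filename" can be illustrated with a toy scorer: rank stored resources by word overlap with a free-text query. Real semantic search uses embeddings; the `find` function and sample resources below are hypothetical stand-ins for the idea, not the Resources API.

```python
# Toy sketch: match resources by what their descriptions say,
# not by their filenames. Word overlap stands in for embeddings.

def tokenize(text: str) -> set:
    return set(text.lower().split())

resources = [
    {"name": "q3.pdf", "description": "quarterly revenue report with charts"},
    {"name": "notes.md", "description": "meeting notes about hiring plans"},
]

def find(query: str) -> str:
    q = tokenize(query)
    best = max(resources, key=lambda r: len(q & tokenize(r["description"])))
    return best["name"]

print(find("the revenue report"))  # q3.pdf, despite the opaque filename
```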

Workspaces API. Each workspace is a full, isolated environment with its own memory, files, and search. Created programmatically. Controlled by your app. You're giving your agent a home. You can audit any workspace through the Fabric UI and see exactly what was stored, created, or modified. Debug by looking, not by logging.
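Isolation is the key property: two workspaces share nothing unless your app copies state between them. The sketch below models that with plain Python objects; `ToyWorkspace` and `create_workspace` are hypothetical, and in practice creation would be an API call rather than a constructor.

```python
from dataclasses import dataclass, field

@dataclass
class ToyWorkspace:
    """Toy sketch of an isolated workspace: its own memory and files."""
    name: str
    memory: list = field(default_factory=list)
    files: dict = field(default_factory=dict)

def create_workspace(name: str) -> ToyWorkspace:
    # In Fabric this would be an API call; here it's just a constructor.
    return ToyWorkspace(name)

agent_a = create_workspace("support-agent")
agent_b = create_workspace("research-agent")
agent_a.memory.append("user prefers email follow-ups")
agent_a.files["draft.md"] = "# Reply draft"

# Isolation: agent_b sees none of agent_a's state.
print(len(agent_b.memory), len(agent_b.files))  # 0 0
```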


Memory is infrastructure

Most people think of memory as a feature. We think it's a layer.

Fabric Memory is self-organizing. You feed it information and it figures out the structure. Relationships between facts get tracked automatically. Contradictions get resolved as new information comes in. Context stays fresh without you managing it. You don't need to think about embeddings, chunking strategies, or retrieval pipelines. You just write memories and read context.

We're pushing the frontier with memory infrastructure that improves with use, turning stateless models into self-improving agents. In November 2025, Cursor found that semantic search improved agent accuracy by more than 12.5%. But the whole point is that you shouldn't have to care how it works. It just does.


Scale-ready

Fabric already operates at scale: over 20 million files and memories stored, with sub-300ms retrieval. The Developer Platform runs on the same production infrastructure that serves our consumer platform, with enterprise-grade encryption and security.


Built for every agent stack

Any model. Any framework. Any deployment. Fabric gives your agent memory and storage regardless of how you build or where you run.

Python and JavaScript SDKs. REST API. CLI that works with Claude Code, shell scripts, or anything with a terminal. MCP support for Claude Desktop, Cursor, Windsurf, Continue, and whatever comes next.
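For a sense of what calling the REST API directly might look like, here is a request-construction sketch. The endpoint, host, payload fields, and auth header shape are assumptions for illustration, not the documented Fabric API; the request is built but deliberately never sent.

```python
import json
import urllib.request

# Hypothetical endpoint and payload shape -- illustrative only.
API = "https://api.example.com/v1/workspaces/ws_123/memories"

payload = json.dumps({"content": "User prefers dark mode"}).encode()
req = urllib.request.Request(
    API,
    data=payload,
    headers={
        "Authorization": "Bearer <token>",
        "Content-Type": "application/json",
    },
    method="POST",
)

# We only build the request in this sketch; sending it would need real credentials.
print(req.get_method(), req.full_url)
```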


What comes next

Memory and storage are the starting point. But the opportunity is much bigger.

The cloud was built for applications. Stateless, request-driven, optimized for serving. Agents need something fundamentally different. They need to persist. They need to accumulate context. They need to work across sessions, manage their own state, and operate with increasing autonomy over time.

As that shift plays out, agents will need more of the services that knowledge workers already take for granted. Checkpointing and version control for their state. The ability to receive information asynchronously. Workspaces that self-enrich. Sub-agents that spin up their own environments.

We're building toward all of this. Snapshots, write-back to user knowledge bases, and new primitives we haven't announced yet.

We think the infrastructure stack for agents is going to be as large and as important as the cloud infrastructure stack was for applications. Memory is the first layer. We intend to build many more.


Get started

The Developer Platform is in beta. It's waitlist-only while we scale up. Sign up and we'll get you in.


Get started free →

Read the docs →

Unlock productivity superpowers.
The AI workspace that thinks with you.
