
Comparing the AI implementations of Cosmic, Sanity, and Hygraph

Ikesan

I read a comparison article on DEV Community.
Rather than lumping Cosmic, Sanity, and Hygraph into the vague bucket of “AI-powered CMS,” it separated which layer AI operates on for each product.

The original article was written by Cosmic’s CEO, so the comparison leans toward Cosmic.
Hygraph is treated as having “no dedicated AI agent product,” but the official docs describe Hygraph AI Agents as an Early Access enterprise feature.
Better to cross-reference with official sources rather than taking the comparison at face value.

CMS AI adoption is splitting in three directions

What’s interesting about this comparison is that the three companies aren’t heading the same way.

| Product | Primary AI layer | Closest analogy |
|---|---|---|
| Cosmic | CMS, code, browser, chat | An AI team member that handles everything from content creation to PR submission |
| Sanity | Bulk editing within CMS, plus MCP context delivery for external agents | Safely searching, editing, and distributing existing content |
| Hygraph | Translation, summarization, SEO, and editing assistance within workflows | Automating routine tasks with editor approval gates |

“The CMS has AI now” tells you nothing.
Whether AI touches the code repository, only reads CMS documents, or operates as one step in an editorial workflow changes what you need to verify during adoption.

This connects to what I wrote in ACF 6.8 turns WordPress into an AI agent operation target.
ACF 6.8 was about making WordPress capabilities discoverable and executable by AI through the Abilities API and MCP Adapter.
The three-way comparison here shows how headless CMSes approach the same problem.

Cosmic reaches beyond the CMS

Cosmic has the most aggressive implementation.
The official site and CLI docs put Team Agents, Content Agents, Code Agents, Computer Use Agents, and Workflows front and center.

Looking at the Cosmic CLI documentation, the agent types include content, repository, and computer_use, with repository having code and repo as aliases.
The design covers branch creation, code changes, and PR submission against GitHub repositories.
This is less editorial assistance and more of a development operations platform connected to the CMS.
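The alias handling described in the CLI docs can be sketched as a small normalizer. The type names and aliases come from the documentation cited above; the function itself is a hypothetical illustration, not Cosmic's code.

```typescript
// Agent types per the Cosmic CLI docs: content, repository, computer_use,
// with "code" and "repo" as aliases for repository.
// This normalizer is a hypothetical sketch, not part of the Cosmic CLI.
type AgentType = "content" | "repository" | "computer_use";

const ALIASES: Record<string, AgentType> = {
  content: "content",
  repository: "repository",
  code: "repository", // alias
  repo: "repository", // alias
  computer_use: "computer_use",
};

function normalizeAgentType(input: string): AgentType {
  const resolved = ALIASES[input.toLowerCase()];
  if (!resolved) {
    throw new Error(`Unknown agent type: ${input}`);
  }
  return resolved;
}
```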

The AI Agents Reorg on April 27, 2026 moved Agents, Workflows, and Conversations from Bucket scope to Project scope.
This makes it easier to run something like “a marketing agent that works across staging and production Buckets.”
Since this change dropped the same day as the original article, Cosmic’s comparison surface is moving fast.

```mermaid
graph TD
    A["Chat<br/>Slack etc."] --> B["Team Agent"]
    B --> C["Content Agent<br/>CMS content"]
    B --> D["Code Agent<br/>GitHub PR"]
    B --> E["Computer Use Agent<br/>Browser automation"]
    C --> F["Workflow"]
    D --> F
    E --> F
```

The trade-off is that permission design gets heavy.
CMS permissions alone aren’t enough. You need to decide where human approval gates go across GitHub, chat, browser sessions, external APIs, and deploy permissions.

As I wrote in Cloudflare beta-launches EmDash, a serverless CMS as a WordPress successor, once a CMS exposes MCP and agent endpoints, it stops being just a publishing admin panel.
Cosmic is going all in on that direction.

Sanity hands context to external agents

Unlike Cosmic, Sanity doesn’t try to own everything through its own agents.
Agent Context is a mechanism for letting external AI agents read Sanity content via MCP.

Agent Context MCP is read-only, providing scoped access to a single dataset.
The exposed tools are limited to three: initial_context, groq_query, and schema_explorer.
AI agents understand the Sanity schema, query via GROQ, and use semantic search when needed.
The focus isn’t “agents rewriting CMS content on their own” but “delivering correct content context to production search, support, and recommendation agents.”
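On the wire, an external agent would invoke one of these tools through a standard MCP `tools/call` request. The tool name `groq_query` comes from the docs; the JSON-RPC envelope follows the MCP spec, while the `query` argument key and the GROQ string are assumptions for illustration, not Sanity's documented schema.

```typescript
// A minimal MCP tools/call request targeting Agent Context's groq_query tool.
// The envelope shape follows the MCP spec (JSON-RPC 2.0); the argument
// shape ("query") and the GROQ string are illustrative assumptions.
const request = {
  jsonrpc: "2.0" as const,
  id: 1,
  method: "tools/call",
  params: {
    name: "groq_query",
    arguments: {
      // Hypothetical GROQ: titles of the ten newest posts
      query: '*[_type == "post"] | order(_createdAt desc)[0...10]{ title }',
    },
  },
};

const payload = JSON.stringify(request);
```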

The Sanity MCP server is a separate thing.
It’s an MCP server for operating Sanity workspaces from development tools like Claude Code and Cursor, handling queries, release management, schema deploy, and document patches.
Sanity separates read-only context for production users from workspace operations for developers.

Content Agent handles bulk editing, auditing, translation, and image editing within the CMS.
This is where it shines for editorial teams with large existing content libraries.
Rather than pushing code PRs like Cosmic, it’s designed to run large-scale editorial operations within Sanity’s Content Lake and Studio.

Hygraph leans into “the human editorial process”

The original article characterized Hygraph as having “AI features, not AI agents.”
That framing is outdated based on current official information.

Hygraph AI & Automation separates AI Assist and AI Agents.
AI Assist is editorial help for generation, translation, and refinement within Studio.
AI Agents are functions that run automatically within publishing workflows, such as Translation Agent, Summarization Agent, and SEO Agent.
In the docs, AI Agents are offered as Early Access for enterprise customers.

That said, Hygraph’s agents aren’t the “write code and submit PRs” kind like Cosmic’s.
Their scope stays within content models and workflows.
The emphasis is on permissions, audit logs, editor approval, and schema integrity.

Hygraph’s strength lies in its combination with Federation.
With a design that bundles multiple content sources through a GraphQL API, AI features naturally extend toward “handling distributed content in a single editorial workflow.”
For teams doing global rollouts or multilingual operations, automating translation, summarization, and SEO delivers more value than code execution.

Look at permission boundaries, not comparison tables

The original article’s side-by-side table is convenient, but for adoption decisions, boundaries matter more than feature counts.

| Boundary | Cosmic | Sanity | Hygraph |
|---|---|---|---|
| CMS content generation/update | Yes | Yes, via Content Agent | Yes, via AI Assist and Agents |
| MCP context for external agents | MCP server available | Agent Context and MCP server separated | No dedicated MCP product front and center |
| Code repository operations | Code Agent | Not a primary feature | Not a primary feature |
| Browser automation | Computer Use Agent | Not a primary feature | Not a primary feature |
| Editor approval/governance | Depends on workflow | Studio and permission controls | Workflows, permissions, audit logs |

If you want to delegate down to code, look at Cosmic.
If you want to safely feed CMS context to external AI apps, look at Sanity Agent Context.
If you want to automate translation, summarization, and SEO within editorial workflows, look at Hygraph.

Even under the same “AI CMS” label, the danger zones differ.
For Cosmic, check repository and deploy permissions. For Sanity, check dataset access scope via MCP. For Hygraph, check which fields AI can modify in the workflow and at which approval stage.

Don’t take the original article’s conclusion at face value

The original article has a strong Cosmic perspective.
It's useful for understanding Cosmic's "AI team member" approach, but its treatment of Hygraph doesn't match the current official description.
Rather than adopting the article’s conclusion wholesale, it makes more sense to extract which layer each vendor places AI in.

Markdown + Git + CLI wrapper as an alternative

The three-way comparison is about running AI on top of SaaS headless CMSes, but the same problem arises without a CMS platform.

This blog runs on Astro + Markdown + Git with no headless CMS in between.
Content is Markdown files, the frontmatter schema is defined in Zod, and validation happens at build time.
AI touches the filesystem and Git repository directly.

In kana-chat v2, jobs are dispatched to Claude Code via a CLI wrapper to generate articles, with a validation gate after completion.
Required frontmatter fields, section structure, and image placement are checked programmatically, and errors trigger notifications.
This is the same kind of thing Hygraph does with its “translation → SEO → approval” workflow.
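A stripped-down version of that gate might look like the following. The real setup validates frontmatter with Zod; this sketch hand-rolls the checks to stay dependency-free, and the field names (`title`, `description`) and rules are hypothetical stand-ins for what kana-chat actually checks.

```typescript
// Simplified post-generation validation gate: extract the frontmatter block,
// then check required fields and section structure. Field names and rules
// are illustrative assumptions, not kana-chat's actual schema.
function validateArticle(markdown: string): string[] {
  const errors: string[] = [];

  // Extract the frontmatter between the leading "---" fences.
  const match = markdown.match(/^---\n([\s\S]*?)\n---/);
  if (!match) {
    return ["missing frontmatter block"];
  }

  // Naive key extraction; real code would use a YAML parser plus a Zod schema.
  const keys = new Set(
    match[1]
      .split("\n")
      .map((line) => line.split(":")[0]?.trim())
      .filter(Boolean),
  );
  for (const required of ["title", "description"]) {
    if (!keys.has(required)) errors.push(`missing frontmatter field: ${required}`);
  }

  // Section structure: require at least one "## " heading in the body.
  const body = markdown.slice(match[0].length);
  if (!/^## /m.test(body)) errors.push("no section headings found");

  return errors; // non-empty => fail the gate and send a notification
}
```

A passing article returns an empty array; anything else would trigger the notification path described above.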

The difference shows up in where the schema lives and where the permission boundary sits. Hygraph holds the content model inside the platform.
On this side, rules are scattered across CLAUDE.md, Zod schemas, and template files, and the AI agent reads the files to understand the rules.
Sanity's Agent Context, which delivers schemas and content to external agents via MCP, and CLAUDE.md, which writes the rules directly in plain text, differ in interface shape but aim at the same thing.

Permissions overlap too.
Cosmic goes the direction of “letting AI submit GitHub PRs,” while kana-chat takes the opposite approach of blocking destructive operations through tool approval gates.
Whether it’s CMS permission controls or CLI wrapper tool gates, the question is the same: where do humans inspect AI output.

WUPHF’s LLM wiki extended this direction to team scale.
Markdown is the source of truth tracked via Git history, and the search index is layered on top and rebuilt if broken.
Where headless CMS AI runs on top of the platform’s database, API, and workflows, the Markdown + Git approach solves the same problem by connecting AI to the filesystem and version control.