
uv for Claude SDK Python agents: pyproject.toml + uv.lock as the stable entry

Ikesan

For small Claude SDK and MCP Python experiments that get rebuilt over and over, uv init, uv add, uv run, and uv.lock win on reproducibility, not just speed.
A recent DEV Community post about spinning up a Claude SDK project in under one second with uv 0.11.11 framed the story around those seconds.
My local Homebrew copy is uv 0.9.21, a different version from the original’s uv 0.11.11, so this post focuses on the operational side rather than re-running the benchmark.

Treat it as a standard entry for experiments, not just a fast pip replacement

In the original measurements, uv init claude-agent-demo took 0.435 seconds, uv add anthropic took 0.874 seconds, and a cached uv sync took 0.074 seconds.
Looking only at those numbers, the takeaway stops at “faster than pip”, but for small Claude SDK experiments the real value lies elsewhere.

AI agent experiments start with the code you want to try; environment setup tends to be done sloppily every time.
If you type out python -m venv .venv, activate, pip install anthropic python-dotenv by hand each time, the environment drifts between branches, between validation runs, and between machines.
When you’re watching Claude API behavior in a drifted environment, it gets harder to tell whether you’re looking at a code issue or a dependency issue.

uv init creates a pyproject.toml, and uv add records dependencies into it.
Astral’s official docs say application projects are created with main.py, pyproject.toml, README.md, and .python-version.
The original’s tree output didn’t include .python-version, but at least in current uv, pinning the Python version is part of the same flow.

uv init claude-agent-demo
cd claude-agent-demo
uv add anthropic python-dotenv
uv run main.py
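After uv add, the recorded state in pyproject.toml looks roughly like this. The version pins and requires-python line below are illustrative; exact values depend on your uv and Python versions at the time you run the commands:

```toml
[project]
name = "claude-agent-demo"
version = "0.1.0"
requires-python = ">=3.12"
dependencies = [
    "anthropic>=0.40.0",
    "python-dotenv>=1.0.0",
]
```

Because the dependency list lives in a file rather than in someone’s shell history, the next uv run or uv sync starts from the same declared state.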

These four lines become the team’s standard “entry for trying anything with the Claude SDK.”
If you’re thinking about Claude Code operations, this becomes the Python-environment-side foundation for the “give context” and “verify” parts of the earlier Claude Code Best Practices and Agent SDK official guide summary.

uv run takes virtual environment state out of the conversation

The minimal Claude SDK code, as shown in Anthropic’s Client SDKs docs, is just calling messages.create from the Python client.
The hard part isn’t the SDK call itself; it’s lining up which Python, which dependencies, and which environment variables get loaded, every single time.

import os

import anthropic
from dotenv import load_dotenv

load_dotenv()  # loads variables from .env (e.g. ANTHROPIC_API_KEY) into os.environ

client = anthropic.Anthropic(
    api_key=os.environ["ANTHROPIC_API_KEY"],
)

message = client.messages.create(
    model=os.environ["ANTHROPIC_MODEL"],
    max_tokens=256,
    messages=[
        {"role": "user", "content": "Explain in one sentence why to use uv"}
    ],
)

print(message.content[0].text)
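The two environment variables the script reads come from a .env file next to main.py. A minimal sketch, with placeholder values; the model ID is just an example, substitute whatever model you are actually testing:

```
# placeholders — substitute your real key and a valid model ID
ANTHROPIC_API_KEY=sk-ant-...
ANTHROPIC_MODEL=claude-sonnet-4-20250514
```

Keeping .env out of git (and out of chat logs) while keeping pyproject.toml and uv.lock in git is the split that makes the setup shareable.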

If you launch it with uv run main.py, you don’t have to make “remember to activate” part of the conversation premise.
Astral’s locking and syncing docs explain that uv run verifies the lock and syncs the environment before running.
Once you tell an agent “in this repo, run Python scripts with uv run,” you cut down on accidents where the system Python’s old SDK gets used by mistake.
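That one-line convention is worth writing down in the repo rather than repeating in chat. A minimal note for a CLAUDE.md or similar instructions file might look like this; the wording is mine, not a required format:

```markdown
## Running code
- Run Python scripts with `uv run <script>.py`, never bare `python`.
- Add dependencies with `uv add <package>`; do not `pip install`.
```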

This is close to the BlenderMCP setup that launches MCP servers via uvx.
In MCP, uvx blender-mcp is the entry for “launch this tool right here.”
In small Claude SDK validations, uv run is the entry for “run with this project’s dependencies.”

uv.lock is the project state you can hand to an agent

When you delegate Python experiments to Claude Code or Codex, what survives as files is stronger than what survives in chat logs.
The earlier Claude Code session management post made the case for committing to git rather than relying on sessions.
uv’s lockfile aligns nicely with that approach.

If you commit uv.lock, other machines can restore the same dependencies with uv sync.
The lockfile isn’t a description; it’s the resolved dependency graph itself.
Even if an agent loses context partway through, as long as pyproject.toml, uv.lock, and the run command are still there, at least the environment restoration can be redone from scratch.

This gap matters most in autonomous experiment setups.
Karpathy’s Autoresearch post also used uv sync, uv run prepare.py, uv run train.py as the entry.
That’s a GPU-using ML experiment so the weight is different from the small Claude SDK case here, but “humans or agents can resume the experiment with the same commands” is the same principle.

Don’t try to replace conda territory

The original article also said this: uv is PyPI-centric, so it’s not meant to replace heavy ML environments that hinge on CUDA, torch, tensorflow, or conda-channel-native dependencies.
In domains like Claude SDK experiments hitting an API, small FastAPI proxies, MCP server trials, and CLI tools, the fit is excellent.

Conversely, if your experiment involves GPU drivers, the CUDA toolkit, or conda-forge native dependencies, leaning purely on uv will get stuck in a different way.
In that case, either keep just the Python application layer inside uv, or manage the whole thing as a conda environment from the start; that division reads more cleanly.

What uv is good at is making the “I worked hard on the Python environment” feeling disappear.
For trying a single file with the Claude SDK, don’t treat environment setup as a task. Just make uv init, uv add, uv run, and uv sync your regular execution routine.