2026-03-04
4 min read
Wire docvet's MCP server into VS Code, Cursor, or Claude Code — your AI coding agent gets structured docstring quality checks without parsing CLI output.
Your AI coding agent can read your code, run your tests, and search your repo. But can it check whether your docstrings actually match what the code does?
In a previous post, I made the case that stale docstrings actively mislead AI agents — in controlled tests, incorrect documentation drops LLM task success by 22.6 percentage points. Agents given wrong docstrings performed worse than agents given no docstrings at all. docvet was built to catch those gaps: 19 rules that check whether your docstrings actually match what the code does.
Since v1.8, docvet ships an MCP server. That means any MCP-aware editor — VS Code, Cursor, Claude Code, Windsurf, Claude Desktop — can give its AI agent direct access to docstring quality checks. Structured JSON, not CLI text parsing. One config block, zero install.
Two tools appear in the agent's toolbox:
docvet_check — Run checks on any Python file or directory. The agent receives structured findings:
{
  "findings": [
    {
      "file": "src/pipeline/extract.py",
      "line": 42,
      "symbol": "extract_text",
      "rule": "missing-raises",
      "message": "Function 'extract_text' raises ValueError but has no Raises section",
      "category": "required"
    }
  ],
  "summary": {
    "total": 3,
    "by_category": {"required": 2, "recommended": 1},
    "files_checked": 8
  }
}
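As an illustration of what an agent can do with that shape, here is a minimal sketch (plain Python, not part of docvet; the sample findings are hypothetical but follow the structure above) that sorts findings so required-category issues come first:

```python
import json

# Sample docvet_check response, matching the structure shown above.
response = json.loads("""
{
  "findings": [
    {"file": "src/utils/cache.py", "line": 15, "symbol": "get",
     "rule": "stale-signature",
     "message": "Docstring parameters don't match function signature",
     "category": "recommended"},
    {"file": "src/pipeline/extract.py", "line": 42, "symbol": "extract_text",
     "rule": "missing-raises",
     "message": "Function 'extract_text' raises ValueError but has no Raises section",
     "category": "required"}
  ],
  "summary": {"total": 2, "by_category": {"required": 1, "recommended": 1},
              "files_checked": 2}
}
""")

# Typed fields mean the agent can sort and filter directly:
# required findings first, then by file and line.
priority = {"required": 0, "recommended": 1}
ordered = sorted(
    response["findings"],
    key=lambda f: (priority.get(f["category"], 2), f["file"], f["line"]),
)

for f in ordered:
    print(f'{f["category"]}: {f["file"]}:{f["line"]} {f["rule"]}')
```

No string parsing anywhere: the category, file, and line fields are addressed by name.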
docvet_rules — List all 19 rules with descriptions and categories. Useful when the agent needs to explain a finding or decide what to fix first.
No CLI output to parse. No regex. The agent gets typed fields it can reason about directly.
Without MCP, your agent shells out to docvet check and has to make sense of this:
src/pipeline/extract.py:42: missing-raises - Function 'extract_text' raises ValueError but has no Raises section
src/pipeline/extract.py:87: missing-param - Parameter 'timeout' not documented
src/utils/cache.py:15: stale-signature - Docstring parameters don't match function signature
That works, but the agent is regex-matching strings instead of reasoning about typed data. With MCP, it gets the structured JSON shown above — fields it can filter, sort, and act on directly.
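To make the contrast concrete, here is roughly what that string-parsing looks like (a hypothetical sketch; the exact CLI line format is assumed from the sample output above and may differ):

```python
import re

# Sample CLI output, as shown above.
cli_output = """\
src/pipeline/extract.py:42: missing-raises - Function 'extract_text' raises ValueError but has no Raises section
src/pipeline/extract.py:87: missing-param - Parameter 'timeout' not documented
src/utils/cache.py:15: stale-signature - Docstring parameters don't match function signature
"""

# The agent has to guess a format: path:line: rule - message.
# Any change to the output (wrapping, colors, new fields) silently breaks this.
pattern = re.compile(r"^(?P<file>[^:]+):(?P<line>\d+): (?P<rule>[\w-]+) - (?P<message>.+)$")

findings = [m.groupdict() for line in cli_output.splitlines()
            if (m := pattern.match(line))]

print(len(findings), findings[0]["rule"])
```

With MCP, the same fields arrive as typed JSON and there is no pattern to maintain.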
The MCP server runs on stdio via uvx — no pip install in your project, no virtual environment pollution, no global packages. uvx downloads and runs docvet in an isolated environment automatically. You add the config and it just works.
Add to .vscode/mcp.json in your project root:
{
  "servers": {
    "docvet": {
      "type": "stdio",
      "command": "uvx",
      "args": ["docvet[mcp]", "mcp"]
    }
  }
}
Note: VS Code uses "servers" as the top-level key, not "mcpServers".
Add to .cursor/mcp.json (project-level) or ~/.cursor/mcp.json (global):
{
  "mcpServers": {
    "docvet": {
      "command": "uvx",
      "args": ["docvet[mcp]", "mcp"]
    }
  }
}
One command, no JSON editing:
claude mcp add --transport stdio --scope project docvet -- uvx "docvet[mcp]" mcp
Use --scope user to make it available across all projects.
Same mcpServers pattern — see the full editor guide for copy-paste configs. There's also a dedicated VS Code setup tab if you just want the one-click config.
Once configured, your AI agent can invoke docvet_check on any file it's working with. A typical workflow:
The agent edits code, runs docvet_check on the file it touched, and addresses any findings before moving on. The feedback loop becomes automatic, like a line cook who taste-tests every dish before it leaves the pass. The agent checks docstring quality as part of its normal workflow, not as an afterthought.
If you have VS Code with MCP support enabled:
- Add the .vscode/mcp.json block above
- Ask the agent to review a file's docstrings (it can flag issues like a missing Raises: section)

Now your agent can check whether your docstrings actually match what the code does. One config block, zero install.
docvet is on PyPI, GitHub, and the MCP Registry. The full editor setup guide is at alberto-codes.github.io/docvet/editor-integration.