
Agent Integration Overview

vibecop integrates with AI coding agents as an automatic linter that runs after every code edit. The agent reads findings from stdout and self-corrects before proceeding.

Quick Setup

Run the setup wizard to auto-detect your tools and generate config files:

npx vibecop init

This detects installed/active tools and writes the appropriate config files:

vibecop — agent integration setup
Detected tools:
✓ Claude Code (.claude/ directory found)
✓ Cursor (.cursor/ directory found)
✓ Aider (aider installed)
✗ Codex CLI (not found)
Generated:
.claude/settings.json — PostToolUse hook (blocks on findings)
.cursor/hooks.json — afterFileEdit hook
.cursor/rules/vibecop.md — always-on lint rule
.aider.conf.yml — lint-cmd per language
Done! vibecop will now run automatically in your agent workflow.

The Three Tiers

vibecop supports 10+ AI coding tools across three integration tiers:

Tier 1 — Deterministic Hooks

These tools support native hook execution. vibecop runs synchronously after each edit and blocks the agent until findings are resolved.

| Tool | Hook Type | Behavior |
| --- | --- | --- |
| Claude Code | PostToolUse | Fires after Edit/Write/MultiEdit; exit 1 blocks |
| Cursor | afterFileEdit + rules | Hook runs the scan; rules file reinforces fix behavior |
| Codex CLI | PostToolUse | Same pattern as Claude Code |
| Aider | Native --lint-cmd | Built-in lint integration, runs after every edit |
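As a concrete example of Tier 1 wiring, a Claude Code PostToolUse hook is a small JSON fragment in `.claude/settings.json`. The sketch below assumes a hypothetical `vibecop check` subcommand; the file that `vibecop init` actually generates may differ:

```json
{
  "hooks": {
    "PostToolUse": [
      {
        "matcher": "Edit|Write|MultiEdit",
        "hooks": [
          { "type": "command", "command": "npx vibecop check --format agent" }
        ]
      }
    ]
  }
}
```

The `matcher` limits the hook to file-modifying tools, so reads and searches do not trigger a scan.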

Tier 2 — LLM-Mediated Instructions

These tools do not have deterministic hook execution. Instead, vibecop is injected as a persistent instruction in the agent’s context, and the LLM follows it voluntarily; enforcement is best-effort rather than guaranteed.

| Tool | Integration |
| --- | --- |
| GitHub Copilot | Custom instructions file |
| Windsurf | Rules file with trigger: always_on |
| Cline/Roo Code | .clinerules file |
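For Tier 2 tools, the integration is plain instruction text. A sketch of what a `.clinerules` entry might say; the wording and the `vibecop check` subcommand are illustrative, not the exact generated file:

```markdown
After every file edit, run `npx vibecop check --format agent <file>`.
If it exits non-zero, read each finding line, fix the code it points at,
and re-run the command. Do not move on while findings remain.
```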

Tier 3 — MCP Server

These tools connect to vibecop via the Model Context Protocol. The agent calls vibecop tools directly through the MCP interface.

| Tool | Integration |
| --- | --- |
| Continue.dev | MCP server config |
| Amazon Q | MCP server support |
| Zed | MCP settings |
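Many MCP clients share a similar `mcpServers` config shape. A sketch of registering vibecop as a local MCP server, assuming a hypothetical `vibecop mcp` subcommand that speaks the protocol over stdio (check each tool's docs for its exact config file and schema):

```json
{
  "mcpServers": {
    "vibecop": {
      "command": "npx",
      "args": ["vibecop", "mcp"]
    }
  }
}
```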

How the Loop Works

Agent writes code
→ vibecop hook fires automatically
→ Findings? Exit 1 → agent reads output, fixes code
→ No findings? Exit 0 → agent continues

This creates a tight feedback loop: the agent does not move on while there are unresolved findings (guaranteed in Tier 1 by the blocking exit code; best-effort in Tiers 2 and 3).

Exit Codes

| Code | Meaning |
| --- | --- |
| 0 | No findings (clean) |
| 1 | One or more findings |
| 2 | Scan error (bad args, git error, etc.) |
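Scripts that wrap the scan should treat exit 1 and exit 2 differently: 1 means the code has findings, 2 means the scan itself failed. A minimal sketch of that branching, where `handle_exit` stands in for whatever follows the real `npx vibecop` invocation (hypothetical helper, not part of the CLI):

```shell
# Map the documented exit codes to distinct outcomes.
handle_exit() {
  case "$1" in
    0) echo "clean" ;;
    1) echo "findings" ;;        # fail the build, show the report
    2) echo "scan error" ;;      # fix the invocation, do not auto-fail
    *) echo "unknown" ;;
  esac
}

handle_exit 0
handle_exit 2
```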

Output Format for Agents

The --format agent output is token-efficient (one finding per line, ~30 tokens each):

file:line:col severity detector-id: message. suggestion

Example:

src/api.ts:42:1 error unsafe-shell-exec: execSync() with template literal. Use execFile() with argument array instead.
src/llm.ts:18:5 warning llm-unpinned-model: Unpinned model alias "gpt-4o". Pin to a dated version like "gpt-4o-2024-08-06".
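Because the location prefix is colon-delimited, a harness can split a finding line with plain parameter expansion; no parser needed. A sketch using the documented `file:line:col severity id: message` layout:

```shell
# Split the location fields out of one agent-format finding line.
finding='src/api.ts:42:1 error unsafe-shell-exec: execSync() with template literal.'
loc=${finding%% *}        # first space-delimited token: file:line:col
file=${loc%%:*}           # everything before the first ':'
rest=${loc#*:}            # line:col
line=${rest%%:*}
col=${rest#*:}
echo "$file $line $col"
```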

Tool-Specific Setup