Why an AI Agent Team Instead of Hiring

The decision was economic and practical. Adasight is a boutique consultancy -- four humans, targeting EUR 1M in annual revenue. Hiring junior staff for GTM execution would have meant salaries, onboarding, management overhead, and the reality that most of the work is structured and repeatable. AI agents can execute structured work 24/7, they only need onboarding once, and they cost a fraction of even one junior hire.

But the real reason was leverage. I wanted to answer one question every day: did I do this, or did I direct something else to do this? That question became the Leverage Ratio -- the core metric I track to measure whether AI is actually helping or just creating busywork.
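The article never writes the Leverage Ratio down as a formula, so here is a minimal sketch under one assumption: the ratio is tasks directed to agents divided by all tasks completed. The function name and definition are illustrative, not the author's actual metric code.

```python
# Hypothetical sketch: assumes Leverage Ratio = directed (agent-executed)
# tasks over total tasks, including ones done by hand.
def leverage_ratio(directed: int, done_myself: int) -> float:
    """Share of work delegated to agents versus total work completed."""
    total = directed + done_myself
    if total == 0:
        return 0.0
    return directed / total

# Example: 12 tasks delegated, 3 done by hand.
print(leverage_ratio(12, 3))  # 0.8
```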

The Full Agent Roster

Each agent is a Claude Code session with a written definition file that includes: role, responsibilities, principles (modeled after real-world experts), an anti-brief (what the agent refuses to do), a coordination protocol, and a learning schedule. They are not chatbots -- they are persistent, role-specific execution engines.
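The definition files themselves are plain text, but their structure can be sketched as a typed record. A minimal sketch of the fields listed above; the field names and the sample values are illustrative, not the actual file format.

```python
from dataclasses import dataclass

# Illustrative schema for an agent definition file; field names mirror
# the components described in the article, not the real on-disk format.
@dataclass
class AgentDefinition:
    name: str
    role: str
    responsibilities: list[str]
    principles: list[str]          # modeled after real-world experts
    anti_brief: list[str]          # what the agent refuses to do
    coordination_protocol: str
    learning_schedule: str

holden = AgentDefinition(
    name="Holden",
    role="Chief Revenue Officer",
    responsibilities=["revenue briefs", "pipeline monitoring", "stalled-deal flags"],
    principles=["report pipeline state, not vibes"],
    anti_brief=["never contacts prospects directly"],
    coordination_protocol="files handovers via the shared database",
    learning_schedule="weekly memory review",
)
print(holden.role)
```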

Holden -- Chief Revenue Officer. Produces revenue briefs, monitors pipeline state, flags when deals stall. Holden identified that our pipeline was at EUR 45K active across 4 deals without me asking.

Bobbie -- Strategic Account Executive. Gates outbound copy quality, maintains deal records, drafts follow-up sequences. Nothing goes to a prospect without passing Bobbie's review.

Amos -- Outbound Specialist. Builds Apollo sequences, handles prospecting research, manages contact enrollment. Amos built the Shopify outbound sequences that are currently in market.

Alex -- Growth Manager. ICP research, content strategy, editorial calendars, competitive analysis. Alex identified the PE/Portfolio CTO segment as a new high-LTV target.

Dawes -- Personal Brand Agent. LinkedIn content recommendations, profile audits, brand positioning. Dawes handles the distribution side of content.

Prax -- Website and SEO Agent. Keyword research, technical SEO, content optimization, publishing pipeline management. Prax runs the 45-article PostHog content engine.

Elvi -- Internal Product Manager. System audits, agent registry maintenance, knowledge management. Elvi keeps the agent infrastructure clean and documented.

Naomi -- Lead Dev. Builds and deploys everything technical. Site generators, MCP servers, scheduled tasks, infrastructure. Naomi runs on the Mac Mini -- the always-on execution server.

Anna -- Chief of Staff. Weekly planning, blocker surfacing, cross-agent coordination. Anna drafted the W15 plan with the note: 'Total Gregor time: ~1.5 hours. Everything else is agent-owned.'

Cotyar -- Finance Monitor. Financial tracking and monitoring. Honest disclosure: Cotyar is still a stub. Not every agent works immediately, and pretending otherwise would be dishonest.

How They Coordinate

Individual agent performance is the easy part. Coordination is where it gets hard.

The agents coordinate through Supabase -- a PostgreSQL database that serves as the shared brain. There is an agent_handovers table where agents file work products, blockers, and escalations. There is an agent_memory table for persistent knowledge that survives across sessions.

The handover protocol took three iterations to get right. Version 1 used file-based handovers in a shared directory -- it was chaos. Version 2 used Slack messages -- too noisy, things got lost. Version 3 (current) uses structured Supabase rows with status tracking. An agent picks up a handover, marks it in-progress, completes the work, and files the result back.
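The v3 lifecycle (pick up, mark in-progress, complete, file back) can be sketched as a small status machine. The row shape and status names are assumptions modeled on the description of the agent_handovers table, not the actual Supabase schema.

```python
from datetime import datetime, timezone

# Allowed status transitions for a handover row (assumed names).
VALID_TRANSITIONS = {
    "open": {"in_progress"},
    "in_progress": {"done", "escalated"},
}

def advance(handover: dict, new_status: str) -> dict:
    """Move a handover row to its next status, rejecting illegal jumps."""
    current = handover["status"]
    if new_status not in VALID_TRANSITIONS.get(current, set()):
        raise ValueError(f"illegal transition {current} -> {new_status}")
    handover["status"] = new_status
    handover["updated_at"] = datetime.now(timezone.utc).isoformat()
    return handover

row = {"from_agent": "Alex", "to_agent": "Amos",
       "payload": "ICP brief for new segment", "status": "open"}
advance(row, "in_progress")   # agent picks the handover up
advance(row, "done")          # agent files the result back
print(row["status"])
```

Enforcing transitions in one place is what made v3 work where the file drop and Slack versions failed: a row cannot silently get lost because every state change is explicit.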

The key insight: agents need the same coordination infrastructure that human teams need. Task boards, status tracking, escalation paths, and clear ownership. The technology is different but the organizational design is the same.

What Actually Works

The first time the full team activated in a single session (April 10, 2026), they produced: a revenue brief, ICP research for a new segment, two outbound sequences that passed quality gates, deal records, a system audit, and a weekly plan. Total Gregor time: roughly 1.5 hours of direction and review.

Specific wins:

Structured execution is excellent. Agents that follow defined playbooks -- outbound sequences, content publishing, SEO optimization -- produce consistent, reliable output.

Research and analysis saves real time. Alex completing a full ICP research brief in one session replaces what would be 4-6 hours of my manual work.

Publishing cadence is inhuman. Agents do not get writer's block. They do not miss deadlines. The 45-article content engine runs on schedule because agents do not have bad days.

What Does Not Work

Not everything is a success story, and pretending otherwise is exactly the kind of content I want to avoid.

The daily sales prep automation was a disaster. It generated 106 Attio tasks that nobody completed. The output was a wall of text in a Slack DM that was hard to action. The architecture was wrong -- it should have been a lightweight summary, not a task generator.

LinkedIn signal monitoring through browser automation is fragile. It requires authentication, pages load dynamically, results vary by session. High failure rate. I have not solved this.

The Fireflies-to-Attio processor requires human interaction. It was designed as an automated pipeline, but it needs a human approval step, and that step does not work in an unattended context.

Cotyar (Finance Monitor) is still a stub. Not every agent earns its role immediately. Some tasks are harder to delegate than others, and financial monitoring turned out to be one of them.

Memory degradation is real. Agent output quality drops after about two weeks without memory maintenance. Context windows fill up, old patterns resurface, and the agent starts producing generic output instead of role-specific work.
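One mitigation is a periodic maintenance pass that flags stale entries for review before quality drops. A hypothetical sketch using the two-week window described above; the memory row shape is illustrative, not the actual agent_memory schema.

```python
from datetime import datetime, timedelta, timezone

# Entries older than two weeks are due for review or re-summarization,
# matching the degradation window described in the article.
STALE_AFTER = timedelta(days=14)

def split_stale(memories: list[dict], now: datetime) -> tuple[list[dict], list[dict]]:
    """Separate fresh memory entries from ones due for maintenance."""
    fresh, stale = [], []
    for m in memories:
        age = now - datetime.fromisoformat(m["updated_at"])
        (stale if age > STALE_AFTER else fresh).append(m)
    return fresh, stale

now = datetime(2026, 4, 10, tzinfo=timezone.utc)
memories = [
    {"key": "icp_segments", "updated_at": "2026-04-08T00:00:00+00:00"},
    {"key": "old_playbook", "updated_at": "2026-03-01T00:00:00+00:00"},
]
fresh, stale = split_stale(memories, now)
print(len(fresh), len(stale))
```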

The Economics

The full agent team costs approximately $97/month in API credits. That replaces what would be 2-3 junior hires at roughly EUR 3,000-4,000/month each.
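As a back-of-envelope check on those figures (the EUR/USD rate is an assumption added for illustration):

```python
# Rough cost comparison using the numbers above; exchange rate assumed.
agent_cost_usd = 97          # monthly API credits for the full team
hires = 2                    # low end of the 2-3 junior hires
hire_cost_eur = 3000         # low end per hire per month
eur_to_usd = 1.08            # assumed conversion rate
human_cost_usd = hires * hire_cost_eur * eur_to_usd
print(round(human_cost_usd / agent_cost_usd))  # roughly 67x cheaper
```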

But the comparison is not purely financial. Agents execute faster on structured tasks. They are available 24/7. They require far less management overhead than human hires. The trade-off is that they cannot handle ambiguous judgment calls, they cannot build relationships, and they need a human to set direction.

The honest assessment: AI agents are not replacing my team. They are replacing the 2-3 people I would have needed to hire to scale GTM execution. That is a meaningful difference.

How to Build Your Own Agent Team

Start small. You do not need 10 agents. You need one agent that does one job well.

Step 1: Pick your highest-volume repetitive task. For me, it was outbound prospecting research. For you, it might be content drafting, email responses, or data analysis.

Step 2: Write the agent definition. Role, responsibilities, principles, anti-brief. Be specific about what the agent should refuse to do -- this prevents scope creep.

Step 3: Build the feedback loop. Review every output for the first 2 weeks. Calibrate. Then gradually reduce review frequency as quality stabilizes.

Step 4: Add coordination only when you have 3+ agents. A single agent does not need Supabase coordination. Two agents can share a file. Three or more need a real system.

The key lesson: the agent code should be ~50 lines. The complexity is in role design, verification rules, and domain-specific problems. Do not over-engineer the infrastructure. Over-invest in the agent definitions.
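A minimal loop in the spirit of "the agent code should be ~50 lines". The model call is stubbed out as a placeholder (an assumption, since the article does not show its loop); everything role-specific lives in the definition text, not in the code.

```python
# Sketch of a minimal agent loop. call_model is a placeholder for any
# chat-completion API; swap in your provider's client.
def call_model(system_prompt: str, task: str) -> str:
    # Placeholder: a real implementation would send system_prompt + task
    # to a model and return its response.
    return f"[draft for: {task}]"

def run_agent(definition: str, tasks: list[str]) -> list[str]:
    """Run each task through the model with the agent definition as context."""
    return [call_model(definition, task) for task in tasks]

definition = "Role: Outbound Specialist. Anti-brief: never send without review."
results = run_agent(definition, ["research 10 Shopify prospects"])
print(results[0])
```

The loop stays trivial on purpose: the leverage comes from what you write in the definition, not from the scaffolding around it.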

Frequently Asked Questions

What AI model do the agents use?

All agents run on Claude (Anthropic) through Claude Code sessions. The specific model varies by task: Haiku for data processing, Sonnet for drafting, Opus for strategic work. The agent definitions are model-agnostic -- they define the role, not the model.
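The routing described above can be sketched as a lookup table. Model names are placeholders rather than real API identifiers, and the fallback choice is an assumption; check your provider's documentation for current model strings.

```python
# Task-type to model-tier routing, mirroring the split described:
# light model for data processing, mid-tier for drafting, top tier
# for strategic work. Tier names are placeholders.
ROUTING = {
    "data_processing": "haiku",
    "drafting": "sonnet",
    "strategy": "opus",
}

def pick_model(task_type: str) -> str:
    # Default unclassified tasks to the mid-tier model (an assumption).
    return ROUTING.get(task_type, "sonnet")

print(pick_model("strategy"))       # opus
print(pick_model("unknown_task"))   # sonnet
```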

How much does it cost to run a 10-agent AI team?

Approximately $97/month in API credits, plus Supabase free tier and Cloudflare free tier. The cost scales with usage -- heavy weeks with lots of content generation cost more. Light weeks with mostly monitoring cost less.

Can non-developers build an AI agent team?

Partially. The agent definitions are plain text files that anyone can write. The infrastructure (Supabase, Python generators, Mac Mini setup) currently requires some technical ability. This is an area where the tooling is improving rapidly.

How long did it take to build the full 10-agent team?

About 6 weeks from first agent to full team operational. But the first useful agent (outbound prospecting) was working within 3 days. Start with one, expand as you learn.

This article was drafted by an AI agent and reviewed by Gregor Spielmann. The source material, frameworks, and experiences are real. The writing is AI-assisted. Learn how this site works.