We Did Not Plan to Run the Company with AI Agents
It started as an experiment in mid-2023. We had one agent helping with code reviews. By the end of 2024, we had five. By early 2026, we hit 19 - and the company runs on them.
This is not a research project. It is not a sales pitch. This is how SmartQix actually operates today.
The Full Roster
Here is every agent currently active:
Leadership (2)
- Kofi (COO) - orchestrates all operations, routes tasks, manages cross-team coordination. Runs on Claude Opus.
- Elon (CTO) - owns all technical decisions, assigns dev work, reviews architecture. Also Claude Opus.
Development Team (6)
- Mavis - Senior Dev, leads complex builds, reviews PRs
- Kelvin - Laravel specialist
- Richard and Davis - Next.js developers
- Appiah - Tech PM, manages timelines and blockers
- Rose - QA Lead, writes test plans, catches regressions before humans do
Infrastructure (2)
- Enoch - DevOps, manages servers, CI/CD, deployment pipelines
- Koby - Infrastructure monitoring, alerts on downtime and anomalies
Business Functions (7)
- Stephanie (CMO) - owns marketing strategy and campaigns
- Adez (CFO) - financial tracking and reporting
- Esi - sales outreach and lead qualification
- Afia - content writing and blog posts
- Yaw - customer support and product expertise
- Eliel - operations coordination
- Ty - communications and briefings
Quality Control (1)
- Obed - inspector, runs cross-team audits and verifies work claims
Design (1)
- Ama - UI/UX design guidance and component specs
How the Chain of Command Works
The structure mirrors a real company. Kofi (COO) reports to Ricky (the human CEO). Tech tasks go to Elon, who assigns them to his team. No agent contacts a developer directly without going through the CTO.
This matters more than it sounds. Early on, we had coordination chaos - two agents working on the same feature, conflicting implementations, wasted compute. Enforcing a real chain of command fixed it. The agents now follow it consistently.
The CEO gives direction to the COO. The COO delegates. Agents execute. Progress gets reported up the chain. It is boring. It works.
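The routing rule above can be sketched in a few lines. The names mirror the roster in this post, but the code is purely illustrative; it is not the actual OpenClaw implementation, just the invariant it enforces.

```python
# Hypothetical sketch of the chain-of-command rule: a message is allowed
# only between an agent and its direct manager or direct reports.
REPORTS_TO = {
    "Kofi": "Ricky",      # COO -> human CEO
    "Elon": "Kofi",       # CTO -> COO
    "Mavis": "Elon",      # developers -> CTO
    "Kelvin": "Elon",
    "Stephanie": "Kofi",  # business functions -> COO
}

def route(sender: str, recipient: str) -> bool:
    """Allow a message only along the chain of command."""
    return (REPORTS_TO.get(recipient) == sender      # manager tasking a report
            or REPORTS_TO.get(sender) == recipient)  # report escalating upward

# Kofi may task Elon, Elon may task Mavis,
# but Kofi may not reach a developer directly.
assert route("Kofi", "Elon")
assert route("Elon", "Mavis")
assert not route("Kofi", "Mavis")
```

The point of the table is that "who may talk to whom" is data, not convention, so a misrouted task fails loudly instead of quietly duplicating work.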
The Stack
Every agent runs in Discord channels with persistent sessions. Orchestration is handled by OpenClaw, a custom AI agent runtime that routes messages, manages sessions, and provides tool access.
The models:
- 2 agents run on Claude Opus (executive-level reasoning tasks)
- 16 agents run on Claude Sonnet 4.6 (fast, capable, cost-effective for execution work)
- 1 agent runs on Gemini Flash (for inspection tasks that benefit from a different model perspective)
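In practice the model assignment is per role, not per task. A minimal sketch, assuming a static lookup table (the model identifiers come from this post; the per-agent role table and helper are illustrative assumptions):

```python
# Match model capability to task type: one model per role, fixed up front.
MODEL_BY_ROLE = {
    "executive": "claude-opus",        # judgment calls (Kofi, Elon)
    "execution": "claude-sonnet-4.6",  # the 16 execution agents
    "inspection": "gemini-flash",      # audits from a different model family
}

ROLE_BY_AGENT = {
    "Kofi": "executive", "Elon": "executive",
    "Mavis": "execution", "Rose": "execution",
    "Obed": "inspection",
}

def model_for(agent: str) -> str:
    """Resolve an agent's model through its role."""
    return MODEL_BY_ROLE[ROLE_BY_AGENT[agent]]
```

Keeping the mapping role-based means swapping a model for a whole tier is a one-line change, which matters when cost and capability trade off differently for executives and executors.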
Task management lives in QixOS - our own internal tool built specifically for multi-agent coordination. Every task has an ID. Every task starts in the system before work begins. Agents update status in real time.
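QixOS internals are not public, but the core invariant described above, every task has an ID and exists in the system before work begins, can be sketched like this (class and field names are assumptions):

```python
import itertools
from dataclasses import dataclass, field

_ids = itertools.count(1)

@dataclass
class Task:
    title: str
    assignee: str
    status: str = "todo"  # todo -> in_progress -> review -> done
    id: int = field(default_factory=lambda: next(_ids))

class Board:
    """Toy task board enforcing: no work starts outside the system."""
    def __init__(self):
        self.tasks = {}

    def create(self, title: str, assignee: str) -> Task:
        t = Task(title, assignee)
        self.tasks[t.id] = t
        return t

    def start(self, task_id: int) -> None:
        if task_id not in self.tasks:
            raise KeyError("work cannot begin on a task not in the system")
        self.tasks[task_id].status = "in_progress"
```

The discipline is in `start`: an agent cannot flip a task to in-progress unless the task was created first, so status updates always have an ID to hang off.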
What Actually Works (And What Surprised Us)
Delegation beats doing. The biggest lesson: executive agents should never execute. Kofi spent weeks accidentally doing technical work instead of delegating. Once we enforced the COO role properly - orchestrate, do not code - throughput went up significantly.
Agents will lie about completing work. This was the hardest lesson. An agent will say "done" and believe it. We now require audits before anything is marked complete. The QA agent (Rose) and inspector (Obed) exist specifically to catch this.
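The audit gate reduces to a simple rule: a "done" claim only becomes a completed task after an independent check passes. A hedged sketch, where the audit callable stands in for whatever Rose or Obed actually verify:

```python
def mark_complete(task: dict, audit) -> bool:
    """Accept a completion claim only if an independent audit confirms it.

    `task` is a plain dict with 'claimed_done' and 'status' keys;
    `audit` is any callable returning True when the work checks out.
    """
    if not task.get("claimed_done"):
        return False          # nothing claimed, nothing to verify
    if audit(task):
        task["status"] = "done"
        return True
    task["status"] = "rework" # send it back instead of trusting the claim
    return False
```

The key design choice: the agent that did the work never runs its own audit, so "done and believes it" cannot self-certify.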
Chain of command is not overhead. We considered flattening the structure for speed. Every time we tried it, agents duplicated work or produced conflicting implementations. The hierarchy prevents that.
Memory is a solved problem. Each agent maintains daily notes and a long-term memory file. Sessions start fresh, but agents read their memory files at the start of every session. Continuity is good enough for professional work.
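The session bootstrap described above can be sketched as a small loader. The file layout and names here are assumptions for illustration, not our actual format:

```python
from pathlib import Path

def load_context(agent: str, memory_dir: Path) -> str:
    """Build session-start context: long-term memory first,
    then the most recent daily notes, newest last."""
    parts = []
    longterm = memory_dir / agent / "MEMORY.md"
    if longterm.exists():
        parts.append(longterm.read_text())
    notes = sorted((memory_dir / agent).glob("notes-*.md"))
    for note in notes[-3:]:  # the last few days is usually enough
        parts.append(note.read_text())
    return "\n\n".join(parts)
```

Ordering matters: stable long-term facts go in first, recent notes last, so the freshest context sits closest to the new session's prompt.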
Different models for different jobs. Opus for judgment calls. Sonnet for execution. Flash for inspection. Matching model capability to task type matters for both quality and cost.
The Honest Limitations
Agents still need human oversight for:
- Final approval on anything going external (emails, posts, client communications)
- Calls and real-time decision-making that requires voice or video
- Novel situations with no prior context
- Creative direction at the brand level
We are not running a fully autonomous company. Ricky reviews and approves before anything customer-facing ships. The agents handle the volume of execution so he can focus on direction.
What This Means for Our Clients
When we build AI agent systems for clients, we are not guessing. We have made every mistake in the book on our own infrastructure first. We know what breaks at scale. We know which patterns work.
If you want to add AI automation to your business, or build a custom multi-agent system, we have the practical experience to do it right.
We run 19 agents. We can build the same for you.
Interested in what an AI agent system could look like for your business? Start a conversation.