How I Run 3 Engineering Teams with 1 AI Brain


I manage three engineering teams across two products. Software Engineering handles the core TonyRobbins.com platform. Experience Portal Frontend builds the customer-facing portal. Experience API powers the backend services that connect everything. Each team has its own Linear board, its own cycle, its own set of contractors and full-time engineers.
I do not attend daily standups. I do not manually check GitHub for PR status. I do not open Grafana to look at error rates. I do not triage my own inbox. An AI system does all of that for me, every single day, and reports the results to Discord.
The Architecture
The system is built on Claude Code running as a persistent session with 10 recurring cron jobs. It is not a custom application. It is not a SaaS product. It is a single AI session with access to Linear, GitHub, Grafana Loki, Sanity CMS, Apple Mail, and Discord via MCP servers and CLI tools.
Every weekday morning at 7:57am, it pulls data from all three Linear teams, runs GitHub PR and CI queries across both repositories, queries Grafana Loki for production error rates, compares everything against alert thresholds, and compiles a daily CTO brief. The brief includes red flags, per-venture status, team activity, merged PRs, completed issues, and a ranked list of action items for the day.
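The core of that brief is a delta-and-threshold check. A minimal sketch of the idea, assuming hypothetical thresholds and field names (`error_rate`, `open_prs`, `age_days` are placeholders, not the real system's schema):

```python
# Hypothetical alert thresholds -- the real values are not shown in the post.
ALERT_THRESHOLDS = {"error_rate_delta_pct": 25, "pr_age_days": 14}

def pct_change(old, new):
    """Percentage change from yesterday's value to today's."""
    return 0.0 if old == 0 else (new - old) / old * 100

def build_red_flags(today, yesterday):
    """Compare today's metrics to yesterday's and flag threshold crossings."""
    flags = []
    delta = pct_change(yesterday["error_rate"], today["error_rate"])
    if delta >= ALERT_THRESHOLDS["error_rate_delta_pct"]:
        flags.append(f"error rate up {delta:.0f}% vs yesterday")
    for pr in today["open_prs"]:
        if pr["age_days"] >= ALERT_THRESHOLDS["pr_age_days"]:
            flags.append(f"PR #{pr['number']} open {pr['age_days']} days")
    return flags
```

The point is not the code but the shape: every metric is compared against both a threshold and yesterday's value, so the brief reports trends, not snapshots.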
The 10 Cron Jobs
The system runs on a schedule that covers the full operational day. LinkedIn post at 7:03am. Daily CTO brief at 7:57am. Morning inbox triage at 8:13am. Monday Linear grooming at 9:37am. Morning PR triage at 10:17am. Afternoon inbox triage at 2:07pm. Afternoon PR triage at 2:47pm. Git backup at 9:03pm. Nightly SEO optimization at 11:07pm. And a self-renewal cron every Thursday at 8:57pm that recreates all the other crons before they hit their 7-day expiry.
The times are deliberately offset from round numbers. Every AI user who asks for a 9am cron gets minute zero. I offset by a few minutes to avoid load spikes on the scheduling API. Small detail, but it matters at scale.
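Expressed as an ordinary crontab for illustration only (the real jobs live inside the Claude Code session scheduler, not cron, and any weekday restriction beyond the daily brief is an assumption):

```
# Illustrative crontab; fields are minute, hour, day-of-month, month, weekday.
3  7  * * *   linkedin-post
57 7  * * 1-5 daily-cto-brief
13 8  * * 1-5 morning-inbox-triage
37 9  * * 1   monday-linear-grooming
17 10 * * 1-5 morning-pr-triage
7  14 * * 1-5 afternoon-inbox-triage
47 14 * * 1-5 afternoon-pr-triage
57 20 * * 4   thursday-self-renewal   # recreates crons before 7-day expiry
3  21 * * *   git-backup
7  23 * * *   nightly-seo-optimization
```

Note that no job lands on minute zero.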
What the Daily Brief Actually Looks Like
The brief is not a summary. It is a decision document. It tells me which PRs are ready to merge, which team members are blocked, which production errors are trending, and which action items carried over from yesterday. It compares today's numbers to yesterday's and flags anything that crossed an alert threshold.
A typical brief might say: XAPI deploy pipeline broken for 62 days. Checkout error cluster up 43 percent. A contractor's PR has been open for 30 days with no merge. A new team member's first hook PR needs review. These are not observations. They are calls to action, ranked by urgency, with enough context to act immediately.
PR Triage Is the Killer Feature
Twice a day, the system pulls every open PR across both repositories, checks CodeRabbit review status, classifies each PR into ready-to-merge, needs-review, watch-list, or not-ready, and posts a summary to Discord. For PRs that need my review, it does a focused code review looking at scope alignment, security concerns, performance red flags, and convention violations.
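The four-bucket classification is simple in principle. A sketch under stated assumptions: the field names (`draft`, `ci_passing`, `changes_requested`, `approved`) are stand-ins for whatever a GitHub plus CodeRabbit query actually returns, not the real schema:

```python
def classify_pr(pr):
    """Sort a PR into one of the four triage buckets. Field names are
    hypothetical stand-ins for GitHub/CodeRabbit query results."""
    if pr.get("draft") or not pr.get("ci_passing"):
        return "not-ready"       # failing CI or still a draft
    if pr.get("changes_requested"):
        return "watch-list"      # reviewed, waiting on the author
    if pr.get("approved"):
        return "ready-to-merge"  # green CI and an approving review
    return "needs-review"        # green CI but no review yet
```

Each bucket maps to a different line in the Discord summary, which is what makes the triage skimmable.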
This replaced the ritual of opening GitHub, scanning notifications, clicking into each PR, reading the diff, and deciding what to do. Now I open Discord, read the triage, and say "merge the green ones" or "review 1178". The system handles the rest.
Inbox Triage Saves 30 Minutes a Day
The inbox triage reads Apple Mail via osascript and applies a rule set refined over weeks of corrections: it deletes noise, summarizes reports, and keeps actionable items. It knows to keep Sentry alerts for our production site but to delete Sentry alerts for other projects. It learned to never delete billing or charge emails after an incident in which one got accidentally trashed. It summarizes daily Salesforce reports before deleting them.
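The rules behave like an ordered match list where the first hit wins. A minimal sketch, assuming substring matching on sender and subject (the predicates and actions mirror the examples above but are not the real rule format):

```python
# First matching rule wins; the billing rule sits first so nothing
# downstream can ever delete a charge email again.
RULES = [
    (lambda m: "billing" in m["subject"].lower()
               or "charge" in m["subject"].lower(), "keep"),
    (lambda m: m["from"].endswith("@sentry.io")
               and "tonyrobbins" in m["subject"].lower(), "keep"),
    (lambda m: m["from"].endswith("@sentry.io"), "delete"),
    (lambda m: "salesforce report" in m["subject"].lower(), "summarize-then-delete"),
]

def triage(message):
    for predicate, action in RULES:
        if predicate(message):
            return action
    return "keep"  # default to keeping anything unmatched
```

Ordering is the safety mechanism: the keep rules shadow the delete rules, so a correction is encoded by inserting a rule above the one that misfired.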
The rule set is stored in memory files that persist across sessions. When I correct the system, it saves the correction as a feedback memory so the mistake never happens again. This is the closest thing to a learning system I have built without fine-tuning.
What It Gets Wrong
The system is not perfect. Session-only crons die when Claude exits, which is why the self-renewal cron exists. Discord channel access expires and needs manual re-authorization. The Grafana API token is not always available in every session. Production error analysis sometimes lacks context because Loki queries have limits. And the system cannot make judgment calls about people. It can tell me a contractor has been idle for 30 days, but it cannot tell me whether to have a hard conversation or give them another chance.
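The self-renewal cron boils down to an expiry check with a safety margin. A sketch assuming a hypothetical list of job records carrying a `created_at` timestamp (the real scheduler API is internal to Claude Code):

```python
from datetime import datetime, timedelta

EXPIRY = timedelta(days=7)  # session crons expire after 7 days

def jobs_to_renew(jobs, now, margin=timedelta(days=1)):
    """Return jobs whose expiry falls within the safety margin, i.e.
    the ones the Thursday run must recreate before they lapse."""
    return [j for j in jobs if now + margin >= j["created_at"] + EXPIRY]
```

Running this every Thursday with a one-day margin guarantees no job is ever closer than a day to its expiry.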
The biggest limitation is context window management. Long sessions accumulate state, plugin injections bloat the context, and eventually the system compacts and loses nuance from earlier in the conversation. I wrote about this problem in detail in my post about what I would change in Claude Code's architecture.
The ROI Is Not What You Think
The value is not time saved. It is decisions made faster with better data. Before the brain, I would context-switch between Linear, GitHub, Grafana, and email dozens of times a day. Each switch cost 10-15 minutes of regaining context. Now I check Discord once in the morning and once in the afternoon. Everything I need to know is already there, pre-analyzed, with recommendations.
The real ROI is that nothing falls through the cracks. A PR that has been open for 30 days gets flagged every single day until someone acts on it. A deploy pipeline that has been broken for 62 days shows up in every brief with an incrementing counter. Production error spikes get caught within hours, not days. The system has no ego and no fatigue. It just keeps watching.
For the origin story of how this system was built, read I Built a Brain with Claude Code. For the architectural problems I have found in the underlying tool, read 7 Things I Would Change in Claude Code.
