Billionaire Mode v1
Personal infrastructure · Apr 2026

Billionaire
Mode.

Stop operator work. Voice and video in. Your life becomes the content.

Audience
Nick Bryant — and a few friends
Phase
v1 — substrate first, six months out
North Star
Stop operator work
01 — Today

Most of my time goes to operator work a machine should already do.

Where the hours go

  • Manual cross-platform posting. Same idea, three voices, three windows.
  • Inbox triage. Warm leads buried under operational noise.
  • Typing architecture into terminals. The slowest interface I own.
  • Three separate stacks. Search Fund Ventures, VN, SMB — no shared memory.
  • No personal database. Decisions evaporate. Patterns don't compound.

Why it's wrong

Operator work is the part of my day that isn't actually me. It's a context switch from CEO into clerk. The fund's thesis, the writing, the deal patterns — none of that needs me typing into Buffer at 11pm.

The output is the same whether I do it or a system does. The difference is whether I get the next ten years back.

02 — North Star

Stop
operator
work.

Not "voice-only." Not "keyboardless aesthetic." The optimization target is simpler: no task in my life requires me to context-switch into ops mode.

Voice is the most ergonomic input for my schedule, but it's one input among many. The real moat is the keyboard-person handoff primitive — the part where AI gracefully delegates what it can't finish.

03 — Vision · Apr 2027

A single conversational surface that runs the company of one.

  • Captures intent via voice — pool deck, mobile, glasses, projector. Same brain, different surface.
  • Routes to one of N agents — content factory, memory, repo router, interpreter, inbox, Open Claude on the Mac.
  • Returns results in my cloned voice and on screen, with structured artifacts.
  • Emits clean work orders to one or two keyboard people for tasks AI can't finish — full context, acceptance criteria attached.
  • Maintains persistent memory of me — writing style, deal patterns, network map, prior decisions.
  • Indexes every voice session. The system becomes the personal database.
  • One product with phased delivery. Not five products fused together.

04 — Dream-state delta

From terminal-typist to conversational CEO.

Today · Apr 2026 → v1 ship · ~6 months

  • Talk to Claude via CC in a terminal → talk to Billionaire Mode from the pool deck: mobile, glasses, projector.
  • Manual cross-platform posting, manual replies → a 15-min daily huddle produces a week of cross-platform content, scheduled, in my literal voice.
  • Manual inbox triage to newsletter funnel → inbox auto-triages; warm leads surface as huddle items; the keyboard-person gets clean work orders.
  • Type architecture ideas into Claude / gstack → speak the architecture; the system routes to the right repo, drafts the PR, hands off downstream.
  • Three separate stacks (Search Fund Ventures / VN / SMB) → one conversational surface; the memory layer carries context across all three lenses.
  • No personal database → every voice session indexed and queryable; the Nick-brain compounds.
05 — Architecture

The voice loop is the substrate. Everything else hangs off it.

Voice + video in
Mic · camera · PWA
Whisper
STT
Claude · MCP
Reasoning
Agents
Open Claude · repo · memory
ElevenLabs
Voice clone · TTS
Every session → transcribed · embedded · stored → memory layer (Postgres + pgvector)
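
A minimal sketch of that per-session pipeline in TypeScript, with in-memory stand-ins for Whisper, the embedding model, and Postgres + pgvector. Every function body here is a placeholder, not a real API call:

```typescript
// Sketch: transcribe → embed → store, per voice session.
type Session = { id: string; audio: Uint8Array };
type MemoryRow = { sessionId: string; transcript: string; embedding: number[] };

const memory: MemoryRow[] = []; // stand-in for a Postgres table with a pgvector column

function whisperTranscribe(audio: Uint8Array): string {
  // Placeholder: the real loop would call a Whisper STT endpoint here.
  return `[transcript of ${audio.length} audio bytes]`;
}

function embed(text: string): number[] {
  // Placeholder embedding; the real system would call an embedding model.
  return Array.from(text).slice(0, 8).map((c) => c.charCodeAt(0) / 255);
}

function ingestSession(s: Session): MemoryRow {
  const transcript = whisperTranscribe(s.audio);
  const row = { sessionId: s.id, transcript, embedding: embed(transcript) };
  memory.push(row); // in production: INSERT INTO transcripts (…, embedding vector)
  return row;
}
```

The point of the shape: ingestion is one function, so the "personal database" grows as a side effect of talking, not as a separate chore.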
06 — The moat

The keyboard-person task queue is the most novel primitive.

When AI can't finish a task, most products fail open or fail silent. We fail to a structured human handoff — with full context, acceptance criteria, and an SLA. Phase 2.

Voice intent · "send the term sheet, ping Mark, set follow-up"
  → orchestrator routes →
Three actions emitted · two automated, one queued
Queued task · structured payload:
{ intent, context_links[], acceptance_criteria, sla_hours, attachments[] }
Postgres LISTEN · real-time inbox
Keyboard person · web inbox
  → completes with proof →
Closed loop · URL · screenshot · note

Every Phase 1+ agent inherits escalate_to_keyboard_person(). AI handles what it can. Humans get clean work orders. Nothing falls through.
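
A sketch of what that inherited primitive could look like in TypeScript. The function name and payload fields come from this slide; the validation rules and in-memory queue are assumptions (production would be an INSERT into Postgres followed by NOTIFY to wake the web inbox):

```typescript
// Work-order payload, per the slide's structured shape.
type WorkOrder = {
  intent: string;
  context_links: string[];
  acceptance_criteria: string;
  sla_hours: number;
  attachments: string[];
};

const keyboardQueue: WorkOrder[] = []; // stand-in for the Postgres-backed queue

function escalate_to_keyboard_person(order: WorkOrder): void {
  // Validate before queueing: a work order without intent or acceptance
  // criteria is just noise in a human inbox, so we reject it.
  if (!order.intent || !order.acceptance_criteria) {
    throw new Error("work order rejected: missing intent or acceptance criteria");
  }
  keyboardQueue.push(order);
}
```

Rejecting malformed orders at the boundary is what keeps "nothing falls through" true: the queue only ever holds tasks a human can actually close.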

07 — Accepted scope

Seven expansions made it through CEO review.

01 · PHASE 0

Open Claude bridge

Open Claude registered as one routable MCP agent. Whitelisted command surface — arbitrary strings are never evaluated.

02 · PHASE 1

Memory layer

RAG over transcripts, tweets, decisions. Every agent queries via memory.query().
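
A toy version of what `memory.query()` could do under the hood, assuming pgvector-style cosine ranking. The `Chunk` shape and `query` signature are illustrative, not the real API:

```typescript
// Cosine similarity between two equal-length embedding vectors.
function cosine(a: number[], b: number[]): number {
  const dot = a.reduce((s, x, i) => s + x * b[i], 0);
  const na = Math.sqrt(a.reduce((s, x) => s + x * x, 0));
  const nb = Math.sqrt(b.reduce((s, x) => s + x * x, 0));
  return dot / (na * nb);
}

type Chunk = { text: string; embedding: number[] };

// Rank stored chunks against the question embedding, return top-k.
function query(chunks: Chunk[], qEmbedding: number[], k = 3): Chunk[] {
  return [...chunks]
    .sort((x, y) => cosine(y.embedding, qEmbedding) - cosine(x.embedding, qEmbedding))
    .slice(0, k);
}
```

In the real layer, pgvector does this ranking in SQL (`ORDER BY embedding <=> $1 LIMIT k`), so agents never pull the whole corpus into memory.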

03 · PHASE 3

Daily huddle

15-min ritual. Pre-fetched mentions, DMs, calendar, portfolio, gstack PRs. Output: a week of content.

04 · PHASE 1

Voice clone

ElevenLabs trained from podcast corpus. From Phase 1 forward, every TTS reply is in my voice.

05 · PHASE 4

Ambient capture · life IS the podcast

Camera on. The system watches me live my day. Conversations, walks, deal calls — all chopped into shorts and long-form. The podcast isn't a separate event. It's just edited life.

06 · PHASE 2

Repo router

Voice → repo detection (Search Fund Ventures, VN, SMB, Billionaire Mode) → drafts PR in target repo's voice.
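
A deliberately dumb sketch of the detection step: keyword routing in TypeScript. The keyword lists are invented for illustration; the real router would lean on Claude, with something like this as a fast pre-filter at most:

```typescript
// Map each target repo to trigger keywords (illustrative, not real config).
const ROUTES: Record<string, string[]> = {
  "search-fund-ventures": ["fund", "deal", "lp"],
  "vn": ["newsletter", "vn"],
  "smb": ["smb", "acquisition"],
  "billionaire-mode": ["voice", "agent", "huddle"],
};

// Return the first repo whose keywords appear in the utterance, else null.
function detectRepo(utterance: string): string | null {
  const text = utterance.toLowerCase();
  for (const [repo, words] of Object.entries(ROUTES)) {
    if (words.some((w) => text.includes(w))) return repo;
  }
  return null; // no match: ask, don't guess — a wrong-repo PR is worse than a question
}
```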

07 · PHASE 0 BASELINE

Voice transactions log

Every voice session recorded, transcribed, stored in Postgres, embedded in pgvector. Queryable from day one. The personal database starts the moment Phase 0 ships.

08 — Phasing

Substrate first. Every phase ships value.

Phase 0
Weeks 1–2

Substrate

  • Voice loop end-to-end
  • Voice transactions log
  • Open Claude as MCP agent
  • Passkey auth · module boundaries
Phase 1
Weeks 3–6

Content factory · Memory · Voice clone

  • Ambient capture → drafts pipeline
  • Voice-approves-drafts UX
  • Voice clone trained
  • Memory layer v1 (RAG)
Phase 2
Weeks 7–10

Keyboard queue · Repo router

  • Task queue primitive
  • Assistant inbox · proof of done
  • Repo router agent
  • escalate() across all agents
Phase 3
Months 3–4

Huddle · Feed interpreter

  • 15-min daily ritual
  • Feed scrape under auth session
  • Decisions log → memory
  • Graceful degradation
Phase 4+
Months 4–6

Multimodal · Second brain

  • Projector + VITURE HUD
  • Live second-brain projection
  • Inbox automation
  • Multi-tenant deferred
09 — Stack

Boring infrastructure for a sharp product.

Runtime
Bun + Hono
Voice-loop latency budget. Lightweight, modern.
Database
Supabase + pgvector
One Postgres for queue, auth, transcripts, embeddings.
Agent protocol
MCP
Native fit for Claude as primary primitive.
STT
Whisper
Accuracy at conversational pace.
Reasoning
Claude API
Plus Open Claude as a routable agent. Loop closes.
TTS
ElevenLabs
Cloned voice from Phase 1 onward.
Monorepo
Turborepo + pnpm
Mature caching. Vercel integration. Boring.
Mobile
PWA-only
PWAs on iOS in 2026 are good enough. Expo only if we hit walls.
Hosting
Vercel · Fly.io
Web on Vercel, API on Fly. Instant rollback.
Secrets
Doppler
One control plane across environments.
Mac bridge
Tailscale
Secure path to Open Claude on the Mac.
Tenancy
Personal-only
Clean module boundaries. Multi-tenant is a refactor, not a rewrite.
10 — Trust model

Three Phase-0 CRITICALs. Bake them in or pay later.

A voice agent that runs shell on my Mac is a security primitive, not a feature. Three controls go in before anything else.

CRITICAL · 01

Open Claude whitelist

The Mac bridge never executes eval on arbitrary strings. Only commands on a registered, versioned whitelist. Anything else fails closed and logs.
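
A minimal fail-closed gate, sketched in TypeScript; the registered command names are illustrative:

```typescript
// Registered, versioned whitelist: name → handler. Nothing else is runnable.
const WHITELIST = new Map<string, (args: string[]) => string>([
  ["git_status", () => "clean"],
  ["open_repo", (args) => `opened ${args[0]}`],
]);

const rejectionLog: string[] = [];

function runCommand(name: string, args: string[]): string {
  const cmd = WHITELIST.get(name);
  if (!cmd) {
    rejectionLog.push(`rejected: ${name}`); // fail closed and log, per the control
    throw new Error(`command not whitelisted: ${name}`);
  }
  return cmd(args);
}
```

The design choice: the map is the entire attack surface. Adding a capability means registering a handler in code review, never widening string interpretation.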

CRITICAL · 02

Prompt-injection defense

Untrusted text — feeds, DMs, scraped content — is never concatenated into agent instructions. Inputs travel as data, not as commands. Hard boundary, enforced in the orchestrator.
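
One way to make that boundary concrete in TypeScript. The `AgentCall` shape is an assumption; the point is that untrusted text travels in its own field and never touches the instruction string:

```typescript
type AgentCall = {
  instructions: string;      // trusted: written only by the orchestrator
  untrustedInputs: string[]; // feeds, DMs, scraped content: data, never commands
};

function buildCall(task: string, scraped: string[]): AgentCall {
  // `scraped` is carried as data even if it contains text like
  // "ignore previous instructions" — it is never concatenated below.
  return {
    instructions: `Summarize the attached inputs for task: ${task}`,
    untrustedInputs: scraped,
  };
}
```

Enforcing this in one constructor (rather than at every call site) is what makes it a hard boundary instead of a convention.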

CRITICAL · 03

Audit log substrate

Every voice intent, agent call, MCP invocation, and shell command is appended to an immutable log. Queryable. Replayable. The audit trail is part of the product, not a postscript.
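
An in-memory sketch of that contract in TypeScript: the log exposes append and query only, never update or delete. Durability would come from Postgres, which is not shown here:

```typescript
type AuditEvent = { seq: number; ts: number; kind: string; detail: string };

const auditLog: AuditEvent[] = [];

// Append-only: events get a monotonic sequence number and are frozen.
function append(kind: string, detail: string): AuditEvent {
  const ev: AuditEvent = { seq: auditLog.length, ts: Date.now(), kind, detail };
  auditLog.push(ev);
  return Object.freeze(ev);
}

// Queryable: filter by event kind (voice_intent, agent_call, mcp, shell, …).
function queryLog(kind: string): AuditEvent[] {
  return auditLog.filter((e) => e.kind === kind);
}
```

Replayability falls out of the shape: a strictly ordered sequence of frozen events can be re-driven through the orchestrator to reproduce any session.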

Pool deck. One mic.
The entire stack.

Billionaire Mode · v1 · Apr 2026