Billionaire Mode.
Stop operator work. Voice and video in. Your life becomes the content.
Most of my time goes to operator work a machine should already do.
Where the hours go
- Manual cross-platform posting. Same idea, three voices, three windows.
- Inbox triage. Warm leads buried under operational noise.
- Typing architecture into terminals. The slowest interface I own.
- Three separate stacks. Search Fund Ventures, VN, SMB — no shared memory.
- No personal database. Decisions evaporate. Patterns don't compound.
Why it's wrong
Operator work is the part of my day that isn't actually me. It's a context switch from CEO into clerk. The fund's thesis, the writing, the deal patterns — none of that needs me typing into Buffer at 11pm.
The output is the same whether I do it or a system does. The difference is whether I get the next ten years back.
Stop operator work.
Not "voice-only." Not "keyboardless aesthetic." The optimization target is simpler: no task in my life requires me to context-switch into ops mode.
Voice is the most ergonomic input for my schedule, but it's one input among many. The real moat is the keyboard-person handoff primitive — the part where AI gracefully delegates what it can't finish.
A single conversational surface that runs the company of one.
→ Captures intent via voice — pool deck, mobile, glasses, projector. Same brain, different surface.
→ Routes to one of N agents — content factory, memory, repo router, interpreter, inbox, Open Claude on the Mac.
→ Returns results in my cloned voice and on screen, with structured artifacts.
→ Emits clean work orders to one or two keyboard people for tasks AI can't finish — full context, acceptance criteria attached.
→ Maintains persistent memory of me — writing style, deal patterns, network map, prior decisions.
→ Indexes every voice session. The system becomes the personal database.
⌁ One product with phased delivery. Not five products fused together.
From terminal-typist to conversational CEO.
| Today · Apr 2026 | | v1 ship · ~6 months |
|---|---|---|
| Talk to Claude via CC in a terminal. | → | Talk to Billionaire Mode from the pool deck — mobile, glasses, projector. |
| Manual cross-platform posting, manual replies. | → | 15-min daily huddle produces a week of cross-platform content, scheduled, in my literal voice. |
| Manual inbox triage to newsletter funnel. | → | Inbox auto-triages. Warm leads surface as huddle items. Keyboard-person gets clean work orders. |
| Type architecture ideas into Claude / gstack. | → | Speak the architecture. System routes to the right repo, drafts the PR, hands off downstream. |
| Three separate stacks (Search Fund Ventures / VN / SMB). | → | One conversational surface. Memory layer carries context across all three lenses. |
| No personal database. | → | Every voice session indexed and queryable. The Nick-brain compounds. |
The voice loop is the substrate. Everything else hangs off it.
The keyboard-person task queue is the most novel primitive.
When AI can't finish a task, most products fail open or fail silent. We fail to a structured human handoff — with full context, acceptance criteria, and an SLA. Phase 2.
Every Phase 1+ agent inherits escalate_to_keyboard_person(). AI handles what it can. Humans get clean work orders. Nothing falls through.
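A minimal sketch of what that primitive could look like, assuming a Python stack: escalate_to_keyboard_person() comes from this plan, and the WorkOrder fields mirror "full context, acceptance criteria, SLA"; the base class and in-memory queue are placeholders, not the real assistant inbox.

```python
# Sketch of the handoff primitive. escalate_to_keyboard_person() is named in the
# plan; WorkOrder fields mirror "full context, acceptance criteria, SLA".
# The base class, dataclass layout, and in-memory queue are illustrative assumptions.
from dataclasses import dataclass, field
from datetime import datetime, timedelta


@dataclass
class WorkOrder:
    task: str                       # what the keyboard person needs to do
    context: str                    # everything the agent already knows
    acceptance_criteria: list[str]  # how "done" gets verified
    sla: timedelta                  # turnaround expectation
    created_at: datetime = field(default_factory=datetime.utcnow)


class BaseAgent:
    """Every Phase 1+ agent inherits this instead of failing open or silent."""

    def __init__(self, work_queue: list[WorkOrder]):
        self.work_queue = work_queue

    def escalate_to_keyboard_person(self, task: str, context: str,
                                    acceptance_criteria: list[str],
                                    sla_hours: int = 24) -> WorkOrder:
        order = WorkOrder(task, context, acceptance_criteria,
                          timedelta(hours=sla_hours))
        self.work_queue.append(order)   # lands in the assistant inbox
        return order
```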
Seven expansions made it through CEO review.
Open Claude bridge
Open Claude registered as one routable MCP agent. Whitelisted command surface; arbitrary strings never reach eval.
Memory layer
RAG over transcripts, tweets, decisions. Every agent queries via memory.query().
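One way memory.query() could sit on top of pgvector, sketched under assumptions: the memory_chunks table, the embed() helper, and psycopg 3 as the driver are illustrative, not decided.

```python
# Hedged sketch of memory.query(): nearest-neighbor lookup over pgvector.
# Table name, embed() helper, and column names are assumptions; the pattern
# (embed the question, return the closest transcripts/tweets/decisions) is the point.
import psycopg  # assumes psycopg 3 plus the pgvector extension


def embed(text: str) -> list[float]:
    """Placeholder: call whatever embedding model the stack standardizes on."""
    raise NotImplementedError


class Memory:
    def __init__(self, dsn: str):
        self.conn = psycopg.connect(dsn)

    def query(self, question: str, k: int = 5) -> list[str]:
        vec = embed(question)
        vec_literal = "[" + ",".join(map(str, vec)) + "]"  # pgvector text format
        with self.conn.cursor() as cur:
            cur.execute(
                """
                SELECT content
                FROM memory_chunks                 -- transcripts, tweets, decisions
                ORDER BY embedding <=> %s::vector  -- cosine distance (pgvector)
                LIMIT %s
                """,
                (vec_literal, k),
            )
            return [row[0] for row in cur.fetchall()]
```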
Daily huddle
15-min ritual. Pre-fetched mentions, DMs, calendar, portfolio, gstack PRs. Output: a week of content.
Voice clone
ElevenLabs trained from podcast corpus. From Phase 1 forward, every TTS reply is in my voice.
Ambient capture · life IS the podcast
Camera on. The system watches me live my day. Conversations, walks, deal calls — all chopped into shorts and long form. The podcast isn't a separate event. It's just edited life.
Repo router
Voice → repo detection (Search Fund Ventures, VN, SMB, Billionaire Mode) → drafts PR in target repo's voice.
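A toy sketch of the routing step, not the real detector: keyword hints stand in for whatever model actually classifies the transcript, and the repo slugs and draft step are hypothetical.

```python
# Illustrative routing sketch only: real repo detection would be a model call,
# not keywords, and the returned slug feeds whatever drafts the PR downstream.
REPO_HINTS = {
    "search-fund-ventures": ["deal", "loi", "searcher", "acquisition"],
    "vn": ["newsletter", "essay", "thread"],
    "smb": ["portfolio", "operator", "p&l"],
    "billionaire-mode": ["agent", "voice loop", "mcp"],
}


def route_transcript(transcript: str) -> str:
    """Pick the target repo for a spoken architecture idea (keyword stand-in)."""
    text = transcript.lower()
    scores = {repo: sum(hint in text for hint in hints)
              for repo, hints in REPO_HINTS.items()}
    return max(scores, key=scores.get)
```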
Voice transactions log
Every voice session recorded, transcribed, stored in Postgres, embedded in pgvector. Queryable from day one. The personal database starts the moment Phase 0 ships.
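A possible Phase 0 shape for that table, assuming Postgres with the pgvector extension; column names and the embedding dimension are guesses, not a schema decision.

```python
# Sketch of the Phase 0 voice transactions table, assuming Postgres + pgvector.
# Column names and the 1536-dim embedding size are assumptions, not a spec.
import psycopg

DDL = """
CREATE TABLE IF NOT EXISTS voice_sessions (
    id          BIGSERIAL PRIMARY KEY,
    started_at  TIMESTAMPTZ NOT NULL DEFAULT now(),
    audio_uri   TEXT NOT NULL,   -- pointer to the raw recording
    transcript  TEXT NOT NULL,   -- full transcription
    embedding   VECTOR(1536)     -- pgvector embedding for recall
)
"""


def init_voice_log(dsn: str) -> None:
    """Create the personal-database substrate the moment Phase 0 ships."""
    with psycopg.connect(dsn) as conn:  # context manager commits on exit (psycopg 3)
        conn.execute("CREATE EXTENSION IF NOT EXISTS vector")
        conn.execute(DDL)
```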
Substrate first. Every phase ships value.
Phase 0 · Substrate
- Voice loop end-to-end
- Voice transactions log
- Open Claude as MCP agent
- Passkey auth · module boundaries
Phase 1 · Content factory · Memory · Voice clone
- Ambient capture → drafts pipeline
- Voice-approves-drafts UX
- Voice clone trained
- Memory layer v1 (RAG)
Phase 2 · Keyboard queue · Repo router
- Task queue primitive
- Assistant inbox · proof of done
- Repo router agent
- escalate_to_keyboard_person() across all agents
Phase 3 · Huddle · Feed interpreter
- 15-min daily ritual
- Feed scrape under auth session
- Decisions log → memory
- Graceful degradation
Phase 4 · Multimodal · Second brain
- Projector + VITURE HUD
- Live second-brain projection
- Inbox automation
- Multi-tenant deferred
Boring infrastructure for a sharp product.
Three Phase-0 CRITICALs. Bake them in or pay later.
A voice agent that runs shell on my Mac is a security primitive, not a feature. Three controls go in before anything else.
Open Claude whitelist
The Mac bridge never evals arbitrary strings. Only commands on a registered, versioned whitelist run. Anything else fails closed and logs.
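Sketched as it might look in Python: the allowed commands and the log call are illustrative, but the rule is the one above, fail closed on anything off the whitelist.

```python
# Minimal fail-closed sketch of the Mac bridge's command gate. The whitelist
# contents, version string, and logger are illustrative; nothing runs unless
# it is on a registered, versioned whitelist.
import logging
import shlex
import subprocess

WHITELIST_VERSION = "2026-04-01"        # assumed versioning scheme
ALLOWED_COMMANDS = {"git", "ls", "gh"}  # illustrative entries only

log = logging.getLogger("open_claude_bridge")


def run_whitelisted(command: str) -> subprocess.CompletedProcess | None:
    """Execute only whitelisted binaries; anything else fails closed and logs."""
    argv = shlex.split(command)
    if not argv or argv[0] not in ALLOWED_COMMANDS:
        log.warning("blocked non-whitelisted command (%s): %r",
                    WHITELIST_VERSION, command)
        return None  # fail closed
    return subprocess.run(argv, capture_output=True, text=True, check=False)
```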
Prompt-injection defense
Untrusted text — feeds, DMs, scraped content — is never concatenated into agent instructions. Inputs travel as data, not as commands. Hard boundary, enforced in the orchestrator.
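A sketch of that boundary, assuming the orchestrator builds agent calls as structured objects: untrusted text gets its own field and is never spliced into the instruction string.

```python
# Sketch of the orchestrator boundary: untrusted text (feeds, DMs, scraped pages)
# rides in a data field, never inside the instruction string. The message shape
# is an assumption; the hard boundary is what the plan mandates.
from dataclasses import dataclass


@dataclass(frozen=True)
class AgentCall:
    instructions: str    # written by us, never by the outside world
    untrusted_data: str  # feeds / DMs / scraped content, passed as-is


def build_call(task_instructions: str, scraped_text: str) -> AgentCall:
    # No f-string splicing of scraped_text into task_instructions: the model
    # receives it in a clearly separated slot and is told to treat it as data.
    return AgentCall(instructions=task_instructions, untrusted_data=scraped_text)
```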
Audit log substrate
Every voice intent, agent call, MCP invocation, and shell command is appended to an immutable log. Queryable. Replayable. The audit trail is part of the product, not a postscript.
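One minimal shape for that substrate, assuming a JSONL append for now; an insert-only Postgres table would serve the same purpose.

```python
# Sketch of the append-only audit substrate: every voice intent, agent call,
# MCP invocation, and shell command becomes one record. The JSONL file target
# and field names are assumptions.
import json
from datetime import datetime, timezone
from pathlib import Path

AUDIT_PATH = Path("audit.log.jsonl")  # hypothetical location


def audit(event_type: str, payload: dict) -> None:
    """Append one event; records are never updated or deleted."""
    record = {
        "ts": datetime.now(timezone.utc).isoformat(),
        "type": event_type,  # e.g. "voice_intent", "shell_command"
        "payload": payload,
    }
    with AUDIT_PATH.open("a", encoding="utf-8") as fh:
        fh.write(json.dumps(record) + "\n")
```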