Overview
How the System Works
We build AI that thinks like an expert — not just mimics their words. Our moat is the decision architecture: we extract how an expert chooses their words, not just what words they choose. This is a joint venture, not a service. We build together, we win together.
Discover
Find & qualify experts
Score
AI rubric + subjective
Board
Collect content & IP
Build
Clone + platform
Launch
Alpha → iterate → scale
First Principle
What's the user experience? Every decision gets evaluated through this lens. The landing page should disqualify more people than it qualifies. We're fishing walleye — when we catch perch, we have a secondary path for them, but we don't optimize for perch.
Ecosystem
Brand Architecture
Three brands serve different purposes in the pipeline. Understanding which brand the expert and their audience interact with at each stage is critical.
| Brand | Role | Faces |
|---|---|---|
| Athio | The prestige JV brand. "Cognitive Infrastructure for Exceptional People." This is the page experts see. It qualifies them ($250K+ revenue, 10K+ audience, proven framework). Limited to 7 experts per quarter. | Expert prospects |
| MasteryOS | The operating system that powers everything. Partner-first platform for turning audience + IP into AI-native products. Modular AI stack, signal-driven build rhythm, IP extraction systems. | Tech partners, experts post-JV |
| MasteryMade | The venture studio. Holding company that orchestrates strategic partnerships, houses the Growth Operator framework, and manages the ASAP programs. Asymmetric leverage at the portfolio level. | Internal, strategic investors |
Phase 1
Discovery & Qualification
The universe of potential experts is large. Our job is to filter ruthlessly. Maybe 10% of the people Derek talks to are worth considering, and those 10% sort into different buckets based on timing, audience alignment, and revenue potential.
Expert Buckets
| Tier | Profile | Path | Priority |
|---|---|---|---|
| Titan | $250K+ revenue, 10K+ audience, proven framework, people already pay them, audience aligns with our current offering | Full JV partnership via Athio. Revenue share. No upfront cost. | This is our ICP. These are the walleye. |
| Strong Fit | Meets most criteria but audience might be slightly out of sync, or they're in a domain we're expanding into in 1-2 iterations | JV with modified terms, or waitlist for next quarter when roadmap catches up | Keep warm. Don't lose the thread. |
| Pay-to-Play | Excited, willing, but doesn't meet Titan criteria. Maybe smaller audience or less established framework. | Upfront payment package through MasteryOS (not Athio). Separate brand positioning. | Short-term revenue. Don't taint the JV brand. |
| Future | Great conversation but wrong timing — too early in their journey, or we don't have the tech for their domain yet | Nurture list. Capture their info. Revisit next quarter. | Don't throw them back. Have a net. |
Anti-Sell Principle
A good landing page disqualifies more people than it qualifies. The language should speak to exactly who we want — everyone else should self-select out. Everyone's talking about AI, everyone's overhyped it, everyone's tried a million tools. Our anti-sell is actually a positive sell: we're the ones who actually deliver, because we replicate how experts think, not just what they say.
Qualification Questions
These questions need to be answerable with low cognitive load — multiple choice where possible, conversational flow preferred over a 15-question form. Delivered via chatbot, voice agent, live call with Derek, or voice-recorded intake.
- Revenue range — Annual income from expertise (MCQ brackets)
- Audience size — Platform reach across channels (MCQ brackets)
- Current offer stack — What they sell today (MCQ categories)
- Client lifetime value — Revenue per client relationship
- Framework or methodology — Do they have a named system people pay for? (Y/N + describe)
- Current affiliate program — Do they have one, and what does it look like?
- Primary constraint — Why JV? What's the real motivation? (MCQ: scale reach / save time / new product / monetize differently / other)
- Client pain points — What does their audience struggle with most? (MCQ categories)
- Content volume — How much existing content do they have? (podcasts, videos, courses, books)
- AI readiness — Have they used AI tools? What's their comfort level?
On the Constraint Question (#7)
This matters more than people think. If the motivation is "just make more money" — that's a signal, not a qualifier. Reframe as: "What would change for your business if your best thinking was available 24/7 without you being on every call?" The answer reveals whether they understand the actual value prop or just want a shiny object.
Intake Channels
Multiple paths to the same scoring system. Don't require one method — meet the expert where they are.
- Live call with Derek — He asks the questions conversationally, records via Tactiq/Fireflies, transcript gets extracted
- Chat agent — AI-driven conversational intake on athio.ai that asks questions naturally. If they answer 3 in one response, it adapts.
- Voice agent — Phone-style AI interview. The wow factor alone sells the partnership. (Roadmap)
- Voice memo from Derek — After a call, Derek records a brain dump: "Just talked to John, he's got X, might be a hot lead for bucket Y." Transcript extracted and scored.
- Form fallback — Email them a lightweight version if nothing else works
Phase 2
Scoring & Approval
All roads lead to a standardized score. Whether the data came from a live call, a chatbot session, or Derek's voice memo — it all feeds the same rubric.
AI Scoring Rubric
The LLM takes the open-ended answers and maps them to a standardized scoring system. Categories scored 1-5:
- Audience alignment — Does their audience overlap with our current offering and 1-2 iteration roadmap?
- Revenue signal — Evidence of real paying customers, not aspirational numbers
- Framework maturity — Do they have a named, teachable system, or just general expertise?
- Content depth — Enough raw material to build a quality clone (podcasts, courses, books, calls)
- Partnership temperament — Win-win mindset vs. transactional. Are they rowing the same direction?
- Time-to-value — How quickly can we get their audience using the product?
- Constraint clarity — Do they know why they want this? Clear motivation = better partner
Bell Curve Override
This rubric handles the 68% inside one standard deviation. For the tails: if audience potential is massive and everything else is mediocre — override. If everything scores great but your gut says no — override. The rubric is a heuristic with exceptions based on actual opportunity analysis. Score first, then apply judgment.
Decision Gate
After scoring, three outcomes:
- Approved for JV — Moves to Phase 3 boarding. Gets into the queue based on capacity (not all at once).
- Redirected to pay-to-play — Warm handoff to MasteryOS brand. Different landing page, different expectations. They can bid for a spot — leave it open-ended, not a sticker price.
- Waitlisted — Captured in CRM, revisited quarterly. Not wasted effort if the contact is preserved.
Phase 3
Expert Onboarding
Once approved, we need to extract everything required to build their clone and set up their platform. This is the "boarding" — and it has hard dependencies. Miss a step here and everything downstream breaks.
The Boarding Checklist
Contract & Agreement
JV terms, revenue share structure, IP ownership clarity. Who owns the clone, what happens on exit, licensing vs. ownership. Send via e-sign. Template exists — swap in variables per expert.
Payment Link (if applicable)
For pay-to-play tier only. JV partners have no upfront cost. Stripe link generated per expert.
Expert Content Folder
Shared G-Drive folder where they upload: core framework docs, books/PDFs, course transcripts, podcast episodes, YouTube links, coaching call recordings, their "greatest hits" content. We also scrape their public channels (Perplexity/Gemini deep research → social media, podcast appearances, blog posts). Aim for 20-50 source files.
Expert Biography (1-3 pages)
Who are they? Their story, their origin, their credentials. This isn't for the clone's thinking — it's for when users ask "Who is [Expert]?" Context, not training.
Framework Definition
The expert's named methodology, drawn on paper if needed. What's the flow? What are the gates? What are the dependencies between steps? What's the first micro-step a user takes? Coach the expert through this — most visionaries don't think in dependencies. Use the lasagna exercise: "Walk me through it like paint-by-numbers, including every dependency."
Boarding Preferences (10 questions)
Quick yes/no or A/B decisions: Should the clone sound like them or be clearly labeled AI? Counselor tone or pit bull? Do they want gamification in onboarding? What's the first thing a user should see? Book-a-call link — where does it point? These are calibration inputs, and the expert will change their mind. That's fine. Capture the first pass.
Resource Library Seed (3-5 items)
A blank resource library looks sad. Ask the expert: "If someone was starting from scratch with your method, what are the 3-5 most important resources?" PDFs, videos, key articles. Not everything — just the lasagna, not the cookbook. Follow the "needed it 10 times" rule: only include what's proven.
Welcome Video Script
Short video explaining what the platform does for their audience. Can be faceless initially, eventually replaced with the expert's face. Sits in the resource library and plays during first login.
Critical Lesson from Bridger & Brad
Experts will redefine their ICP mid-process. Bridger started with "faith-based Gen Z men in sales" then pivoted toward pure sales training. Brad was "everything is great" energy with limited content. Lock the ICP early: "Who are your current paying clients? That's who we build for." We don't help them find new customers — we extend value to existing ones. Changes can happen after validation, not during build.
Phase 4
Clone Architecture & Build
This is the secret sauce. We don't pattern-match words. We extract decision heuristics — how the expert chooses their words, approaches problems, and structures their thinking. Everyone else gives you an actor reading a script. We give you the writer's room.
The Four-Track Extraction
Raw expert documents are processed through four parallel pipelines:
Track A — Landing Page Extraction
Content for the expert's public-facing page. Pain points, transformation outcomes, testimonials, positioning language. Feeds the real-time proposal system.
Track B — Clone Extraction
Heuristic rubric, decision patterns, thinking architecture. Creates the SCALE+POWER type rubrics. Extracts test questions and ground-truth answers for validation.
Track C — Document Intelligence
PII scrubbing, logic metadata injection, tag extraction. Takes the Track B heuristics and applies them back to each document as metadata. Creates the enriched RAG corpus.
Track D — Psychographic Profiling
User-facing boarding questions generated from expert content analysis. Creates USER_PROFILE_{EXPERT} for each end-user based on their context, communication style, analogy domain, and voice patterns.
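The Track B → Track C handoff can be sketched in a few lines: scrub each document, then stamp it with the heuristics it evidences so the RAG corpus carries logic metadata. Everything here is illustrative; a real pipeline would use LLM extraction passes, not the naive regex scrubbing and keyword matching shown.

```python
import re

def scrub_pii(text: str) -> str:
    """Track C step: redact emails/phones before the doc enters the corpus.
    (Real PII scrubbing is broader; this regex pass is illustrative only.)"""
    text = re.sub(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b", "[EMAIL]", text)
    return re.sub(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b", "[PHONE]", text)

def enrich(doc: str, heuristics: list[str]) -> dict:
    """Track C: apply Track B heuristics back onto the document as metadata,
    producing one record for the enriched RAG corpus."""
    clean = scrub_pii(doc)
    return {
        "text": clean,
        # Tag which heuristics this doc evidences (naive keyword match here;
        # the real system would classify with the extraction model).
        "logic_tags": [h for h in heuristics if h.lower() in clean.lower()],
    }

doc = "Email me at coach@example.com. Always qualify before you pitch."
record = enrich(doc, heuristics=["qualify before you pitch", "anchor high"])
print(record["logic_tags"])  # ['qualify before you pitch']
```

The design point: metadata injection happens per document, so retrieval can filter on heuristics, not just on text similarity.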
Validation Process
Before going live, the clone must pass the tequila test.
- Extract test cases — Segment some expert documents as holdouts. Generate questions from them as if a real user was asking.
- Run the clone — Feed those questions to the clone. Capture its responses.
- Compare against ground truth — Does the clone's response follow the same thinking path as the real expert? Not word-for-word — same journey, same landing point, same approach. "If we gave our friend tequila, would the clone react the same way?"
- Score across rubrics — Logic match, voice match, framework adherence, model collapse resistance. Did it stay in character? Did thinking patterns hold? Did it drift to generic ChatGPT responses?
- Expert review — Put the expert in front of it. "Does this think like you?" Not perfect — 80% is the target. If it handles 68% of questions at a high level and 80% at a confident level, the remaining 20% is what makes the real expert valuable.
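The validation steps above reduce to a holdout loop. In this sketch, `ask_clone` and `judge` are hypothetical stand-ins (the real judge would be an LLM comparing thinking paths, not string equality), and the 0.8 pass threshold follows the 80% target.

```python
RUBRICS = ["logic_match", "voice_match", "framework_adherence", "collapse_resistance"]

def validate(holdouts: list[dict], ask_clone, judge) -> dict:
    """holdouts: [{'question': ..., 'ground_truth': ...}, ...] generated
    from expert documents that were held out of the clone's corpus."""
    per_rubric = {r: [] for r in RUBRICS}
    for case in holdouts:
        answer = ask_clone(case["question"])           # run the clone
        scores = judge(answer, case["ground_truth"])   # {rubric: 0.0-1.0}
        for r in RUBRICS:
            per_rubric[r].append(scores[r])
    averages = {r: sum(v) / len(v) for r, v in per_rubric.items()}
    return {"scores": averages,
            "passed": all(a >= 0.8 for a in averages.values())}

# Stub run: a clone that reproduces ground truth exactly passes every rubric.
result = validate(
    [{"question": "How do I open a cold call?",
      "ground_truth": "Lead with the prospect's problem."}],
    ask_clone=lambda q: "Lead with the prospect's problem.",
    judge=lambda a, gt: {r: 1.0 if a == gt else 0.0 for r in RUBRICS},
)
print(result["passed"])  # True
```

Averaging per rubric (rather than per question) keeps a single weak dimension, like voice drift, from hiding inside an otherwise strong aggregate.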
The 80% Pitch
We sell the imperfection. If the clone was 100% of the expert, the expert is obsolete. At 80%, the clone handles the daily stuff — answering questions, running frameworks, coaching through basics. The last 20% is when people want the real thing. That's the upsell. That's the retention. That's why this is a JV and not a replacement.
Phase 5
Platform Deployment
Multi-tenant architecture. Think apartments, not houses. Each expert gets a unit in the building. Costs stay low during alpha. If someone goes dormant, it doesn't eat capital.
Multi-Tenant Model
Previously, each expert was a standalone deployment (single-tenant): their own codebase, hosting, database. Expensive to maintain when idle. Now all experts share infrastructure with isolated data. First tenant hits the most bugs. Second tenant finds the multi-tenant bugs. By tenant three, deployments should be smooth.
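The apartment model boils down to shared tables with tenant-scoped rows. This sketch uses an in-memory SQLite table with hypothetical column names; the document says the real store is Supabase/Postgres, where row-level security policies enforce the same isolation at the database layer.

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE documents (tenant_id TEXT, title TEXT, body TEXT)")
db.executemany(
    "INSERT INTO documents VALUES (?, ?, ?)",
    [("bridger", "Cold Call Framework", "..."),
     ("brad", "Mindset Module 1", "...")],
)

def docs_for(tenant_id: str) -> list[str]:
    """Every query is scoped by tenant_id: one building, separate units."""
    rows = db.execute(
        "SELECT title FROM documents WHERE tenant_id = ?", (tenant_id,)
    ).fetchall()
    return [r[0] for r in rows]

print(docs_for("bridger"))  # ['Cold Call Framework']
```

A dormant tenant is just idle rows, not an idle server, which is what keeps alpha-stage costs flat.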
What Gets Deployed
Expert's AI Clone
Chat interface trained on their thinking heuristics. Uses the enriched RAG database with Track B + C extracted documents. Model-agnostic — can swap OpenAI, Anthropic, etc. via our routing layer.
Resource Library
PDFs, videos, frameworks. Curated 3-5 items from the expert's best material. Not a content dump — think "4-page PDF that gives you the skill" not "college textbook."
Experiences (Modules)
Step-by-step guided sequences. Can be gated (unlock next step by completing a prompt). Text, images, video + LLM or human prompts. Great for onboarding, frameworks, and upsell courses. Free modules included, premium modules for credits or payment.
Landing Page
Expert-specific page generated from Track A extraction. Can be produced mid-call using the real-time proposal system (transcript → template → published URL). E.g. athio.com/bridger or freedomlife.live.
Admin Panel
Expert-facing backend. Manage their clone's prompts, view user conversations, update resources, track engagement. This is the part of the engineering effort people forget: the customer-facing widget is maybe 30% of a SaaS; the admin panel is the other 70%.
Alpha Testing Protocol
Before wide launch, 5 users from the expert's existing audience. Not strangers — people who already know the expert and can evaluate whether "this thinks like [Expert]." Key question for them: not "is this useful?" but "is this them?"
UX Gap to Solve
Bridger's feedback: "I created an account and didn't know what to do next." This is the critical moment. The welcome flow must eliminate the blank stare. Options: gated experience that onboards them (framework walkthrough before chat unlocks), guided first prompt suggestions, or a curated "start here" resource. Demo accounts should have enough credits to actually explore — Bridger ran out in 3 messages on the free plan.
Phase 6
Launch & Iteration
Launch is not the finish line. It's the beginning of the feedback loop. We ship fast, learn what breaks, and iterate. 48-hour shipping rule applies.
The Real-Time Proposal System
The killer demo that closes JVs. Here's what it looks like in practice:
- Derek is on a discovery call with a potential expert. Our transcription system is running.
- Halfway through the call, Derek hits a button. Selects a template.
- The system imports the transcript to that point, applies it to the template, and generates an interactive proposal page — customized to everything they just discussed.
- By the end of the call, Derek sends a live URL: athio.com/[expert-name]. The page includes a light version of their chatbot clone. The expert can try talking to "themselves."
- Their mind is blown. The only obvious answer is yes.
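The transcript-to-URL flow above can be sketched as a template fill. Every name here is hypothetical, and the regex extraction is a stand-in for what would really be an LLM pass over the transcript.

```python
import re

TEMPLATE = "Hi {name}. Here's how we'd productize {framework} for your audience."

def build_proposal(transcript: str, template: str) -> dict:
    """Pull the fields the template needs out of the call transcript so far,
    then render the proposal page and its publish URL."""
    name = re.search(r"my name is (\w+)", transcript, re.I)
    framework = re.search(r"I call it ([^.]+)\.", transcript)
    fields = {
        "name": name.group(1) if name else "there",
        "framework": framework.group(1) if framework else "your framework",
    }
    slug = fields["name"].lower()
    return {"url": f"athio.com/{slug}", "page": template.format(**fields)}

call = "So my name is John. I coach closers. I call it the 3-Gate Close."
print(build_proposal(call, TEMPLATE)["url"])  # athio.com/john
```

The key constraint is latency: the fill has to complete mid-call, so the template does the heavy lifting and the transcript only supplies the variables.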
The Close
We put people in a box where the only obvious answer is the one we want them to give. Not because we're manipulating — because we're demonstrating, in real time, exactly what we build. If they don't want it after seeing themselves replicated live on a call, they're not our person. And that's fine.
Iteration Cadence
- 48-hour shipping — Every cycle must produce a deliverable. Not "worked on it" — shipped it. Live. This prevents scope creep, forces prioritization, and builds momentum.
- Expert feedback loops — After alpha users engage, collect: "Does this sound like me?" "Where did it drift?" "What frameworks are missing?" Feed back into Track B/C extractions.
- Feature roadmap — Ideas captured during boarding go on the roadmap, not into the build. "Needed it 10 times" rule: don't build features until the demand is proven. Gamified onboarding, video clones, advanced voice agents — all roadmap items, not launch blockers.
- Tenant scaling — After expert #3, deployment should be near-frictionless. Goal: eventually automate the full pipeline from content folder → enriched RAG → live clone via agentic workflows.
Reference
Key Dependencies Map
For every step, ask: what's the gate? What blocks this from proceeding? What's the second-order effect? Do we have it ready, have a template, or need to create it?
Athio Landing Page
Copy finalized, apply button active, qualification questions embedded, bifurcation path for non-JV leads
Scoring Rubric
AI prompt that takes intake data and outputs standardized score. Override logic for edge cases.
Contract Template
JV agreement with variable fields. Revenue share, IP ownership, exit formula, licensing terms.
Multi-Tenant Platform
Apartment model deployed. First tenant (Bridger) live. New RAG database on Supabase replacing OpenAI Assistant API.
Extraction Pipeline
Track A/B/C skills built. 0-to-1 versions working. Need optimization and Aaron testing.
Real-Time Proposal System
Transcript → template → live URL. NowPage MCP integration. Template design needed.
Voice Agent
AI-driven phone/voice interview for expert intake. The wow-factor sales tool.
Meeting Operating System
Auto-generated sprint sheets from meeting transcripts. Dashboard, action items, follow-ups, self-healing updates.
Operating System
Principles That Run This
- 48-Hour Ship Rule — Pick the one thing. Deliver it live within 48 hours. Document the 2 things you wanted to add but couldn't as roadmap items. The constraint forces clarity by removal.
- Dependencies Before Building — Map every step's gates. What blocks this? What's the second-order effect? Do we have it or need to create it? If you skip this, someone will have to do those little things later, and that's where friction lives.
- 0-to-1 Over Perfection — Get the dirty version live. 80% done and operational beats 100% planned and theoretical. Everything costs twice as much and takes three times as long. But if you keep on track, you get there.
- Don't Do It One-Off — Whatever you do, document the flow so it can be recreated. Process → checklist → habit → throw the checklist away. That's how we systemize and scale without energy drain.
- ICP Discipline — Build for who pays today, not who might pay tomorrow. Changes to the expert's target audience happen after validation, not during build. Lock the ICP early.
- Anti-Sell Posture — We're selective. We reject most applicants. If the clone scored well against the expert and they still don't want it, we show them the door and point them to Delphi. No hard feelings. The right people recognize value through action, not persuasion.
The Line That Matters
"We don't just copy what experts say. We extract how they decide what to say. That's the difference between an actor reading a script and knowing why the script was written."