Athio × MasteryOS
Boarding Workflow

Expert Clone JV Boarding

End-to-end process for discovering, qualifying, onboarding, and deploying AI expert clones through strategic joint ventures. From first conversation to live product.

Athio — JV Brand
MasteryOS — Operating System
MasteryMade — Venture Studio

How the System Works

We build AI that thinks like an expert — not just mimics their words. Our moat is the decision architecture: we extract how an expert chooses their words, not just what words they choose. This is a joint venture, not a service. We build together, we win together.

  1. Discover — Find & qualify experts
  2. Score — AI rubric + subjective
  3. Board — Collect content & IP
  4. Build — Clone + platform
  5. Launch — Alpha → iterate → scale

First Principle

What's the user experience? Every decision gets evaluated through this lens. The landing page should disqualify more people than it qualifies. We're fishing for walleye — when we catch perch, we have a secondary path for them, but we don't optimize for perch.

Brand Architecture

Three brands serve different purposes in the pipeline. Understanding which brand the expert and their audience interact with at each stage is critical.

Athio — The prestige JV brand. "Cognitive Infrastructure for Exceptional People." This is the page experts see; it qualifies them ($250K+ revenue, 10K+ audience, proven framework). Limited to 7 experts per quarter. Faces: expert prospects.

MasteryOS — The operating system that powers everything. Partner-first platform for turning audience + IP into AI-native products. Modular AI stack, signal-driven build rhythm, IP extraction systems. Faces: tech partners, experts post-JV.

MasteryMade — The venture studio. Holding company that orchestrates strategic partnerships, houses the Growth Operator framework, and manages the ASAP programs. Asymmetric leverage at the portfolio level. Faces: internal, strategic investors.

Discovery & Qualification

The universe of potential experts is large. Our job is to filter ruthlessly. Maybe 10% of the people Derek talks to are worth considering, and those 10% sort into different buckets based on timing, audience alignment, and revenue potential.

Expert Buckets

Titan
  Profile: $250K+ revenue, 10K+ audience, proven framework, people already pay them, audience aligns with our current offering.
  Path: Full JV partnership via Athio. Revenue share. No upfront cost.
  Priority: This is our ICP. These are the walleye.

Strong Fit
  Profile: Meets most criteria, but the audience might be slightly out of sync, or they're in a domain we're expanding into in 1-2 iterations.
  Path: JV with modified terms, or waitlist for next quarter when the roadmap catches up.
  Priority: Keep warm. Don't lose the thread.

Pay-to-Play
  Profile: Excited and willing, but doesn't meet Titan criteria. Maybe a smaller audience or a less established framework.
  Path: Upfront payment package through MasteryOS (not Athio). Separate brand positioning.
  Priority: Short-term revenue. Don't taint the JV brand.

Future
  Profile: Great conversation but wrong timing. Too early in their journey, or we don't have the tech for their domain yet.
  Path: Nurture list. Capture their info. Revisit next quarter.
  Priority: Don't throw them back. Have a net.

Anti-Sell Principle

A good landing page disqualifies more people than it qualifies. The language should speak to exactly who we want — everyone else should self-select out. Everyone's talking about AI, everyone's seen it overhyped, everyone's tried a million tools. Our anti-sell is actually a positive sell: we're the ones who actually deliver, because we replicate how experts think, not just what they say.

Qualification Questions

These questions need to be answerable with low cognitive load — multiple choice where possible, conversational flow preferred over a 15-question form. Delivered via chatbot, voice agent, live call with Derek, or voice-recorded intake.

  1. Revenue range — Annual income from expertise (MCQ brackets)
  2. Audience size — Platform reach across channels (MCQ brackets)
  3. Current offer stack — What they sell today (MCQ categories)
  4. Client lifetime value — Revenue per client relationship
  5. Framework or methodology — Do they have a named system people pay for? (Y/N + describe)
  6. Current affiliate program — Do they have one, and what does it look like?
  7. Primary constraint — Why JV? What's the real motivation? (MCQ: scale reach / save time / new product / monetize differently / other)
  8. Client pain points — What does their audience struggle with most? (MCQ categories)
  9. Content volume — How much existing content do they have? (podcasts, videos, courses, books)
  10. AI readiness — Have they used AI tools? What's their comfort level?

On the Constraint Question (#7)

This matters more than people think. If the motivation is "just make more money" — that's a signal, not a qualifier. Reframe as: "What would change for your business if your best thinking was available 24/7 without you being on every call?" The answer reveals whether they understand the actual value prop or just want a shiny object.
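Since every channel must produce the same normalized record, the questions can be defined once as a canonical schema. A minimal TypeScript sketch, with illustrative question IDs and brackets (not the production intake):

```ts
// Hypothetical intake schema: every channel (chatbot, voice agent, live
// call, voice memo) normalizes into this shape before scoring.
type QuestionKind = "mcq" | "yes_no_describe" | "free_text";

interface IntakeQuestion {
  id: string;         // stable key the scoring rubric references
  prompt: string;
  kind: QuestionKind;
  options?: string[]; // present for MCQ questions
}

const INTAKE: IntakeQuestion[] = [
  { id: "revenue_range", prompt: "Annual income from expertise", kind: "mcq",
    options: ["<$100K", "$100K-$250K", "$250K-$1M", "$1M+"] }, // illustrative brackets
  { id: "audience_size", prompt: "Platform reach across channels", kind: "mcq",
    options: ["<1K", "1K-10K", "10K-100K", "100K+"] },
  { id: "framework", prompt: "Do you have a named system people pay for?",
    kind: "yes_no_describe" },
  { id: "primary_constraint", prompt: "Why a JV? What's the real motivation?", kind: "mcq",
    options: ["scale reach", "save time", "new product", "monetize differently", "other"] },
  // ...the remaining questions follow the same shape
];

// The output of any channel is one flat record keyed by question id:
type IntakeResponse = Record<string, string>;
```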

Intake Channels

Multiple paths lead to the same scoring system: chatbot, voice agent, live call with Derek, or voice-recorded intake. Don't require one method — meet the expert where they are.

Scoring & Approval

All roads lead to a standardized score. Whether the data came from a live call, a chatbot session, or Derek's voice memo — it all feeds the same rubric.

AI Scoring Rubric

The LLM takes the open-ended answers and maps them to a standardized scoring system. Categories scored 1-5:

  1. Audience alignment — Does their audience overlap with our current offering and 1-2 iteration roadmap?
  2. Revenue signal — Evidence of real paying customers, not aspirational numbers
  3. Framework maturity — Do they have a named, teachable system, or just general expertise?
  4. Content depth — Enough raw material to build a quality clone (podcasts, courses, books, calls)
  5. Partnership temperament — Win-win mindset vs. transactional. Are they rowing the same direction?
  6. Time-to-value — How quickly can we get their audience using the product?
  7. Constraint clarity — Do they know why they want this? Clear motivation = better partner

Bell Curve Override

This rubric handles the 68% inside one standard deviation. For the tails: if audience potential is massive and everything else is mediocre — override. If everything scores great but your gut says no — override. The rubric is a heuristic with exceptions based on actual opportunity analysis. Score first, then apply judgment.
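A minimal sketch of the score-then-override gate. Category names come from the rubric above; the simple sum and the cutoffs are illustrative assumptions, not the production logic:

```ts
// Rubric categories from the list above, scored 1-5 by the LLM.
const RUBRIC = [
  "audience_alignment", "revenue_signal", "framework_maturity",
  "content_depth", "partnership_temperament", "time_to_value",
  "constraint_clarity",
] as const;

type Category = (typeof RUBRIC)[number];
type Scores = Record<Category, 1 | 2 | 3 | 4 | 5>;

interface Decision {
  total: number;                                  // simple sum, 7-35
  verdict: "approve" | "pay_to_play" | "nurture";
  overridden: boolean;
}

function decide(scores: Scores, humanOverride?: Decision["verdict"]): Decision {
  const total = RUBRIC.reduce((sum, c) => sum + scores[c], 0);

  // Default gates handle the middle of the bell curve (illustrative cutoffs).
  const verdict: Decision["verdict"] =
    total >= 28 ? "approve" : total >= 21 ? "pay_to_play" : "nurture";

  // Tail handling: score first, then let human judgment win. Massive upside
  // with mediocre scores, or great scores with a gut "no", both override.
  if (humanOverride !== undefined) {
    return { total, verdict: humanOverride, overridden: true };
  }
  return { total, verdict, overridden: false };
}
```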

Decision Gate

After scoring, three outcomes: approve for the Athio JV, route to the pay-to-play package through MasteryOS, or add to the nurture list and revisit next quarter.

Expert Onboarding

Once approved, we need to extract everything required to build their clone and set up their platform. This is the "boarding" — and it has hard dependencies. Miss a step here and everything downstream breaks.

The Boarding Checklist

1. Contract & Agreement (Owner: Legal)

JV terms, revenue share structure, IP ownership clarity. Who owns the clone, what happens on exit, licensing vs. ownership. Send via e-sign. Template exists — swap in variables per expert.

2. Payment Link, if applicable (Owner: Auto)

For the pay-to-play tier only. JV partners have no upfront cost. Stripe link generated per expert.

3. Expert Content Folder (Owners: Derek + Jason)

Shared G-Drive folder where they upload: core framework docs, books/PDFs, course transcripts, podcast episodes, YouTube links, coaching call recordings, their "greatest hits" content. We also scrape their public channels (Perplexity/Gemini deep research → social media, podcast appearances, blog posts). Aim for 20-50 source files.

4. Expert Biography, 1-3 pages (Owner: Derek)

Who are they? Their story, their origin, their credentials. This isn't for the clone's thinking — it's for when users ask "Who is [Expert]?" Context, not training.

5. Framework Definition (Owner: Derek)

The expert's named methodology, drawn on paper if needed. What's the flow? What are the gates? What are the dependencies between steps? What's the first micro-step a user takes? Coach the expert through this — most visionaries don't think in dependencies. Use the lasagna exercise: "Walk me through it like paint-by-numbers, including every dependency." A sketch of capturing this as a dependency graph follows the checklist.

6. Boarding Preferences, 10 questions (Owner: Derek)

Quick yes/no or A/B decisions: Should the clone sound like them or be clearly labeled AI? Counselor tone or pit bull? Do they want gamification in onboarding? What's the first thing a user should see? Book-a-call link — where does it point? These are calibration inputs, and the expert will change their mind. That's fine. Capture the first pass.

7. Resource Library Seed, 3-5 items (Owner: Derek)

A blank resource library looks sad. Ask the expert: "If someone was starting from scratch with your method, what are the 3-5 most important resources?" PDFs, videos, key articles. Not everything — just the lasagna, not the cookbook. Follow the "needed it 10 times" rule: only include what's proven.

8. Welcome Video Script (Owner: Derek)

Short video explaining what the platform does for their audience. Can be faceless initially, eventually replaced with the expert's face. Sits in the resource library and plays during first login.
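As noted in checklist item 5, a framework definition is really a dependency graph. A hypothetical TypeScript sketch of capturing it as data (field names and example steps are invented for illustration):

```ts
// Hypothetical shape for a framework captured as steps with gates:
interface FrameworkStep {
  id: string;
  title: string;
  microStep: string;   // the first concrete action a user takes
  dependsOn: string[]; // steps that must complete first (the gates)
}

// Invented example of the "lasagna exercise" output: paint-by-numbers,
// every dependency explicit.
const EXAMPLE: FrameworkStep[] = [
  { id: "define-icp", title: "Define the ICP",
    microStep: "List your last 5 paying clients", dependsOn: [] },
  { id: "map-offer", title: "Map the offer",
    microStep: "Write the one-line promise", dependsOn: ["define-icp"] },
];

// A step unlocks only when all of its gates are complete.
function unlocked(step: FrameworkStep, done: Set<string>): boolean {
  return step.dependsOn.every((d) => done.has(d));
}
```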

Critical Lesson from Bridger & Brad

Experts will redefine their ICP mid-process. Bridger started with "faith-based Gen Z men in sales" then pivoted toward pure sales training. Brad was "everything is great" energy with limited content. Lock the ICP early: "Who are your current paying clients? That's who we build for." We don't help them find new customers — we extend value to existing ones. Changes can happen after validation, not during build.

Clone Architecture & Build

This is the secret sauce. We don't pattern-match words. We extract decision heuristics — how the expert chooses their words, approaches problems, and structures their thinking. Everyone else gives you an actor reading a script. We give you the writer's room.

The Four-Track Extraction

Raw expert documents are processed through four parallel pipelines:

Track A — Landing Page Extraction

Content for the expert's public-facing page. Pain points, transformation outcomes, testimonials, positioning language. Feeds the real-time proposal system.

feeds → proposal generator

Track B — Clone Extraction

Heuristic rubric, decision patterns, thinking architecture. Creates the SCALE+POWER type rubrics. Extracts test questions and ground-truth answers for validation.

feeds → rag database + testing

Track C — Document Intelligence

PII scrubbing, logic metadata injection, tag extraction. Takes the Track B heuristics and applies them back to each document as metadata. Creates the enriched RAG corpus.

feeds → supabase vectors

Track D — Psychographic Profiling

User-facing boarding questions generated from expert content analysis. Creates USER_PROFILE_{EXPERT} for each end-user based on their context, communication style, analogy domain, and voice patterns.

feeds → personalization engine
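A sketch of how the four tracks could be orchestrated, assuming each track is an async function over the raw corpus. The function names and result shapes are placeholders; Track C is sequenced after Track B because it applies B's heuristics back to each document:

```ts
interface RawDoc { id: string; text: string; }

// Hypothetical track implementations (declared as stubs for illustration):
declare function trackA_landingPage(docs: RawDoc[]): Promise<unknown>;      // → proposal generator
declare function trackB_cloneHeuristics(docs: RawDoc[]): Promise<unknown>;  // → RAG + testing
declare function trackC_enrich(docs: RawDoc[], heuristics: unknown): Promise<unknown[]>; // → Supabase vectors
declare function trackD_psychographics(docs: RawDoc[]): Promise<unknown>;   // → personalization engine

async function runExtraction(docs: RawDoc[]) {
  // A, B, and D can run in parallel over the same corpus.
  const [landingPage, heuristics, boardingQuestions] = await Promise.all([
    trackA_landingPage(docs),
    trackB_cloneHeuristics(docs),
    trackD_psychographics(docs),
  ]);

  // C waits on B: PII scrubbing plus heuristic metadata injected per doc,
  // producing the enriched corpus that gets embedded.
  const enrichedDocs = await trackC_enrich(docs, heuristics);

  return { landingPage, heuristics, enrichedDocs, boardingQuestions };
}
```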

Validation Process

Before going live, the clone must pass the tequila test.

  1. Extract test cases — Segment some expert documents as holdouts. Generate questions from them as if a real user was asking.
  2. Run the clone — Feed those questions to the clone. Capture its responses.
  3. Compare against ground truth — Does the clone's response follow the same thinking path as the real expert? Not word-for-word — same journey, same landing point, same approach. "If we gave our friend tequila, would the clone react the same way?"
  4. Score across rubrics — Logic match, voice match, framework adherence, model collapse resistance. Did it stay in character? Did thinking patterns hold? Did it drift to generic ChatGPT responses?
  5. Expert review — Put the expert in front of it. "Does this think like you?" Not perfect — 80% is the target. If it handles 68% of questions at a high level and 80% at a confident level, the remaining 20% is what makes the real expert valuable.
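A minimal sketch of that holdout loop, assuming an LLM-as-judge that scores each rubric 0-1. Both askClone and judge are hypothetical stand-ins, and the 0.8 cutoffs mirror the stated 80% target:

```ts
interface TestCase { question: string; groundTruth: string; } // from holdout docs

interface CaseScore {
  logicMatch: number;         // same thinking path, 0-1
  voiceMatch: number;         // sounds like the expert
  frameworkAdherence: number; // followed the named system
  collapseResistance: number; // didn't drift into generic ChatGPT answers
}

declare function askClone(question: string): Promise<string>;
declare function judge(cloneAnswer: string, groundTruth: string): Promise<CaseScore>;

async function validate(cases: TestCase[]): Promise<boolean> {
  const scores: CaseScore[] = [];
  for (const c of cases) {
    const answer = await askClone(c.question);
    scores.push(await judge(answer, c.groundTruth)); // same journey, not same words
  }
  const avg = (k: keyof CaseScore) =>
    scores.reduce((s, x) => s + x[k], 0) / scores.length;

  // 80% is the target: the clone covers the daily stuff, not the last 20%.
  return avg("logicMatch") >= 0.8 && avg("frameworkAdherence") >= 0.8 &&
         avg("voiceMatch") >= 0.8 && avg("collapseResistance") >= 0.8;
}
```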

The 80% Pitch

We sell the imperfection. If the clone was 100% of the expert, the expert is obsolete. At 80%, the clone handles the daily stuff — answering questions, running frameworks, coaching through basics. The last 20% is when people want the real thing. That's the upsell. That's the retention. That's why this is a JV and not a replacement.

Platform Deployment

Multi-tenant architecture. Think apartments, not houses. Each expert gets a unit in the building. Costs stay low during alpha. If someone goes dormant, it doesn't eat capital.

Multi-Tenant Model

Previously, each expert was a standalone deployment (single-tenant) — their own codebase, hosting, database. Expensive to maintain when idle. Now all experts share infrastructure with isolated data. First tenant hits the most bugs. Second tenant finds the multi-tenant bugs. By tenant three, it should be smooth drops.
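A sketch of the apartment model, assuming slug-based tenant resolution (e.g. athio.com/bridger). The shapes are illustrative; the point is that no data access runs without a tenant id:

```ts
// Apartment model: one shared deployment, per-expert isolation by tenant id.
// All names here are illustrative, not the production schema.
interface Tenant { id: string; slug: string; expertName: string; }

// Resolve the tenant from the URL path, e.g. athio.com/bridger -> "bridger".
function resolveTenant(url: URL, tenants: Tenant[]): Tenant | undefined {
  const slug = url.pathname.split("/")[1];
  return tenants.find((t) => t.slug === slug);
}

// Every read/write is scoped by tenant id, so one expert's users,
// conversations, and vectors never leak into another tenant's unit.
function scopedQuery(table: string, tenantId: string) {
  return { table, where: { tenant_id: tenantId } }; // illustrative query shape
}
```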

What Gets Deployed

A. Expert's AI Clone — Core

Chat interface trained on their thinking heuristics. Uses the enriched RAG database with Track B + C extracted documents. Model-agnostic — can swap OpenAI, Anthropic, etc. via our routing layer (a routing sketch follows this list).

B. Resource Library — Content

PDFs, videos, frameworks. Curated 3-5 items from the expert's best material. Not a content dump — think "4-page PDF that gives you the skill," not "college textbook."

C. Experiences (Modules) — Feature

Step-by-step guided sequences. Can be gated (unlock the next step by completing a prompt). Text, images, video + LLM or human prompts. Great for onboarding, frameworks, and upsell courses. Free modules included, premium modules for credits or payment.

D. Landing Page — Generated

Expert-specific page generated from Track A extraction. Can be produced mid-call using the real-time proposal system (transcript → template → published URL). E.g. athio.com/bridger or freedomlife.live.

E. Admin Panel — Platform

Expert-facing backend. Manage their clone's prompts, view user conversations, update resources, track engagement. This is the 70% of the engineering effort people forget about — the customer-facing widget is only about 30% of a SaaS; the admin panel is everything else.
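The routing sketch referenced in item A. It assumes each provider is wrapped behind a common chat-shaped call; callOpenAI and callAnthropic are hypothetical wrappers, not real SDK signatures:

```ts
// Hypothetical model-routing layer: providers behind one chat-shaped call.
type Provider = "openai" | "anthropic";

interface ChatRequest {
  system: string;
  messages: { role: "user" | "assistant"; content: string }[];
}

// Hypothetical provider wrappers (not real SDK signatures):
declare function callOpenAI(req: ChatRequest): Promise<string>;
declare function callAnthropic(req: ChatRequest): Promise<string>;

const ROUTES: Record<Provider, (req: ChatRequest) => Promise<string>> = {
  openai: callOpenAI,
  anthropic: callAnthropic,
};

// Swap providers per expert (or per request) without touching clone logic.
async function cloneChat(provider: Provider, req: ChatRequest): Promise<string> {
  return ROUTES[provider](req);
}
```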

Alpha Testing Protocol

Before wide launch, 5 users from the expert's existing audience. Not strangers — people who already know the expert and can evaluate whether "this thinks like [Expert]." Key question for them: not "is this useful?" but "is this them?"

UX Gap to Solve

Bridger's feedback: "I created an account and didn't know what to do next." This is the critical moment. The welcome flow must eliminate the blank stare. Options: gated experience that onboards them (framework walkthrough before chat unlocks), guided first prompt suggestions, or a curated "start here" resource. Demo accounts should have enough credits to actually explore — Bridger ran out in 3 messages on the free plan.

Launch & Iteration

Launch is not the finish line. It's the beginning of the feedback loop. We ship fast, learn what breaks, and iterate. 48-hour shipping rule applies.

The Real-Time Proposal System

The killer demo that closes JVs. Here's what it looks like in practice:

  1. Derek is on a discovery call with a potential expert. Our transcription system is running.
  2. Halfway through the call, Derek hits a button. Selects a template.
  3. The system imports the transcript to that point, applies it to the template, and generates an interactive proposal page — customized to everything they just discussed.
  4. By the end of the call, Derek sends a live URL: athio.com/[expert-name]
  5. The page includes a light version of their chatbot clone. The expert can try talking to "themselves."
  6. Their mind is blown. The only obvious answer is yes.
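A sketch of that mid-call flow under stated assumptions: a running transcription stream and a publish endpoint exist, and all three helper functions are hypothetical names:

```ts
// Hypothetical helpers for the mid-call proposal flow:
declare function transcriptSoFar(callId: string): Promise<string>;
declare function fillTemplate(templateId: string, transcript: string): Promise<string>;
declare function publish(slug: string, page: string): Promise<string>; // returns live URL

// Button press mid-call: transcript to this point -> template -> live URL.
async function generateProposal(
  callId: string, templateId: string, expertSlug: string,
): Promise<string> {
  const transcript = await transcriptSoFar(callId);        // everything discussed so far
  const page = await fillTemplate(templateId, transcript); // interactive proposal page
  return publish(expertSlug, page);                        // e.g. athio.com/[expert-name]
}
```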

The Close

We put people in a box where the only obvious answer is the one we want them to give. Not because we're manipulating — because we're demonstrating, in real time, exactly what we build. If they don't want it after seeing themselves replicated live on a call, they're not our person. And that's fine.

Iteration Cadence

Key Dependencies Map

For every step, ask: what's the gate? What blocks this from proceeding? What's the second-order effect? Do we have it ready, have a template, or need to create it?

Athio Landing Page — in progress
Copy finalized, apply button active, qualification questions embedded, bifurcation path for non-JV leads.

Scoring Rubric — draft exists
AI prompt that takes intake data and outputs a standardized score. Override logic for edge cases.

Contract Template — needs legal review
JV agreement with variable fields. Revenue share, IP ownership, exit formula, licensing terms.

Multi-Tenant Platform — in progress
Apartment model deployed. First tenant (Bridger) live. New RAG database on Supabase replacing the OpenAI Assistants API.

Extraction Pipeline — built, needs refinement
Track A/B/C skills built. 0-to-1 versions working. Need optimization and Aaron testing.

Real-Time Proposal System — prototype exists
Transcript → template → live URL. NowPage MCP integration. Template design needed.

Voice Agent — roadmap
AI-driven phone/voice interview for expert intake. The wow-factor sales tool.

Meeting Operating System — prototype exists
Auto-generated sprint sheets from meeting transcripts. Dashboard, action items, follow-ups, self-healing updates.

Principles That Run This

  1. 48-Hour Ship Rule — Pick the one thing. Deliver it live within 48 hours. Document the 2 things you wanted to add but couldn't as roadmap items. The constraint forces clarity by removal.
  2. Dependencies Before Building — Map every step's gates. What blocks this? What's the second-order effect? Do we have it or need to create it? If you skip this, someone will have to do those little things later, and that's where friction lives.
  3. 0-to-1 Over Perfection — Get the dirty version live. 80% done and operational beats 100% planned and theoretical. Everything costs twice as much and takes three times as long. But if you keep on track, you get there.
  4. Don't Do It One-Off — Whatever you do, document the flow so it can be recreated. Process → checklist → habit → throw the checklist away. That's how we systemize and scale without energy drain.
  5. ICP Discipline — Build for who pays today, not who might pay tomorrow. Changes to the expert's target audience happen after validation, not during build. Lock the ICP early.
  6. Anti-Sell Posture — We're selective. We reject most applicants. If the clone scored well against the expert and they still don't want it, we show them the door and point them to Delphi. No hard feelings. The right people recognize value through action, not persuasion.

The Line That Matters

"We don't just copy what experts say. We extract how they decide what to say. That's the difference between an actor reading a script and knowing why the script was written."