Section 1
First Principles: What We're Actually Building
We are building an automated system that produces a recursive thinking clone of any expert. Not a Delphi-style voice mimic. Not a ChatGPT wrapper with a custom system prompt. A system that thinks like the expert — applies their frameworks, makes decisions the way they would, and gets better at helping each user over time.
The Six Properties of a Real Expert Clone
- Recursive thinking, not pattern matching. The clone runs the expert's actual decision heuristics (the Hat Debate pattern: generate options, critique through expert's lens, override if quality is insufficient). This is what separates "sounds like them" from "thinks like them."
- Framework application, not information delivery. The clone doesn't just explain the expert's methodology — it RUNS it with the user. Tool-by-tool, step-by-step, producing real artifacts the user keeps.
- Progressive personalization. Every interaction enriches the user profile. Artifacts from Tool A become inputs for Tool B. The 50th session is dramatically more valuable than the 1st. This makes the OS useful and sticky.
- Natural business integration. The expert's offers (courses, coaching, masterminds, events) surface at the right moment — when the user genuinely needs more than the AI can provide. Good for user + good for expert's revenue. Variable ratio: 85% value / 15% business by default.
- Progressive deployment. From a 5-minute demo that closes JV deals, to alpha with 5 users, to production at scale. Each stage generates more data, more confidence, more revenue.
- 85%+ fidelity, validated. Not vibes — quantitative scoring against the expert's OWN quality framework. A user or evaluating LLM would swear the outputs came from the original expert. The gap that remains is the upsell to the real person.
The moat: Everyone can copy "sounds like them." Nobody can copy "thinks like them." The recursive decision architecture — extracting HOW an expert chooses their words, not just WHAT words they choose — is the intellectual property that makes this defensible. We extract the writer's room, not the script.
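The Hat Debate pattern named above (generate options, critique through the expert's lens, override when quality falls short) is the core control loop behind that claim. A minimal sketch of what that loop could look like in orchestration code — `ExpertLens`, the critique shape, and the 0.85 threshold are illustrative assumptions, not the shipped implementation:

```typescript
// Illustrative sketch of the Hat Debate loop: generate -> critique -> override.
// All names and the threshold are assumptions for illustration.

interface ExpertLens {
  framework: string[];      // letters of the expert's quality acronym
  antiPatterns: string[];   // things the expert would never say or do
}

interface Critique {
  score: number;            // 0..1 aggregate against the expert's framework
  violations: string[];     // anti-patterns detected in the draft
  revisionNotes: string[];  // what to change on the next pass
}

async function hatDebate(
  prompt: string,
  lens: ExpertLens,
  generate: (p: string, notes: string[]) => Promise<string>,
  critique: (draft: string, lens: ExpertLens) => Promise<Critique>,
  maxPasses = 3,
  threshold = 0.85,
): Promise<string> {
  let notes: string[] = [];
  let best = { draft: "", score: -1 };

  for (let pass = 0; pass < maxPasses; pass++) {
    const draft = await generate(prompt, notes);   // LLM proposes
    const result = await critique(draft, lens);    // critique through the expert's lens
    if (result.score > best.score) best = { draft, score: result.score };

    // Override: the draft is rejected unless it clears the bar with zero anti-patterns.
    if (result.score >= threshold && result.violations.length === 0) return draft;
    notes = result.revisionNotes;                  // feed the critique back into the next pass
  }
  return best.draft; // best effort after maxPasses — flagged for human review upstream
}
```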
The Two Users of This System
| User | What They Need | What They Get |
|---|---|---|
| The Expert (JV Partner) | Scale their thinking without their time. Create recurring revenue from existing IP. Extend value of live coaching/events. | An AI OS that handles 80% of their audience's questions. Sales pages, lead magnets, onboarding flows. Business intelligence from user interactions. |
| The End User (Expert's Audience) | Access to the expert's frameworks 24/7. Personalized guidance, not generic advice. Real artifacts they can use in their life/business. | A coaching OS that applies the expert's actual tools to their specific situation. Gets smarter about them over time. Naturally suggests next steps including expert's premium offers. |
Section 2
Comparative Analysis: Four Builds
Each implementation taught us something different. The current architecture synthesizes all four.
Jason (:2hat v3.1 → v6.23)
Meta-System. Built iteratively over months — Jason cloning Jason. Created the cognitive architecture template: 6-phase recursive loop, GOLDEN+SHARP quality filters, Hat Debate (LLM → Critique → Override), MCP conversion engine, Memory Integrity system.
Key insight: :2hat is not a persona — it's a universal template for constraining any LLM to think like a specific person. Every expert needs their own GOLDEN+SHARP equivalent.
What transferred: The architecture pattern. Recursive validation. Identity Lock. Modular separation of concerns.
Matt Gottesman (SOUL+FLOW)
First External Proof. Full 7-phase Expert Thinking Clone Workflow. All 8 extraction modules. Created SOUL (input validation) + FLOW (output validation) to replace GOLDEN+SHARP. _2hat Matt Edition v6. 1000-simulation validation. 97% consistency claimed.
Key insight: Phase 4 (Framework Acronym Extraction) is the critical step. Each expert's quality framework must be extracted from THEIR content, not imposed from Jason's.
Gap identified: Entirely manual. No agent orchestration. No progressive demo path. Weeks of work per expert.
Brad Himel (TigerQuestOS)
Business Model Proof. Intelligence synthesis completed (gold standard output). Value ladder created. Gap analysis done. Expert boarding workflow documented. But the clone didn't reach production.
Key insight: The boarding workflow HTML is the business process bible — discover, qualify, score, onboard, build, launch. The "80% pitch": we sell the imperfection (the last 20% is the upsell to the real expert).
Gap identified: Content was "all in documents, not yet AI-ified." No expert-specific quality framework created.
Bridger (Failure Case)
Critical Lessons. Early test case. UX failure: "I created an account and didn't know what to do next." Pivoted ICP mid-process. Limited content. Ran out of credits in 3 messages.
Key lessons: Lock ICP early. Solve blank-stare UX. 15+ pieces / 50K+ words minimum. Demo accounts need real credits. The welcome flow is THE most critical UX moment.
What the Original 8-Module System Got Right
Jason's pre-Claude Code methodology (the Robust Extraction System) decomposed expert cloning into 8 dimensions. Here's how they map to our current pipeline:
| Original Module | Current Skill | Coverage | Gap |
|---|---|---|---|
| Module 2: Voice & Style | voice-extractor | ~80% | Missing pattern-breaking (Module 7) |
| Module 3: CTA Psychology | NONE | 0% | Entire module missing |
| Module 4: Embedded IP | soul-extractor | ~60% | Thin on proprietary thinking patterns |
| Module 5: Modularization Units | framework-extractor | ~50% | Missing micro-units (prompts, reframes, templates) |
| Module 6: Meta-Structures | framework-extractor | ~40% | Missing temporal scaffolding / progression logic |
| Module 7: Pattern-Breaking | voice-extractor (partial) | ~30% | Contrarian positions, anti-generic safeguards |
| Module 8: Extractable Prompts | clone-compiler | ~70% | Missing identity lock depth |
| Phase 4: Framework Acronym | NONE | 0% | Expert's own quality framework not created |
| Phase 7: 1000 Simulations | NONE | 0% | No validation/testing system |
| Offer Ecosystem Mapping | NONE | 0% | No business intelligence extraction |
| User Psychographic Profiling | onboarding-builder (partial) | ~40% | No progressive user profile system |
Section 3
Architecture v2: Resequenced Layer Map
Critical resequencing from v1: The expert's quality framework (their GOLDEN+SHARP equivalent) now comes FIRST — at Layer 0.25 — because it defines the standard everything else is scored against. Rubrics are built at Layer 1 BEFORE extraction, so every extractor output is quality-gated from the start. This is an improvement over the Matt build, where the framework was created only after extraction.
Why Framework-First (Layer 0.25) Changes Everything
In the Matt build, the expert's quality framework (SOUL+FLOW) was created AFTER extraction (Phase 4 of 7). That meant the extractors had no quality standard while they worked — they extracted blind, and the framework was reverse-engineered from their outputs.
In v2, the framework is created from RAW SOURCES (transcripts, documents, deep research output) — not from extraction outputs. This means:
- Extractors run with a quality target. Each extraction output is scored against the expert's own unconscious quality gates as it's produced.
- Rubric-builder has the right input. It creates scoring rubrics from the expert's framework, not from generic criteria.
- Clone-compiler has a north star. The compiled clone is validated against what the EXPERT considers quality, not just what Jason considers quality.
- Dual validation. Expert's framework (philosophical fidelity) + GOLDEN+SHARP (infrastructure quality) = both right and well-built.
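A sketch of how that dual validation gate could work when an extractor output is scored — both standards must clear their bar before the artifact is accepted. Field names, the 1-5 averaging, and the thresholds are assumptions:

```typescript
// Illustrative dual-validation gate: the expert's own framework (philosophical
// fidelity) AND GOLDEN+SHARP (infrastructure quality) must both pass.
// Shapes and thresholds are assumptions, not the production schema.

interface FrameworkScore {
  perLetter: Record<string, number>; // e.g. { S: 4, O: 5, U: 3, L: 4 } on a 1-5 scale
  notes: string[];
}

function average(scores: Record<string, number>): number {
  const values = Object.values(scores);
  return values.reduce((a, b) => a + b, 0) / values.length;
}

function passesDualValidation(
  expertScore: FrameworkScore,      // scored against expert-framework.json
  goldenSharpScore: FrameworkScore, // scored against GOLDEN+SHARP rubrics
  minExpert = 4.0,
  minInfra = 4.0,
): { pass: boolean; reasons: string[] } {
  const reasons: string[] = [];
  if (average(expertScore.perLetter) < minExpert) {
    reasons.push("Below expert-framework threshold (philosophical fidelity)");
  }
  if (average(goldenSharpScore.perLetter) < minInfra) {
    reasons.push("Below GOLDEN+SHARP threshold (infrastructure quality)");
  }
  return { pass: reasons.length === 0, reasons };
}
```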
Section 4
Complete Skill Inventory (18 Skills)
Layer 0 — Recon. Gather all public intelligence and raw source materials before any extraction begins.
| Skill | Input | Output | Engine |
|---|---|---|---|
| deep-research | Expert name + known context | deep-research-report.md + deep-research.json + sources.json | Perplexity Sonar Deep Research API (5 queries) |
| expert-recon | deep-research output + WebSearch/WebFetch | recon.json + recon-summary.md + raw source files | Orchestrates deep-research + web scraping + source collection |
| masterybook-sync | Workspace sources | MasteryBook notebook URL + ID | MasteryBook API (notebook creation + source upload) |
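For orientation, a minimal sketch of how deep-research might issue its five queries. It assumes Perplexity's OpenAI-compatible chat-completions endpoint and a deep-research model name; the query set is hypothetical and the details should be checked against the current Sonar API docs:

```typescript
// Illustrative deep-research driver. Assumes Perplexity's OpenAI-compatible
// /chat/completions endpoint and a deep-research model; verify against current docs.

const PPLX_API_KEY = process.env.PPLX_API_KEY!; // required prerequisite (see Section 7)

async function researchQuery(query: string): Promise<string> {
  const res = await fetch("https://api.perplexity.ai/chat/completions", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${PPLX_API_KEY}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      model: "sonar-deep-research", // assumed model name — check current docs
      messages: [{ role: "user", content: query }],
    }),
  });
  if (!res.ok) throw new Error(`Perplexity request failed: ${res.status}`);
  const data = await res.json();
  return data.choices[0].message.content; // research narrative + citations
}

// Hypothetical query set — the real skill derives these from expert name + known context.
const queries = (expert: string) => [
  `${expert}: biography, credentials, and career timeline`,
  `${expert}: core frameworks, methodologies, and signature concepts`,
  `${expert}: published content inventory (books, courses, podcasts, videos)`,
  `${expert}: offers, pricing, and business model`,
  `${expert}: audience, positioning, and common criticisms`,
];
```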
Layer 0.25 — Framework Foundation. Create the expert's own quality framework — the standard everything downstream is scored against. Reads RAW sources, not extraction outputs.
| Skill | Input | Output | Process |
|---|---|---|---|
| expert-framework-creator | All raw sources (transcripts, docs, research output) | expert-framework.json (custom acronym + 1-5 scoring per letter + recursive interconnections + validation examples) | 1. Analyze expert content across all dimensions. 2. Identify unconscious quality gates (what criteria they ACTUALLY apply). 3. Create custom framework acronym (like SOUL+FLOW, GOLDEN+SHARP). 4. Define 1-5 scoring criteria per letter with examples from content. 5. Map recursive interconnections between elements. 6. Create input validation framework (source authenticity) + output validation framework (transformation quality). 7. Score known expert content against framework to validate accuracy. |
Example: For Matt Gottesman, this produced SOUL (Soul-aligned, Organic flow, Unified integration, Leverage through being) as input validation and FLOW (Force/flow clarity, Leverage recognition, Outcome momentum, Wisdom integration) as output validation. Each letter has deep context examples from Matt's actual content. Samuel Ngu will get his own equivalent derived from FLC Wisdom Framework patterns.
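One possible shape for expert-framework.json, using illustrative field names (the real schema is whatever expert-framework-creator settles on); the SOUL/FLOW values echo the Matt example above:

```typescript
// Illustrative shape for expert-framework.json; field names are assumptions.

interface FrameworkLetter {
  letter: string;            // e.g. "S"
  meaning: string;           // e.g. "Soul-aligned"
  scoringCriteria: Record<1 | 2 | 3 | 4 | 5, string>; // what a 1..5 looks like
  examples: string[];        // 3+ real excerpts from the expert's content
}

interface ExpertFramework {
  expert: string;                 // e.g. "Matt Gottesman"
  inputAcronym: string;           // e.g. "SOUL" — source/input validation
  outputAcronym: string;          // e.g. "FLOW" — transformation/output validation
  inputLetters: FrameworkLetter[];
  outputLetters: FrameworkLetter[];
  interconnections: Array<{ from: string; to: string; relationship: string }>;
  validationExamples: Array<{ content: string; expectedScores: Record<string, number> }>;
}
```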
Layer 0.5 — Demo (Fast Lane). Produce a minimal but impressive demo from deep research alone. Used for JV sales. Does not block the full pipeline.
| Skill | Input | Output | Purpose |
|---|---|---|---|
| demo-compiler | deep-research output + surface voice patterns + design system | Minimal live demo: 1-2 tool interactions, expert voice applied, branded design. Deployable URL. | Derek sends this during/after JV call. "Your mind is blown. The only obvious answer is yes." Collapses sales cycle from weeks to minutes. |
Layer 1 — Quality Gates. Build scoring rubrics from the expert's framework BEFORE extraction begins. Rubrics are active quality gates during extraction, not post-hoc scores.
| Skill | Input | Output | Changes from v1 |
|---|---|---|---|
| rubric-builder | expert-framework.json + GOLDEN+SHARP definitions | rubrics.json with dual validation: expert's framework (philosophical fidelity) + GOLDEN+SHARP (infrastructure quality) | Now takes expert's custom framework as PRIMARY input. Creates rubrics that extractors use DURING extraction, not just for post-scoring. Dual standard: "Is this what the expert would approve?" AND "Does this meet Jason's build quality?" |
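One possible rubrics.json shape that supports both roles — live gate during extraction and post-hoc scoring — with auto-fail entries for anti-patterns. Every name here is illustrative:

```typescript
// Illustrative rubrics.json shape. Dual standard: expert framework + GOLDEN+SHARP.
// All names are assumptions, not the shipped schema.

interface RubricCriterion {
  id: string;                 // e.g. "expert.S" or "golden.G"
  source: "expert-framework" | "golden-sharp";
  question: string;           // what the scorer asks of the artifact
  scale: { min: number; max: number }; // 1-5 in the expert-framework case
  passAt: number;             // minimum acceptable score for the live gate
  autoFail?: string[];        // anti-patterns that fail the artifact outright
}

interface Rubrics {
  expert: string;
  criteria: RubricCriterion[];
  appliesTo: Array<"soul" | "voice" | "framework" | "resource" | "offer">;
}
```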
Layer 2 — Extraction. Five parallel extractors, each producing a structured JSON. Every output is scored against rubrics before acceptance.
| Skill | Extracts | Expanded Scope |
|---|---|---|
| soul-extractor | Governing framework, values, anti-patterns, background behaviors, canonical statements | Add: proprietary thinking patterns (Module 4 depth), decision heuristics |
| voice-extractor* | Tone, vocabulary, energy spectrum, sentence structure, contrast-pair principles | Add: pattern-breaking elements (Module 7) — contrarian positions, zero-fluff markers, anti-generic safeguards, surprise injection points |
| framework-extractor* | Phases, stacks, cross-phase bridges, routing logic | Add: micro-units (Module 5) — micro-prompts, reasoning templates, transformational reframes, mini-lessons. Add: meta-structures (Module 6) — temporal scaffolding, progression logic, content architecture |
| resource-extractor | Books, podcasts, videos, courses, external frameworks, mentors | No change needed |
| offer-extractor | NEW: Expert's business offers, pricing, CTA psychology, introduction gates, variable ratio control | Produces offers.json: every offer (name, price, URL, audience fit), CTA patterns, introduction gates (what user state triggers which offer), anti-patterns (never push during vulnerable moments) |
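A possible offers.json shape covering the elements listed in the table — offers, CTA patterns, introduction gates keyed to user state, anti-patterns, and the variable ratio default. Field names are assumptions:

```typescript
// Illustrative offers.json shape; all field names are assumptions.

interface Offer {
  name: string;
  price: string;             // e.g. "$1,997" or "free"
  url: string;
  audienceFit: string;       // who this offer is for
}

interface IntroductionGate {
  trigger: string;           // user state that warrants the offer, e.g. "completed Phase 2 tool"
  offer: string;             // Offer.name to surface
  framing: string;           // how the expert would naturally introduce it
}

interface OffersFile {
  offers: Offer[];
  ctaPatterns: string[];        // the expert's own call-to-action language
  introductionGates: IntroductionGate[];
  antiPatterns: string[];       // e.g. "never push during vulnerable moments"
  valueToBusinessRatio: number; // default 0.85 value / 0.15 business (see Section 1)
}
```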
Layer 2.5 — Extraction Audit (gap-analyzer). Cross-extraction consistency check. Schema completeness audit. Contradiction detection. Generates prioritized follow-up questions. Critical gaps BLOCK compilation.
Layer 3 — Build (clone-compiler + lead-magnet-builder + onboarding-builder). Compile all extractions into system prompt (12-section structure), knowledge files, tool configs, lead magnet assessment, and gamified onboarding flow. Clone-compiler now includes the user profile schema and offer integration directives.
Layer 3.5 — Validation.
| Skill | Process | Scoring Dimensions | Threshold |
|---|---|---|---|
| clone-tester | 1. Take compiled clone + holdout set of real expert content. 2. Generate test scenarios (questions the expert has actually answered). 3. Run clone through each scenario. 4. Compare clone output vs real expert output. 5. Score on 8 dimensions. 6. Produce human-readable audit sheet. 7. Go/No-Go decision + top 3 improvements. | 1. Tone/voice match. 2. Word count/density match. 3. Vibe/energy match. 4. Answer quality (correctness per methodology). 5. Framework alignment (right tool applied?). 6. Offer integration naturalness. 7. Expert framework score (custom SOUL+FLOW equivalent). 8. Anti-collapse score (didn't drift to generic ChatGPT?). | 85%+ aggregate for production deployment. 90%+ on tone and framework alignment. Zero tolerance on anti-patterns. Human audit sheet reviewed before shipping. |
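A sketch of the Go/No-Go decision with the thresholds from the table encoded directly — 85%+ aggregate, 90%+ on tone and framework alignment, zero anti-pattern violations. Dimension keys and equal weighting are assumptions:

```typescript
// Illustrative Go/No-Go check for clone-tester. Dimension keys and weighting
// (simple mean) are assumptions; thresholds come from the table above.

type Dimension =
  | "toneVoice" | "wordDensity" | "vibeEnergy" | "answerQuality"
  | "frameworkAlignment" | "offerNaturalness" | "expertFramework" | "antiCollapse";

interface TestRun {
  scores: Record<Dimension, number>; // each 0..1, aggregated across all scenarios
  antiPatternViolations: number;
}

function goNoGo(run: TestRun): { go: boolean; reasons: string[] } {
  const values = Object.values(run.scores);
  const aggregate = values.reduce((a, b) => a + b, 0) / values.length;
  const reasons: string[] = [];

  if (aggregate < 0.85) reasons.push(`Aggregate ${aggregate.toFixed(2)} below 0.85`);
  if (run.scores.toneVoice < 0.9) reasons.push("Tone/voice below 0.90");
  if (run.scores.frameworkAlignment < 0.9) reasons.push("Framework alignment below 0.90");
  if (run.antiPatternViolations > 0) reasons.push("Anti-pattern violations present");

  return { go: reasons.length === 0, reasons };
}
```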
Layer 4 — Orchestration (boarding-orchestrator). Chief of Staff agent. Manages the kanban board (~24 cards per expert). Lazy-loads skills on demand. Enforces quality gates between layers. Handles both --recon (full pipeline from Layer 0) and warm boarding (sources already exist, skip to Layer 0.25).
User Profile System (Architectural, Not a Separate Skill)
Baked into clone-compiler's output architecture. Not extracted — designed.
| Component | Description | Where It Lives |
|---|---|---|
| User Profile Schema | Communication preference, journey stage, completed tools, generated artifacts, stated goals, constraints | System prompt Section 12 (Background Systems) |
| Progressive Profiling | Each tool interaction enriches the profile. First name, urgency, context captured naturally. | Tool activation protocol (Section 5) |
| Cross-Tool Data Bridges | Artifact carry-forward: Tool A's output becomes Tool B's input automatically. | Cross-Phase Intelligence (Section 8) |
| Personalization Hooks | AI adjusts depth, pace, examples, and offer timing based on what it knows. | Pathfinder Mode (Section 7) |
| User Dashboard | "Your OS knows these things about you" — transparency + stickiness. | Platform frontend feature |
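A possible shape for the user profile and the artifact carry-forward bridge described above; every name is illustrative:

```typescript
// Illustrative user profile + cross-tool artifact bridge; names are assumptions.

interface Artifact {
  toolId: string;            // which tool produced it, e.g. a hypothetical "career-map"
  createdAt: string;         // ISO timestamp
  summary: string;           // short description the AI can reference later
  data: Record<string, unknown>;
}

interface UserProfile {
  firstName?: string;
  communicationPreference?: "direct" | "exploratory" | "step-by-step";
  journeyStage?: string;     // where they are in the expert's methodology
  statedGoals: string[];
  constraints: string[];
  completedTools: string[];
  artifacts: Artifact[];
}

// Artifact carry-forward: Tool B automatically receives the relevant outputs of Tool A.
function inputsForTool(
  profile: UserProfile,
  toolId: string,
  feeds: Record<string, string[]>, // which earlier tools feed each tool
): Artifact[] {
  const upstream = feeds[toolId] ?? [];
  return profile.artifacts.filter((a) => upstream.includes(a.toolId));
}
```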
Section 5
Second-Order Effects
For End Users (Stickiness & Value)
- Flywheel effect. More use = better alignment = better results = more use. The 50th session is 10x more valuable than the 1st because the OS knows the user's context, completed tools, artifacts, and communication preference.
- Artifact compounding. Users create real deliverables (career maps, design briefs, financial plans) that carry across tools. Switching to a competitor means losing all accumulated context.
- Natural business integration. Instead of cold CTAs, the AI knows when the user genuinely needs more than it can provide. "Based on where you are in Phase 2, Samuel's live masterclass on this exact topic is next week." Users experience this as helpful, not salesy.
For Experts (Business & Scale)
- Time asymmetry. Expert invests ~20 hours in boarding (content collection, framework definition, reviews). Gets back 10,000+ hours of AI coaching delivery per year.
- IP documentation as side effect. The extraction process produces soul.json, voice.json, frameworks.json — the most structured documentation of the expert's IP they've ever had. Their "IP insurance policy."
- Market intelligence. User interaction data (with consent) reveals which tools get used, where users get stuck, what questions come up. The expert gets smarter about their own audience.
- Content generation. Micro-units, reasoning templates, and reframes extracted during the process become social media content, course modules, and training materials.
- Revenue without time. The OS handles 80% of audience questions. Premium offers surface naturally. Expert focuses on the 20% that requires their personal touch.
For Athio (Platform & Infrastructure)
- Network effects across experts. Each extraction improves the system. Patterns learned from Samuel improve future onboarding. By expert #5, the pipeline is 3x faster.
- Cross-expert discovery. Users who love Expert A might benefit from Expert B's complementary methodology. Platform-level routing creates a marketplace effect.
- Expert rivalry as growth engine. When Expert A's clone performs well, Expert B's audience notices. "Why doesn't MY coach have this?"
- Agentic cost curve. Deep research ~$1-2. Full extraction ~$5-10. Compilation ~$2-3. Testing ~$3-5. Total: $15-25/expert in API calls. Compare to weeks of manual work.
- Quality bar as competitive moat. 85%+ validated fidelity with human audit. Others ship "close enough." We ship "proven."
- Graduated revenue model. Demo (free, sells JV) → Alpha (5 users, validates) → Beta (50 users, scales) → Production (unlimited). Expert risk is zero until production.
Section 6
System Build Roadmap: How to Build This
This section is the development roadmap. It describes how to build the system itself — the skills, orchestrator, testing framework, and deployment automation. This is done ONCE, then the system is used repeatedly per expert (see Section 7: Execution Playbook).
Phase A: Foundation (one keystone new skill plus expansions of existing skills)
A1. Build expert-framework-creator skill
The keystone new skill. Reads raw sources, identifies unconscious quality gates, produces custom framework acronym with 1-5 scoring and recursive interconnections. Reference: Phase 4 of Expert Thinking Clone Workflow + SOUL+FLOW as gold standard output.
File: ~/.claude/skills/expert-framework-creator/SKILL.md
Depends on: Nothing. Standalone design. First to build.
A2. Expand rubric-builder to accept expert framework
Modify rubric-builder to take expert-framework.json as primary input. Create dual validation rubrics: expert's framework (philosophical) + GOLDEN+SHARP (infrastructure). Rubrics must be usable DURING extraction as live quality gates.
File: ~/.claude/skills/rubric-builder/SKILL.md (modify)
Depends on: A1 (needs expert-framework.json schema)
A3. Expand voice-extractor with pattern-breaking
Add explicit Module 7 extraction: contrarian positions, zero-fluff markers, anti-generic safeguards, surprise injection points. These prevent model collapse — the clone drifting toward the LLM's statistical mean.
File: ~/.claude/skills/voice-extractor/SKILL.md (modify)
Depends on: Nothing. Can be done in parallel with A1/A2.
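A sketch of how pattern-breaking elements might be stored and used as a runtime anti-collapse check; the phrase lists and scoring rule are assumptions for illustration only:

```typescript
// Illustrative anti-collapse check using pattern-breaking elements from voice extraction.
// Phrase lists and the scoring rule are assumptions.

interface PatternBreaking {
  contrarianPositions: string[]; // stances the expert takes against common advice
  zeroFluffMarkers: string[];    // phrasing the expert uses instead of filler
  genericTells: string[];        // phrases that signal drift toward the LLM's mean,
                                 // e.g. "unlock your potential", "in today's fast-paced world"
}

function antiCollapseScore(draft: string, pb: PatternBreaking): number {
  const text = draft.toLowerCase();
  const genericHits = pb.genericTells.filter((t) => text.includes(t.toLowerCase())).length;
  const voiceHits = pb.zeroFluffMarkers.filter((m) => text.includes(m.toLowerCase())).length;
  // Penalize generic tells, lightly reward the expert's own markers; clamp to 0..1.
  const raw = 1 - 0.2 * genericHits + 0.05 * voiceHits;
  return Math.max(0, Math.min(1, raw));
}
```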
A4. Expand framework-extractor with micro-units + meta-structures
Add Module 5 micro-units (micro-prompts, reasoning templates, transformational reframes, mini-lessons) and Module 6 meta-structures (temporal scaffolding, progression logic, content architecture). These are the LEGO bricks the AI uses in real-time conversation.
File: ~/.claude/skills/framework-extractor/SKILL.md (modify)
Depends on: Nothing. Can be done in parallel.
Phase B: New Skills
B1. Build offer-extractor skill
Extract business offers, pricing, CTA psychology, introduction gates, variable ratio control. Produces offers.json. This is what makes the AI both helpful and commercially viable for the expert.
File: ~/.claude/skills/offer-extractor/SKILL.md
Reference: Module 3 (CTA Psychology) + Brad's ecosystem map + Expert Boarding Workflow Phase 5
B2. Build clone-tester skill
Run scripted scenarios, compare to real expert samples, score on 8 dimensions, produce human audit sheet, Go/No-Go at 85%+. This is the quality gate between "built" and "deployed."
File: ~/.claude/skills/clone-tester/SKILL.md
Reference: GOLDEN_SHARP_TESTER_KF + Expert Thinking Clone Phase 7 (1000 simulations)
B3. Build demo-compiler skill
Lightweight compiler that takes deep-research + surface voice patterns + design system and produces a deployable mini-demo (1-2 tool interactions, branded, live URL). Purpose: close JV deals.
File: ~/.claude/skills/demo-compiler/SKILL.md
Reference: Expert Boarding Workflow "Real-Time Proposal System" section
Phase C: Integration & Orchestration
C1. Update boarding-orchestrator for v2 layers
Update kanban template: ~24 cards across 8 layers. Add Layer 0.25, 0.5, 1, 3.5 cards. Update dependency chains. Support --recon (full), --warm (skip Layer 0), and --demo (fast lane only) modes.
File: ~/.claude/skills/boarding-orchestrator/SKILL.md (modify)
Depends on: A1-A4, B1-B3 complete
C2. Update clone-compiler for user profile + offers
Add user profile schema to system prompt output (Section 12). Add offer integration directives (Section 8 cross-phase intelligence). Add variable ratio control to pathfinder mode.
File: ~/.claude/skills/clone-compiler/SKILL.md (modify)
C3. Dogfood: Run full pipeline on Samuel Ngu / Align360
Execute the complete v2 pipeline on Samuel as the zero-to-one test. Document every friction point. Update skills in real-time as issues surface. This IS the validation of the system itself.
Phase D: Automation & Scaling
D1. Build deployment automation
After clone-tester passes, automatically: push clone config to MasteryOS platform, publish sales pages to expert subdomain (e.g., align360.io), publish lead magnets, configure experiences/resources in frontend. Requires OS frontend/backend code repos.
D2. Build standalone GUI
Web interface that shows extraction progress, provides human steps and rules, gathers inputs, indicates where files should be placed, visualizes quality scores, displays audit sheets, and manages the kanban board. This replaces the CLI-only workflow for non-technical operators.
D3. Full automation mode
After validating with Samuel (zero-to-one) and one more expert (one-to-many), the system runs end-to-end with human checkpoints only at: framework review (Layer 0.25), extraction audit (Layer 2.5), and clone-tester review (Layer 3.5). Everything else is automated.
Section 7
Expert Execution Playbook: How to Run This Per Expert
This section is the operational playbook. It describes how to execute the system for each new expert, step by step. Both LLM agents and human operators use this document. It will be recursively updated as we dogfood with Samuel and subsequent experts.
Prerequisites
- Expert has been qualified (Titan or Strong Fit tier)
- JV contract signed (or pay-to-play agreement)
- Expert name, brand, and known context captured
- Source collection begun (minimum 15 pieces, 50K words target)
- Environment: PPLX_API_KEY, MASTERYBOOK_API_KEY, workspace initialized
Step-by-Step Execution
Step 0: Initialize Workspace
boarding-orchestrator init {expert-slug} --recon
Creates:
_workspaces/{expert-slug}/
  sources/              ← Deep research outputs land here
    raw-sources/        ← Upload expert content here
  extractions/          ← Structured extraction JSONs
  rubrics/              ← Quality scoring rubrics
  build/                ← Compiled clone artifacts
  validation/           ← Test results + audit sheets
  kanban.json           ← 24-card pipeline tracker
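A possible minimal shape for kanban.json — cards keyed to layers, with dependencies and human-checkpoint flags. Card ids and fields are illustrative:

```typescript
// Illustrative kanban.json card shape (~24 cards per expert); names are assumptions.

type CardStatus = "todo" | "in-progress" | "blocked" | "awaiting-review" | "done";

interface KanbanCard {
  id: string;               // e.g. "L0.25-expert-framework"
  layer: "L0" | "L0.25" | "L0.5" | "L1" | "L2" | "L2.5" | "L3" | "L3.5" | "L4";
  skill: string;            // e.g. "expert-framework-creator"
  dependsOn: string[];      // card ids that must be done first
  humanCheckpoint: boolean; // true for framework review, extraction audit, clone-tester sign-off
  status: CardStatus;
  artifacts: string[];      // workspace paths produced by this card
}

interface Kanban {
  expertSlug: string;
  mode: "recon" | "warm" | "demo";
  cards: KanbanCard[];
}
```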
Step 1: Layer 0 — Recon
Agent: deep-research → expert-recon → masterybook-sync
Human: Review intelligence report. Upload any additional source files the expert provides (transcripts, courses, books, frameworks). Aim for 20-50 source files total.
Quality gate: Recon summary reviewed. Source count sufficient (≥15 pieces). MasteryBook notebook accessible to team.
Step 2: Layer 0.25 — Framework Foundation
Agent: expert-framework-creator reads ALL raw sources. Identifies unconscious quality gates. Produces expert-framework.json.
Human: CRITICAL REVIEW POINT. Read the proposed framework acronym. Ask: "Does this capture how [Expert] evaluates their own work?" Share with expert if available. Iterate until the framework feels right.
Quality gate: Framework acronym approved by human (and ideally by expert). Each letter has 3+ real examples from content. Recursive interconnections mapped.
Step 2.5: Layer 0.5 — Demo (Optional, Parallel)
Agent: demo-compiler produces minimal live demo.
Human: Review demo. Share with expert or prospect. Iterate if needed.
Quality gate: Demo feels like the expert (even if rough). Deployable URL works.
Step 3: Layer 1 — Quality Gates Setup
Agent: rubric-builder creates dual-standard rubrics from expert-framework.json + GOLDEN+SHARP.
Human: Review rubrics. Verify scoring criteria make sense. Adjust thresholds if needed.
Quality gate: Rubrics are specific to this expert (not generic). Auto-fail conditions defined for anti-patterns.
Step 4: Layer 2 — Extraction (Parallel)
Agent: 5 extractors run in parallel: soul, voice, framework, resource, offer. Each scores output against rubrics before saving.
Human: Spot-check extraction outputs. Flag anything that "doesn't sound like them." Add source material for thin areas.
Quality gate: All 5 extraction files present. Rubric scores above threshold on each. No critical contradictions.
Step 5: Layer 2.5 — Extraction Audit
Agent: gap-analyzer audits completeness, consistency, and contradictions across all 5 extraction files.
Human: Review gaps.json. Answer follow-up questions or gather additional source material. Critical gaps MUST be resolved before proceeding.
Quality gate: Zero critical gaps. Important gaps documented with resolution plan.
Step 6: Layer 3 — Build
Agent: clone-compiler produces system prompt + knowledge files + tool configs. lead-magnet-builder + onboarding-builder run in parallel.
Human: Read compiled system prompt. Does it feel like the expert? Are the tool menus correct? Is the offer integration natural?
Quality gate: 12-section system prompt complete. Knowledge files cover all active phases. Lead magnet assessment functional. Onboarding flow routes correctly.
Step 7: Layer 3.5 — Validation
Agent: clone-tester runs scripted scenarios. Compares to real expert samples. Produces audit sheet with 8-dimension scoring.
Human: CRITICAL REVIEW POINT. Review audit sheet. Verify tone, answer quality, framework alignment. If below 85%, identify specific weaknesses and iterate (re-extract, re-compile, re-test).
Quality gate: 85%+ aggregate score. 90%+ on tone and framework alignment. Zero anti-pattern violations. Human signs off on audit sheet.
Step 8: Deploy
Agent (future): Push to platform. Publish sales pages. Publish lead magnets. Configure experiences.
Human (current): Manual deployment to MasteryOS. Publish pages via NowPage CLI. Configure platform manually. Invite alpha users (5).
Quality gate: Alpha users can: complete lead magnet → enter platform → run first tool → access resources. Zero dead ends.
Iteration Loop
After alpha deployment, the loop tightens:
- Collect user feedback: "Does this sound like [Expert]?" "Where did it drift?" "What frameworks are missing?"
- Feed feedback into extraction updates (re-run specific extractors on new/additional sources)
- Re-compile clone with updated extractions
- Re-test with clone-tester (regression + new scenarios)
- Deploy update
- Repeat until quality stabilizes at 90%+
Section 8
Vision: From CLI to Full Automation
Phase 1: CLI + Skills (Current — Building Now)
All 18 skills as Claude Code SKILL.md files. Boarding-orchestrator manages kanban. Human drives via CLI. Dogfooding with Samuel/Align360 as zero-to-one.
Phase 2: Platform Integration (After Samuel Ships)
Connect pipeline to MasteryOS frontend/backend repos. Clone-tester pushes directly to platform on pass. Sales pages auto-publish to expert subdomains. Lead magnets deploy to NowPage. Experiences and resources auto-configure.
- Expert subdomain: align360.io for Samuel, [expert].io for each new partner
- Auto-publish: Sales pages, lead magnets, resource libraries, onboarding flows
- Platform push: Clone config (system prompt, knowledge files, tool configs) pushed directly
- Human checkpoints: Framework review, extraction audit, clone-tester sign-off only
Phase 3: Standalone GUI (After 2-3 Experts Validated)
Web application that replaces CLI for non-technical operators (Derek, future BD team):
- Visual kanban board showing pipeline progress
- File upload interface for expert sources
- Human step cards with rules and gathering inputs
- Quality score dashboards with audit sheet viewer
- One-click deployment after validation passes
- Expert management (multiple pipelines in parallel)
- Demo generator (one-click from deep research)
Phase 4: Full Automation (5+ Experts Running)
End-to-end automated pipeline with three human checkpoints:
- Framework review (Layer 0.25) — Human confirms "this captures the expert's quality intuition"
- Extraction audit (Layer 2.5) — Human reviews gaps and approves extraction quality
- Clone-tester sign-off (Layer 3.5) — Human reviews audit sheet and approves for deployment
Everything else runs autonomously: research, framework creation, rubric building, extraction, compilation, testing, deployment, publishing.
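One way those three checkpoints could be expressed as pipeline configuration so every other stage runs unattended; stage names mirror the layers in this document, but the config format itself is an assumption:

```typescript
// Illustrative pipeline config: three human checkpoints, everything else autonomous.
// Stage names mirror the layers in this document; the config format is an assumption.

interface Stage {
  layer: string;
  skill: string;
  humanCheckpoint: boolean; // pause and wait for sign-off before continuing
}

const pipeline: Stage[] = [
  { layer: "L0",     skill: "expert-recon",             humanCheckpoint: false },
  { layer: "L0.25",  skill: "expert-framework-creator", humanCheckpoint: true  }, // framework review
  { layer: "L1",     skill: "rubric-builder",           humanCheckpoint: false },
  { layer: "L2",     skill: "extractors (x5)",          humanCheckpoint: false },
  { layer: "L2.5",   skill: "gap-analyzer",             humanCheckpoint: true  }, // extraction audit
  { layer: "L3",     skill: "clone-compiler",           humanCheckpoint: false },
  { layer: "L3.5",   skill: "clone-tester",             humanCheckpoint: true  }, // audit sheet sign-off
  { layer: "Deploy", skill: "deployment automation",    humanCheckpoint: false },
];
```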
The endgame: Derek has a discovery call with Expert X on Monday. By Tuesday, deep research is complete and a live demo URL exists. By Friday, extractions are done and the clone is in testing. By next Monday, alpha users are onboarded. Total human time: ~8 hours of review across 5 days. Total agent time: ~2 hours of API calls. Total cost: ~$25 in API fees. That's the asymmetric leverage.
Reference
File & Skill Registry
Skill Files (All at ~/.claude/skills/)
| Skill | Status | Layer |
|---|---|---|
| deep-research | Built | L0 |
| expert-recon | Built | L0 |
| masterybook-sync | Built | L0 |
| expert-framework-creator | To Build | L0.25 |
| demo-compiler | To Build | L0.5 |
| rubric-builder | Expand | L1 |
| soul-extractor | Built | L2 |
| voice-extractor | Expand | L2 |
| framework-extractor | Expand | L2 |
| resource-extractor | Built | L2 |
| offer-extractor | To Build | L2 |
| gap-analyzer | Built | L2.5 |
| clone-compiler | Expand | L3 |
| lead-magnet-builder | Built | L3 |
| onboarding-builder | Built | L3 |
| clone-tester | To Build | L3.5 |
| boarding-orchestrator | Update | L4 |
| design-system-extractor | Built | Utility |
Key Source Documents (E:\align360\Extractions\)
| Document | What It Contains | Relevance |
|---|---|---|
| 2hat v6.23 | Jason's recursive identity emulator (master version) | Architecture template for all expert clones |
| 2hat v6.21 | Expert-validated modular version (7-expert panel) | Clean modular design reference |
| GOLDEN_SHARP_TESTER_KF | Universal quality testing system (125 simulations) | Reference for clone-tester skill |
| Expert Thinking Clone Workflow | 7-phase complete standalone cloning process | End-to-end process reference |
| SOUL+FLOW Frameworks (Matt) | Matt Gottesman's custom quality framework | Gold standard for expert-framework-creator output |
| Modules 2-8 (Robust Extraction) | 8-module extraction system | Reference for extractor expansion |
| Expert Boarding Workflow HTML | JV boarding process (Discover → Launch) | Business process reference |
| Brad Himel Intelligence Synthesis | Gold standard research output | Reference for deep-research quality |
| Recursive Clone & Artificial Cognition | Academic paper: ACS theory mapping | Theoretical grounding for architecture |
Published Pages (align360.asapai.net)
| Path | Content |
|---|---|
| /recursive-clone-v2 | THIS DOCUMENT — Complete architecture, build roadmap, execution playbook |
| /boarding-architecture | v1 architecture PRD (14 skills, will be superseded) |
| /samuel-ngu-intel | Samuel Ngu intelligence report (deep research output) |
| /a360-command-center | Command center dashboard |
| /wiring-for-impact | Lead magnet assessment |
| /launch-roadmap | Sprint plan |
| /positioning-brief | Positioning strategy |
| /boarding-pack | Athio boarding template |
Footer
Document Control
| Field | Value |
|---|---|
| Version | 2.0 |
| Date | March 11, 2026 |
| Authors | Jason MacDonald (Athio) + Claude Opus 4.6 |
| Status | Active — Living Document |
| Supersedes | boarding-architecture v1 (12 skills, 5 layers) |
| Next update | After Samuel/Align360 dogfood complete |
This document captures the complete system design as of March 2026. It serves as the source of truth for both system development and per-expert execution. It will be recursively updated as we learn from the Samuel/Align360 build and subsequent expert deployments.