Architecture v2.0 — Deep Dive

Recursive Clone System: Complete Architecture, Build Roadmap & Execution Playbook

The complete design for automated expert cloning — from first principles through production deployment. Incorporating lessons from Jason (:2hat), Matt Gottesman (SOUL+FLOW), Brad Himel (TigerQuestOS), and the Align360 pipeline. Two documents in one: how to BUILD the system, and how to EXECUTE it per expert.

18 skills | 8 layers | ~24 kanban cards per expert | 85%+ fidelity target | March 2026

First Principles: What We're Actually Building

We are building an automated system that produces a recursive thinking clone of any expert. Not a Delphi-style voice mimic. Not a ChatGPT wrapper with a custom system prompt. A system that thinks like the expert — applies their frameworks, makes decisions the way they would, and gets better at helping each user over time.

The Six Properties of a Real Expert Clone

  1. Recursive thinking, not pattern matching. The clone runs the expert's actual decision heuristics (the Hat Debate pattern: generate options, critique through expert's lens, override if quality is insufficient). This is what separates "sounds like them" from "thinks like them."
  2. Framework application, not information delivery. The clone doesn't just explain the expert's methodology — it RUNS it with the user. Tool-by-tool, step-by-step, producing real artifacts the user keeps.
  3. Progressive personalization. Every interaction enriches the user profile. Artifacts from Tool A become inputs for Tool B. The 50th session is dramatically more valuable than the 1st. This makes the OS useful and sticky.
  4. Natural business integration. The expert's offers (courses, coaching, masterminds, events) surface at the right moment — when the user genuinely needs more than the AI can provide. Good for user + good for expert's revenue. Variable ratio: 85% value / 15% business by default.
  5. Progressive deployment. From a 5-minute demo that closes JV deals, to alpha with 5 users, to production at scale. Each stage generates more data, more confidence, more revenue.
  6. 85%+ fidelity, validated. Not vibes — quantitative scoring against the expert's OWN quality framework. A user or evaluating LLM would swear the outputs came from the original expert. The gap that remains is the upsell to the real person.

The moat: Everyone can copy "sounds like them." Nobody can copy "thinks like them." The recursive decision architecture — extracting HOW an expert chooses their words, not just WHAT words they choose — is the intellectual property that makes this defensible. We extract the writer's room, not the script.

The Two Users of This System

| User | What They Need | What They Get |
| --- | --- | --- |
| The Expert (JV Partner) | Scale their thinking without their time. Create recurring revenue from existing IP. Extend value of live coaching/events. | An AI OS that handles 80% of their audience's questions. Sales pages, lead magnets, onboarding flows. Business intelligence from user interactions. |
| The End User (Expert's Audience) | Access to the expert's frameworks 24/7. Personalized guidance, not generic advice. Real artifacts they can use in their life/business. | A coaching OS that applies the expert's actual tools to their specific situation. Gets smarter about them over time. Naturally suggests next steps including expert's premium offers. |

Comparative Analysis: Four Builds

Each implementation taught us something different. The current architecture synthesizes all four.

Jason (:2hat v3.1 → v6.23)

Meta-System

Built iteratively over months. Jason cloning Jason. Created the cognitive architecture template: 6-phase recursive loop, GOLDEN+SHARP quality filters, Hat Debate (LLM → Critique → Override), MCP conversion engine, Memory Integrity system.

Key insight: :2hat is not a persona — it's a universal template for constraining any LLM to think like a specific person. Every expert needs their own GOLDEN+SHARP equivalent.

What transferred: The architecture pattern. Recursive validation. Identity Lock. Modular separation of concerns.

Matt Gottesman (SOUL+FLOW)

First External Proof

Full 7-phase Expert Thinking Clone Workflow. All 8 extraction modules. Created SOUL (input validation) + FLOW (output validation) to replace GOLDEN+SHARP. _2hat Matt Edition v6. 1000-simulation validation. 97% consistency claimed.

Key insight: Phase 4 (Framework Acronym Extraction) is the critical step. Each expert's quality framework must be extracted from THEIR content, not imposed from Jason's.

Gap identified: Entirely manual. No agent orchestration. No progressive demo path. Weeks of work per expert.

Brad Himel (TigerQuestOS)

Business Model Proof

Intelligence synthesis completed (gold standard output). Value ladder created. Gap analysis done. Expert boarding workflow documented. But clone didn't reach production.

Key insight: The boarding workflow HTML is the business process bible — discover, qualify, score, onboard, build, launch. The "80% pitch": we sell the imperfection (the last 20% is the upsell to the real expert).

Gap identified: Content was "all in documents, not yet AI-ified." No expert-specific quality framework created.

Bridger (Failure Case)

Critical Lessons

Early test case. UX failure: "I created an account and didn't know what to do next." Pivoted ICP mid-process. Limited content. Ran out of credits in 3 messages.

Key lessons: Lock ICP early. Solve blank-stare UX. 15+ pieces / 50K+ words minimum. Demo accounts need real credits. The welcome flow is THE most critical UX moment.

What the Original 8-Module System Got Right

Jason's pre-Claude Code methodology (the Robust Extraction System) decomposed expert cloning into 8 dimensions. Here's how they map to our current pipeline:

| Original Module | Current Skill | Coverage | Gap |
| --- | --- | --- | --- |
| Module 2: Voice & Style | voice-extractor | ~80% | Missing pattern-breaking (Module 7) |
| Module 3: CTA Psychology | NONE | 0% | Entire module missing |
| Module 4: Embedded IP | soul-extractor | ~60% | Thin on proprietary thinking patterns |
| Module 5: Modularization Units | framework-extractor | ~50% | Missing micro-units (prompts, reframes, templates) |
| Module 6: Meta-Structures | framework-extractor | ~40% | Missing temporal scaffolding / progression logic |
| Module 7: Pattern-Breaking | voice-extractor (partial) | ~30% | Contrarian positions, anti-generic safeguards |
| Module 8: Extractable Prompts | clone-compiler | ~70% | Missing identity lock depth |
| Phase 4: Framework Acronym | NONE | 0% | Expert's own quality framework not created |
| Phase 7: 1000 Simulations | NONE | 0% | No validation/testing system |
| Offer Ecosystem Mapping | NONE | 0% | No business intelligence extraction |
| User Psychographic Profiling | onboarding-builder (partial) | ~40% | No progressive user profile system |

Architecture v2: Resequenced Layer Map

Critical resequencing from v1: The expert's quality framework (their GOLDEN+SHARP equivalent) now comes FIRST — at Layer 0.25 — because it defines the standard that everything else is scored against. Rubrics are built at Layer 1 BEFORE extraction, so every extractor output is quality-gated from the start. This corrects the sequencing of the Matt build, where the framework was created only after extraction.

Layer 0 — Recon (3 skills)
  deep-research → expert-recon → masterybook-sync
  // Gather all public intelligence + raw sources

Layer 0.25 — Framework Foundation (1 skill) NEW
  expert-framework-creator
  // Reads RAW sources. Creates expert's quality framework (SOUL+FLOW equivalent)
  // This is THE standard everything downstream gets scored against

Layer 0.5 — Demo Fast Lane (1 skill, parallel, non-blocking) NEW
  demo-compiler
  // Takes deep-research + surface voice → minimal live demo in minutes
  // Purpose: JV sales. "Here's what your AI would look like"

Layer 1 — Quality Gates Setup (1 skill)
  rubric-builder (expanded)
  // Builds scoring rubrics FROM expert's framework + GOLDEN+SHARP meta-rubric
  // Rubrics are ACTIVE during extraction, not applied after

Layer 2 — Extraction (5 skills, parallel, scored against rubrics)
  soul-extractor | voice-extractor* | framework-extractor* | resource-extractor | offer-extractor
  // Each extraction scored against rubrics before acceptance
  // *expanded scope: voice includes pattern-breaking, framework includes micro-units

Layer 2.5 — Extraction Audit (1 skill)
  gap-analyzer
  // Cross-extraction consistency + completeness + contradiction check
  // Generates specific follow-up questions for missing data

Layer 3 — Build (3 skills)
  clone-compiler → lead-magnet-builder | onboarding-builder
  // Compile system prompt, knowledge files, tool configs
  // Build lead magnet + onboarding flow in parallel

Layer 3.5 — Validation (1 skill) NEW
  clone-tester
  // Run scripted scenarios through compiled clone
  // Compare outputs to real expert samples
  // Score: tone, style, word count, vibe, answer quality, framework alignment
  // 85%+ threshold for deployment. Human audit sheet.

Layer 4 — Orchestration (1 skill)
  boarding-orchestrator
  // Chief of Staff. Manages kanban. Lazy-loads skills. Enforces gates.

Layer 5 — Deploy & Publish (automated)
  // Push clone to platform
  // Publish sales pages to expert subdomain (e.g., align360.io)
  // Publish lead magnets, onboarding flows
  // Configure experiences/resources in OS frontend
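The layer sequence above can be sketched as data. This is a hypothetical illustration, not the boarding-orchestrator's actual format; the layer IDs, skill names, and the blocking/non-blocking split mirror the map, but the dict structure and function names are assumptions.

```python
# Hypothetical sketch of the v2 layer map as data. Only Layer 0.5
# (the demo fast lane) is non-blocking; every other layer gates the next.
LAYERS = [
    {"id": "0",    "name": "Recon",                "skills": ["deep-research", "expert-recon", "masterybook-sync"], "blocking": True},
    {"id": "0.25", "name": "Framework Foundation", "skills": ["expert-framework-creator"],                          "blocking": True},
    {"id": "0.5",  "name": "Demo Fast Lane",       "skills": ["demo-compiler"],                                     "blocking": False},
    {"id": "1",    "name": "Quality Gates Setup",  "skills": ["rubric-builder"],                                    "blocking": True},
    {"id": "2",    "name": "Extraction",           "skills": ["soul-extractor", "voice-extractor", "framework-extractor", "resource-extractor", "offer-extractor"], "blocking": True},
    {"id": "2.5",  "name": "Extraction Audit",     "skills": ["gap-analyzer"],                                      "blocking": True},
    {"id": "3",    "name": "Build",                "skills": ["clone-compiler", "lead-magnet-builder", "onboarding-builder"], "blocking": True},
    {"id": "3.5",  "name": "Validation",           "skills": ["clone-tester"],                                      "blocking": True},
    {"id": "4",    "name": "Orchestration",        "skills": ["boarding-orchestrator"],                             "blocking": True},
    {"id": "5",    "name": "Deploy & Publish",     "skills": [],                                                    "blocking": True},
]

def critical_path(layers):
    """Blocking layers in order; non-blocking layers run in parallel."""
    return [layer["id"] for layer in layers if layer["blocking"]]

# The demo fast lane never appears on the critical path.
assert "0.5" not in critical_path(LAYERS)
```

The point of the data shape: the orchestrator can enforce gates by walking `critical_path` in order while dispatching the fast lane independently.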

Why Framework-First (Layer 0.25) Changes Everything

In the Matt build, the expert's quality framework (SOUL+FLOW) was created AFTER extraction (Phase 4 of 7). This means the extractors had no quality standard during their work — they extracted blind, and the framework was reverse-engineered from the outputs.

In v2, the framework is created from RAW SOURCES (transcripts, documents, deep research output) — not from extraction outputs. This means the extractors work against a defined quality standard from their first pass, and the framework reflects the expert's own material rather than being reverse-engineered from downstream outputs.

Complete Skill Inventory (18 Skills)

L0 Recon

Gather all public intelligence and raw source materials before any extraction begins.

deep-research
expert-recon
masterybook-sync
| Skill | Input | Output | Engine |
| --- | --- | --- | --- |
| deep-research | Expert name + known context | deep-research-report.md + deep-research.json + sources.json | Perplexity Sonar Deep Research API (5 queries) |
| expert-recon | deep-research output + WebSearch/WebFetch | recon.json + recon-summary.md + raw source files | Orchestrates deep-research + web scraping + source collection |
| masterybook-sync | Workspace sources | MasteryBook notebook URL + ID | MasteryBook API (notebook creation + source upload) |
L0.25 Framework Foundation NEW

Create the expert's own quality framework — the standard everything downstream is scored against. Reads RAW sources, not extraction outputs.

expert-framework-creator
| Skill | Input | Output |
| --- | --- | --- |
| expert-framework-creator | All raw sources (transcripts, docs, research output) | expert-framework.json (custom acronym + 1-5 scoring per letter + recursive interconnections + validation examples) |

Process:
  1. Analyze expert content across all dimensions
  2. Identify unconscious quality gates (what criteria they ACTUALLY apply)
  3. Create custom framework acronym (like SOUL+FLOW, GOLDEN+SHARP)
  4. Define 1-5 scoring criteria per letter with examples from content
  5. Map recursive interconnections between elements
  6. Create input validation framework (source authenticity) + output validation framework (transformation quality)
  7. Score known expert content against framework to validate accuracy

Example: For Matt Gottesman, this produced SOUL (Soul-aligned, Organic flow, Unified integration, Leverage through being) as input validation and FLOW (Force/flow clarity, Leverage recognition, Outcome momentum, Wisdom integration) as output validation. Each letter has deep context examples from Matt's actual content. Samuel Ngu will get his own equivalent derived from FLC Wisdom Framework patterns.
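A minimal sketch of what an expert-framework.json payload could look like, using Matt's SOUL and FLOW acronyms from the example above. The field names (`letters`, `score_range`, etc.) are assumptions; the real schema is whatever the expert-framework-creator skill defines.

```python
# Illustrative expert-framework.json shape. Acronyms and letter meanings
# come from the Matt Gottesman example; everything else is hypothetical.
framework = {
    "expert": "Matt Gottesman",
    "input_validation": {
        "acronym": "SOUL",
        "letters": {
            "S": {"meaning": "Soul-aligned",           "score_range": [1, 5]},
            "O": {"meaning": "Organic flow",           "score_range": [1, 5]},
            "U": {"meaning": "Unified integration",    "score_range": [1, 5]},
            "L": {"meaning": "Leverage through being", "score_range": [1, 5]},
        },
    },
    "output_validation": {
        "acronym": "FLOW",
        "letters": {
            "F": {"meaning": "Force/flow clarity",     "score_range": [1, 5]},
            "L": {"meaning": "Leverage recognition",   "score_range": [1, 5]},
            "O": {"meaning": "Outcome momentum",       "score_range": [1, 5]},
            "W": {"meaning": "Wisdom integration",     "score_range": [1, 5]},
        },
    },
}

def acronym_consistent(block):
    # Sanity check: the acronym must spell its letter keys in order
    # (dicts preserve insertion order in Python 3.7+).
    return block["acronym"] == "".join(block["letters"])

assert acronym_consistent(framework["input_validation"])
assert acronym_consistent(framework["output_validation"])
```

A structural check like `acronym_consistent` is the kind of cheap validation step 7 of the process implies: the framework file should be internally coherent before it is used to score anything.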

L0.5 Demo Fast Lane (parallel, non-blocking) NEW

Produce a minimal but impressive demo from deep research alone. Used for JV sales. Does not block the full pipeline.

demo-compiler
| Skill | Input | Output | Purpose |
| --- | --- | --- | --- |
| demo-compiler | deep-research output + surface voice patterns + design system | Minimal live demo: 1-2 tool interactions, expert voice applied, branded design. Deployable URL. | Derek sends this during/after JV call. "Your mind is blown. The only obvious answer is yes." Collapses sales cycle from weeks to minutes. |
L1 Quality Gates Setup

Build scoring rubrics from the expert's framework BEFORE extraction begins. Rubrics are active quality gates during extraction, not post-hoc scores.

rubric-builder (expanded)
| Skill | Input | Output | Changes from v1 |
| --- | --- | --- | --- |
| rubric-builder | expert-framework.json + GOLDEN+SHARP definitions | rubrics.json with dual validation: expert's framework (philosophical fidelity) + GOLDEN+SHARP (infrastructure quality) | Now takes expert's custom framework as PRIMARY input. Creates rubrics that extractors use DURING extraction, not just for post-scoring. Dual standard: "Is this what the expert would approve?" AND "Does this meet Jason's build quality?" |
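The dual-standard gate can be sketched as a small predicate. The score fields, thresholds, and the auto-fail rule for anti-patterns are illustrative assumptions consistent with the rubric description; the real gate lives in rubrics.json.

```python
# Hypothetical dual-standard quality gate applied DURING extraction.
# A chunk must clear BOTH the expert's framework score (philosophical
# fidelity) and the GOLDEN+SHARP score (infrastructure quality).
def passes_gate(scores, expert_min=4.0, infra_min=4.0):
    """scores: {"expert_framework": 1-5 avg,
                "golden_sharp": 1-5 avg,
                "anti_pattern_hits": int (optional)}
    Anti-pattern hits are an auto-fail, matching the zero-tolerance rule."""
    if scores.get("anti_pattern_hits", 0) > 0:
        return False
    return (scores["expert_framework"] >= expert_min
            and scores["golden_sharp"] >= infra_min)

# Passing both standards is required; excellence on one cannot
# compensate for failure on the other.
assert passes_gate({"expert_framework": 4.5, "golden_sharp": 4.2})
assert not passes_gate({"expert_framework": 4.9, "golden_sharp": 3.2})
```

Because the gate runs during extraction rather than after it, a failing chunk is rejected before it can contaminate downstream artifacts.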
L2 Extraction (parallel, scored against rubrics)

Five parallel extractors, each producing a structured JSON. Every output is scored against rubrics before acceptance.

soul-extractor
voice-extractor*
framework-extractor*
resource-extractor
offer-extractor
| Skill | Extracts | Expanded Scope |
| --- | --- | --- |
| soul-extractor | Governing framework, values, anti-patterns, background behaviors, canonical statements | Add: proprietary thinking patterns (Module 4 depth), decision heuristics |
| voice-extractor* | Tone, vocabulary, energy spectrum, sentence structure, contrast-pair principles | Add: pattern-breaking elements (Module 7) — contrarian positions, zero-fluff markers, anti-generic safeguards, surprise injection points |
| framework-extractor* | Phases, stacks, cross-phase bridges, routing logic | Add: micro-units (Module 5) — micro-prompts, reasoning templates, transformational reframes, mini-lessons. Add: meta-structures (Module 6) — temporal scaffolding, progression logic, content architecture |
| resource-extractor | Books, podcasts, videos, courses, external frameworks, mentors | No change needed |
| offer-extractor (NEW) | Expert's business offers, pricing, CTA psychology, introduction gates, variable ratio control | Produces offers.json: every offer (name, price, URL, audience fit), CTA patterns, introduction gates (what user state triggers which offer), anti-patterns (never push during vulnerable moments) |
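An introduction gate from offers.json might look like the following. The offer, its price, the gate fields, and the flag names are hypothetical placeholders; only the gating logic (right user state triggers the offer, anti-patterns suppress it) comes from the description above.

```python
# Illustrative offers.json fragment plus a gate check. All names,
# prices, and condition fields here are invented for the sketch.
OFFERS = [
    {
        "name": "Live Masterclass",          # placeholder offer
        "price": 497,                        # placeholder price
        "gate": {"phase": 2, "tools_completed_min": 3},
        "anti_patterns": ["user_in_vulnerable_moment"],
    },
]

def eligible_offers(user_state, offers):
    """Surface an offer only when the user's state satisfies its
    introduction gate and no anti-pattern condition is active."""
    out = []
    for offer in offers:
        gate = offer["gate"]
        if user_state["phase"] < gate["phase"]:
            continue
        if user_state["tools_completed"] < gate["tools_completed_min"]:
            continue
        if any(f in user_state.get("flags", []) for f in offer["anti_patterns"]):
            continue
        out.append(offer["name"])
    return out

# Ready user sees the offer; a vulnerable moment suppresses it.
assert eligible_offers({"phase": 2, "tools_completed": 3}, OFFERS) == ["Live Masterclass"]
assert eligible_offers({"phase": 2, "tools_completed": 3,
                        "flags": ["user_in_vulnerable_moment"]}, OFFERS) == []
```

Modeling anti-patterns as hard suppressors rather than score penalties matches the "never push during vulnerable moments" rule: no amount of gate fit overrides them.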
L2.5 Extraction Audit
gap-analyzer

Cross-extraction consistency check. Schema completeness audit. Contradiction detection. Generates prioritized follow-up questions. Critical gaps BLOCK compilation.

L3 Build
clone-compiler
lead-magnet-builder
onboarding-builder

Compile all extractions into system prompt (12-section structure), knowledge files, tool configs, lead magnet assessment, and gamified onboarding flow. Clone-compiler now includes user profile schema and offer integration directives.

L3.5 Validation NEW
clone-tester
clone-tester process:
  1. Take compiled clone + holdout set of real expert content
  2. Generate test scenarios (questions the expert has actually answered)
  3. Run clone through each scenario
  4. Compare clone output vs real expert output
  5. Score on 8 dimensions
  6. Produce human-readable audit sheet
  7. Go/No-Go decision + top 3 improvements

Scoring dimensions:
  1. Tone/voice match
  2. Word count/density match
  3. Vibe/energy match
  4. Answer quality (correctness per methodology)
  5. Framework alignment (right tool applied?)
  6. Offer integration naturalness
  7. Expert framework score (custom SOUL+FLOW equivalent)
  8. Anti-collapse score (didn't drift to generic ChatGPT?)

Threshold: 85%+ aggregate for production deployment. 90%+ on tone and framework alignment. Zero tolerance on anti-patterns. Human audit sheet reviewed before shipping.
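The Go/No-Go decision can be sketched from the thresholds above. The dimension keys and the equal weighting (simple mean) are assumptions; the 85% aggregate, the 90% floors on tone and framework alignment, and the zero-tolerance anti-pattern rule come from the text.

```python
# Sketch of the clone-tester Go/No-Go decision over the 8 dimensions.
# Equal weighting is an assumption; the real tester may weight dimensions.
DIMENSIONS = [
    "tone_voice", "word_count", "vibe_energy", "answer_quality",
    "framework_alignment", "offer_naturalness", "expert_framework",
    "anti_collapse",
]

def go_no_go(scores, anti_pattern_violations=0):
    """scores: dict mapping each dimension to 0.0-1.0.
    Returns (decision, aggregate)."""
    aggregate = sum(scores[d] for d in DIMENSIONS) / len(DIMENSIONS)
    ok = (aggregate >= 0.85
          and scores["tone_voice"] >= 0.90
          and scores["framework_alignment"] >= 0.90
          and anti_pattern_violations == 0)
    return ("GO" if ok else "NO-GO"), round(aggregate, 3)

sample = {d: 0.9 for d in DIMENSIONS}
assert go_no_go(sample) == ("GO", 0.9)
# A single anti-pattern violation fails the run regardless of scores.
assert go_no_go(sample, anti_pattern_violations=1)[0] == "NO-GO"
```

Note the interaction of the rules: an 0.95 aggregate still fails if tone or framework alignment dips below 0.90, which is what makes those two dimensions the load-bearing ones.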
L4 Orchestration
boarding-orchestrator

Chief of Staff agent. Manages kanban board (~24 cards per expert). Lazy-loads skills on demand. Enforces quality gates between layers. Handles both --recon (full pipeline from Layer 0) and warm boarding (sources already exist, skip to Layer 0.25).

User Profile System (Architectural, Not a Separate Skill)

Baked into clone-compiler's output architecture. Not extracted — designed.

| Component | Description | Where It Lives |
| --- | --- | --- |
| User Profile Schema | Communication preference, journey stage, completed tools, generated artifacts, stated goals, constraints | System prompt Section 12 (Background Systems) |
| Progressive Profiling | Each tool interaction enriches the profile. First name, urgency, context captured naturally. | Tool activation protocol (Section 5) |
| Cross-Tool Data Bridges | Artifact carry-forward: Tool A's output becomes Tool B's input automatically. | Cross-Phase Intelligence (Section 8) |
| Personalization Hooks | AI adjusts depth, pace, examples, and offer timing based on what it knows. | Pathfinder Mode (Section 7) |
| User Dashboard | "Your OS knows these things about you" — transparency + stickiness. | Platform frontend feature |
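A minimal sketch of the profile schema and the artifact carry-forward bridge. Field names, the example tool slug, and the artifact shape are illustrative assumptions, not the platform's actual schema.

```python
# Hypothetical user profile with a cross-tool data bridge:
# Tool A's artifact becomes Tool B's input automatically.
from dataclasses import dataclass, field

@dataclass
class UserProfile:
    first_name: str = ""
    communication_preference: str = ""       # e.g. "direct", "narrative"
    journey_stage: str = "onboarding"
    completed_tools: list = field(default_factory=list)
    artifacts: dict = field(default_factory=dict)   # tool slug -> artifact
    stated_goals: list = field(default_factory=list)

def carry_forward(profile, from_tool):
    """Cross-tool bridge: fetch an earlier tool's artifact, or None."""
    return profile.artifacts.get(from_tool)

# Example: a career-map artifact produced by Tool A is available
# to any later tool without re-asking the user.
p = UserProfile(first_name="Ada")
p.completed_tools.append("career-map")
p.artifacts["career-map"] = {"target_role": "CTO"}
assert carry_forward(p, "career-map") == {"target_role": "CTO"}
```

This is the mechanism behind "the 50th session is more valuable than the 1st": each completed tool leaves an artifact the next tool can read.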

Second-Order Effects

For End Users (Stickiness & Value)

  1. Flywheel effect. More use = better alignment = better results = more use. The 50th session is 10x more valuable than the 1st because the OS knows the user's context, completed tools, artifacts, and communication preference.
  2. Artifact compounding. Users create real deliverables (career maps, design briefs, financial plans) that carry across tools. Switching to a competitor means losing all accumulated context.
  3. Natural business integration. Instead of cold CTAs, the AI knows when the user genuinely needs more than it can provide. "Based on where you are in Phase 2, Samuel's live masterclass on this exact topic is next week." Users experience this as helpful, not salesy.

For Experts (Business & Scale)

  1. Time asymmetry. Expert invests ~20 hours in boarding (content collection, framework definition, reviews). Gets back 10,000+ hours of AI coaching delivery per year.
  2. IP documentation as side effect. The extraction process produces soul.json, voice.json, frameworks.json — the most structured documentation of the expert's IP they've ever had. Their "IP insurance policy."
  3. Market intelligence. User interaction data (with consent) reveals which tools get used, where users get stuck, what questions come up. The expert gets smarter about their own audience.
  4. Content generation. Micro-units, reasoning templates, and reframes extracted during the process become social media content, course modules, and training materials.
  5. Revenue without time. The OS handles 80% of audience questions. Premium offers surface naturally. Expert focuses on the 20% that requires their personal touch.

For Athio (Platform & Infrastructure)

  1. Network effects across experts. Each extraction improves the system. Patterns learned from Samuel improve future onboarding. By expert #5, the pipeline is 3x faster.
  2. Cross-expert discovery. Users who love Expert A might benefit from Expert B's complementary methodology. Platform-level routing creates a marketplace effect.
  3. Expert rivalry as growth engine. When Expert A's clone performs well, Expert B's audience notices. "Why doesn't MY coach have this?"
  4. Agentic cost curve. Deep research ~$1-2. Full extraction ~$5-10. Compilation ~$2-3. Testing ~$3-5. Total: $15-25/expert in API calls. Compare to weeks of manual work.
  5. Quality bar as competitive moat. 85%+ validated fidelity with human audit. Others ship "close enough." We ship "proven."
  6. Graduated revenue model. Demo (free, sells JV) → Alpha (5 users, validates) → Beta (50 users, scales) → Production (unlimited). Expert risk is zero until production.

System Build Roadmap: How to Build This

This section is the development roadmap. It describes how to build the system itself — the skills, orchestrator, testing framework, and deployment automation. This is done ONCE, then the system is used repeatedly per expert (see Section 7: Execution Playbook).

Phase A: Foundation (Skills that exist, need modification)

A1

Build expert-framework-creator skill

The keystone new skill. Reads raw sources, identifies unconscious quality gates, produces custom framework acronym with 1-5 scoring and recursive interconnections. Reference: Phase 4 of Expert Thinking Clone Workflow + SOUL+FLOW as gold standard output.

File: ~/.claude/skills/expert-framework-creator/SKILL.md

Depends on: Nothing. Standalone design. First to build.

A2

Expand rubric-builder to accept expert framework

Modify rubric-builder to take expert-framework.json as primary input. Create dual validation rubrics: expert's framework (philosophical) + GOLDEN+SHARP (infrastructure). Rubrics must be usable DURING extraction as live quality gates.

File: ~/.claude/skills/rubric-builder/SKILL.md (modify)

Depends on: A1 (needs expert-framework.json schema)

A3

Expand voice-extractor with pattern-breaking

Add explicit Module 7 extraction: contrarian positions, zero-fluff markers, anti-generic safeguards, surprise injection points. These prevent model collapse — the clone drifting toward the LLM's statistical mean.

File: ~/.claude/skills/voice-extractor/SKILL.md (modify)

Depends on: Nothing. Can be done in parallel with A1/A2.

A4

Expand framework-extractor with micro-units + meta-structures

Add Module 5 micro-units (micro-prompts, reasoning templates, transformational reframes, mini-lessons) and Module 6 meta-structures (temporal scaffolding, progression logic, content architecture). These are the LEGO bricks the AI uses in real-time conversation.

File: ~/.claude/skills/framework-extractor/SKILL.md (modify)

Depends on: Nothing. Can be done in parallel.

Phase B: New Skills

B1

Build offer-extractor skill

Extract business offers, pricing, CTA psychology, introduction gates, variable ratio control. Produces offers.json. This is what makes the AI both helpful and commercially viable for the expert.

File: ~/.claude/skills/offer-extractor/SKILL.md

Reference: Module 3 (CTA Psychology) + Brad's ecosystem map + Expert Boarding Workflow Phase 5

B2

Build clone-tester skill

Run scripted scenarios, compare to real expert samples, score on 8 dimensions, produce human audit sheet, Go/No-Go at 85%+. This is the quality gate between "built" and "deployed."

File: ~/.claude/skills/clone-tester/SKILL.md

Reference: GOLDEN_SHARP_TESTER_KF + Expert Thinking Clone Phase 7 (1000 simulations)

B3

Build demo-compiler skill

Lightweight compiler that takes deep-research + surface voice patterns + design system and produces a deployable mini-demo (1-2 tool interactions, branded, live URL). Purpose: close JV deals.

File: ~/.claude/skills/demo-compiler/SKILL.md

Reference: Expert Boarding Workflow "Real-Time Proposal System" section

Phase C: Integration & Orchestration

C1

Update boarding-orchestrator for v2 layers

Update kanban template: ~24 cards across 8 layers. Add Layer 0.25, 0.5, 1, 3.5 cards. Update dependency chains. Support --recon (full), --warm (skip Layer 0), and --demo (fast lane only) modes.

File: ~/.claude/skills/boarding-orchestrator/SKILL.md (modify)

Depends on: A1-A4, B1-B3 complete

C2

Update clone-compiler for user profile + offers

Add user profile schema to system prompt output (Section 12). Add offer integration directives (Section 8 cross-phase intelligence). Add variable ratio control to pathfinder mode.

File: ~/.claude/skills/clone-compiler/SKILL.md (modify)

C3

Dogfood: Run full pipeline on Samuel Ngu / Align360

Execute the complete v2 pipeline on Samuel as the zero-to-one test. Document every friction point. Update skills in real-time as issues surface. This IS the validation of the system itself.

Phase D: Automation & Scaling

D1

Build deployment automation

After clone-tester passes, automatically: push clone config to MasteryOS platform, publish sales pages to expert subdomain (e.g., align360.io), publish lead magnets, configure experiences/resources in frontend. Requires OS frontend/backend code repos.

D2

Build standalone GUI

Web interface that: shows extraction progress, provides human steps and rules, gathers inputs, tells where to place files, visualizes quality scores, shows audit sheets, manages kanban. This replaces CLI-only workflow for non-technical operators.

D3

Full automation mode

After validating with Samuel (zero-to-one) and one more expert (one-to-many), the system runs end-to-end with human checkpoints only at: framework review (Layer 0.25), extraction audit (Layer 2.5), and clone-tester review (Layer 3.5). Everything else is automated.

Expert Execution Playbook: How to Run This Per Expert

This section is the operational playbook. It describes how to execute the system for each new expert, step by step. Both LLM agents and human operators use this document. It will be recursively updated as we dogfood with Samuel and subsequent experts.

Prerequisites

Step-by-Step Execution

Step 0: Initialize Workspace

boarding-orchestrator init {expert-slug} --recon

Creates:
  _workspaces/{expert-slug}/
    sources/raw-sources/    ← Upload expert content here
    sources/                ← Deep research outputs land here
    extractions/            ← Structured extraction JSONs
    rubrics/                ← Quality scoring rubrics
    build/                  ← Compiled clone artifacts
    validation/             ← Test results + audit sheets
    kanban.json             ← 24-card pipeline tracker
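The workspace layout above can be sketched as a small init routine. The directory names mirror the tree; the kanban seed format and function name are assumptions about what `boarding-orchestrator init` lays down.

```python
# Hypothetical sketch of workspace initialization. The directory names
# match the Step 0 tree; the kanban.json seed shape is an assumption.
import json
import tempfile
from pathlib import Path

def init_workspace(root, expert_slug):
    """Create the per-expert workspace tree and seed an empty kanban."""
    ws = Path(root) / "_workspaces" / expert_slug
    for sub in ["sources/raw-sources", "extractions", "rubrics",
                "build", "validation"]:
        (ws / sub).mkdir(parents=True, exist_ok=True)
    (ws / "kanban.json").write_text(
        json.dumps({"expert": expert_slug, "cards": []}, indent=2))
    return ws

ws = init_workspace(tempfile.mkdtemp(), "samuel-ngu")
assert (ws / "sources" / "raw-sources").is_dir()
```

Deep research outputs land directly in `sources/`, which is created implicitly as the parent of `sources/raw-sources/`.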

Step 1: Layer 0 — Recon

Agent: deep-research → expert-recon → masterybook-sync

Human: Review intelligence report. Upload any additional source files the expert provides (transcripts, courses, books, frameworks). Aim for 20-50 source files total.

Quality gate: Recon summary reviewed. Source count sufficient (≥15 pieces). MasteryBook notebook accessible to team.

Step 2: Layer 0.25 — Framework Foundation

Agent: expert-framework-creator reads ALL raw sources. Identifies unconscious quality gates. Produces expert-framework.json.

Human: CRITICAL REVIEW POINT. Read the proposed framework acronym. Ask: "Does this capture how [Expert] evaluates their own work?" Share with expert if available. Iterate until the framework feels right.

Quality gate: Framework acronym approved by human (and ideally by expert). Each letter has 3+ real examples from content. Recursive interconnections mapped.

Step 2.5: Layer 0.5 — Demo (Optional, Parallel)

Agent: demo-compiler produces minimal live demo.

Human: Review demo. Share with expert or prospect. Iterate if needed.

Quality gate: Demo feels like the expert (even if rough). Deployable URL works.

Step 3: Layer 1 — Quality Gates Setup

Agent: rubric-builder creates dual-standard rubrics from expert-framework.json + GOLDEN+SHARP.

Human: Review rubrics. Verify scoring criteria make sense. Adjust thresholds if needed.

Quality gate: Rubrics are specific to this expert (not generic). Auto-fail conditions defined for anti-patterns.

Step 4: Layer 2 — Extraction (Parallel)

Agent: 5 extractors run in parallel: soul, voice, framework, resource, offer. Each scores output against rubrics before saving.

Human: Spot-check extraction outputs. Flag anything that "doesn't sound like them." Add source material for thin areas.

Quality gate: All 5 extraction files present. Rubric scores above threshold on each. No critical contradictions.

Step 5: Layer 2.5 — Extraction Audit

Agent: gap-analyzer audits completeness, consistency, and contradictions across all 5 extraction files.

Human: Review gaps.json. Answer follow-up questions or gather additional source material. Critical gaps MUST be resolved before proceeding.

Quality gate: Zero critical gaps. Important gaps documented with resolution plan.

Step 6: Layer 3 — Build

Agent: clone-compiler produces system prompt + knowledge files + tool configs. lead-magnet-builder + onboarding-builder run in parallel.

Human: Read compiled system prompt. Does it feel like the expert? Are the tool menus correct? Is the offer integration natural?

Quality gate: 12-section system prompt complete. Knowledge files cover all active phases. Lead magnet assessment functional. Onboarding flow routes correctly.

Step 7: Layer 3.5 — Validation

Agent: clone-tester runs scripted scenarios. Compares to real expert samples. Produces audit sheet with 8-dimension scoring.

Human: CRITICAL REVIEW POINT. Review audit sheet. Verify tone, answer quality, framework alignment. If below 85%, identify specific weaknesses and iterate (re-extract, re-compile, re-test).

Quality gate: 85%+ aggregate score. 90%+ on tone and framework alignment. Zero anti-pattern violations. Human signs off on audit sheet.

Step 8: Deploy

Agent (future): Push to platform. Publish sales pages. Publish lead magnets. Configure experiences.

Human (current): Manual deployment to MasteryOS. Publish pages via NowPage CLI. Configure platform manually. Invite alpha users (5).

Quality gate: Alpha users can: complete lead magnet → enter platform → run first tool → access resources. Zero dead ends.

Iteration Loop

After alpha deployment, the loop tightens:

  1. Collect user feedback: "Does this sound like [Expert]?" "Where did it drift?" "What frameworks are missing?"
  2. Feed feedback into extraction updates (re-run specific extractors on new/additional sources)
  3. Re-compile clone with updated extractions
  4. Re-test with clone-tester (regression + new scenarios)
  5. Deploy update
  6. Repeat until quality stabilizes at 90%+

Vision: From CLI to Full Automation

Phase 1: CLI + Skills (Current — Building Now)

All 18 skills as Claude Code SKILL.md files. Boarding-orchestrator manages kanban. Human drives via CLI. Dogfooding with Samuel/Align360 as zero-to-one.

18 skills | ~24 cards per expert | CLI interface | 1 expert (Samuel)

Phase 2: Platform Integration (After Samuel Ships)

Connect pipeline to MasteryOS frontend/backend repos. Clone-tester pushes directly to platform on pass. Sales pages auto-publish to expert subdomains. Lead magnets deploy to NowPage. Experiences and resources auto-configure.

Phase 3: Standalone GUI (After 2-3 Experts Validated)

Web application that replaces the CLI for non-technical operators (Derek, future BD team): extraction progress, guided human steps, input gathering, quality score visualization, audit sheets, and kanban management.

Phase 4: Full Automation (5+ Experts Running)

End-to-end automated pipeline with three human checkpoints:

  1. Framework review (Layer 0.25) — Human confirms "this captures the expert's quality intuition"
  2. Extraction audit (Layer 2.5) — Human reviews gaps and approves extraction quality
  3. Clone-tester sign-off (Layer 3.5) — Human reviews audit sheet and approves for deployment

Everything else runs autonomously: research, framework creation, rubric building, extraction, compilation, testing, deployment, publishing.

The endgame: Derek has a discovery call with Expert X on Monday. By Tuesday, deep research is complete and a live demo URL exists. By Friday, extractions are done and the clone is in testing. By next Monday, alpha users are onboarded. Total human time: ~8 hours of review across 5 days. Total agent time: ~2 hours of API calls. Total cost: ~$25 in API fees. That's the asymmetric leverage.

File & Skill Registry

Skill Files (All at ~/.claude/skills/)

| Skill | Status | Layer |
| --- | --- | --- |
| deep-research | Built | L0 |
| expert-recon | Built | L0 |
| masterybook-sync | Built | L0 |
| expert-framework-creator | To Build | L0.25 |
| demo-compiler | To Build | L0.5 |
| rubric-builder | Expand | L1 |
| soul-extractor | Built | L2 |
| voice-extractor | Expand | L2 |
| framework-extractor | Expand | L2 |
| resource-extractor | Built | L2 |
| offer-extractor | To Build | L2 |
| gap-analyzer | Built | L2.5 |
| clone-compiler | Expand | L3 |
| lead-magnet-builder | Built | L3 |
| onboarding-builder | Built | L3 |
| clone-tester | To Build | L3.5 |
| boarding-orchestrator | Update | L4 |
| design-system-extractor | Built | Utility |

Key Source Documents (E:\align360\Extractions\)

| Document | What It Contains | Relevance |
| --- | --- | --- |
| 2hat v6.23 | Jason's recursive identity emulator (master version) | Architecture template for all expert clones |
| 2hat v6.21 | Expert-validated modular version (7-expert panel) | Clean modular design reference |
| GOLDEN_SHARP_TESTER_KF | Universal quality testing system (125 simulations) | Reference for clone-tester skill |
| Expert Thinking Clone Workflow | 7-phase complete standalone cloning process | End-to-end process reference |
| SOUL+FLOW Frameworks (Matt) | Matt Gottesman's custom quality framework | Gold standard for expert-framework-creator output |
| Modules 2-8 (Robust Extraction) | 8-module extraction system | Reference for extractor expansion |
| Expert Boarding Workflow HTML | JV boarding process (Discover → Launch) | Business process reference |
| Brad Himel Intelligence Synthesis | Gold standard research output | Reference for deep-research quality |
| Recursive Clone & Artificial Cognition | Academic paper: ACS theory mapping | Theoretical grounding for architecture |

Published Pages (align360.asapai.net)

| Path | Content |
| --- | --- |
| /recursive-clone-v2 | THIS DOCUMENT — Complete architecture, build roadmap, execution playbook |
| /boarding-architecture | v1 architecture PRD (14 skills, will be superseded) |
| /samuel-ngu-intel | Samuel Ngu intelligence report (deep research output) |
| /a360-command-center | Command center dashboard |
| /wiring-for-impact | Lead magnet assessment |
| /launch-roadmap | Sprint plan |
| /positioning-brief | Positioning strategy |
| /boarding-pack | Athio boarding template |

Document Control

| Field | Value |
| --- | --- |
| Version | 2.0 |
| Date | March 11, 2026 |
| Authors | Jason MacDonald (Athio) + Claude Opus 4.6 |
| Status | Active — Living Document |
| Supersedes | boarding-architecture v1 (12 skills, 5 layers) |
| Next update | After Samuel/Align360 dogfood complete |

This document captures the complete system design as of March 2026. It serves as the source of truth for both system development and per-expert execution. It will be recursively updated as we learn from the Samuel/Align360 build and subsequent expert deployments.