First Principles & Second-Order Effects
What Are We Actually Building?
An automated pipeline that takes a subject-matter expert's raw intellectual property (transcripts, documents, frameworks, brand assets) and converts it into a fully functioning AI coaching Operating System — complete with system prompts, knowledge files, tool configurations, lead magnets, onboarding flows, and quality rubrics.
The pipeline is modular, skill-based, and orchestrated by a Chief of Staff agent that manages a kanban board. Each step has explicit quality gates to prevent "AI collapse" — the gradual degradation of fidelity to the expert's actual thinking patterns when an LLM generates without grounding.
Core Insight
The value of a coaching OS is NOT the chat interface. It's the structured frameworks + ethical governance + persistent memory + expert voice fidelity that make the AI act as the expert's digital twin, not a generic assistant. Every extraction step must preserve these qualities.
The "Clone" Pattern
Athio builds "cognitive infrastructure" — AI that thinks like a specific expert. The boarding system is the factory that produces these clones. Each clone follows the MasteryOS architecture: expert IP nested inside Athio's technology stack.
First Principles
- Fidelity over speed. A clone that doesn't sound like the expert is worse than no clone. Extraction quality is the primary constraint, not pipeline velocity.
- Governance is non-negotiable. Every expert has ethical boundaries, anti-patterns, and "Laws of Babylon" they explicitly resist. These must be extracted FIRST and applied as constraints on all downstream generation.
- Rubrics prevent collapse. Every generated artifact (system prompt, knowledge file, tool config) must be scored against extracted patterns before advancing. No artifact moves from "Build" to "Review" without passing its rubric.
- Voice is separate from soul. An expert's thinking patterns (soul) and speaking patterns (voice) are distinct extraction targets. Both must be captured independently and merged at compile time.
- Frameworks are the product. Tools, sequences, taxonomies, and dependencies are what users actually run. The chat wrapper is delivery; the frameworks are the value.
- Lazy-load, not eager-load. The orchestrator should only activate skills when needed. Skills remain dormant until their kanban card is claimed.
- State is serializable. The entire boarding process for an expert must be captured in a single JSON kanban board that can be paused, resumed, audited, and transferred between sessions.
Second-Order Effects
Desired (Explicit)
- New partners can be onboarded in days, not months
- Quality floor is enforced by rubrics, preventing the "bad clone" problem
- LLM sessions remain productive across context boundaries via serialized state
- Each boarding produces reusable artifacts (lead magnets, onboarding flows) alongside the core OS
Desired (Implied)
- The pipeline itself becomes Athio's moat — competitors can build a chatbot, but not a quality-gated extraction-to-clone factory
- Rubric data accumulates across partners, improving extraction heuristics over time
- The kanban system doubles as an audit trail for partner reporting
Risks to Monitor
- Over-extraction: Pulling patterns that aren't real from insufficient source material
- Voice flattening: Reducing a unique voice to generic "coaching speak"
- Rubric gaming: Generating artifacts that score well but miss the expert's actual intent
- Pipeline rigidity: Building so much process that small partners feel over-served
Complete Architecture
Phase 1: CLI Skills + File-Based State (Now)
The immediate implementation uses Claude CLI skills and file-based state, following the pattern already proven by the existing design-system-extractor skill.
| Component | Location | Purpose |
|---|---|---|
| Skills | ~/.claude/skills/{skill-name}/SKILL.md | 14 specialized recon/extraction/build skills |
| Skill References | ~/.claude/skills/{skill-name}/references/ | Supporting docs, templates, rubric definitions |
| Expert Workspace | {project-root}/_workspaces/{expert-slug}/ | All extraction outputs per expert |
| Kanban Board | {workspace}/kanban.json | Board state per expert |
| Extraction Outputs | {workspace}/extractions/ | soul.json, voice.json, frameworks.json, etc. |
| Build Artifacts | {workspace}/artifacts/ | System prompt, knowledge files, tool configs |
| Rubric Scores | {workspace}/rubrics/ | Scoring data per artifact |
| Source Files | {workspace}/sources/ | Raw input files (PDFs, transcripts, etc.) |
Skill Structure Pattern
~/.claude/skills/{skill-name}/
SKILL.md # Main skill definition (YAML frontmatter + instructions)
references/
templates.md # Output templates
rubric-definitions.md # Scoring heuristics (for extractors/builders)
examples.md # Reference examples
Phase 2: Vercel Service + Supabase (3+ Partners)
When Athio scales to 3+ partners, the same skill logic wraps into MCP tools served from a Vercel edge function, with state moving to Supabase.
| CLI (Phase 1) | MCP (Phase 2) |
|---|---|
| SKILL.md instructions | MCP tool handler with same logic |
| kanban.json on filesystem | Supabase table with realtime subscriptions |
| Extraction JSON files | Supabase JSONB columns + Vercel Blob for large outputs |
| Task tool spawns agents | Webhook-triggered workflows via n8n or inngest |
| hc-publish.js for output | NowPage API direct from MCP handler |
The migration path is additive — Phase 1 skills continue working, Phase 2 wraps them in API surfaces.
5-Layer Architecture
| Layer | Purpose | Skills | Input | Output |
|---|---|---|---|---|
| 0. Recon | Pre-meeting deep research, web scraping, IP assessment & MasteryBook sync | deep-research, expert-recon, masterybook-sync | Expert name, known URLs, brand | deep-research-report.md, recon.json, raw-sources/, recon-summary.md, MasteryBook notebook |
| 1. Extractors | Pull patterns from raw IP | soul, voice, framework, resource, design-system | Transcripts, PDFs, docs, URLs + recon data | Structured JSON extractions |
| 2. Synthesizers | Generate quality assurance | rubric-builder, gap-analyzer | Extraction outputs | Rubrics + gap reports |
| 3. Builders | Generate artifacts scored by rubrics | clone-compiler, lead-magnet, onboarding | Extractions + rubrics | System prompt, KF, lead magnet, onboarding flow |
| 4. Orchestrator | Coordinates everything | boarding-orchestrator | Kanban board + workspace | Completed expert clone |
Source Files Analyzed
This section catalogs every source file analyzed to build this architecture, ensuring LLM continuity.
Reference Implementation: Align360
| File | What It Contains | Extraction Relevance |
|---|---|---|
| Align360_System_Prompt_v6.1.md | 685-line system prompt with 12 sections: Identity, FLC Wisdom Framework (5 governing layers + Clarity Path + Tri-Filter + 7 absolute rules), Personality/Tone/Character, Mode System, Tool Activation Protocol, Phase Menus, Pathfinder, Cross-Phase Intelligence, Guardrails, Canonical Statements, Recommended Pathways, Background Systems (8 invisible layers) | KEYSTONE. This is the target output of the entire pipeline. Every extraction skill exists to produce a document like this. The soul-extractor captures the FLC Wisdom Framework equivalent; voice-extractor captures Section 3; framework-extractor captures Sections 4-6. |
| Align360_Knowledge_File_Part1.md | 936-line knowledge file covering Phase 0 (7 stacks) + Phase 1 (8 stacks) + Supporting Content (Seven Redemptive Gifts detailed profiles + Guardrails). Each stack has: Purpose, Key Inputs, Framework, UX Outputs, Prompt Template. | PRIMARY. This is the reference for what framework-extractor produces. Each "stack" maps to a tool config with inputs, process, and outputs. |
| Align360_Knowledge_File_Part2.md | 654-line knowledge file covering Phase 2 (7 stacks), Phase 3 (7 stacks), Phase 4 (7 stacks) + Cross-Phase Integration + Formation Resources. Total: 36 stacks across 5 phases. | REFERENCE. Shows full scope of what a mature clone looks like. Framework-extractor must handle extracting this many tools from raw IP. |
| Align360_Background_Tools_Overview_v2.md.pdf | PDF overview of 14 background tools (invisible layer systems) | IMPORTANT. Soul-extractor must identify which expert behaviors become background systems vs. user-facing tools. |
| Align360 Governance Document.docx | Governance values: Clarity, Integrity, Empathy, Balance, Growth, Contribution, Joy. Anti-patterns ("Laws of Babylon"). | CRITICAL for soul-extractor. Every expert has equivalent governance values that must be extracted first. |
| Branding/ | Logo (A360logo.jpg) + Branding doc with colors (#2e3c45, #edefe8, #7aa49c, #e09b67, #e45742) and font (Quicksand) | design-system-extractor target. Already built and proven. |
Architecture & Strategy Files
| File | What It Contains | Key Takeaways |
|---|---|---|
| first-principles.md | OS build strategy + dependency map. Reference architecture (FreedomLife). Module mapping (FreedomLife → Align360). Dependencies (Must-Have for MVP/Retention/Growth/Future). | The three-column layout pattern, module mapping approach, and dependency chains inform how clone-compiler structures its output. |
| align360-details.md | Phase details (all 5 phases with stack counts), background tools breakdown, partnerships (B3lieve, Africa, YM), pricing model, anti-patterns, governance values. | Pricing and partnership patterns will inform onboarding-builder's monetization section. Anti-patterns are soul-extraction targets. |
| freedomlife-os-analysis.md | Detailed analysis of FreedomLife OS screenshots — the reference MasteryOS implementation. | The three-column layout, chat interface patterns, and tool activation UX patterns inform clone-compiler's output structure. |
Existing Skill: design-system-extractor
| File | What It Contains |
|---|---|
| SKILL.md | 320-line skill definition with: YAML frontmatter (name, description), 4-step workflow (Source Analysis → Token Extraction → System Generation → Publication), Color Derivation Rules, LLM Instruction Block format, Code Block format, Output Requirements, Cross-Platform Notes. |
| references/architecture.md | 1628-line complete HTML template with placeholder tokens, CSS architecture, JS interactivity, section content templates. |
| references/color-science.md | Color derivation algorithms for HSL manipulation, contrast checking, palette generation. |
This skill establishes the pattern all other skills follow: YAML frontmatter → Overview → Step-by-step Workflow → Output format → References folder with templates.
Extraction Pipeline Design
Pipeline Flow
Expert Raw IP (transcripts, docs, PDFs, URLs, brand assets)
|
v
[soul-extractor] -- thinking loops, values, governance, anti-patterns
[voice-extractor] -- tone, vocabulary, energy, sentence patterns
[framework-extractor] -- tools, sequences, taxonomy, dependencies
[resource-extractor] -- books, videos, articles, courses, mentors
[design-system-extractor] -- colors, typography, spacing, components
|
v (all extractions complete)
[rubric-builder] -- scoring heuristics from extracted patterns
[gap-analyzer] -- completeness audit, missing info questions
|
v (rubrics + gaps resolved)
[clone-compiler] -- system prompt + knowledge files + tool configs
[lead-magnet-builder] -- interactive assessment from primary framework
[onboarding-builder] -- gamified card flow + pathfinder routing
|
v (all artifacts scored by rubrics)
REVIEW GATE -- human review of compiled clone
|
v
DEPLOYED CLONE -- published to MasteryOS
Extraction Output Schemas
soul.json
{
"expert": "Samuel Ngu",
"governing_framework": {
"name": "FLC Wisdom Framework",
"layers": [...],
"processing_path": [...],
"output_filter": {...}
},
"values": ["Clarity", "Integrity", "Empathy", ...],
"anti_patterns": ["manufactured urgency", "identity erosion", ...],
"governance_rules": [...],
"background_behaviors": [
{ "name": "Epistemic Drift Detection", "triggers": [...], "responses": [...] },
{ "name": "Pastoral Discernment", "activation_signals": [...] }
],
"canonical_statements": {
"completion": "This is complete for your stated goal...",
"no_pressure": "We'll never rush you, track you, or pressure you..."
},
"thinking_loops": {
"before_responding": ["Pause", "Understand", "Simplify", "Guide", "Reflect"],
"before_recommending": ["check workload", "check season", "check energy"]
}
}
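The Extraction → Scoring gate described under Quality Gates requires every field of a schema like this to be populated. A minimal completeness check might look like the following sketch — the `REQUIRED_SOUL_FIELDS` list and the helper name are illustrative assumptions, not part of the spec:

```python
# Hypothetical sketch: verify a soul.json extraction has every top-level
# field populated before it can advance past the extraction column.
# The field list mirrors the schema above; names are assumptions.
REQUIRED_SOUL_FIELDS = [
    "expert", "governing_framework", "values", "anti_patterns",
    "governance_rules", "background_behaviors", "canonical_statements",
    "thinking_loops",
]

def missing_soul_fields(soul: dict) -> list[str]:
    """Return the required fields that are absent or empty."""
    return [f for f in REQUIRED_SOUL_FIELDS if not soul.get(f)]
```

A scoring card would only advance when `missing_soul_fields` returns an empty list.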
voice.json
{
"expert": "Samuel Ngu",
"character": "digital mentor — steady, kind, objective",
"tone_principles": [
{ "name": "Warm, not sentimental", "definition": "...", "example": "..." },
{ "name": "Direct, not harsh", "definition": "...", "example": "..." }
],
"language_rules": {
"avg_sentence_length": "12-18 words",
"reading_level": "8th grade",
"vocabulary_style": "universal, no jargon, no corporate speak",
"spiritual_framing": "present where it fits, never forced"
},
"never_say": ["You should do this...", "This is the right decision...", ...],
"opening_patterns": {...},
"closing_patterns": {...},
"energy_spectrum": {
"high": "engaged, forward-moving, tool execution",
"low": "reflective, rest-offering, completion-naming"
}
}
frameworks.json
{
"expert": "Samuel Ngu",
"phases": [
{
"id": 0, "name": "DesignSuite", "promise": "Discover how you're wired before you build",
"stacks": [
{
"id": 1, "name": "Wiring for Impact",
"purpose": "Identify primary and secondary Redemptive Gifts",
"inputs": ["assessment_responses", "current_role", "career_context"],
"framework": { "structure": "10 adaptive questions", "scoring": "multi-dimensional" },
"ux_outputs": ["Gift Profile Card", "Design Summary", "Strengths Breakdown", ...],
"prompt_template": "..."
}
]
}
],
"cross_phase_bridges": [
{ "from": "DesignSuite", "to": "Career Navigator", "data": ["spiritual_gift", "orientation_profile"] }
],
"taxonomy": { "phases": 5, "stacks": 36, "background_tools": 14 }
}
resources.json
{
"expert": "Samuel Ngu",
"recommended_reading": [
{ "title": "The Go-Giver", "author": "Bob Burg", "season": "Stabilize", "relevance": "..." }
],
"recommended_podcasts": [...],
"external_frameworks": ["Romans 12:6-8", "Kiyosaki Cashflow Quadrants", "Dave Ramsey Baby Steps"],
"governance_rules_for_resources": [
"Always present as companions, not requirements",
"Never recommend resources that promote shame, urgency, or fear"
]
}
Quality Gates (Anti-Collapse Mechanism)
Every artifact must pass rubric scoring before advancing on the kanban board. This prevents "AI collapse" where generated content gradually loses fidelity to the expert's actual patterns.
| Gate | What's Checked | Pass Threshold |
|---|---|---|
| Extraction → Scoring | Completeness (all fields populated), Source fidelity (citations to raw material), Consistency (no contradictions between extractions) | All fields populated + at least 3 source citations per major section |
| Scoring → Compilation | Rubric coverage (heuristics exist for every extraction category), Gap resolution (all critical gaps addressed) | No critical gaps remaining + rubric covers 80%+ of extraction fields |
| Build → Review | Voice fidelity score, Framework accuracy score, Governance compliance score, Completeness score | All scores ≥ 7/10 + zero governance violations |
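The Build → Review row above reduces to a simple predicate: every rubric score at least 7/10 and zero governance violations. A minimal sketch of that check (function and key names are illustrative assumptions, not a fixed API):

```python
# Sketch of the Build -> Review gate: all four rubric scores must be
# >= 7/10 and there must be zero governance violations.
def passes_build_gate(scores: dict[str, float], governance_violations: int) -> bool:
    required = ["voice_fidelity", "framework_accuracy",
                "governance_compliance", "completeness"]
    # Missing scores default to 0, so an unscored artifact never passes.
    return (governance_violations == 0
            and all(scores.get(k, 0) >= 7 for k in required))
```

The zero-tolerance governance rule is deliberately a separate conjunct: a 10/10 voice score cannot offset a single anti-pattern violation.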
Agentic Kanban System
Board Structure
kanban.json Schema
{
"version": "1.0.0",
"expert": {
"slug": "samuel-ngu",
"name": "Samuel Ngu",
"brand": "Feeling Like Chocolate",
"created": "2026-03-09T00:00:00Z"
},
"columns": ["backlog","intake","extraction","scoring","compilation","build","review","done"],
"cards": [
{
"id": "card-001",
"title": "Upload source files",
"column": "done",
"skill": null,
"assignee": "human",
"created": "2026-03-09T00:00:00Z",
"moved": "2026-03-09T01:00:00Z",
"outputs": ["sources/transcripts/", "sources/docs/"],
"score": null,
"blocked_by": []
},
{
"id": "card-002",
"title": "Extract soul patterns",
"column": "extraction",
"skill": "soul-extractor",
"assignee": "agent",
"created": "2026-03-09T00:00:00Z",
"moved": null,
"outputs": [],
"score": null,
"blocked_by": ["card-001"]
}
],
"history": [
{ "timestamp": "...", "card_id": "card-001", "from": "intake", "to": "done", "agent": "human" }
]
}
18 Pre-Templated Cards (with --recon)
When the orchestrator runs init --recon, all 18 cards are created. Without --recon, only cards 1-15 are created (warm boarding).
| # | Card Title | Skill | Column Start | Blocked By |
|---|---|---|---|---|
| 0a | Run deep research | deep-research | backlog | — |
| 0b | Run expert recon | expert-recon | backlog | 0a |
| 0c | Sync to MasteryBook | masterybook-sync | backlog | 0b |
| 1 | Upload source files | human | intake | 0b (with --recon) or — (without) |
| 2 | Extract soul patterns | soul-extractor | extraction | 1 |
| 3 | Extract voice patterns | voice-extractor | extraction | 1 |
| 4 | Extract frameworks & tools | framework-extractor | extraction | 1 |
| 5 | Extract resources & references | resource-extractor | extraction | 1 |
| 6 | Extract design system | design-system-extractor | extraction | 1 |
| 7 | Build scoring rubrics | rubric-builder | scoring | 2, 3, 4 |
| 8 | Run gap analysis | gap-analyzer | scoring | 2, 3, 4, 5 |
| 9 | Resolve gaps (human + AI) | human + gap-analyzer | scoring | 8 |
| 10 | Compile clone (system prompt + KF) | clone-compiler | compilation | 7, 9 |
| 11 | Score compiled clone | rubric-builder | compilation | 10 |
| 12 | Build lead magnet | lead-magnet-builder | build | 10 |
| 13 | Build onboarding flow | onboarding-builder | build | 10 |
| 14 | Score all build artifacts | rubric-builder | build | 12, 13 |
| 15 | Human review & approval | human | review | 11, 14 |
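The init behavior above can be sketched as a single branch on the `--recon` flag — card ids here are illustrative stand-ins for the full card templates in the table:

```python
# Sketch of orchestrator init: --recon prepends the Layer 0 cards
# (0a-0c) to the 15 standard cards. Ids only; real cards carry the
# skill/column/blocked_by fields from the table above.
RECON_CARDS = ["0a", "0b", "0c"]
STANDARD_CARDS = [str(n) for n in range(1, 16)]  # cards 1 through 15

def init_card_ids(recon: bool) -> list[str]:
    return (RECON_CARDS + STANDARD_CARDS) if recon else list(STANDARD_CARDS)
```

With `--recon` this yields 18 cards; without it, 15 — matching the warm-boarding path.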
14 Skills — Complete Specification
LLM Implementation Guide
Each skill below is specified at the level needed to build a SKILL.md file. The skill file should follow the pattern established by design-system-extractor: YAML frontmatter (name, description) → Overview → Step-by-step Workflow → Output format → References folder with templates/rubrics/examples.
Layer 0: Reconnaissance
0a. deep-research
Purpose: Heavy-lifting research engine that wraps the Perplexity Sonar Deep Research API. Runs 3-5 sequential deep research queries to build comprehensive intelligence on an expert's public presence. Falls back to Claude's native WebSearch + WebFetch when Perplexity is unavailable.
Inputs: Expert name, brand/company name, known URLs, vertical
Outputs: deep-research-report.md (full narrative), deep-research.json (structured proto-findings with confidence scores), deep-research-sources.json (all citation URLs)
Key capabilities:
- 5 query categories: background/credentials, offerings/pricing, interviews/philosophy, competitors/landscape, recent activity
- Perplexity Sonar Deep Research API (~$0.40-1.30 per run)
- Automatic fallback to WebSearch + WebFetch if PPLX_API_KEY unavailable
- Structured findings matching recon.json proto-schema with confidence scores
- Source quality assessment (high/medium/low authority)
Integration: Invoked by expert-recon as its first step. Outputs feed into supplemental scraping and pattern pre-extraction.
0b. expert-recon
Purpose: Layer 0 orchestrator — coordinates deep research, supplemental web scraping, pattern pre-extraction, IP assessment, and MasteryBook sync. Produces the complete recon package.
Inputs: Expert name, known URLs (LinkedIn, YouTube, website), brand/company name
Outputs: recon.json (merged proto-findings from deep-research + scraping), raw-sources/ folder, recon-summary.md (human-readable brief for onboarding call)
Key capabilities:
- Delegates to deep-research for Perplexity-powered intelligence
- Supplemental web scraping across 9+ platforms for content deep-research can't reach
- YouTube transcript extraction for interview/talk content
- Merged pattern pre-extraction with confidence scoring (confirms across both methods)
- IP Footprint Assessment: Volume, Depth, Consistency, Uniqueness, Extractability (1-5 each)
- Boarding Readiness Score (1-10) determines how much manual upload is needed
- Delegates to masterybook-sync for team-accessible RAG notebook
Integration: Pre-populates _workspaces/{expert}/sources/, pre-seeds extraction context for all Layer 1 extractors, enriches gap analysis. Orchestrator's init --recon flag triggers this before standard kanban cards.
0c. masterybook-sync
Purpose: Syncs expert workspace sources to a MasteryBook notebook for team-accessible RAG-based Q&A. Creates a notebook, uploads text/URL/YouTube/PDF sources, and returns a shareable notebook URL.
Inputs: Workspace path, optional notebook ID, expert name
Outputs: masterybook-sync-status.json (notebook ID, upload counts), masterybook-summary.md (RAG-generated executive summary)
Key capabilities:
- MasteryBook API integration (FastAPI backend at notebooklm-api.vercel.app)
- Uploads text, URLs, YouTube, and PDF sources via appropriate endpoints
- Generates executive summary via RAG chat query
- Graceful degradation: If MasteryBook API is unavailable, reports status and continues — never blocks the pipeline
Integration: Invoked by expert-recon as its final step. Notebook URL included in recon-summary.md for team access.
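The graceful-degradation rule can be sketched as a wrapper that records failures in the status output but never raises, so the pipeline always continues. The status-dict shape and function name below are assumptions, not the real masterybook-sync-status.json format:

```python
# Sketch: upload each source, count successes and failures, and flag
# API availability -- but never propagate an exception to the pipeline.
def sync_with_degradation(upload_fn, sources: list[str]) -> dict:
    status = {"uploaded": 0, "failed": 0, "api_available": True}
    for src in sources:
        try:
            upload_fn(src)
            status["uploaded"] += 1
        except Exception:
            status["failed"] += 1
            status["api_available"] = False  # reported, not fatal
    return status
```

The orchestrator can then surface `api_available: false` in its report while the remaining cards proceed.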
Layer 1: Extractors
1. soul-extractor
Purpose: Extract thinking loops, values, governance rules, anti-patterns, background behaviors, and canonical statements from expert's raw IP.
Inputs: Transcripts, governance docs, system prompts, interviews
Outputs: soul.json — governing framework, values, anti-patterns, background behaviors, thinking loops, canonical statements
Key extraction targets:
- Governing framework (equivalent to FLC Wisdom Framework)
- Processing path (equivalent to Clarity Path: Pause → Understand → Simplify → Guide → Reflect)
- Output filter (equivalent to Tri-Filter: Truth, Clarity, Impact)
- Absolute governance rules (equivalent to "Zero Coercion", "Completion is Sacred", etc.)
- Background system behaviors (equivalent to Epistemic Drift Detection, Pastoral Discernment, etc.)
- Anti-patterns ("Laws of Babylon" the expert explicitly resists)
2. voice-extractor
Purpose: Extract tone, vocabulary, energy spectrum, sentence patterns, and character description from expert's communications.
Inputs: Transcripts, blog posts, emails, presentations, social media
Outputs: voice.json — character description, tone principles, language rules, never-say list, opening/closing patterns, energy spectrum
Key extraction targets:
- Character description (equivalent to "digital mentor — steady, kind, objective")
- Tone principles as contrast pairs (equivalent to "Warm, not sentimental")
- Average sentence length, reading level, vocabulary style
- Phrases the expert NEVER uses (equivalent to "What You Never Say")
- Signature phrases and recurring metaphors
3. framework-extractor
Purpose: Extract tools, sequences, taxonomies, phase structures, and inter-tool dependencies from expert's methodology.
Inputs: System prompts, knowledge files, course materials, training docs
Outputs: frameworks.json — phases, stacks (with purpose/inputs/framework/outputs/prompt template), cross-phase bridges, taxonomy
Key extraction targets:
- Phase/module structure (how many phases, what progression)
- Individual tools/stacks with full specification
- Data flow between tools (cross-phase bridges)
- Recommended pathways and routing logic
- Tool activation triggers (what user says → which tool runs)
4. resource-extractor
Purpose: Extract recommended books, podcasts, videos, courses, external frameworks, and mentors referenced in expert's IP.
Inputs: Knowledge files, transcripts, course materials
Outputs: resources.json — categorized resources with relevance context, external frameworks referenced, governance rules for resource recommendations
Key extraction targets:
- Books/podcasts/videos organized by season/phase
- External frameworks the expert builds on (e.g., Kiyosaki, Ramsey, StoryBrand)
- Rules for how resources should be recommended
5. design-system-extractor
Purpose: Extract colors, typography, spacing, components from expert's brand.
Status: Already built and working at ~/.claude/skills/design-system-extractor/
Layer 2: Synthesizers
6. rubric-builder
Purpose: Generate scoring heuristics from extracted patterns. These rubrics score all downstream artifacts for fidelity to the expert's actual IP.
Inputs: soul.json, voice.json, frameworks.json
Outputs: rubrics.json — scoring criteria for: voice fidelity, framework accuracy, governance compliance, completeness
Rubric categories:
- Voice Fidelity (0-10): Does the artifact sound like the expert? Check tone principles, sentence patterns, vocabulary, never-say violations.
- Framework Accuracy (0-10): Are tools, sequences, and logic correct? Check against frameworks.json for step accuracy.
- Governance Compliance (0-10): Does the artifact respect ALL governance rules? Zero tolerance for anti-pattern violations.
- Completeness (0-10): Are all required sections present? Check against extraction output schemas.
7. gap-analyzer
Purpose: Audit extraction completeness and generate follow-up questions for missing information.
Inputs: All extraction outputs (soul, voice, frameworks, resources)
Outputs: gaps.json — missing fields, weak areas, follow-up questions for the expert/team, priority ranking
Gap categories:
- Critical: Missing governance rules, no voice samples, incomplete tool specs
- Important: Thin background behaviors, few canonical statements, sparse resource list
- Nice-to-have: Additional voice samples, more tool activation triggers, edge case coverage
Layer 3: Builders
8. clone-compiler
Purpose: Compile extractions into a complete AI coaching OS: system prompt + knowledge files + tool configurations. This is the core output of the entire pipeline.
Inputs: soul.json, voice.json, frameworks.json, resources.json, rubrics.json
Outputs:
- system-prompt.md — Full system prompt following the v6.1 section structure
- knowledge-file-part1.md — Active phase stacks with full tool specifications
- knowledge-file-part2.md — Future phase stacks (if applicable)
- tool-configs.json — MasteryOS tool configuration format
Compilation rules:
- System prompt MUST follow the 12-section structure from v6.1
- Voice from voice.json applied to ALL prose sections
- Soul from soul.json becomes Sections 2, 9, 12
- Frameworks from frameworks.json become Sections 4-6, 8, 11 + Knowledge Files
- Every output scored against rubrics before advancing
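The section-mapping rules above can be encoded as a small lookup table. Only the soul and frameworks mappings are stated in the rules; the "prose" default is an illustrative stand-in for the sections that voice.json styles but no single extraction owns:

```python
# Sketch of the compilation rules: which extraction feeds each v6.1
# system-prompt section. Only the stated mappings are encoded.
SECTION_SOURCES = {
    2: "soul", 9: "soul", 12: "soul",
    4: "frameworks", 5: "frameworks", 6: "frameworks",
    8: "frameworks", 11: "frameworks",
}

def section_source(section: int) -> str:
    # voice.json is applied to ALL prose sections, so unmapped sections
    # are labeled "prose" here (an assumption for illustration).
    return SECTION_SOURCES.get(section, "prose")
```

A compiler pass could iterate sections 1-12 and pull content from the matching extraction file before applying voice styling.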
9. lead-magnet-builder
Purpose: Build an interactive assessment (like Wiring for Impact) as a standalone lead magnet HTML page with email gate.
Inputs: frameworks.json (primary assessment tool), voice.json, design system
Outputs: lead-magnet.html — self-contained assessment page with: questions, scoring, result types, email capture, CTA to full platform
Reference: align360.asapai.net/wiring-for-impact (published example)
10. onboarding-builder
Purpose: Build a gamified, Netflix-style card-based onboarding flow that routes new users to the right starting tool.
Inputs: frameworks.json (tool menu + pathfinder logic), voice.json, design system
Outputs: onboarding-flow.html — card-based UI with: "what's urgent" + "where's the dissonance" questions, pathfinder routing logic, tool recommendations
Design principles: Netflix-style gamified cards, no linear forced path, "what's urgent" + "where's the dissonance" discovery questions.
Layer 4: Orchestrator
11. boarding-orchestrator
Purpose: The Chief of Staff agent. Manages the kanban board, lazy-loads skills on demand, enforces quality gates, tracks progress, and coordinates the entire extraction-to-clone pipeline.
Inputs: Expert workspace path, kanban.json
Capabilities:
- Initialize: Create a new expert workspace with kanban.json (18 cards with --recon, or 15 cards without)
- Status: Read and display current kanban state
- Run next: Identify the next unblocked card, invoke its skill, write outputs, update board
- Score: After a build skill completes, invoke rubric-builder to score the output
- Gate: Enforce quality thresholds before advancing cards
- Report: Generate a progress summary for human review
Orchestrator rules:
- Never run two extraction skills simultaneously on the same source file
- Always run gap-analyzer after all extractors complete
- Never advance to compilation without rubric scores
- The orchestrator is the ONLY skill that writes to kanban.json
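Because the orchestrator is the sole writer to kanban.json, every column change can funnel through one function that also appends the audit-trail history entry. A minimal sketch under the kanban.json schema (the function name is an assumption):

```python
from datetime import datetime, timezone

# Sketch: single-writer card move that records the transition in the
# board's history array, matching the kanban.json schema.
def move_card(board: dict, card_id: str, to_column: str, agent: str) -> dict:
    card = next(c for c in board["cards"] if c["id"] == card_id)
    board["history"].append({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "card_id": card_id,
        "from": card["column"],
        "to": to_column,
        "agent": agent,
    })
    card["column"] = to_column
    return board
```

Routing all writes through one helper is what makes the history array a trustworthy audit trail for partner reporting.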
Build Order & Dependencies
| Order | Skill | Layer | Depends On | Rationale |
|---|---|---|---|---|
| 0a | deep-research | Recon | None (standalone) | Perplexity-powered intelligence. Runs first to seed all downstream research. No dependencies. |
| 0b | masterybook-sync | Recon | None (standalone) | MasteryBook notebook sync. No dependencies on other skills. Gracefully degrades if API unavailable. |
| 1 | soul-extractor | Extractor | design-system-extractor (pattern) | KEYSTONE. Governance must exist before any other extraction makes sense. Other extractors reference soul for consistency. |
| 2 | voice-extractor | Extractor | soul-extractor (pattern) | Voice is independent of frameworks but benefits from soul context. Can be built in parallel with framework-extractor. |
| 3 | framework-extractor | Extractor | soul-extractor (pattern) | The largest extractor. Tool specs are the core product value. Can be built in parallel with voice-extractor. |
| 4 | resource-extractor | Extractor | framework-extractor (context) | Simpler extractor. Resources map to phases/tools from frameworks.json. |
| 5 | rubric-builder | Synthesizer | soul, voice, framework extractors | Can't build scoring criteria until you know what you're scoring against. |
| 6 | gap-analyzer | Synthesizer | All extractors | Needs complete extraction picture to identify what's missing. |
| 7 | clone-compiler | Builder | All extractors + rubric-builder | The core compilation step. Needs everything upstream. |
| 8 | lead-magnet-builder | Builder | framework-extractor + clone-compiler | Builds the standalone assessment. Can be parallel with onboarding-builder. |
| 9 | onboarding-builder | Builder | framework-extractor + clone-compiler | Builds the gamified onboarding. Can be parallel with lead-magnet-builder. |
| 10 | boarding-orchestrator | Orchestrator | All other skills | Built last because it needs to know all skill interfaces. Now supports 18 cards with --recon Layer 0 or 15 cards without. |
Verification Criteria
| Criterion | How to Verify | Status |
|---|---|---|
| PRD published | Accessible at cowork.asapai.net/boarding-architecture | Pending publish |
| Skills buildable from PRD | Each skill section contains enough detail to create a SKILL.md without additional context | Self-contained in Section 06 |
| Kanban testable | boarding-orchestrator can initialize a workspace, create kanban.json, and process at least one card | Pending skill build |
| Align360 as reference | Running the full pipeline against Align360 source files produces outputs comparable to the existing v6.1 system prompt | Integration test (future) |
| LLM continuity | A new Claude session reading only this PRD + skill files can continue building from where the last session left off | Architecture verified |
Session Continuity Protocol
When starting a new session to continue building the boarding system:
- Read this PRD (boarding-architecture.html or the published URL)
- Read the MEMORY.md file for project context
- Check ~/.claude/skills/ for which skills are already built
- Check any active expert workspace's kanban.json for pipeline state
- Resume from the next unbuilt skill or unprocessed kanban card