Irreducibly Human Series · TIC TOC · Course Architecture Document

AImagineering: full table of contents and chapter specifications

What AI can't do — the graduate engineering design pipeline

Bear Brown & Company / Kindle Direct Publishing, 2026
Version 0.2  ·  March 2026  ·  Reviewed by Dev the Dev

Contents

  1. Book Concept and Thesis
  2. Learner Profile
  3. Three-Act Learning Arc
  4. Course-to-Chapter Mapping
  5. Learning Outcomes
  6. Chapter-by-Chapter Specifications
  7. Assessment Architecture

Section 1

Book Concept and Thesis

One-sentence version
This book teaches graduate engineers to do the parts of design that AI cannot — from reframing the brief before generating, to committing their name to a solution worth deploying.

Thesis: Ideation is now a tool operation, not a cognitive achievement — which means everything that happens before and after ideation is now the work of design.

The book succeeds if: a reader who completes it can stand in front of a design problem, resist the pull to generate immediately, ask the right human questions first, direct AI tools with authority rather than dependency, and ultimately commit to a course of action they can defend on its merits — not just its polish.

Section 2

Learner Profile

Primary reader

A graduate engineering student who prompts fluently, accepts the brief as given, generates prolifically, struggles to choose among outputs, has never been asked to reframe a problem rather than solve it, and has never been required to commit to a decision in a way that makes them personally accountable for its consequences. They mistake fluent generation for good judgment.

Prior knowledge map

Prerequisite | Safe to assume?
AI tool operation at Course 1 level | Safe
Basic design thinking vocabulary | Probably safe
Engineering domain expertise (at least one field) | Safe
Neri Oxman's Krebs Cycle of Creativity | NOT SAFE — introduced Ch. 2

Prior misconceptions the course must address

Misconception | What the course argues instead
"More ideas = better design process" | The course inverts this: choosing well among many plausible ideas is now the hard part
"AI ideation is the hard part" | The course argues it is now the easy part
"Design Thinking ends at prototype" | The course adds Commit as the missing stage
"Commitment means certainty" | Commitment requires accepting uncertainty, not eliminating it

Section 3

Three-Act Learning Arc

Arc statement: This book takes the reader from fluent generator to accountable designer by first revealing what AI cannot see in the brief, then developing the human capacities that produce what AI cannot generate, then demanding the commitment that AI can never make.
Act | Weeks | Chapters | What the student gains
Act One — The Material and the Human | 1–7 | Ch. 1–4 | Knows exactly what their tools do and don't do; has one finding about their user that no simulation would have surfaced; can reframe a brief on explicit criteria
Act Two — The Human Capacities | 8–13 | Ch. 5–7 | Can build something real, evaluate whether it works, and articulate what the data shows and what it cannot show
Act Three — The Commit | 14–15 | Ch. 8–10 | Can commit to a specific course of action with their name on it, explain the metacognitive switch that made the commit possible, and demonstrate the full pipeline on a new brief

Transition conditions

Act One → Act Two

Student has documented the grain of their primary tool AND conducted one genuine empathy investigation

Act Two → Act Three

Student has a defended problem reframe, an ideation curation, a prototype, and test results with interpretive judgment

Act Three → Completion

Student has produced a Commit document that survives peer critique, and can narrate every human judgment call in their process

Section 4

Course-to-Chapter Mapping

Stage | Weeks | Chapter | Core claim | Key deliverable
Orientation | 1 | Ch. 1 — The Thirty-Minute Designer | You already know how to generate. That is not the same as knowing how to design. | 30-min experiment audit
The Material | 2–3 | Ch. 2 — Finding the Grain | Every tool has a grain. Working with it is craft. Ignoring it is waste. | Grain documentation
Empathize | 4–5 | Ch. 3 — What Simulation Cannot Feel | AI simulates users. It cannot meet them. | Empathy investigation
Define | 6–7 | Ch. 4 — The Brief Is a Hypothesis | The brief you were handed is a hypothesis. Reframe it before you generate. | Three reframes + defense
Ideate | 8 | Ch. 5 — One Week for the Dreamer | AI is the best Dreamer tool ever built. Curation is still yours. | 30+ concepts + curation defense
Prototype | 9–11 | Ch. 6 — The Realist Builds | AI accelerates the build. The human decides what to build. | Prototype + specification doc
Test | 12–13 | Ch. 7 — The Critic Tests | The data tells you if it works. You decide if it should. | Interpretive judgment document
Commit | 14–15 | Ch. 8 — The Commit | Design Thinking ends at Test. This is the stage it omits. | Commit document (draft → critique → final)
Synthesis | 15 | Ch. 9 — The Metacognitive Switch | The capacity that directs the others. No AI equivalent. | Process reflection
Capstone | TBD | Ch. 10 — The Full Pipeline | Can you do it without the scaffolding? | Full pipeline presentation

Section 5

Learning Outcomes

By the end of this course, students will be able to:

  1. Identify the grain of at least three AI tools — what each does naturally, what it resists, and where the human must supply what the tool cannot (Apply · Tier 4)
  2. Conduct an empathy investigation that produces findings an AI simulation of the user could not have generated (Apply · Tier 3)
  3. Reframe a given design brief into at least two alternative problem definitions, selecting and defending one on explicit criteria (Analyze · Tier 4)
  4. Direct an AI ideation session producing a defined quantity of concepts, then apply human curatorial judgment to select and develop the most promising (Apply + Evaluate · Tiers 1 + 4)
  5. Construct a prototype using AI acceleration while documenting the identification decisions that the AI could not supply (Apply · Tiers 4 + 5)
  6. Evaluate prototype test results using the three legitimacy types — pragmatic, moral, and cognitive — identifying where human interpretive judgment is required beyond what the data shows (Evaluate · Tier 4)
  7. Commit to a design direction by staking their name on a specific course of action, documenting accountability and unresolved uncertainty (Create · Tier 7)
  8. Demonstrate through every deliverable that they can name one judgment call that required their values, domain knowledge, or accountability that an AI could not have made on their behalf (Metacognitive · all levels)

Outcome-to-chapter map

Chapter | Bloom's level | Assessable? | Maps to course need?
Ch. 1 | Analyze | Yes — 500-word audit | Yes — establishes the inversion
Ch. 2 | Apply | Yes — grain documentation | Yes — tool authority
Ch. 3 | Apply | Yes — empathy investigation | Yes — irreducible human contact
Ch. 4 | Analyze | Yes — reframe defense | Yes — problem formulation
Ch. 5 | Evaluate | Yes — curation defense | Yes — judgment over generation
Ch. 6 | Apply | Yes — specification doc | Yes — identification decisions
Ch. 7 | Evaluate | Yes — interpretive judgment doc | Yes — legitimacy types
Ch. 8 | Create | Yes — Commit document | Yes — the course's core claim
Ch. 9 | Metacognitive | Yes — process reflection | Yes — directs the others
Ch. 10 | Create | Yes — pipeline presentation | Yes — transfer demonstration

Section 6

Chapter-by-Chapter Specifications

Act One · Weeks 1–7 · Chapters 1–4

The Material and the Human

The tools, the user, and the brief — before a single concept is generated

Chapter 1 · The Thirty-Minute Designer · What you learned to do and why it is now the easy part

What the student learns to do: Distinguish between the fluency of generation and the judgment of design — specifically, identify what questions were not asked in a fully AI-assisted 30-minute design session.
Learning outcomes
  • Distinguish generation fluency from design judgment (Analyze)
  • Identify at least five categories of human judgment skipped in a standard AI-assisted design session (Apply)
  • Articulate the course argument in one sentence in their own words (Understand)
Chapter opening

In medias res. No theory. Students open a brief, open their AI suite, and have 30 minutes. The output arrives. The chapter begins: "The output is good. That is not the point."

Core content blocks
  1. The thirty-minute experiment — what was produced and what was assumed
  2. The questions that weren't asked — a taxonomy of the human judgment the session skipped
  3. The Tier 1 map — what AI does at superhuman level; what this means for the engineer
  4. The cognitive forklift reframed for design — applied to the design studio
  5. The course argument stated directly — this course teaches the five things the forklift cannot do
Worked example
Brief: "Design an onboarding experience for a new employee." AI produces a complete, polished 5-step onboarding flow in under 30 minutes. The analysis reveals: no questions were asked about which type of new employee, what organization culture actually is vs. what HR says it is, what previous onboarding experiences felt like, or what "success" means to someone 90 days in. The output is generically correct and specifically wrong.
Assessable exercises
  1. Run the thirty-minute experiment with your own brief. Write a 500-word audit of what the session assumed that it never asked. (Apply · HHD required)
  2. Compare your AI-generated output to one you would produce after two weeks of user research. What is different? What is missing? (Analyze)
  3. In one sentence, state the course argument in your own words. Not the textbook's words. (Understand)
Bridge to Ch. 2: "The output assumes a user. You haven't met them yet. Before you do — you need to know what your tools actually do."

Act One continued

Chapter 2 · Finding the Grain · What your tools do naturally, what they resist, and why it matters before you touch a brief

What the student learns to do: Identify the grain of at least two AI tools — the natural affordances and resistances — and document what this means for when to use them and when to work against them.
Learning outcomes
  • Map the grain of at least two AI tools against a specific design brief (Apply)
  • Identify at least one design decision that changes because of grain awareness (Analyze)
  • Place AI tools within the Krebs Cycle of Creativity and explain where each is strongest and weakest (Understand)
Chapter opening

The carpenter and the wood grain. Crawford's Shop Class as Soulcraft. The tool that has a grain is not the enemy of craft — ignoring the grain is.

Core content blocks
  1. The grain metaphor applied to AI tools — what each does easily, what it fights
  2. The Krebs Cycle of Creativity (Oxman) — Science → Engineering → Design → Art. AI is strong at Engineering. Weakest at Art — converting behavior into new perceptions. Prerequisite note: Krebs Cycle introduced here — not assumed.
  3. Mapping the grain of three tools: a language model, an image generator, a code assistant
  4. The grain as a design resource — working with affordances rather than against them
  5. Platform awareness as a design competency — choosing tools based on what the brief needs, not habit
Worked example
A student designing a community health intervention. The language model's grain: produces coherent, complete, Western-normative health communication. Its resistance: cannot produce communication that feels like it came from within the community rather than addressed to it. Working with the grain means using it for structural scaffolding. Working against it means expecting cultural authenticity it cannot supply.
Assessable exercises
  1. Document the grain of your primary AI tool for your capstone domain — what it does naturally, what it resists, and one specific design decision that changes because of this knowledge. (Apply)
  2. Place your primary tool on the Krebs Cycle. Where is it strongest? Where does human judgment have to compensate? (Analyze)
  3. Given your capstone brief, which tool is best suited for which stage of the design process? Defend the assignment. (Evaluate)
Bridge to Ch. 3: "You know your tools. You still don't know your user."

Chapter 3 · What Simulation Cannot Feel · The empathy investigation and the gap between user modeling and user contact

What the student learns to do: Conduct an empathy investigation that produces at least one finding that an AI simulation of the user could not have generated.
Learning outcomes
  • Conduct an empathy investigation (minimum 2 human contacts) using observation, interview, and artifact analysis (Apply)
  • Document at least one finding that would not appear in any AI-generated persona for the user group (Apply)
  • Explain why plausible user simulations are insufficient for design that serves specific people in specific contexts (Analyze)
Chapter opening

An AI-generated user persona next to a field note from an actual conversation. They are not the same document.

Core content blocks
  1. The difference between user simulation and user contact — what each produces
  2. Why AI personas are plausible and wrong — the training data problem for edge cases
  3. The empathy investigation protocol — observation, interview, artifact analysis
  4. What to look for that simulation misses — the unexpected, the contradictory, the embodied
  5. Tier 3 named directly — AI simulates interpersonal intelligence; humans live it
Worked example
Students designing for elderly users in low-income housing. The AI persona produces a coherent, internally consistent elderly user with predictable needs. The field investigation finds: the user has a specific relationship with a particular window in her apartment that organizes her entire daily routine in ways no needs assessment would surface. That finding changes the design.
Assessable exercises
  1. Conduct one genuine empathy investigation (minimum 2 human contacts). Document one finding that would not appear in any AI-generated persona for your user group. (Apply)
  2. Compare your AI-generated persona to your field findings. Where does the persona succeed? Where does it fail? What type of failure is it? (Analyze)
  3. What is the design consequence of relying on the AI persona alone? Name a specific decision that would have been wrong. (Evaluate)
Bridge to Ch. 4: "You know your tools. You know something real about your user. Now you have to decide what problem is worth solving."

Chapter 4 · The Brief Is a Hypothesis · Problem formulation and the art of reframing before generating

What the student learns to do: Reframe a given design brief into at least two alternative problem definitions, selecting and defending one on explicit criteria.
Learning outcomes
  • Generate at least three alternative problem definitions from a single brief (Apply)
  • Evaluate each reframe against explicit criteria — user impact, feasibility, alignment with empathy findings (Evaluate)
  • Select and defend a reframe in a 3-minute presentation, fielding peer objections (Create)
Chapter opening

The classic brief: "Design a faster horse." The reframe: "Help people move between places efficiently." The AI response to each brief is radically different. The choice of brief is a human judgment.

Core content blocks
  1. The rationalist vs. co-evolutionary models of design — solving the given problem vs. reframing it
  2. Primary generators — the conceptual anchors that shape problem formulation
  3. The reframe protocol — five questions that produce alternative problem definitions
  4. Criteria for choosing among reframes — who decides, on what basis
  5. AI as a reframe stress-tester — using AI to find the failure modes of each reframe before committing
Worked example
Brief: "Reduce food waste in university dining halls." Three reframes with different solution spaces and different values implications. The choice is not a design decision — it is a values decision. The student must defend it.
Assessable exercises
  1. Produce three reframes of your capstone brief using the reframe protocol. (Apply)
  2. Evaluate each reframe against explicit criteria. Which survives the stress-test? (Evaluate)
  3. Select and defend your reframe in a 3-minute presentation to peers. Field two objections. Revise or hold your position with stated reasons. (Create)
Bridge to Ch. 5: "You have a problem worth solving. Now the Dreamer gets one week."

Act Two · Weeks 8–13 · Chapters 5–7

The Human Capacities

One pipeline stage per chapter. Where the judgment builds.

Chapter 5 · One Week for the Dreamer · AI-assisted ideation, human curation, and the judgment that separates 50 ideas from 3

What the student learns to do: Direct an AI ideation session that produces a defined quantity of concepts, then apply human curatorial judgment to select and develop the most promising three.
Learning outcomes
  • Direct an AI ideation session that produces minimum 30 genuine concepts, not variations on one concept (Apply)
  • Conduct a documented curation session reducing 30+ concepts to 3 with explicit criteria (Evaluate)
  • Write a defense of each selected concept that explains what made it worth developing over the alternatives (Create)
Chapter opening

The Dreamer/Realist/Critic framework (Disney Imagineering). This is the Dreamer's week. The Dreamer's job is not to evaluate — it is to generate without premature constraint. AI is the best Dreamer tool ever built. This week is about learning to use it without becoming dependent on it.

Core content blocks
  1. The Dreamer/Realist/Critic framework — what each mode does and when each is needed
  2. AI as Dreamer — why this is the right role and the only week it owns
  3. The ideation session protocol — directing AI generation toward genuine divergence, not repetition
  4. The curation problem — why choosing among 50 plausible options is harder than generating them
  5. Curatorial criteria — what makes a concept worth developing beyond its plausibility
Worked example
50 concepts generated in one afternoon for the food waste reframe. Curation reveals: 40 are variations on the same social comparison mechanic; 8 are interesting but infeasible in a semester; 2 are genuinely surprising. Those two get developed. The analysis of why the surprising ones survived is the lesson.
Assessable exercises
  1. Run an AI ideation session producing minimum 30 concepts for your capstone brief. (Apply)
  2. Conduct a documented curation session. Select 3 concepts. Write a 300-word defense of each selection. (Evaluate)
  3. Name one judgment call in the curation process that required your values, your domain knowledge, or your accountability — that an AI tool could not have made on your behalf. (Create · HHD required)
Bridge to Ch. 6: "You have three concepts worth building. The Realist takes over."

Chapter 6 · The Realist Builds · AI-accelerated prototyping and the identification decisions that only humans make

What the student learns to do: Construct a prototype using AI acceleration while documenting the identification decisions — what to build and why — that the AI could not supply.
Learning outcomes
  • Produce a low-fidelity prototype of a primary concept using AI acceleration (Apply)
  • Document minimum 5 identification decisions — choices the AI could not make without human input (Analyze)
  • Articulate the relationship between prototype fidelity and the specific question it is designed to answer (Evaluate)
Chapter opening

The Realist's job is to take the Dreamer's concepts and make them real. AI accelerates the build. The human decides what to build and what to test.

Core content blocks
  1. The Realist's cognitive job — translate concept into testable artifact
  2. AI as build accelerator — code, design, content, structure
  3. The identification decisions — every prototype encodes assumptions about what matters; AI cannot supply these
  4. The specification document — translating human judgment into a build brief for AI tools
  5. Knowing when to stop building — the prototype answers a specific question; it is not finished product
Worked example
The dining hall receipt system. The AI builds the interface in an afternoon. The identification decisions the human must supply: which waste metric is most motivating, where to surface it in the dining hall journey, individual vs. aggregated data, what happens when a student sees demoralizing data. None of these are in the AI's build brief until the human puts them there.
Assessable exercises
  1. Build a low-fidelity prototype of your primary concept using AI acceleration. (Apply)
  2. Submit a specification document identifying every decision the AI could not make — minimum 5 explicit identification decisions. (Analyze)
  3. What specific question does this prototype answer? What question does it deliberately not answer yet? (Evaluate)
Bridge to Ch. 7: "You have something real. Now the Critic asks whether it actually works."

Chapter 7 · The Critic Tests · Interpretive judgment, the three legitimacy types, and what the data cannot tell you

What the student learns to do: Evaluate prototype test results using the three legitimacy types — pragmatic, moral, and cognitive — identifying where human interpretive judgment is required beyond what the data shows.
Learning outcomes
  • Conduct prototype testing with minimum 5 users and document results (Apply)
  • Evaluate results against all three legitimacy types — not just pragmatic (Evaluate)
  • Identify at least one finding the data cannot resolve and specify what additional judgment or evidence would be required (Analyze)
Chapter opening

The test results are in. They show the prototype works. The question the data cannot answer: should it?

Core content blocks
  1. The three legitimacy types — pragmatic (does it work), moral (should it exist), cognitive (can it be trusted)
  2. Why AI achieves pragmatic legitimacy and struggles with the others
  3. The interpretive leap — what the data shows and what it implies are different things
  4. Reading test results as a designer — what counts as evidence, what counts as noise
  5. The failure modes of interpretation — over-trusting data, under-trusting human judgment, confusing metric with meaning
Worked example
The dining hall receipt prototype tests well: 73% of users rate it motivating. The interpretive judgment required: who the other 27% are and why it failed to motivate them; whether short-term motivation translates to behavior change; whether the intervention is equitable across income levels; what unintended consequences it may carry. None of these are in the 73% figure.
Assessable exercises
  1. Conduct prototype testing with minimum 5 users. Document results. (Apply)
  2. Submit an interpretive judgment document that addresses all three legitimacy types. (Evaluate)
  3. Name one finding the data cannot resolve. What would you need — in the way of evidence, expertise, or judgment — to address it? (Analyze)
Bridge to Ch. 8: "You know what the data shows and what it doesn't. Now you have to decide."

Act Three · Weeks 14–15 · Chapters 8–10

The Commit

The stage Design Thinking omits

Chapter 8 (strengthened spec v0.2) · The Commit · Staking your name, acknowledging uncertainty, and the judgment that AI can never make

What the student learns to do: Commit to a design direction by staking their name on a specific course of action — through a draft, peer critique, and revised final — documenting what they are accountable for and what uncertainty remains.
Learning outcomes
  • Produce a draft Commit document that specifies a course of action, its evidential basis, acknowledged uncertainty, and stated accountability (Create)
  • Critique a peer's Commit document using the four diagnostic questions, identifying where specificity, accountability, or honesty about uncertainty is missing (Evaluate)
  • Revise and finalize a Commit document that survives peer scrutiny (Create)
  • Articulate the difference between commitment under uncertainty and certainty, and explain why AI cannot make the Commit on the designer's behalf (Analyze)
Chapter opening

Design Thinking ends at Test. This chapter is the stage it omits. AI iterates forever. Humans commit. The Commit is not certainty — it is the decision to act despite uncertainty, with your name attached to the consequences.

Core content blocks
  1. Why Design Thinking omits Commit — it was designed as an ideation methodology, not a deployment methodology. The omission was invisible when prototyping was hard. AI has made it catastrophic.
  2. What Commit actually requires — a specific course of action (not "explore further"); evidence from the test phase; acknowledged uncertainty; stated accountability for what happens if it fails.
  3. The bad Commit — before peer critique — the food waste worked example shown first as a vague, over-qualified, accountability-free draft. This is the model for what peer critique is designed to find and fix.
  4. The peer critique protocol — four diagnostic questions: Specificity, Evidential basis, Honesty about uncertainty, Accountability.
  5. The good Commit — after peer critique — the same food waste example revised: specific, evidentially grounded, honest about uncertainty, with a named accountable designer.
  6. Phronesis as the meta-capacity — the peer critique round is not just a quality gate. Students who watch their Commit fail under peer scrutiny before it fails in the real world are developing phronesis. Students who only write the document once are practicing confidence, not judgment.
The Commit document — required five-element structure
  1. The decision: A specific course of action — not a direction, a recommendation, or an option
  2. The evidence: Direct citations from the test phase — what the data showed
  3. The uncertainty: What the designer does not know and cannot know before deployment
  4. The accountability: What the designer is responsible for if it fails — stated specifically, bounded explicitly
  5. The revision condition: What new information would change this decision
The peer critique protocol — four diagnostic questions
  • Specificity: Could someone act on this recommendation without a follow-up conversation?
  • Evidential basis: Is every claim traceable to the test phase, or is some of it confidence dressed as evidence?
  • Honesty about uncertainty: Does this document say what the designer doesn't know, or only what they know?
  • Accountability: Is there a name attached to specific consequences — or is the document engineered to distribute blame?
Bad draft — before peer critique

"Based on our testing, we recommend exploring the dining hall receipt concept further, with additional user research to refine the messaging and address equity concerns. The concept showed promising results and has potential for positive behavior change."

What peer critique finds: No specific action. No bounded accountability. Uncertainty acknowledged but used to defer rather than to scope. "Exploring further" is a Commit document's equivalent of iterating forever.

Revised Commit — after peer critique

"I am recommending the dining hall receipt system with dollar-value framing, integrated with the existing POS infrastructure, piloted in one dining hall for one semester. I am accountable for: accurate waste calculation, clear user communication, and equitable experience across income levels. I am not accountable for behavior change beyond the pilot semester, or outcomes in dining halls with different demographics. This recommendation should be revisited if the pilot shows differential impact by income level exceeding 15 percentage points."

What the revision demonstrates: Specificity that enables action. Accountability that is bounded and honest. Uncertainty scopes rather than protects. A revision condition concrete enough to trigger.

Assessable exercises
  1. Produce a draft Commit document for your capstone design using the required five-element structure. (Create)
  2. Apply the four peer critique diagnostic questions to a partner's Commit document. Deliver written feedback naming every place where specificity, evidential basis, honesty about uncertainty, or accountability is missing. (Evaluate)
  3. Revise and finalize your Commit document. Name one judgment call in the Commit that required your values, your domain knowledge, or your accountability — that an AI tool could not have made on your behalf. (Create · HHD required)
Weighting flag (OQ-006): The Commit document is weighted at 10% in the current assessment distribution. Consider whether this reflects the chapter's importance as the course's most original contribution. A 10% weighting may signal to students that the Commit is not the course's center of gravity.
Bridge to Ch. 9: "You made a commitment. What made that possible? The answer is not confidence, expertise, or data. It is the capacity to know which mode the moment required — and to switch to it."

Chapter 9 · The Metacognitive Switch · Knowing which mode the room needs, and the intelligence that directs the others

What the student learns to do: Identify which of the three modes (Dreamer/Realist/Critic) a design situation requires and articulate the metacognitive switch — the judgment that directs the process rather than executes it.
Learning outcomes
  • Identify at least two moments in their capstone process where a mode switch was required and describe what triggered the recognition (Analyze)
  • Explain why the metacognitive switch has no AI equivalent (Analyze)
  • Articulate what each mode costs when it runs too long — and what the switch produced in their specific case (Evaluate)
Chapter opening

The most dangerous moment in any design process is when the Dreamer won't stop dreaming, or the Critic won't let the Dreamer start. The capacity to recognize which mode is needed — and to switch — is what the conductor does that no instrument can do for itself.

Core content blocks
  1. The metacognitive switch defined — knowing which mode the moment requires, not just executing a mode well
  2. Signs that each mode has gone on too long — the Dreamer who can't curate, the Realist who won't stop building, the Critic who prevents the Commit
  3. AI as a mode amplifier — it will Dream forever, Realize indefinitely, Critique without resolution. The human is the circuit breaker.
  4. The human as director of the process — the role that has no AI equivalent, and why Tier 4 is the directing intelligence, not just a capacity among others
  5. Developing the switch — retrospective analysis as the primary development mechanism; the switch cannot be taught prospectively, only recognized retrospectively and then anticipated
Worked example — placeholder (OQ-002)
To be drawn from student capstone process reflection. Placeholder: a design process where the Dreamer phase extended two weeks past the intended endpoint because the AI kept producing compelling variations — and the cost was a compressed Realist phase that produced a weaker prototype.
Assessable exercises
  1. Write a 500-word process reflection documenting at least two moments when you had to switch modes — what triggered the recognition, what the switch cost, and what it produced. (Analyze)
  2. Name one moment when you failed to switch and describe the consequence. (Analyze)
  3. What is the mode you default to? What does that default cost you in a design process? (Evaluate)
Bridge to Ch. 10: "You can name the switch. Now: can you run the whole pipeline without the scaffolding?"

Chapter 10 (completed spec v0.2) · The Full Pipeline · Transfer, demonstration, and the argument the series is making

What the student learns to do: Apply the full AImagineering pipeline to a new brief in condensed form — demonstrating that the capacities developed across the course are internalized as judgment rather than executed as procedure.
The transfer test — why this chapter exists: A student who can run the pipeline with the book open has learned a procedure. A student who can run it on a new brief, in compressed time, without step-by-step scaffolding, has internalized the judgment. If the capacities are not present in Chapter 10 performance, they were never present — they were compliance.
Learning outcomes
  • Apply the full pipeline to a new brief in condensed form (one week), with each stage explicitly narrated (Create)
  • Identify, unprompted, where in the condensed process the human judgment layers were required and what they produced (Analyze)
  • Articulate the series argument — what is irreducibly human in AI-augmented design, and why — in their own terms, not the book's (Evaluate)
  • Produce a pipeline narrative that could be delivered to a client, employer, or review committee (Create)
Chapter opening

This chapter does not summarize the book. It tests it. A new brief. No chapter numbers to consult. No worked example to follow. The question is not "did you learn the pipeline?" It is "can you use it?"

Core content blocks
  1. The pipeline as a single argument — not a sequence of methods but one integrated claim: the human judgment layers are not supplements to AI-assisted design; they are the design.
  2. The condensed pipeline brief — a new design problem introduced in class, deliberately chosen to prevent direct transfer of the capstone worked examples. Domain unfamiliar enough that domain expertise cannot substitute for process. Complex enough to require genuine reframing. Consequential enough to require a real Commit.
  3. What has changed for this student — specifically — a structured reflection protocol that requires concrete behavioral change, not generic growth statements. "I now ask three reframe questions before touching a tool" is evidence. "I have a deeper appreciation for human judgment" is not.
  4. The professional application — students name the specific role they are moving into and describe one decision in that role where the pipeline applies directly.
  5. The series argument and what comes next — where this course sits in the Irreducibly Human series; what the next course takes on; why the Commit in this course becomes a moral architecture in the next.
Worked example — placeholder (OQ-002)
To be drawn from student capstone presentations. Placeholder structure: one pipeline narrative showing the full sequence of human judgment calls — from grain identification through the Commit — presented as the model of what the Chapter 10 assessment requires.
Assessable exercises
  1. Apply the full pipeline to the condensed brief introduced in class. Produce a pipeline narrative document that covers every stage and names every human judgment call. Create
  2. In your pipeline narrative, identify the three moments where you were most tempted to let the AI decide. What did you do instead? What did it produce? Analyze
  3. Name one judgment call anywhere in your full course capstone that required your values, your domain knowledge, or your accountability — that an AI could not have made on your behalf. This is your graduation statement from AImagineering. Create — Human Half Declaration, final
  4. Describe the role you are moving into. Name one decision in that role where this pipeline applies directly. What does the pipeline require you to do that you would not have done before this course? Evaluate — Professional Application
Final presentation criteria

Students present their capstone process as a pipeline narrative — not the output, but every human judgment call, every metacognitive switch, and the Commit document they stand behind. Assessment criteria: specificity of judgment call narration, evidence of metacognitive switching, quality of Commit document, credibility of the Human Half Declaration.

Section 7

Assessment Architecture

The Human Half Declaration appears in every major deliverable. Not optional. A submission that cannot fill it in has not done the human half of AImagineering.
Deliverable · Chapter · Weight · Bloom's level
30-min experiment audit · Ch. 1 · 5% · Analyze
Grain documentation · Ch. 2 · 10% · Apply
Empathy investigation · Ch. 3 · 15% · Apply
Reframe defense · Ch. 4 · 15% · Evaluate / Create
Ideation curation + defense · Ch. 5 · 10% · Evaluate
Prototype + specification doc · Ch. 6 · 15% · Apply / Analyze
Interpretive judgment document · Ch. 7 · 10% · Evaluate
Commit document (draft → critique → final) · Ch. 8 · 10% · Create
Process reflection · Ch. 9 · 5% · Analyze
Full pipeline presentation · Ch. 10 · 5% · Create
Weighting note (OQ-006): Weights sum to 100%. The Commit document is currently weighted at 10% — the same as grain documentation and ideation curation. Given that the Commit is the course's most original contribution and its stated center of gravity, consider whether 10% accurately signals its importance to students. A rebalancing toward the back half of the course may be warranted in v0.3.
Irreducibly Human: AImagineering — TIC TOC v0.2 · March 2026
Chapter specs complete. Worked examples for Ch. 9–10 pending capstone material (OQ-002). Resolve OQ-001 (co-instructor) and OQ-002 (capstone domain) before content production begins.