Irreducibly Human Series

AImagineering: The Full Design Pipeline

What AI can't do — and what that means for engineers who use it

Course [XXXX 5XXX]  ·  4 Credit Hours  ·  Fall [Year]  ·  In-person
Instructor: Nik Bear Brown  ·  [email protected]
Version 1.0  ·  [Distribution Date]  ·  Reviewed by Dev the Dev

Contents

  1. Welcome
  2. The Irreducibly Human Series
  3. Course Information
  4. Learning Outcomes
  5. Required Materials
  6. Assessment and Grading
  7. Course Schedule
Section 1

Welcome

I've spent years watching capable engineers generate — fluently, prolifically, impressively. They can prompt. They can iterate. They can produce a polished design artifact in thirty minutes that would have taken a week five years ago. And I've watched the same engineers stand in front of a client, a review committee, or a deployment decision and not be able to answer the question that actually matters: Why this? Why now? And what happens if it fails?

The failure mode I've seen most consistently in AI-augmented design isn't a broken tool. It's an engineer who accepted the brief as given, generated options without reframing the problem, and committed to a direction because the output looked good — not because they could defend the decision on its merits. The work is polished. The judgment underneath it is borrowed.

That gap is what this course closes.

AImagineering is not a course about AI tools. You already know how to use them. It is a course about everything that happens before and after the tool runs — which is, increasingly, everything that matters. Before: knowing what problem is worth solving, meeting the human who has it, and reframing the brief before you generate. After: choosing among fifty plausible options, building the thing that tests the right question, reading what the data actually shows, and committing to a course of action with your name on the consequences.

The thesis of this course is simple and uncomfortable: ideation is now a tool operation, not a cognitive achievement. Which means the hard part of design — the part that takes judgment, accountability, and a human being willing to be wrong in a specific way — is no longer the week you spend generating. It is every week before and after.

What you will leave with: the complete AImagineering pipeline — Empathize, Define, Ideate, Prototype, Test, Commit — applied to a real design problem in your domain, with explicit documentation of every human judgment call that an AI tool could not have made on your behalf.

Here is what to do before we meet: read Chapter 1 of Irreducibly Human: AImagineering. Bring a brief — any brief, anything you've been handed or assigned or invented — and be ready to spend thirty minutes with it. The experiment happens in the first session. The analysis is the rest of the course.

— Nik Bear Brown | [email protected]
Section 2

The Irreducibly Human Series

We are in the early years of the most powerful ideation tools ever built. AI systems can generate more design options in an afternoon than a team could produce in a sprint. They are genuinely poor at knowing which problem is worth solving, meeting the person who has it, choosing among outputs with judgment rather than plausibility, and committing to a direction in a way that makes anyone accountable for what happens next.

The Irreducibly Human series develops exactly those capacities — the forms of reasoning and judgment that AI tools require humans to supply, and that your competitors who only learned to generate will not have.

The series entry point — Botspeak — builds the complete architecture for understanding what you are collaborating with: the tier taxonomy, the Five Modes, the cognitive nature of AI systems. This course — AImagineering — goes deeper on the design layer specifically, developing the pipeline capacities that Botspeak names but does not fully build: the empathy investigation that cannot be simulated, the reframe that changes what gets built, the Commit that no tool can make on your behalf.

The companion courses go deeper on adjacent layers. Conducting AI builds the full Tier 4 supervisory toolkit for engineers who deploy AI systems. Causal Reasoning builds the capacity to construct a defensible model of what causes what in a domain, and to know what that model can and cannot support. The courses can be taken in any order after Botspeak; each is self-contained while pointing toward the others.

Section 3

Course Information

Course Identifiers

Course Title: Irreducibly Human: What AI Can't Do — AImagineering: The Full Design Pipeline
Course Number: [XXXX 5XXX — assigned at CourseLeaf submission]
Credit Hours: 4
Term: Fall [Year]
Mode of Delivery: In-person
Components: Lecture/Seminar (1× weekly) + TA-led Studio Lab (1× weekly in-class)
Department: [TBD]

Meeting Information

Lecture/Seminar Sessions

Days and times: [TBD]  ·  Location: [Building, Room]

Studio Lab (TA-led)

Day and time: [TBD]  ·  Location: [Building, Room]

The Studio Lab is a required course component, not an optional recitation. It is where pipeline stages become practiced skills. Chapter 4's reframe defense cannot be rehearsed by reading alone — it requires live presentation with peer objection and structured critique. Chapter 8's peer critique of Commit documents requires supervised practice with real stakes. Missing the lab is not equivalent to missing a lecture — it is missing the part of the course where judgment consolidates.

Instructor

Name: Nik Bear Brown
Email: [email protected]
Response time: Within 48 hours on weekdays. Put URGENT in the subject line for time-sensitive questions.
Office / Zoom: [TBD]
Student hours: [Days, times, location] — booking link TBD
Preferred contact: Email for logistics. Student hours for anything that takes more than two sentences to answer well.

I hold student hours for you — not only for students with emergencies. Come to pressure-test your reframe before the defense, unpack what a grain-mapping exercise surfaced, understand where the Commit fits into professional practice, or see what a finished pipeline narrative looks like. The most productive conversations I have with students happen outside scheduled sessions.

Teaching Assistant

Name: TBA
Email: TBA
Studio Lab hours: TBA

The TA runs the weekly Studio Lab — designing exercises, facilitating reframe defenses and critique sessions, running the Commit peer review round, and returning written feedback on lab submissions. For questions about pipeline stages in practice, grain documentation, empathy investigation protocol, and weekly exercise work, the TA is your first resource. Tool and platform questions go to the TA first; if unresolved, the TA forwards to the professor.

Prerequisites

Official prerequisites: Botspeak (Course 1 — Irreducibly Human series) or equivalent AI fluency foundation; Graduate standing in Engineering or related field (exact CourseLeaf string TBD)

What this course assumes you know

You understand the difference between pattern completion and knowledge retrieval. You have used AI tools at Botspeak proficiency — specification, delegation, conversation, discernment, diligence. You have access to at least one AI tool (Claude, ChatGPT, Gemini, or equivalent).

What this course does not assume

Prior design thinking training. Prior engineering design coursework. Any background in human-centered design, UX, or product development. This course introduces the full pipeline from the ground up.

A note for students with design backgrounds

Students who arrive with prior design thinking training sometimes find the early weeks the most disorienting — specifically, the course's insistence that Design Thinking as typically taught ends at Test and omits the most consequential stage. That disorientation is the course working as intended. The Commit stage is not a harder version of the prototype review. It is a different cognitive and professional operation. Students who treat AImagineering as an advanced prompting course will produce technically correct exercises and miss the course. Students who approach the Commit as genuinely new terrain — regardless of prior design training — will get the most from it.

If you are missing a prerequisite, contact the instructor before the first week.

Section 4

Learning Outcomes

By the end of this course, students will be able to:

  1. Identify the grain of at least three AI tools — what each does naturally, what it resists, and where the human must supply what the tool cannot — and apply this knowledge to tool selection for specific design stages
  2. Conduct an empathy investigation that produces at least one finding an AI simulation of the user could not have generated, and explain why plausible user simulations are insufficient for design that serves specific people in specific contexts
  3. Reframe a given design brief into at least two alternative problem definitions, evaluate each against explicit criteria, and defend a selection in a live presentation that fields peer objection
  4. Direct an AI ideation session that produces a defined quantity of genuinely distinct concepts, then apply human curatorial judgment to select and develop the most promising three with written defense
  5. Construct a prototype using AI acceleration while documenting the identification decisions — what to build and why — that the AI could not supply without human input
  6. Evaluate prototype test results using the three legitimacy types — pragmatic, moral, and cognitive — identifying where human interpretive judgment is required beyond what the data shows
  7. Commit to a design direction by producing a Commit document that specifies a course of action, its evidential basis, acknowledged uncertainty, and stated accountability — and that survives structured peer critique
  8. Identify the metacognitive switch — the recognition of which mode (Dreamer/Realist/Critic) a design moment requires — through retrospective analysis of their own process
  9. Apply the full AImagineering pipeline to a new brief in condensed form, demonstrating that the capacities developed across the course are internalized as judgment rather than executed as procedure
  10. Name, in every major deliverable, one judgment call that required their values, domain knowledge, or accountability that an AI tool could not have made on their behalf
Section 5

Required Materials

Textbook

Title: Irreducibly Human: What AI Can't Do — AImagineering: The Full Design Pipeline
Author: Nik Bear Brown
Publisher: Bear Brown & Company / Kindle Direct Publishing, 2026
Availability: [Amazon Kindle / print link — TBD at publication]
Cost: [TBD]
Edition: First edition. No prior edition exists.

Supplementary Readings

Distributed throughout the semester at no cost. Required supplementary readings are marked [Required] in the weekly schedule. Optional readings are marked [Recommended] and are genuinely optional.

Required Technology

AI tools (free tiers available — no purchase required)

Students may use any combination of AI tools (e.g., Claude, ChatGPT, Gemini, or equivalent). The grain documentation exercises in Chapter 2 require working with at least two distinct tools. You are not required to use all of them — you are required to understand that the grain differs across tools and that this difference is a design decision.

Prototyping and documentation tools (free, browser-based)

Course platforms

Section 6

Assessment and Grading

Point Summary

Assessment                                               Points   Quality/Portfolio
Reading Responses (5 × 30 pts)                           150      ✓ 20 pts each
Weekly Studio Exercises (8 × 25 pts, drop lowest of 9)   200      ✓ 20 pts each
Studio Lab Participation                                 100      ✓ 20 pts component
Midterm                                                  100
Final Project — Pipeline Protocol Checkpoint             100      ✓ 20 pts
Final Project — Peer Review Checkpoint                   100      ✓ 20 pts
Final Project — Full Pipeline Presentation               250      ✓ 20 pts
Total                                                    1000

AI-Based Grading Approach

Due to the widespread use of Generative AI, grading is structured as follows:

800+ points — relative scale
  Top 25%      A
  Next 25%     A–
  Next 25%     B+
  Final 25%    B

Below 800 — absolute scale
  780–799      C+
  730–779      C
  700–729      C–
  600–699      D
  Below 600    F
Students below 800 points cannot earn a grade higher than C+, even if the relative curve would otherwise place them higher. The instructor reserves the right to make minor adjustments for fairness.
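As a minimal sketch of how the two-part rule combines, the logic can be expressed as follows. This is illustrative only: `final_grade` is a hypothetical helper, and the class percentile among students at 800+ points is assumed to be computed separately.

```python
def final_grade(points: int, percentile: float) -> str:
    """Illustrative sketch of the two-part grading rule.

    points     -- total out of 1000
    percentile -- standing among students at 800+ points (100.0 = top)
    """
    if points >= 800:
        # Relative scale: quartiles among students at 800+ points
        if percentile > 75:
            return "A"
        elif percentile > 50:
            return "A-"
        elif percentile > 25:
            return "B+"
        else:
            return "B"
    # Absolute scale below 800 points; the curve does not apply here
    if points >= 780:
        return "C+"
    elif points >= 730:
        return "C"
    elif points >= 700:
        return "C-"
    elif points >= 600:
        return "D"
    return "F"

print(final_grade(850, 80.0))  # A
print(final_grade(790, 95.0))  # C+ -- below 800, percentile is irrelevant
```

Note that the percentile argument only matters at 800+ points; below that threshold the absolute bands decide the grade regardless of class standing.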

Quality/Portfolio Score (20 points — on all qualifying assignments)

Every assignment carrying the Quality/Portfolio component is evaluated on a relative 20-point scale comparing your work to peers, emphasizing depth of design judgment, specificity of human judgment call identification, and evidence that the irreducibly human reasoning was performed by you — not delegated to a tool.

Percentile Band           Score
Bottom 25%                5 pts
26th–50th percentile      10 pts
51st–75th percentile      15 pts
Top 25%                   20 pts

Full band descriptions for each assignment type are distributed with each assignment prompt.

AI Use in Assignments

You are encouraged to use generative AI tools on every assignment. Citation is required. Undisclosed AI use is an academic integrity violation. Disclosed AI use is not.

Every submission must include an AI Use Disclosure block:

AI USE DISCLOSURE
Tool(s) used:
Portions assisted:
How used:
What I changed:
What the AI could not do: [name at least one judgment call that required
your values, domain knowledge, or accountability — this field is not optional]
The last field is the Human Half Declaration. A disclosure that cannot name one thing the AI could not do has not demonstrated that the student performed the irreducibly human layer of the design work. This is not a formality. It is the assessment.

Drop Policy

The lowest-scoring Studio Exercise is dropped. Eight of nine exercises count toward the final grade. This absorbs one week where a pipeline stage didn't click. It does not absorb a pattern of non-engagement.
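The drop policy is a simple computation; a sketch (assuming a hypothetical `exercise_total` helper) makes it concrete:

```python
def exercise_total(scores: list[float]) -> float:
    """Sum the Studio Exercise scores after dropping the single lowest one."""
    assert len(scores) == 9, "nine exercises are assigned; eight count"
    return sum(sorted(scores)[1:])  # discard the lowest score, keep eight

# One missed week (a zero) is fully absorbed by the drop
print(exercise_total([25, 25, 25, 25, 25, 25, 25, 25, 0]))  # 200
```

A second zero, by contrast, would count: the policy absorbs exactly one off week, not a pattern.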

Section 7

Course Schedule

The schedule maps each week to a chapter in Irreducibly Human: AImagineering. Read the assigned chapter before Session A. Come to Session A with the case in your head. Come to Session B ready to use the concept. Come to the Studio Lab ready to apply it.

Reading time per chapter: approximately 45–75 minutes  ·  ⚑ = graded deliverable due  ·  ★ = transition week

Act One · Weeks 1–7 · Chapters 1–4

The Material and the Human

The tools, the user, and the brief — before a single concept is generated

Week 1 · The Thirty-Minute Designer · Chapter 1
By the end of this week: Identify at least five categories of human judgment that a fully AI-assisted 30-minute design session skips — using your own experiment as the evidence.
Session A: In medias res. No theory. A brief is on the table. Thirty minutes. AI tools open. The output arrives. The output is good. That is not the point.
Session B: What was produced — and what was assumed. A taxonomy of the human judgment the session skipped. The course argument stated directly: AI has made ideation easy, which means everything before and after ideation is now the work of design.
Studio Lab: Thirty-minute experiment debrief — what categories of skipped judgment appeared across the room? (Ungraded — prepares for Exercise #1.)
⚑ Reading Response #1 (30 pts)
Run the thirty-minute experiment with your own brief. Write a 500-word audit of what the session assumed that it never asked. Identify at least five categories of human judgment the session skipped. Due before Session A, Week 2.
Week 2 · Finding the Grain · Chapter 2
By the end of this week: Map the grain of at least two AI tools against a specific design brief — what each does naturally, what it resists, and what this means for when to use it.
Session A: The carpenter and the wood grain. A community health intervention designed entirely with a language model produces communication that is coherent, complete, and addressed to the community rather than from it. The grain as the explanation.
Session B: The grain metaphor applied to AI tools. The Krebs Cycle of Creativity (Oxman) — Science → Engineering → Design → Art. Where each tool is strongest; where the human must compensate. Platform awareness as a design competency.
Studio Lab: Grain exploration session — students map one tool's grain against their capstone domain brief. (Ungraded — prepares for Exercise #1.)
Week 3 ★ · Finding the Grain (continued) · Chapter 2 (continued)
By the end of this week: Produce a grain documentation for your primary AI tool, including one specific design decision that changes because of grain awareness.
Session A: The Krebs Cycle worked through: mapping three tools — language model, image generator, code assistant — against a single design brief. Where each accelerates; where each misleads.
Session B: Choosing tools based on what the brief needs, not habit. The grain as a resource — working with affordances rather than against them. Bridge: "You know your tools. You still don't know your user."
Studio Lab: Studio Exercise #1 — grain documentation workshop.
⚑ Studio Exercise #1 (25 pts)
Document the grain of your primary AI tool for your capstone design domain: (1) what it does naturally against a specific brief; (2) what it resists; (3) one specific design decision that changes because of this knowledge; (4) placement on the Krebs Cycle with justification. Human Half Declaration required. Due before Studio Lab, Week 4.
⚑ Reading Response #2 (30 pts)
Return to your thirty-minute experiment brief from RR1. Apply grain awareness: which tool's grain shaped the output most? Where did working against the grain produce the wrong kind of output — and how would you have done it differently? Due before Session A, Week 4.
Week 4 · What Simulation Cannot Feel · Chapter 3
By the end of this week: Describe the specific difference between an AI-generated user persona and a genuine empathy finding — using your own domain as the example.
Session A: An AI-generated user persona next to a field note from an actual conversation. They are not the same document. A student designing for elderly residents in low-income housing: the AI produces a coherent, internally consistent user. The field investigation finds the window.
Session B: Why AI personas are plausible and wrong — the training data problem for edge cases. The empathy investigation protocol: observation, interview, artifact analysis. What to look for that simulation misses — the unexpected, the contradictory, the embodied.
Studio Lab: Empathy investigation design workshop — students plan their investigation approach for their capstone user. (Ungraded — prepares for Exercise #2.)
Week 5 ★ · What Simulation Cannot Feel (continued) · Chapter 3 (continued)
By the end of this week: Conduct a genuine empathy investigation and identify at least one finding an AI persona would not have produced.
Session A: What counts as a genuine empathy finding. How to document surprise. The design consequence of relying on AI simulation alone — naming a specific decision that would have been wrong.
Session B: Comparing AI persona outputs to field findings — where the persona succeeds, where it fails, and what type of failure it is. Bridge: "You know your tools. You know something real about your user. Now you have to decide what problem is worth solving."
Studio Lab: Studio Exercise #2 — empathy findings debrief and comparison.
⚑ Studio Exercise #2 (25 pts)
Submit your empathy investigation: (1) documentation of minimum 2 human contacts using observation, interview, or artifact analysis; (2) at least one finding that would not appear in any AI-generated persona for your user group; (3) comparison of your AI persona to your field findings — where it succeeds, where it fails, and what type of failure it is; (4) one specific design decision that would have been wrong if you had relied on the persona alone. Human Half Declaration required. Due before Studio Lab, Week 6.
⚑ Reading Response #3 (30 pts)
Describe the most important thing your empathy investigation found that surprised you. What does this finding imply for your capstone design brief? What would the AI persona have told you to do instead? Due before Session A, Week 6.
Week 6 · The Brief Is a Hypothesis · Chapter 4
By the end of this week: Generate at least three alternative problem definitions from your capstone brief using the reframe protocol.
Session A: The classic brief: "Design a faster horse." The reframe: "Help people move between places efficiently." The AI response to each brief is radically different. The choice of brief is a human judgment — and it is made before any tool runs.
Session B: The rationalist vs. co-evolutionary models of design. The reframe protocol — five questions that produce alternative problem definitions. Criteria for choosing among reframes: user impact, feasibility, alignment with empathy findings. AI as a reframe stress-tester.
Studio Lab: Reframe protocol workshop — students produce three reframes of their capstone brief with peer pressure-testing. (Ungraded — prepares for Exercise #3.)
Week 7 ★ · The Brief Is a Hypothesis (continued) · Chapter 4 (continued)
By the end of this week: Select and defend a reframe in a live 3-minute presentation, field peer objections, and revise or hold with stated reasons.
Session A: The food waste case: three reframes with different solution spaces and different values implications. The choice is not a design decision — it is a values decision. What makes a reframe defensible vs. arbitrary.
Session B: Evaluating reframes against explicit criteria. What changes when the problem statement changes. The reframe as a commitment — not a constraint to work around but a hypothesis to test. Bridge: "You have a problem worth solving. Now the Dreamer gets one week."
Studio Lab: Studio Exercise #3 — reframe defense presentations with structured peer critique.
⚑ Studio Exercise #3 (25 pts)
Reframe defense: (1) three reframes of your capstone brief produced using the reframe protocol; (2) evaluation of each reframe against explicit criteria — user impact, feasibility, alignment with empathy findings; (3) a 3-minute live defense of your selected reframe; (4) written response to peer objections — revise or hold, with stated reasons. Human Half Declaration required. Due before Studio Lab, Week 8.

⚠ Midterm (Week 7/8 — flex) · 100 pts

Multi-stage brief analysis. A novel design situation is provided with no annotation about which pipeline stages apply. Demonstrate Act One fluency in practice: document what the brief assumes that it never asked; map the grain of the tool most appropriate for ideation in this domain; identify the empathy investigation that would change the problem statement; produce two reframes and select one with a criteria-based defense; name the human judgment call that determines which reframe is worth pursuing.

No pipeline recitation. No framework description. Application only. This is the Act One → Act Two gate.

Act Two · Weeks 8–13 · Chapters 5–7

The Human Capacities

One pipeline stage per chapter. Where the judgment builds. You enter Act Two able to define the problem worth solving. You leave with something built, tested, and interpreted — and the judgment to know what the data shows and what it cannot.

Week 8 ★⚑ · One Week for the Dreamer · Chapter 5
By the end of this week: Direct an AI ideation session producing minimum 30 genuine concepts, then apply human curatorial judgment to reduce them to 3 with explicit defense.
Session A: The Dreamer/Realist/Critic framework. This is the Dreamer's week — the only week AI owns the session. The Dreamer's job is not to evaluate. It is to generate without premature constraint. What directing AI generation toward genuine divergence looks like vs. repetition.
Session B: The curation problem — why choosing among 50 plausible options is harder than generating them. Curatorial criteria: what makes a concept worth developing beyond its plausibility. Why the surprising concepts survive when the plausible ones shouldn't.
Studio Lab: Studio Exercise #4 — ideation session and initial curation workshop.
⚑ Reading Response #4 (30 pts)
After your ideation session: what proportion of your 30+ concepts were genuine variants vs. repetitions of the same mechanic? What made the two or three most interesting concepts interesting? What would the curation look like if "most surprising" were your only criterion? Due before Session A, Week 9.
⚑ Studio Exercise #4 (25 pts)
Ideation curation: (1) directed AI ideation session producing minimum 30 concepts — document with prompts used and concept count; (2) documented curation session reducing 30+ concepts to 3 with explicit criteria; (3) 300-word written defense of each selected concept. Human Half Declaration required: name one judgment call in the curation that required your values, domain knowledge, or accountability. Due before Studio Lab, Week 9.
Week 9 · The Realist Builds · Chapter 6
By the end of this week: Understand the identification decisions in prototype work — what the Realist must supply that no build accelerator can provide.
Session A: The dining hall receipt system. AI builds the interface in an afternoon. The identification decisions the human must supply: which waste metric is most motivating, where to surface it in the journey, individual vs. aggregated data, what happens when a student sees demoralizing data. None of these are in the build brief until the human puts them there.
Session B: The Realist's cognitive job: translate concept into testable artifact. The specification document — translating human judgment into a build brief for AI tools. Knowing when to stop building — the prototype answers a specific question; it is not a finished product.
Studio Lab: Studio Exercise #5 — specification document workshop for primary concept.
⚑ Studio Exercise #5 (25 pts)
Specification document: write a complete build specification for your primary concept that makes every identification decision explicit. Required: (1) the specific question this prototype is designed to answer; (2) minimum 5 identification decisions — choices the AI could not make without human input, with rationale for each; (3) the question this prototype is deliberately not answering yet. Due before Studio Lab, Week 10.
Week 10 · The Realist Builds (continued) · Chapter 6 (continued)
By the end of this week: Build a low-fidelity prototype using AI acceleration, with all identification decisions documented.
Session A: Build session: translating the specification document into a working prototype. What AI accelerates in the build; where identification decisions keep surfacing that weren't in the spec.
Session B: The relationship between prototype fidelity and the specific question it answers. What "done" means for a prototype — not finished, but testable. Preparation for the Critic's week.
Studio Lab: Prototype build work session — TA available for specification feedback and build support.
Week 11 ★ · The Realist Builds (continued) · Chapter 6 (continued)
By the end of this week: Complete and submit the prototype with full specification document.
Session A: Prototype review session — students present work-in-progress with identification decisions to peers. What decisions changed during the build that weren't in the specification?
Session B: Finalizing the prototype for testing. What a Critic needs from a Realist — what information must travel with the artifact into the test phase. Bridge: "You have something real. Now the Critic asks whether it actually works."
Studio Lab: Studio Exercise #6 — prototype submission and documentation review.
⚑ Studio Exercise #6 (25 pts)
Prototype submission: (1) low-fidelity prototype of your primary concept built using AI acceleration; (2) completed specification document with minimum 5 explicit identification decisions and rationale; (3) one paragraph on what specific question this prototype answers and what question it deliberately does not answer yet. Human Half Declaration required. Due before Studio Lab, Week 12.
Week 12 ★⚑ · The Critic Tests · Chapter 7
By the end of this week: Conduct prototype testing with minimum 5 users and understand the three legitimacy types before applying them to your data.
Session A: The test results are in. They show the prototype works. The question the data cannot answer: should it? The dining hall receipt prototype tests at 73% motivating. The interpretive judgment required: which 27% found it demotivating and why; whether short-term motivation translates to behavior change; whether the intervention is equitable across income levels.
Session B: The three legitimacy types: pragmatic (does it work), moral (should it exist), cognitive (can it be trusted). Why AI achieves pragmatic legitimacy and struggles with the others. Reading test results as a designer — what counts as evidence, what counts as noise.
Studio Lab: Test session debrief — students share test findings and identify where the three legitimacy types surface.
⚑ Reading Response #5 (30 pts)
After testing: what is the finding your data cannot resolve? Identify which legitimacy type it falls under — pragmatic, moral, or cognitive. What would you need — in the way of evidence, expertise, or judgment — to address it? Due before Session A, Week 13.
⚑ Final Project — Pipeline Protocol Checkpoint (100 pts)
Your pipeline design protocol: (1) reframed problem statement, precisely stated; (2) empathy finding that drove the reframe — why this problem is worth solving for this specific user; (3) grain-informed tool selection with rationale; (4) curation criteria applied to ideation output; (5) identification decisions that shaped the prototype; (6) preliminary test findings with legitimacy type analysis begun; (7) one paragraph identifying the single most important irreducibly human judgment in your design process so far that AI could not have made. Go/no-go reviewed before Week 13. Due end of Week 12.
Week 13 · The Critic Tests (continued) · Chapter 7 (continued)
By the end of this week: Produce an interpretive judgment document that addresses all three legitimacy types and names what the data cannot resolve.
Session A: In-class design work session. Instructor role: ask the questions the data won't ask. ("What would this intervention look like to the user who found it demotivating?" "Is this result equitable across the range of users in your domain?" "What assumption in your test design would change this finding if it were wrong?")
Session B: The failure modes of interpretation — over-trusting data, under-trusting human judgment, confusing metric with meaning. Preparation for the Commit: what the Critic hands to the Committer. Bridge: "You know what the data shows and what it doesn't. Now you have to decide."
Studio Lab: Open consultation — TA available for interpretive judgment document review.

No new reading assigned this week. Testing and interpretation work in progress.

Act Three · Weeks 14–15 · Chapters 8–10

The Commit

The stage Design Thinking omits. You enter Act Three with a tested prototype and interpreted results. You leave with a Commit document that has survived peer scrutiny — and a pipeline narrative that demonstrates every human judgment call from brief to deployment.

Week 14 ★⚑ · The Commit · Chapter 8
By the end of this week: Produce a draft Commit document using the five-element structure, critique a peer's draft using the four diagnostic questions, and understand the difference between a Commit and a recommendation.
This week requires a mandatory supervised Studio Lab session. The peer critique round cannot be completed by reading alone — it requires reviewing a real document with real stakes and delivering written feedback that names specific failures of specificity, evidence, honesty about uncertainty, and accountability.
Session A: Design Thinking ends at Test. This is the stage it omits. Why: it was designed as an ideation methodology, not a deployment methodology. The omission was invisible when prototyping was hard. AI has made it catastrophic. The bad Commit — vague, over-qualified, accountability-free — shown before peer critique.
Session B: The five-element Commit structure: the decision, the evidence, the uncertainty, the accountability, the revision condition. The four peer critique diagnostic questions. The good Commit — the same example revised. Phronesis as the meta-capacity that peer critique develops.
Studio Lab: Studio Exercise #7 — supervised peer critique of Commit document drafts.
⚑ Studio Exercise #7 (25 pts)
Commit document peer critique: (1) written peer review of one partner's Commit document draft applying all four diagnostic questions — specificity, evidential basis, honesty about uncertainty, accountability — with at least one specific finding per criterion; (2) one overall recommendation; (3) one question the reviewer cannot answer from the submitted materials alone. Due end of Studio Lab, Week 14.
Week 15 ★⚑ · The Commit + Metacognitive Switch + Full Pipeline · Chapters 8, 9, and 10
By the end of this week: Finalize and present your Commit document; articulate the metacognitive switch that made the Commit possible; demonstrate the full pipeline on a condensed new brief.
Session A: Full pipeline presentations (8–10 minutes each): the capstone process narrated as a pipeline — not the output, but every human judgment call, every metacognitive switch, and the Commit document the student stands behind.
Session B: The metacognitive switch — what it is, what it costs when it fails, and why it has no AI equivalent. The series argument: what is irreducibly human in AI-augmented design, and why. What the next course (Conducting AI / Causal Reasoning) takes on.
Studio Lab: Open session — no new material.
⚑ Final Project — Peer Review Checkpoint (100 pts)
Written peer review of one classmate's pipeline protocol. Requirements: apply the full pipeline rubric (grain, empathy, reframe, curation, identification decisions, legitimacy type analysis, Commit structure); at least one specific finding per pipeline stage; one overall recommendation; and one question the reviewer cannot answer from the submitted materials alone. Due end of Week 14.
⚑ Final Project — Full Pipeline Presentation (250 pts)

Complete pipeline presentation demonstrating the full AImagineering pipeline on your capstone design. Required sections:

Pipeline narrative: Every stage documented — grain identification, empathy investigation findings, reframe defense, ideation curation, prototype identification decisions, test results with legitimacy type analysis, Commit document in final form. Not the output: the judgment.

The Commit document (five-element structure):

  • The decision: a specific course of action — not a direction, a recommendation, or an option
  • The evidence: direct citations from the test phase — what the data showed
  • The uncertainty: what you do not know and cannot know before deployment
  • The accountability: what you are responsible for if it fails — stated specifically, bounded explicitly
  • The revision condition: what new information would change this decision

The Irreducibly Human section (required; weighted at 50% of the total grade):

  • Three specific judgment calls that required your values, domain knowledge, or accountability — stated specifically, with reasoning, and with consequence named
  • One judgment call that was tried-as-delegation and then reclaimed — what happened when you delegated it, what you found when you reclaimed it
  • An honest assessment of the collaboration quality — where AI was genuinely useful, where it produced confident-sounding noise, and what you would do differently

The metacognitive switch: At least two moments in your process where a mode switch was required — what triggered the recognition, what the switch cost, and what it produced.

The pipeline graduation statement: "I can stand in front of a design problem, resist the pull to generate immediately, ask the right human questions first, direct AI tools with authority rather than dependency, and commit to a course of action I can defend on its merits — not just its polish." Name one moment in this course that made that sentence true for you.

Peer review response: Written response to the Week 14 peer review, submitted with the final project.

Due end of Week 15.

Schedule at a Glance

Week | Chapter | Act | Major Deliverable | Points
--- | --- | --- | --- | ---
1 | Ch. 1 — The Thirty-Minute Designer | One | Reading Response #1 | 30
2 | Ch. 2 — Finding the Grain | One | — | —
3 | Ch. 2 — Finding the Grain (continued) | One | Studio Exercise #1 + RR #2 | 25 + 30
4 | Ch. 3 — What Simulation Cannot Feel | One | — | —
5 | Ch. 3 — What Simulation Cannot Feel (continued) | One | Studio Exercise #2 + RR #3 | 25 + 30
6 | Ch. 4 — The Brief Is a Hypothesis | One | — | —
7 | Ch. 4 — The Brief Is a Hypothesis (continued) | One | Studio Exercise #3 | 25
Midterm / Flex | — | One (gate) | Multi-stage brief analysis | 100
8 | Ch. 5 — One Week for the Dreamer | Two | Studio Exercise #4 + RR #4 | 25 + 30
9 | Ch. 6 — The Realist Builds | Two | Studio Exercise #5 | 25
10 | Ch. 6 — The Realist Builds (continued) | Two | — | —
11 | Ch. 6 — The Realist Builds (continued) | Two | Studio Exercise #6 | 25
12 | Ch. 7 — The Critic Tests | Two | RR #5 + Pipeline Protocol Checkpoint | 30 + 100
13 | Ch. 7 — The Critic Tests (continued) | Two | Draft + in-class consultation | —
14 | Ch. 8 — The Commit | Three | Studio Exercise #7 + Peer Review Checkpoint | 25 + 100
15 | Ch. 8–10 — Commit + Switch + Pipeline | Three | Full Pipeline Presentation | 250

Studio Lab participation (100 pts) assessed continuously across all 15 weeks. Lowest Studio Exercise dropped — 6 of 7 count toward the final grade.