Irreducibly Human Series · Northeastern University · College of Engineering
AImagineering: The Full Design Pipeline
What AI can't do — and what that means for engineers who use it
Version 1.0 · [Distribution Date] · Reviewed by Dev the Dev
Section 1
Welcome
I've spent years watching capable engineers generate — fluently, prolifically, impressively. They can prompt. They can iterate. They can produce a polished design artifact in thirty minutes that would have taken a week five years ago. And I've watched the same engineers stand in front of a client, a review committee, or a deployment decision and not be able to answer the question that actually matters: Why this? Why now? And what happens if it fails?
The failure mode I've seen most consistently in AI-augmented design isn't a broken tool. It's an engineer who accepted the brief as given, generated options without reframing the problem, and committed to a direction because the output looked good — not because they could defend the decision on its merits. The work is polished. The judgment underneath it is borrowed.
That gap is what this course closes.
AImagineering is not a course about AI tools. You already know how to use them. It is a course about everything that happens before and after the tool runs — which is, increasingly, everything that matters. Before: knowing what problem is worth solving, meeting the human who has it, and reframing the brief before you generate. After: choosing among fifty plausible options, building the thing that tests the right question, reading what the data actually shows, and committing to a course of action with your name on the consequences.
The thesis of this course is simple and uncomfortable: ideation is now a tool operation, not a cognitive achievement. Which means the hard part of design — the part that takes judgment, accountability, and a human being willing to be wrong in a specific way — is no longer the week you spend generating. It is every week before and after.
What you will leave with: the complete AImagineering pipeline — Empathize, Define, Ideate, Prototype, Test, Commit — applied to a real design problem in your domain, with explicit documentation of every human judgment call that an AI tool could not have made on your behalf.
Here is what to do before we meet: read Chapter 1 of Irreducibly Human: AImagineering. Bring a brief — any brief, anything you've been handed or assigned or invented — and be ready to spend thirty minutes with it. The experiment happens in the first session. The analysis is the rest of the course.
Section 2
The Irreducibly Human Series
We are in the early years of the most powerful ideation tools ever built. AI systems can generate more design options in an afternoon than a team could produce in a sprint. They are genuinely poor at knowing which problem is worth solving, meeting the person who has it, choosing among outputs with judgment rather than plausibility, and committing to a direction in a way that makes anyone accountable for what happens next.
The Irreducibly Human series develops exactly those capacities — the forms of reasoning and judgment that AI tools require humans to supply, and that your competitors who only learned to generate will not have.
The series entry point — Botspeak — builds the complete architecture for understanding what you are collaborating with: the tier taxonomy, the Five Modes, the cognitive nature of AI systems. This course — AImagineering — goes deeper on the design layer specifically, developing the pipeline capacities that Botspeak names but does not fully build: the empathy investigation that cannot be simulated, the reframe that changes what gets built, the Commit that no tool can make on your behalf.
The companion courses go deeper on adjacent layers. Conducting AI builds the full Tier 4 supervisory toolkit for engineers who deploy AI systems. Causal Reasoning builds the capacity to construct a defensible model of what causes what in a domain, and to know what that model can and cannot support. The courses can be taken in any order after Botspeak; each is self-contained while pointing toward the others.
Section 3
Course Information
Course Identifiers
| Field | Value |
| Course Title | Irreducibly Human: What AI Can't Do — AImagineering: The Full Design Pipeline |
| Course Number | [XXXX 5XXX — assigned at CourseLeaf submission] |
| Credit Hours | 4 |
| Term | Fall [Year] |
| Mode of Delivery | In-person |
| Components | Lecture/Seminar (2× weekly) + TA-led Studio Lab (1× weekly, in class) |
| Department | College of Engineering |
Meeting Information
Lecture/Seminar Sessions
Days and times: [TBD] · Location: [Building, Room]
Studio Lab (TA-led)
Day and time: [TBD] · Location: [Building, Room]
The Studio Lab is a required course component, not an optional recitation. It is where pipeline stages become practiced skills. Chapter 4's reframe defense cannot be rehearsed by reading alone — it requires live presentation with peer objection and structured critique. Chapter 8's peer critique of Commit documents requires supervised practice with real stakes. Missing the lab is not equivalent to missing a lecture — it is missing the part of the course where judgment consolidates.
Instructor
| Field | Value |
| Name | Nik Bear Brown |
| Email | [email protected] |
| Response time | Within 48 hours on weekdays. Put URGENT in subject line for time-sensitive questions. |
| Office / Zoom | [TBD] |
| Student hours | [Days, times, location] — booking link TBD |
| Preferred contact | Email for logistics. Student hours for anything that takes more than two sentences to answer well. |
I hold student hours for you — not only for students with emergencies. Come to pressure-test your reframe before the defense, unpack what a grain-mapping exercise surfaced, understand where the Commit fits into professional practice, or see what a finished pipeline narrative looks like. The most productive conversations I have with students happen outside scheduled sessions.
Teaching Assistant
| Field | Value |
| Name | TBA |
| Email | TBA |
| Studio Lab hours | TBA |
The TA runs the weekly Studio Lab — designing exercises, facilitating reframe defenses and critique sessions, running the Commit peer review round, and returning written feedback on lab submissions. For questions about pipeline stages in practice, grain documentation, empathy investigation protocol, and weekly exercise work, the TA is your first resource. Tool and platform questions go to the TA first; if unresolved, the TA forwards to the professor.
Prerequisites
Official prerequisites: Botspeak (Course 1 — Irreducibly Human series) or equivalent AI fluency foundation; Graduate standing in Engineering or related field (exact CourseLeaf string TBD)
What this course assumes you know
You understand the difference between pattern completion and knowledge retrieval. You have used AI tools at Botspeak proficiency — specification, delegation, conversation, discernment, diligence. You have access to at least one AI tool (Claude, ChatGPT, Gemini, or equivalent).
What this course does not assume
Prior design thinking training. Prior engineering design coursework. Any background in human-centered design, UX, or product development. This course introduces the full pipeline from the ground up.
A note for students with design backgrounds
Students who arrive with prior design thinking training sometimes find the early weeks the most disorienting — specifically, the course's insistence that Design Thinking as typically taught ends at Test and omits the most consequential stage. That disorientation is the course working as intended. The Commit stage is not a harder version of the prototype review. It is a different cognitive and professional operation. Students who treat AImagineering as an advanced prompting course will produce technically correct exercises and miss the course. Students who approach the Commit as genuinely new terrain — regardless of prior design training — will get the most from it.
If you are missing a prerequisite, contact the instructor before the first week.
Section 4
Learning Outcomes
By the end of this course, students will be able to:
- Identify the grain of at least three AI tools — what each does naturally, what it resists, and where the human must supply what the tool cannot — and apply this knowledge to tool selection for specific design stages
- Conduct an empathy investigation that produces at least one finding an AI simulation of the user could not have generated, and explain why plausible user simulations are insufficient for design that serves specific people in specific contexts
- Reframe a given design brief into at least two alternative problem definitions, evaluate each against explicit criteria, and defend a selection in a live presentation that fields peer objection
- Direct an AI ideation session that produces a defined quantity of genuinely distinct concepts, then apply human curatorial judgment to select and develop the most promising three with written defense
- Construct a prototype using AI acceleration while documenting the identification decisions — what to build and why — that the AI could not supply without human input
- Evaluate prototype test results using the three legitimacy types — pragmatic, moral, and cognitive — identifying where human interpretive judgment is required beyond what the data shows
- Commit to a design direction by producing a Commit document that specifies a course of action, its evidential basis, acknowledged uncertainty, and stated accountability — and that survives structured peer critique
- Identify the metacognitive switch — the recognition of which mode (Dreamer/Realist/Critic) a design moment requires — through retrospective analysis of their own process
- Apply the full AImagineering pipeline to a new brief in condensed form, demonstrating that the capacities developed across the course are internalized as judgment rather than executed as procedure
- Name, in every major deliverable, one judgment call that required their values, domain knowledge, or accountability that an AI tool could not have made on their behalf
Section 5
Required Materials
Textbook
| Field | Value |
| Title | Irreducibly Human: What AI Can't Do — AImagineering: The Full Design Pipeline |
| Author | Nik Bear Brown |
| Publisher | Bear Brown & Company / Kindle Direct Publishing, 2026 |
| Availability | [Amazon Kindle / print link — TBD at publication] |
| Cost | [TBD] |
| Edition | First edition. No prior edition exists. |
Supplementary Readings
Distributed via Canvas throughout the semester at no cost. Required supplementary readings are marked [Required] in the weekly schedule. Optional readings are marked [Recommended] and are genuinely optional.
Required Technology
AI tools (free tiers available — no purchase required)
Students may use any combination of Claude, ChatGPT, Gemini, or equivalent tools. The grain documentation exercises in Chapter 2 require working with at least two distinct tools. You are not required to use all of them — you are required to understand that the grain differs across tools and that this difference is a design decision.
Prototyping and documentation tools (free, browser-based)
- Google Docs or equivalent — iteration logs, specification documents, capstone deliverables
- Figma (free tier) or equivalent — low-fidelity prototype work in Chapter 6 (design engineering contexts); alternatives for other engineering domains provided by TA
Course platforms
- Canvas (supplementary readings, assignment submission, announcements, and schedule updates)
Section 6
Assessment and Grading
Point Summary
| Assessment | Points | Quality/Portfolio |
| Reading Responses (5 × 30 pts) | 150 | ✓ 20 pts each |
| Weekly Studio Exercises (8 × 25 pts, drop lowest of 9) | 200 | ✓ 20 pts each |
| Studio Lab Participation | 100 | ✓ 20 pts component |
| Midterm | 100 | — |
| Final Project — Pipeline Protocol Checkpoint | 100 | ✓ 20 pts |
| Final Project — Peer Review Checkpoint | 100 | ✓ 20 pts |
| Final Project — Full Pipeline Presentation | 250 | ✓ 20 pts |
| Total | 1000 | |
AI-Based Grading Approach
Due to the widespread use of Generative AI, grading is structured as follows:
800+ points — relative scale
| Top 25% | A |
| Next 25% | A– |
| Next 25% | B+ |
| Final 25% | B |
Below 800 — absolute scale
| 780–799 | B– |
| 730–779 | C |
| 700–729 | C– |
| 600–699 | D |
| Below 600 | F |
Students below 800 points cannot earn a grade higher than B–, even if the relative curve would otherwise place them higher. The instructor reserves the right to make minor adjustments for fairness.
Quality/Portfolio Score (20 points — on all qualifying assignments)
Every assignment carrying the Quality/Portfolio component is evaluated on a relative 20-point scale comparing your work to peers, emphasizing depth of design judgment, the specificity with which human judgment calls are identified, and evidence that the irreducibly human reasoning was performed by you — not delegated to a tool.
| Percentile Band | Score |
| Bottom 25% | 5 pts |
| 26–50th percentile | 10 pts |
| 51–75th percentile | 15 pts |
| Top 25% | 20 pts |
Full band descriptions for each assignment type are distributed with each assignment prompt.
AI Use in Assignments
You are encouraged to use generative AI tools on every assignment. Citation is required. Undisclosed AI use is an academic integrity violation. Disclosed AI use is not.
Every submission must include an AI Use Disclosure block:
AI USE DISCLOSURE
Tool(s) used:
Portions assisted:
How used:
What I changed:
What the AI could not do: [name at least one judgment call that required
your values, domain knowledge, or accountability — this field is not optional]
The last field is the Human Half Declaration. A disclosure that cannot name one thing the AI could not do has not demonstrated that the student performed the irreducibly human layer of the design work. This is not a formality. It is the assessment.
Drop Policy
The lowest-scoring Studio Exercise is dropped. Eight of nine exercises count toward the final grade. This absorbs one week where a pipeline stage didn't click. It does not absorb a pattern of non-engagement.
Section 7
Course Schedule
The schedule maps each week to a chapter in Irreducibly Human: AImagineering. Read the assigned chapter before Session A. Come to Session A with the case in your head. Come to Session B ready to use the concept. Come to the Studio Lab ready to apply it.
Reading time per chapter: approximately 45–75 minutes · ⚑ = graded deliverable due · ★ = transition week
Week 1 · Ch. 1: The Thirty-Minute Designer
By the end of this week: Identify at least five categories of human judgment that a fully AI-assisted 30-minute design session skips — using your own experiment as the evidence.
Session A: In medias res. No theory. A brief is on the table. Thirty minutes. AI tools open. The output arrives. The output is good. That is not the point.
Session B: What was produced — and what was assumed. A taxonomy of the human judgment the session skipped. The course argument stated directly: AI has made ideation easy, which means everything before and after ideation is now the work of design.
Studio Lab: Thirty-minute experiment debrief — what categories of skipped judgment appeared across the room? (Ungraded — prepares for Reading Response #1.)
⚑ Reading Response #1 (30 pts)
Run the thirty-minute experiment with your own brief. Write a 500-word audit of what the session assumed that it never asked. Identify at least five categories of human judgment the session skipped. Due before Session A, Week 2.
Week 2 · Ch. 2: Finding the Grain
By the end of this week: Map the grain of at least two AI tools against a specific design brief — what each does naturally, what it resists, and what this means for when to use it.
Session A: The carpenter and the wood grain. A community health intervention designed entirely with a language model produces communication that is coherent, complete, and addressed to the community rather than from it. The grain as the explanation.
Session B: The grain metaphor applied to AI tools. The Krebs Cycle of Creativity (Oxman) — Science → Engineering → Design → Art. Where each tool is strongest; where the human must compensate. Platform awareness as a design competency.
Studio Lab: Grain exploration session — students map one tool's grain against their capstone domain brief. (Ungraded — prepares for Exercise #1.)
Week 3 · Ch. 2: Finding the Grain (continued)
By the end of this week: Produce a grain documentation for your primary AI tool, including one specific design decision that changes because of grain awareness.
Session A: The Krebs Cycle worked through: mapping three tools — language model, image generator, code assistant — against a single design brief. Where each accelerates; where each misleads.
Session B: Choosing tools based on what the brief needs, not habit. The grain as a resource — working with affordances rather than against them. Bridge: "You know your tools. You still don't know your user."
Studio Lab: Studio Exercise #1 — grain documentation workshop.
⚑ Studio Exercise #1 (25 pts)
Document the grain of your primary AI tool for your capstone design domain: (1) what it does naturally against a specific brief; (2) what it resists; (3) one specific design decision that changes because of this knowledge; (4) placement on the Krebs Cycle with justification. Human Half Declaration required. Due before Studio Lab, Week 4.
⚑ Reading Response #2 (30 pts)
Return to your thirty-minute experiment brief from RR1. Apply grain awareness: which tool's grain shaped the output most? Where did working against the grain produce the wrong kind of output — and how would you have done it differently? Due before Session A, Week 4.
Week 4 · Ch. 3: What Simulation Cannot Feel
By the end of this week: Describe the specific difference between an AI-generated user persona and a genuine empathy finding — using your own domain as the example.
Session A: An AI-generated user persona next to a field note from an actual conversation. They are not the same document. A student designing for elderly residents in low-income housing: the AI produces a coherent, internally consistent user. The field investigation finds the window.
Session B: Why AI personas are plausible and wrong — the training data problem for edge cases. The empathy investigation protocol: observation, interview, artifact analysis. What to look for that simulation misses — the unexpected, the contradictory, the embodied.
Studio Lab: Empathy investigation design workshop — students plan their investigation approach for their capstone user. (Ungraded — prepares for Exercise #2.)
Week 5 · Ch. 3: What Simulation Cannot Feel (continued)
By the end of this week: Conduct a genuine empathy investigation and identify at least one finding an AI persona would not have produced.
Session A: What counts as a genuine empathy finding. How to document surprise. The design consequence of relying on AI simulation alone — naming a specific decision that would have been wrong.
Session B: Comparing AI persona outputs to field findings — where the persona succeeds, where it fails, and what type of failure it is. Bridge: "You know your tools. You know something real about your user. Now you have to decide what problem is worth solving."
Studio Lab: Studio Exercise #2 — empathy findings debrief and comparison.
⚑ Studio Exercise #2 (25 pts)
Submit your empathy investigation: (1) documentation of minimum 2 human contacts using observation, interview, or artifact analysis; (2) at least one finding that would not appear in any AI-generated persona for your user group; (3) comparison of your AI persona to your field findings — where it succeeds, where it fails, and what type of failure it is; (4) one specific design decision that would have been wrong if you had relied on the persona alone. Human Half Declaration required. Due before Studio Lab, Week 6.
⚑ Reading Response #3 (30 pts)
Describe the most important thing your empathy investigation found that surprised you. What does this finding imply for your capstone design brief? What would the AI persona have told you to do instead? Due before Session A, Week 6.
Week 6 · Ch. 4: The Brief Is a Hypothesis
By the end of this week: Generate at least three alternative problem definitions from your capstone brief using the reframe protocol.
Session A: The classic brief: "Design a faster horse." The reframe: "Help people move between places efficiently." The AI response to each brief is radically different. The choice of brief is a human judgment — and it is made before any tool runs.
Session B: The rationalist vs. co-evolutionary models of design. The reframe protocol — five questions that produce alternative problem definitions. Criteria for choosing among reframes: user impact, feasibility, alignment with empathy findings. AI as a reframe stress-tester.
Studio Lab: Reframe protocol workshop — students produce three reframes of their capstone brief with peer pressure-testing. (Ungraded — prepares for Exercise #3.)
Week 7 · Ch. 4: The Brief Is a Hypothesis (continued)
By the end of this week: Select and defend a reframe in a live 3-minute presentation, field peer objections, and revise or hold with stated reasons.
Session A: The food waste case: three reframes with different solution spaces and different values implications. The choice is not a design decision — it is a values decision. What makes a reframe defensible vs. arbitrary.
Session B: Evaluating reframes against explicit criteria. What changes when the problem statement changes. The reframe as a commitment — not a constraint to work around but a hypothesis to test. Bridge: "You have a problem worth solving. Now the Dreamer gets one week."
Studio Lab: Studio Exercise #3 — reframe defense presentations with structured peer critique.
⚑ Studio Exercise #3 (25 pts)
Reframe defense: (1) three reframes of your capstone brief produced using the reframe protocol; (2) evaluation of each reframe against explicit criteria — user impact, feasibility, alignment with empathy findings; (3) a 3-minute live defense of your selected reframe; (4) written response to peer objections — revise or hold, with stated reasons. Human Half Declaration required. Due before Studio Lab, Week 8.
⚠ Midterm (Week 7/8 — flex) · 100 pts
Multi-stage brief analysis. A novel design situation is provided with no annotation about which pipeline stages apply. Demonstrate Act One fluency in practice: document what the brief assumes that it never asked; map the grain of the tool most appropriate for ideation in this domain; identify the empathy investigation that would change the problem statement; produce two reframes and select one with a criteria-based defense; name the human judgment call that determines which reframe is worth pursuing.
No pipeline recitation. No framework description. Application only. This is the Act One → Act Two gate.
Week 8 · Ch. 5: One Week for the Dreamer
By the end of this week: Direct an AI ideation session producing a minimum of 30 genuine concepts, then apply human curatorial judgment to reduce them to 3 with explicit defense.
Session A: The Dreamer/Realist/Critic framework. This is the Dreamer's week — the only week AI owns the session. The Dreamer's job is not to evaluate. It is to generate without premature constraint. What directing AI generation toward genuine divergence looks like vs. repetition.
Session B: The curation problem — why choosing among 50 plausible options is harder than generating them. Curatorial criteria: what makes a concept worth developing beyond its plausibility. Why the surprising concepts should survive curation when the merely plausible ones should not.
Studio Lab: Studio Exercise #4 — ideation session and initial curation workshop.
⚑ Reading Response #4 (30 pts)
After your ideation session: what proportion of your 30+ concepts were genuine variants vs. repetitions of the same mechanic? What made the two or three most interesting concepts interesting? What would the curation look like if "most surprising" was your only criterion? Due before Session A, Week 9.
⚑ Studio Exercise #4 (25 pts)
Ideation curation: (1) directed AI ideation session producing minimum 30 concepts — document with prompts used and concept count; (2) documented curation session reducing 30+ concepts to 3 with explicit criteria; (3) 300-word written defense of each selected concept. Human Half Declaration required: name one judgment call in the curation that required your values, domain knowledge, or accountability. Due before Studio Lab, Week 9.
Week 9 · Ch. 6: The Realist Builds
By the end of this week: Understand the identification decisions in prototype work — what the Realist must supply that no build accelerator can provide.
Session A: The dining hall receipt system. AI builds the interface in an afternoon. The identification decisions the human must supply: which waste metric is most motivating, where to surface it in the journey, individual vs. aggregated data, what happens when a student sees demoralizing data. None of these are in the build brief until the human puts them there.
Session B: The Realist's cognitive job: translate concept into testable artifact. The specification document — translating human judgment into a build brief for AI tools. Knowing when to stop building — the prototype answers a specific question; it is not a finished product.
Studio Lab: Studio Exercise #5 — specification document workshop for primary concept.
⚑ Studio Exercise #5 (25 pts)
Specification document: write a complete build specification for your primary concept that makes every identification decision explicit. Required: (1) the specific question this prototype is designed to answer; (2) minimum 5 identification decisions — choices the AI could not make without human input, with rationale for each; (3) the question this prototype is deliberately not answering yet. Due before Studio Lab, Week 10.
Week 10 · Ch. 6: The Realist Builds (continued)
By the end of this week: Build a low-fidelity prototype using AI acceleration, with all identification decisions documented.
Session A: Build session: translating the specification document into a working prototype. What AI accelerates in the build; where identification decisions keep surfacing that weren't in the spec.
Session B: The relationship between prototype fidelity and the specific question it answers. What "done" means for a prototype — not finished, but testable. Preparation for the Critic's week.
Studio Lab: Prototype build work session — TA available for specification feedback and build support.
Week 11 · Ch. 6: The Realist Builds (continued)
By the end of this week: Complete and submit the prototype with full specification document.
Session A: Prototype review session — students present work-in-progress with identification decisions to peers. What decisions changed during the build that weren't in the specification?
Session B: Finalizing the prototype for testing. What a Critic needs from a Realist — what information must travel with the artifact into the test phase. Bridge: "You have something real. Now the Critic asks whether it actually works."
Studio Lab: Studio Exercise #6 — prototype submission and documentation review.
⚑ Studio Exercise #6 (25 pts)
Prototype submission: (1) low-fidelity prototype of your primary concept built using AI acceleration; (2) completed specification document with minimum 5 explicit identification decisions and rationale; (3) one paragraph on what specific question this prototype answers and what question it deliberately does not answer yet. Human Half Declaration required. Due before Studio Lab, Week 12.
Week 12 · Ch. 7: The Critic Tests
By the end of this week: Conduct prototype testing with a minimum of 5 users and understand the three legitimacy types before applying them to your data.
Session A: The test results are in. They show the prototype works. The question the data cannot answer: should it? The dining hall receipt prototype tests at 73% motivating. The interpretive judgment required: which 27% found it demotivating and why; whether short-term motivation translates to behavior change; whether the intervention is equitable across income levels.
Session B: The three legitimacy types: pragmatic (does it work), moral (should it exist), cognitive (can it be trusted). Why AI achieves pragmatic legitimacy and struggles with the others. Reading test results as a designer — what counts as evidence, what counts as noise.
Studio Lab: Test session debrief — students share test findings and identify where the three legitimacy types surface.
⚑ Reading Response #5 (30 pts)
After testing: what is the finding your data cannot resolve? Identify which legitimacy type it falls under — pragmatic, moral, or cognitive. What would you need — in the way of evidence, expertise, or judgment — to address it? Due before Session A, Week 13.
⚑ Final Project — Pipeline Protocol Checkpoint (100 pts)
Your pipeline design protocol: (1) reframed problem statement, precisely stated; (2) empathy finding that drove the reframe — why this problem is worth solving for this specific user; (3) grain-informed tool selection with rationale; (4) curation criteria applied to ideation output; (5) identification decisions that shaped the prototype; (6) preliminary test findings with legitimacy type analysis begun; (7) one paragraph identifying the single most important irreducibly human judgment in your design process so far that AI could not have made. Go/no-go reviewed before Week 13. Due end of Week 12.
Week 13 · Ch. 7: The Critic Tests (continued)
By the end of this week: Produce an interpretive judgment document that addresses all three legitimacy types and names what the data cannot resolve.
Session A: In-class design work session. Instructor role: ask the questions the data won't ask. ("What would this intervention look like to the user who found it demotivating?" "Is this result equitable across the range of users in your domain?" "What assumption in your test design would change this finding if it were wrong?")
Session B: The failure modes of interpretation — over-trusting data, under-trusting human judgment, confusing metric with meaning. Preparation for the Commit: what the Critic hands to the Committer. Bridge: "You know what the data shows and what it doesn't. Now you have to decide."
Studio Lab: Open consultation — TA available for interpretive judgment document review.
No new reading assigned this week. Testing and interpretation work in progress.
Week 14 · Ch. 8: The Commit
By the end of this week: Produce a draft Commit document using the five-element structure, critique a peer's draft using the four diagnostic questions, and understand the difference between a Commit and a recommendation.
This week requires a mandatory supervised Studio Lab session. The peer critique round cannot be completed by reading alone — it requires reviewing a real document with real stakes and delivering written feedback that names specific failures of specificity, evidence, honesty about uncertainty, and accountability.
Session A: Design Thinking as typically taught ends at Test; the Commit is the stage it omits. Why: it was designed as an ideation methodology, not a deployment methodology. The omission was invisible when prototyping was hard. AI has made it catastrophic. The bad Commit — vague, over-qualified, accountability-free — shown before peer critique.
Session B: The five-element Commit structure: the decision, the evidence, the uncertainty, the accountability, the revision condition. The four peer critique diagnostic questions. The good Commit — the same example revised. Phronesis as the meta-capacity that peer critique develops.
Studio Lab: Studio Exercise #7 — supervised peer critique of Commit document drafts.
⚑ Studio Exercise #7 (25 pts)
Commit document peer critique: (1) written peer review of one partner's Commit document draft applying all four diagnostic questions — specificity, evidential basis, honesty about uncertainty, accountability — with at least one specific finding per criterion; (2) one overall recommendation; (3) one question the reviewer cannot answer from the submitted materials alone. Due end of Studio Lab, Week 14.
Week 15 · Ch. 8–10: The Commit, the Metacognitive Switch, and the Full Pipeline
By the end of this week: Finalize and present your Commit document; articulate the metacognitive switch that made the Commit possible; demonstrate the full pipeline on a condensed new brief.
Session A: Full pipeline presentations (8–10 minutes each): the capstone process narrated as a pipeline — not the output, but every human judgment call, every metacognitive switch, and the Commit document the student stands behind.
Session B: The metacognitive switch — what it is, what it costs when it fails, and why it has no AI equivalent. The series argument: what is irreducibly human in AI-augmented design, and why. What the next course (Conducting AI / Causal Reasoning) takes on.
Studio Lab: Open session — no new material.
⚑ Final Project — Peer Review Checkpoint (100 pts)
Written peer review of one classmate's pipeline protocol. Required: applied against the full pipeline rubric (grain, empathy, reframe, curation, identification decisions, legitimacy type analysis, Commit structure); at least one specific finding per pipeline stage; one overall recommendation; one question the reviewer cannot answer from the submitted materials alone. Due end of Week 14.
⚑ Final Project — Full Pipeline Presentation (250 pts)
Complete pipeline presentation demonstrating the full AImagineering pipeline on your capstone design. Required sections:
Pipeline narrative: Every stage documented — grain identification, empathy investigation findings, reframe defense, ideation curation, prototype identification decisions, test results with legitimacy type analysis, Commit document in final form. Not the output: the judgment.
The Commit document (five-element structure):
- The decision: a specific course of action — not a direction, a recommendation, or an option
- The evidence: direct citations from the test phase — what the data showed
- The uncertainty: what you do not know and cannot know before deployment
- The accountability: what you are responsible for if it fails — stated specifically, bounded explicitly
- The revision condition: what new information would change this decision
The Irreducibly Human section (required; weighted at 50% of the presentation grade):
- Three specific judgment calls that required your values, domain knowledge, or accountability — stated specifically, with reasoning, and with consequence named
- One judgment call that you first tried to delegate and then reclaimed — what happened when you delegated it, what you found when you reclaimed it
- An honest assessment of the collaboration quality — where AI was genuinely useful, where it produced confident-sounding noise, and what you would do differently
The metacognitive switch: At least two moments in your process where a mode switch was required — what triggered the recognition, what the switch cost, and what it produced.
The pipeline graduation statement: "I can stand in front of a design problem, resist the pull to generate immediately, ask the right human questions first, direct AI tools with authority rather than dependency, and commit to a course of action I can defend on its merits — not just its polish." Name one moment in this course that made that sentence true for you.
Peer review response: Written response to the Week 14 peer review, submitted with the final project.
Due end of Week 15.
Schedule at a Glance
| Week | Chapter | Act | Major Deliverable | Points |
| 1 | Ch. 1 — The Thirty-Minute Designer | One | Reading Response #1 | 30 |
| 2 | Ch. 2 — Finding the Grain | One | — | — |
| 3 | Ch. 2 — Finding the Grain (continued) | One | Studio Exercise #1 + RR #2 | 25 + 30 |
| 4 | Ch. 3 — What Simulation Cannot Feel | One | — | — |
| 5 | Ch. 3 — What Simulation Cannot Feel (continued) | One | Studio Exercise #2 + RR #3 | 25 + 30 |
| 6 | Ch. 4 — The Brief Is a Hypothesis | One | — | — |
| 7 | Ch. 4 — The Brief Is a Hypothesis (continued) | One | Studio Exercise #3 | 25 |
| — | Midterm / Flex | Act One gate | Multi-stage brief analysis | 100 |
| 8 | Ch. 5 — One Week for the Dreamer | Two | Studio Exercise #4 + RR #4 | 25 + 30 |
| 9 | Ch. 6 — The Realist Builds | Two | Studio Exercise #5 | 25 |
| 10 | Ch. 6 — The Realist Builds (continued) | Two | — | — |
| 11 | Ch. 6 — The Realist Builds (continued) | Two | Studio Exercise #6 | 25 |
| 12 | Ch. 7 — The Critic Tests | Two | RR #5 + Pipeline Protocol Checkpoint | 30 + 100 |
| 13 | Ch. 7 — The Critic Tests (continued) | Two | Draft + in-class consultation | — |
| 14 | Ch. 8 — The Commit | Three | Studio Exercise #7 + Peer Review Checkpoint | 25 + 100 |
| 15 | Ch. 8–9–10 — Commit + Switch + Pipeline | Three | Full Pipeline Presentation | 250 |
Studio Lab participation (100 pts) assessed continuously across all 15 weeks. Lowest Studio Exercise dropped — 8 of 9 count toward final grade.
Section 8
Course Policies
Attendance and Participation
This course has three weekly contact points: two lecture/seminar sessions and one TA-led Studio Lab. Each serves a different function; missing one is not interchangeable with missing another.
Per College of Engineering MGEN policy, students are allowed a maximum of 2 absences per course. 3 or more absences result in an F. More than 3 unexcused Studio Lab absences will result in a failing participation grade regardless of Quality/Portfolio score. Chapter 4's reframe defense and Chapter 8's Commit peer critique cannot be made up by reading alone — these sessions depend on the presence of peers whose work you are responding to and who are responding to yours.
Students who do not attend during the first week risk being dropped from the course.
Participation means engagement — applying pipeline stages, presenting reframes, critiquing Commit documents, running empathy investigations, and connecting today's concept to your domain. Physical presence without engagement does not count as participation.
Please inform me of any anticipated absence before class. Religious observance requests must be submitted in writing within 14 calendar days of the first day of classes.
Late Work
- Assignments due by 11:59 PM on the due date
- 5% deduction per day late; partial days round up to a full day
- No credit after solutions are posted (posted the Monday after the due date)
- Extensions via email before the deadline with a specific proposed new due date
- Work submitted late without prior communication will not be graded
- The midterm cannot be made up without prior arrangements
- Exceptions for long-term illness or family emergencies must be approved by the professor — reach out early
Studio Exercises feed the following week's lab. A late submission that arrives after the lab has missed the feedback loop it was designed to produce.
Academic Integrity
What you submit is supposed to represent your judgment calls — the reframe you chose and why, the empathy finding that surprised you, the curation decision that reduced fifty options to three, the Commit you are willing to put your name on. Submitting borrowed judgment is not just an integrity violation — it is practicing the appearance of design capacity rather than developing it.
Violations include: submitting AI-generated work without citation; using another student's reframe, curation defense, specification document, or Commit section without attribution; submitting work substantially similar to a peer's submission. All violations will be reported to OSCCR.
Collaboration policy: You are encouraged to discuss pipeline stages, cases, and design reasoning strategies. You may not share reframe documents, curation defenses, specification documents, prototype materials, or any section of the pipeline presentation. Work you submit with your name on it must reflect your own design judgment in your own words. If you collaborated on ideas, list your collaborators clearly.
If you are unsure whether something crosses a line — ask. I would rather answer that question than navigate a violation.
Generative AI Policy
You are encouraged to use generative AI tools in this course. This is not a reluctant permission — it is the entire architecture of the course.
Use Claude to generate your first thirty concepts. Use ChatGPT to stress-test your reframe. Use Gemini to draft your prototype specification. Then ask: What did it produce that I hadn't decided yet? What constraint did it omit that I know, from meeting my user, is load-bearing? What did it generate confidently that I know, from my domain, is wrong for this specific person in this specific context? That gap — between what the tool produced and what your judgment supplies — is the irreducibly human part. Finding it, naming it, and defending it is not a side exercise in this course. It is the primary assessment.
Every submission requires the AI Use Disclosure block specified in Section 6. Undisclosed AI use is an academic integrity violation. The TA or instructor may ask you to walk through and explain any part of your submitted work. Inability to demonstrate understanding may result in grade penalties.
Instructor disclosure: I use generative AI tools in developing this course — for drafting case study scenarios, generating first-pass worked examples that I then evaluate and revise, and editing course materials. I document my own AI use in the same format I am asking of you.
Incomplete Grades
An incomplete grade may be reported when a student has been unable to complete a major course component. Missing work must be submitted within 30 days of the term's end or the agreed-upon due date, or it receives no credit. Contact the instructor before the final week if circumstances warrant discussion.
Irreducibly Human: What AI Can't Do — AImagineering: The Full Design Pipeline
Syllabus v1.0 · Nik Bear Brown · Northeastern University · Fall [Year]
This syllabus reflects course information as of the distribution date. Learning outcomes, assessment architecture, and policies are stable. If meeting times, room assignments, or textbook availability change, updates will be posted to Canvas and communicated by email with at least one week's notice.