Irreducibly Human Series · Course 3 of 3 · TIC TOC

Ethical Play: full table of contents and chapter specifications

What AI can and can't do — designing moral weight into game systems

Bear Brown & Company / Kindle Direct Publishing, 2026  ·  15-week semester
Version 0.1  ·  March 2026  ·  Reviewed by Dev the Dev

Contents

  1. Book Concept and Thesis
  2. Learner Profile
  3. Recurring Assessment Spine
  4. Sequencing Logic and Running Examples
  5. Bloom's Distribution
  6. Chapter-by-Chapter Specifications
  7. Open Design Questions
Section 1

Book Concept and Thesis

One-sentence concept
This book teaches engineering graduate students to design game systems that encode ethical frameworks as mechanical consequence structures — systems precise enough for an AI to audit and human enough to make players feel morally implicated — by building directly toward the gap between what an algorithm can evaluate and what a human must feel.

Central thesis: Moral weight in designed systems is not produced by describing ethical stakes. It is produced by encoding ethical frameworks into consequence structures that transfer responsibility to the agent making decisions. An AI can evaluate the architecture. Only a human can feel the weight. The gap between those two statements is what this book teaches you to build.

Success condition: The student can design, build, and defend a game system whose ethical architecture is legible to an AI auditor and whose moral weight lands on a human player — and can identify the precise location where those two descriptions diverge.
Biggest unresolved structural question: Whether the Ethical Auditor assessment is a pedagogical mechanism or a pedagogical thesis. If both, the course risks teaching students to optimize for AI legibility rather than human moral weight — the editorializing failure in reverse.
Series position: Course 3 of 3 · Irreducibly Human: What AI Can and Can't Do
Deployment context: 15-week graduate course
Section 2

Learner Profile

A graduate engineering student who can describe the trolley problem and has never driven the trolley. Has taken one programming course. Has opinions about AI ethics. Has never been asked to make someone else feel morally implicated by something they built.

Section 3

Recurring Assessment Spine

Present in every deliverable, every chapter. Not a single-chapter outcome.

The recurring question — required in every deliverable
"Name one judgment call in this work that required your values, your domain knowledge, or your accountability — that an AI could not have made on your behalf."

This is not a reflection prompt. It is the primary evidence that the student operated above Tier 1. If a student cannot answer it with specificity, the deliverable has not cleared the course's minimum threshold regardless of technical quality.

Section 4

Sequencing Logic and Running Examples

Primary model

Concrete to abstract

The trap game and VE (Voodoo Economics) arrive in Week 1. Students encounter a designed failure before they have vocabulary to name it. Frameworks arrive as language for something already felt. The analysis comes after the experience — never before.

Secondary model

Spiral curriculum (Bruner)

Both games are spiral objects introduced in Week 1 and returned to after every framework chapter. Each return must escalate — new analytical layer only possible because prior layers are in place. A return that only adds vocabulary is a repetition, not a spiral.

Week 1 response journal — required infrastructure

Students submit one paragraph of felt response to both games before the first framework chapter. No vocabulary. Felt experience only. This document is returned to students in Week 13. It is the only record of the pre-vocabulary response that makes the gap analysis meaningful.

⚑ Infrastructure flag: The Week 1 response journal is load-bearing. If the course management system does not support timed document return, the primary sequencing model has a structural hole.

Transition tests

Act One → Act Two · Week 5

Student can name how a specific mechanic transfers responsibility to the player — not in general terms, but with reference to a located design decision.

Act Two → Act Three · Week 11

Student has a playtestable beta build and documented human playtesting data. Without this data, the Ethical Auditor session has only one side of the gap.

⚑ Policy flag: Instructor policy on failed Week 5 transition tests must be stated before the course runs. Option 1: revise until passing — no build begins on incomplete architecture. Option 2: proceed with flagged specification — gap tracked through Week 13. This is a design decision, not an administrative one.
⚑ Highest breakdown risk — Week 3: Deontology's felt correlate — the experience of a moral violation versus a resource cost — is the hardest to access from the Week 1 game experience. The VE bribe escalation mechanic is the required opening case. The chapter opens with the moment where the punishment doesn't track the violation.

The two running examples

Voodoo Economics (VE) — aspirational architecture

A pre-production GDD (v0.7) encoding consequentialist, deontological, and contractarian architecture across a mobile political satire simulation. Students receive the GDD in Week 1 and return to it after each framework chapter.

The spiral object is not the game — it is the design document. Students analyze architecture without felt play, mirroring exactly what the AI Ethical Auditor does. The gap between GDD analysis and felt play is the course's argument made structural.

⚑ Flag: VE is v0.7 pre-production. No playable artifact exists at course delivery. The limitation is usable — the class encounters the same epistemic constraint as the AI — but the limitation must be named explicitly in Week 1, not discovered.
The trap game — diagnostic failure

A purpose-built game designed to fail in four specific ways simultaneously: (1) the editorializing failure — the game tells the player what to feel; (2) moral weight landing on the character, not the player; (3) framework described in text but absent from mechanics; (4) implication promised, smugness delivered.

The AI Ethical Auditor should pass this game — finding the framework and calling the architecture coherent. Human playtesters should report smugness. That result is the course's thesis demonstrated in a controlled case before students build their own.

⚑ Critical flag: The trap game does not yet exist. It must be built before the course runs. Without it, Week 1 has one game instead of two and the primary sequencing model has no designed failure to start from. This is a prerequisite, not a nice-to-have.
Section 5

Bloom's Distribution

Remember · 0 (0%)
Understand · 1 (2%)
Apply · 9 (19%)
Analyze · 16 (34%)
Evaluate · 13 (28%)
Create · 8 (17%)
Zero comprehension chapters. 98% of outcomes at Apply or above. One UNDERSTAND-level outcome only (LO-1.3 — acceptable as a grounding move; flag if it bleeds into Week 2). Distribution is correct for a graduate design capability course.
Section 6

Chapter-by-Chapter Specifications

Act One · Weeks 1–5 · Chapters 1–5

Philosophy as Design Vocabulary

Five ethical frameworks taught as game design constraints, not intellectual history. Each paired with a consequence architecture implication. Two spiral objects introduced in Week 1 and returned to after every chapter. Frameworks are precision instruments for something already felt.

Chapter 1 · The Audit Gap · Act One · Week 1
The student learns to distinguish what an AI can evaluate in a designed ethical system from what a human player must feel — and acquires the evaluative vocabulary the rest of the course depends on.
VE / Trap Game Thread

VE GDD and trap game distributed. Week 1 response journal submitted before end of class — one paragraph, felt response only, no vocabulary. First pass diagnostic question posed: what ethical framework is embedded in VE? No answer expected. The question runs all semester.

Opening Strategy

Open with both games played/read in sequence. No framework vocabulary introduced. Ask only: what did you feel, and what was different between the two? Collect the response journals before the lecture begins.

Learning Outcomes
ID · Bloom's · Learning outcome · Assessable
LO-1.1 Analyze Distinguish between an AI's structural analysis of a designed system and a human player's felt moral experience, using the VE GDD as the diagnostic case. Student produces a written distinction with one specific AI claim and one human experience claim the AI cannot make. Yes
LO-1.2 Apply Apply the implication/smugness distinction to a provided game example, specifying which design decisions produce implication and which produce smugness. Yes
LO-1.3 Understand Explain why encoding an ethical framework in a mechanic differs from describing one in a text — using one example of each. Yes
LO-1.4 Analyze Identify one design decision in the VE GDD that might embody an ethical framework and one that might only illustrate one — with reasoning for the distinction. Yes
⚑ Critical — prerequisite: The trap game must exist and be playable in Week 1. If it does not exist, this chapter loses its designed concrete experience and the primary sequencing model is compromised.
⚑ Note: LO-1.3 is the only UNDERSTAND-level outcome in the course. Acceptable as a grounding move. Flag if it bleeds into Week 2.
Bridge to Ch. 2 "This chapter raises the question: what is it that the AI can evaluate but cannot feel? The next chapter provides the first precision instrument for answering it."
Chapter 2 · Consequentialism as Mechanic · Act One · Week 2
The student learns to translate a consequentialist ethical position into a game's consequence structure — and to evaluate whether that structure produces legible causality or plausible-sounding noise.
VE Thread — First Structured Spiral Return

Return to VE GDD. Identify the consequentialist architecture. Which specific mechanics encode it? Which claim to encode it but don't? Students compare this analysis with their Week 1 felt response.

Opening Strategy

Open with the VE causal chain display: a 4-node consequence chain from a specific edict. Ask: is each node caused by the one before it, or does it follow plausibly? The difference is the chapter.
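The standard the display tests can be sketched as a data check: a chain meets Legible Causality only if every node states how the prior node causes it. A minimal Python sketch, with all node contents invented for illustration (the actual VE chains are authored in the GDD):

```python
from dataclasses import dataclass

@dataclass
class Node:
    event: str      # what happens at this step
    caused_by: str  # one-sentence statement of how the prior node causes it

# Hypothetical 4-node chain from a single edict (contents invented).
chain = [
    Node("Edict: double the grain tax", "Player decision (root of the chain)"),
    Node("Rural faction approval drops", "The tax falls hardest on grain farmers."),
    Node("Farmers hoard grain", "Low approval reduces compliance with sales quotas."),
    Node("Urban food prices spike", "Hoarding shrinks the supply reaching the cities."),
]

def legible(chain):
    # Legible Causality standard: every connection must be explicitly stated
    # (approximated here as a non-empty causal statement per node).
    return all(n.caused_by.strip() for n in chain)
```

The check is deliberately crude: it can verify that a causal statement exists, not that the causation is genuine — which is precisely the chapter's distinction between caused-by and follows-plausibly.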

Learning Outcomes
ID · Bloom's · Learning outcome · Assessable
LO-2.1 Apply Translate a consequentialist ethical position into a faction meter and traceable consequence structure for a domain-specific scenario, specifying what counts as an outcome and who bears the cost. Yes
LO-2.2 Create Construct a 4-node consequence chain for one policy decision in the student's chosen project domain, meeting the Legible Causality standard: each node causally connected, each connection statable in one sentence. Yes
LO-2.3 Analyze Identify the consequentialist architecture in the VE GDD by locating specific mechanics that embody it — and identify at least one mechanic that claims consequentialist logic but operates differently. Yes
LO-2.4 Evaluate Evaluate a provided consequence chain against the Legible Causality pillar, specifying where causal logic holds and where it produces plausible-sounding outcomes without genuine causal connection. Yes
⚑ Note: LO-2.2 is the first CREATE-level outcome in the course. The 4-node chain is small enough for Week 2 and complex enough to require genuine causal reasoning. It is the first test of whether students can operate above Tier 1.
Bridge to Ch. 3 "A consequence chain can be structurally present but causally shallow. The next chapter asks: what happens when the player breaks a rule?"
Chapter 3 · Deontology as Mechanic · Act One · Week 3
The student learns to design a rule-breaking mechanic that produces the felt experience of a moral violation — not a resource cost — and to evaluate whether a given mechanic achieves that distinction.
VE Thread — Second Spiral Return

Return to VE GDD. Focus: the bribe escalation mechanic. Is it deontological or consequentialist? The answer is genuinely ambiguous. That is the point. Students now have two frameworks as lenses and can see the ambiguity they couldn't name in Week 1.

Opening Strategy

Open with the VE bribe escalation specifically: the moment where negotiating upward increases cost but the felt violation may not track the cost increase. Ask: was that moment a fine or a moral failure? The framework arrives as the answer.
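The distinction the opening asks about can be made concrete in code. A minimal Python sketch of two hypothetical bribe implementations (state keys and names invented; VE's actual mechanic is specified in the GDD): in the first, violation is a pure resource cost; in the second, it writes irreversible state that later systems read, so the cost stops tracking the size of the payment.

```python
# Version A: the violation is a resource cost -- structurally a fine.
def bribe_as_fine(state, amount):
    state["treasury"] -= amount  # pay more, and nothing else changes
    return state

# Version B: the violation writes irreversible state that later systems read.
# The cost no longer tracks the size of the bribe, only the fact of it.
def bribe_as_violation(state, amount):
    state["treasury"] -= amount
    state["corrupted"] = True     # one-way flag: no refund undoes it
    state["advisor_trust"] = 0    # relationships, not resources, absorb the cost
    return state
```

The open design question is whether Version B's one-way state is sufficient to produce a felt violation, or only a larger fine — the ambiguity LO-3.3 asks students to adjudicate.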

Learning Outcomes
ID · Bloom's · Learning outcome · Assessable
LO-3.1 Analyze Distinguish a mechanical fine from a mechanical moral failure at the design level, identifying the specific structural features that produce each — using the VE bribe escalation mechanic as the test case. Yes
LO-3.2 Create Design a rule-breaking mechanic for a domain-specific scenario in which the cost of violation produces felt moral weight rather than strategic calculation — specifying the design decisions that create that distinction. Yes
LO-3.3 Evaluate Assess the VE bribe escalation mechanic: does it function as deontological architecture, consequentialist architecture, or both simultaneously — and does dual encoding strengthen or undermine the moral weight? Requires a defended position. Yes
LO-3.4 Analyze Identify the point in a rule-breaking mechanic's design where the player's experience shifts from moral reasoning to cost-benefit calculation, and specify the design decision that causes the shift. Yes
⚑ Highest breakdown risk chapter: Deontology's felt correlate is hardest to access from the Week 1 experience. The VE bribe escalation opening is the mitigation — but it must be written and tested before the course runs, not composed the week before.
⚑ Assessment flag — LO-3.4: The hardest outcome in Act One to assess reliably in writing. Consider pairing with a short in-class demonstration: student presents mechanic, class identifies the shift point.
Bridge to Ch. 4 "The bribe escalation ambiguity raises the question of dual-encoded mechanics. The next chapter introduces frameworks organized around time rather than the single moment of decision."
Chapter 4 · Virtue Ethics and Contractarianism as Mechanic · Act One · Week 4
The student learns to design for character development across time and for the player who receives the worst outcome of a designed system — and evaluates whether VE's architecture reflects either position.
VE Thread — Third Spiral Return

Return to VE GDD. Focus: the Popularity score and the six outcome paths. Does the system design for the citizen who gets the worst outcome? Which mechanics encode virtue development and which record behavioral history only?

Opening Strategy

Open with the VE outcome card for 'Disappeared (Sovyetia only).' Ask: who designed this outcome — the player optimizing for survival, or the designer considering who lives in the game world? The chapter is the answer.
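Applied as a design constraint (the LO-4.2 exercise), the veil of ignorance reduces to a worst-case evaluation: judge the outcome set by its minimum, not its mean. A minimal Python sketch; the 'Disappeared (Sovyetia only)' label comes from the VE GDD, while the other labels and all scores are invented for illustration:

```python
# One outcome label from the VE GDD; the others, and all scores, are invented.
outcomes = {
    "Re-elected": 9,
    "Exiled": 3,
    "Disappeared (Sovyetia only)": 0,
}

def rawlsian_score(outcomes):
    # Contractarian standard: judge the system by the position of whoever
    # receives its worst outcome, not by the average case.
    return min(outcomes.values())

def optimizer_score(outcomes):
    # The strategic-optimization view the chapter contrasts against.
    return sum(outcomes.values()) / len(outcomes)
```

A revision that raises optimizer_score while leaving rawlsian_score at zero is exactly the system LO-4.3 asks students to notice.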

Learning Outcomes
ID · Bloom's · Learning outcome · Assessable
LO-4.1 Create Design a reward structure that tracks character development over time for a domain-specific scenario — specifying the difference between a system that records what the player did and one that encodes who the player is becoming. Yes
LO-4.2 Apply Apply the Rawlsian veil of ignorance as a design constraint to a provided game system, specifying the worst-case player outcome and the design decisions required to minimize it without eliminating the game's moral architecture. Yes
LO-4.3 Evaluate Evaluate the VE six-outcome structure against the contractarian standard: does the system design for the citizen who bears the cost of every edict, or for the player's strategic optimization? Name the specific design feature that answers the question. Yes
LO-4.4 Analyze Distinguish a game mechanic that encodes virtue development from one that records behavioral history, using examples from the VE GDD and one other game of the student's choice. Yes
Bridge to Ch. 5 "Both virtue ethics and contractarianism require reasoning about agents positioned in the system over time. Care ethics — the final framework — requires reasoning about the specific other rather than the category."
Chapter 5 (Bridge Chapter) · Care Ethics as Design Constraint · Act One · Week 5
The student learns to design consequence structures where the morally significant units are relationships and people rather than resources and rules — and produces the preliminary moral architecture specification that will govern their build.
VE Thread — Final Act One Spiral Pass

Students now have five frameworks as lenses. Full first-pass VE framework identification submitted as a written diagnostic. This document is returned for comparison with the AI audit findings in Week 12 and the gap analysis in Week 13.

Opening Strategy

Open with a designed editorializing failure — not from VE, but from the trap game. By Week 5, students have vocabulary to name exactly what went wrong. The editorializing failure is the last concept before the build begins because it is the mistake most likely to appear in the first draft of every student's game.

Learning Outcomes
ID · Bloom's · Learning outcome · Assessable
LO-5.1 Analyze Identify the editorializing failure — a design that tells the player what to feel rather than producing felt moral weight — in a provided game example, specifying the exact design decision that causes it. Yes
LO-5.2 Create Design a relationship damage mechanic for a domain-specific scenario that makes loss legible to the player without emotional language — specifying how the mechanic communicates damage without moral commentary. Yes
LO-5.3 Evaluate Select one ethical framework for the semester project and defend the selection against three criteria: domain fit, capacity to produce implication rather than smugness, and AI-auditor legibility. Yes
LO-5.4 Create Produce a preliminary moral architecture specification: the ethical framework, the primary mechanic that embodies it, the consequence structure that transfers responsibility to the player, and one specific design decision predicted to produce implication in a human player. Yes
⚑ Load-bearing deliverable — LO-5.4: This is the gate to Act Two. A student whose specification cannot name how the mechanic transfers responsibility to the player has not yet designed an ethical architecture. Students who cannot produce LO-5.4 to specification do not begin the Week 6 build. Instructor policy on what happens next must be confirmed before the course runs.
Bridge to Act Two "The preliminary moral architecture specification closes Act One. Act Two opens with the GDD as an ethical argument — not a description of a game, but a claim about how the world works, encoded in mechanics."
Act Two · Weeks 6–11 · Chapters 6–11

Build

Students use Zelda (GDD tool) and Claude Code to build a web-based game encoding their chosen ethical dilemma in their chosen domain. The ethical framework is not disclosed in the GDD. The mechanic must embody the framework — not describe it.

Chapter 6 · Design Lock · Act Two · Week 6
The student produces a complete GDD that encodes an ethical framework in its consequence structure without naming the framework anywhere in the document.
VE Thread

VE GDD as structural model. Students are now writing a document of this type — an ethical argument encoded in design decisions. The question shifts from 'what framework is in VE?' to 'can I build a document that does what VE does?'

Opening Strategy

Distribute a redacted version of a student GDD from a prior course (or a purpose-written example). Class identifies the ethical framework from the mechanics alone. Then: what would it take to make the framework unidentifiable from the GDD? That failure mode is the design problem for the week.

Learning Outcomes
ID · Bloom's · Learning outcome · Assessable
LO-6.1 Create Produce a GDD that encodes the chosen ethical framework mechanically without naming or describing the framework anywhere in the document. The framework must be identifiable from the GDD by a reader who does not know which was chosen. Yes
LO-6.2 Evaluate Evaluate the preliminary moral architecture specification from Week 5 against the completed GDD, identifying divergence between intended architecture and designed mechanics — and specifying what revision closed each gap. Yes
LO-6.3 Analyze Identify the player experience goal the GDD's consequence structure is designed to achieve, stating it as the specific type of implication the player should feel — and the specific type of smugness the design actively avoids. Yes
Bridge to Ch. 7 "The GDD is the argument. The build tests whether the argument holds when implemented. Week 7 is the first moment a student discovers whether what they designed is what they built."
Chapter 7 · Build I · Act Two · Week 7
The student implements the core mechanic and begins tracking the design decisions that the AI tool cannot make.
VE Thread

VE as reference architecture. When students encounter a tool-generated solution that works mechanically but fails ethically, VE's Legible Causality pillar is the diagnostic standard: is each consequence caused by the decision, or does it only follow plausibly?

Opening Strategy

Open with a live demonstration: submit a VE-style mechanic specification to Claude Code, accept the output, run it, and ask the class: is this ethical architecture or a description of one? The demonstration should be designed to produce at least one ethically incoherent output.

Learning Outcomes
ID · Bloom's · Learning outcome · Assessable
LO-7.1 Create Implement the core mechanic of the semester project as a functional web-based prototype, with the consequence structure operable by a player. Yes
LO-7.2 Analyze Document one instance where Claude Code generated a mechanically correct solution that was ethically incoherent — specifying the incoherence and the design decision required to correct it. Yes
⚑ Evidence flag — LO-7.2: This outcome operationalizes the series premise. It requires the student to notice a Tier 1 success that is a Tier 3/4 failure. Consider requiring a code diff or build artifact as evidence — this outcome is too important to be completable by invention.
Bridge to Ch. 8 "The core mechanic exists. Week 8 is where the architecture reveals itself — where the student discovers what the game actually does versus what they designed."
Chapter 8 · Build II · Act Two · Week 8
The student implements the full consequence structure and confronts the gap between what they designed and what the game actually does.
VE Thread

VE's consequence engine as reference: three to four pre-authored consequence paths per edict, each causally coherent. Does the student's consequence structure meet the same standard? Self-audit against the VE design pillars.

Opening Strategy

Open with the self-audit as a structured exercise: students bring their GDD and their current build and identify the three largest gaps between them. The gaps are not failures — they are the data.

Learning Outcomes
ID · Bloom's · Learning outcome · Assessable
LO-8.1 Create Implement the full consequence structure of the semester project, including the mechanic encoding the ethical framework, the player choice architecture, and the cost structure. Playtestable build required. Yes
LO-8.2 Evaluate Conduct a self-audit of the implemented consequence structure against the GDD specification, identifying where implementation matches intended architecture and where it has diverged. Yes
LO-8.3 Analyze Identify the single mechanic in the current build that carries the most ethical weight — where the player's decision most directly transfers moral responsibility — and specify why it carries more weight than adjacent mechanics. Yes
Bridge to Ch. 9 "The build has a consequence structure. It has not been seen by a human player. Week 9 is the first moment someone other than the designer encounters the ethical architecture."
Chapter 9 · Build III: Playtestable Alpha · Act Two · Week 9
The student reaches a playtestable alpha and runs the first informal test of whether the game produces implication or something else.
VE Thread

VE's player experience goals (PX-1 through PX-8) as diagnostic frame. Students ask of their own game: which PX goals did my playtesters report experiencing? Which did they not? The VE PX goals are the vocabulary for this analysis.

Opening Strategy

Open with a structured debrief: one student presents their playtest data, class applies the implication/smugness distinction to the reports. The exercise models what Week 11 (peer playtesting) will require at scale.
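The feedback instrument itself can be as small as a structured record that refuses general impressions. A minimal Python sketch (field names invented; the course's actual instrument is distributed in class):

```python
from dataclasses import dataclass

@dataclass
class PlaytestReport:
    design_decision: str  # the specific, located decision the report is about
    felt_response: str    # the player's felt experience, in their own words
    verdict: str          # "implication" or "smugness"

def is_specific(report):
    # The instrument rejects general impressions: a usable report must name
    # both a located design decision and a felt response.
    return bool(report.design_decision.strip()) and bool(report.felt_response.strip())
```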

Learning Outcomes
ID · Bloom's · Learning outcome · Assessable
LO-9.1 Apply Conduct an informal playtest with at least two players outside the course, collecting specific feedback on whether the game produced implication or smugness — using the evaluative distinction as a structured feedback instrument. Yes
LO-9.2 Evaluate Assess the informal playtest data against the design intent in the GDD, identifying the largest gap between intended player experience and reported player experience — located in a specific design decision. Yes
LO-9.3 Create Produce a build revision specification naming at most three design changes, each defended against the primary player experience goal. Three changes maximum is an acceptance criterion — prioritization is the skill. Yes
Bridge to Ch. 10 "The informal playtest data identifies the gap. The beta build in Week 10 attempts to close the most important gap before the formal peer playtesting and Ethical Auditor sessions."
Chapter 10 · Build IV: Beta · Act Two · Week 10
The student submits a beta build for peer review and produces the Ethical Auditor preparation document.
VE Thread

VE GDD as the model for the Ethical Auditor preparation document. The preparation document is a document of the same type as the VE GDD — an architectural description that encodes ethical position without naming it.

Opening Strategy

Distribute the Ethical Auditor prompt architecture. Walk through one example audit using the VE GDD as the input. Ask: what did the AI find? What did it miss? What would you have to change in the GDD to make it miss more?
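The shape of such a prompt architecture, if not its wording, might look like the following Python sketch (task list and phrasing invented for illustration; the course's real auditor prompt is its own distributed artifact):

```python
# Invented task list; the course's actual prompt architecture is not reproduced here.
AUDITOR_TASKS = [
    "Identify the ethical framework encoded in the consequence structure.",
    "Locate the mechanic that carries the most moral weight.",
    "State where moral weight lands: on the player or on the character.",
]

def build_audit_prompt(prep_document):
    # The auditor sees only architecture, never felt play -- the constraint
    # that makes the Week 13 gap analysis meaningful.
    tasks = "\n".join(f"{i}. {t}" for i, t in enumerate(AUDITOR_TASKS, start=1))
    return (
        "You are an Ethical Auditor. Analyze only the architecture described "
        "below; you have no access to felt play.\n\n"
        f"Tasks:\n{tasks}\n\nPreparation document:\n{prep_document}"
    )
```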

Learning Outcomes
ID · Bloom's · Learning outcome · Assessable
LO-10.1 Create Produce a beta build incorporating revisions from Week 9 playtesting, meeting minimum specification: ethical framework operable by a player, consequence structure transfers responsibility, game completable in a single session. Yes
LO-10.2 Create Produce an Ethical Auditor preparation document: an architectural description enabling AI evaluation of the consequence structure — without revealing the ethical framework label anywhere in the document. Yes
LO-10.3 Evaluate Predict, with specific reasoning, where the Ethical Auditor will correctly identify the embedded framework and where it will fail — and name the design decision most likely to mislead the AI. Yes
⚑ Collection flag — LO-10.3: The prediction document is returned for comparison with actual audit results in Week 12. If it is not collected and stored, the comparison cannot happen.
Bridge to Ch. 11 "The beta build exists. The Ethical Auditor preparation document is complete. Week 11 is the last moment before the AI and human evaluations converge."
Chapter 11 · Peer Playtesting · Act Two · Week 11
The student functions as both a playtester for another student's game and a designer receiving playtester data — and learns to distinguish felt experience data from structural observation data.
VE / Trap Game Thread — Calibration Exercise

Before playtesting each other's games, students apply the feedback instrument to the trap game. This establishes a shared standard for what specific feedback looks like — and gives students one more encounter with the designed failure before they assess each other. If students produce vague feedback on the trap game, the instrument needs refinement before peer playtesting begins.

Opening Strategy

The trap game is played first by everyone. Feedback collected using the same instrument students will use on each other's games. The trap game's feedback should be easy to produce — the failures are designed to be visible.

Learning Outcomes
ID · Bloom's · Learning outcome · Assessable
LO-11.1 Apply Apply the implication/smugness evaluative distinction as a structured feedback instrument while playtesting another student's game, producing specific, located feedback about design decisions rather than general impressions. Yes
LO-11.2 Analyze Distinguish, in the feedback received on your own game, between reports of felt experience and reports of structural observation — and identify which type is more useful for the Week 13 gap analysis. Yes
LO-11.3 Evaluate Evaluate the peer playtesting data against the prediction from LO-10.3, identifying where the prediction was accurate and where human player experience diverged from the expected audit result. Yes
⚑ Transition gate — Act Two → Act Three: Students without a playtestable beta build and human playtesting data cannot meaningfully complete the Week 12 Ethical Auditor session. The gap analysis will have only one side.
Bridge to Act Three "The human playtesting data is complete. The Ethical Auditor session in Week 12 provides the AI evaluation. Week 13 is where the two are compared."
Act Three · Weeks 12–15 · Chapters 12–15

Audit and Analysis

The Ethical Auditor session, the gap analysis, and the published games analysis. The course's thesis made visible as a classroom event: where does the machine's structural analysis diverge from human felt experience, and why?

Chapter 12 · The Ethical Auditor · Act Three · Week 12
The student submits the game to the AI Ethical Auditor and evaluates the audit report against both design intent and human playtesting data.
VE Thread — Shared Class Exercise

VE GDD submitted to the Ethical Auditor as a shared class exercise before individual submissions begin. Class sees the AI audit of the aspirational architecture — what it finds, what it misses, where it is confident and wrong. This calibrates expectations before students submit their own work.

Opening Strategy

Run the VE Ethical Auditor session in class, live. The audit is visible to everyone. Class discusses the results before individual sessions begin. This is the moment the AI's Tier 1 competence is demonstrated at scale — and its Tier 3/4 limits become visible.

Learning Outcomes
ID · Bloom's · Learning outcome · Assessable
LO-12.1 Apply Submit the beta build and Ethical Auditor preparation document to Claude acting as Ethical Auditor, receiving a structural analysis of the embedded framework and an evaluation of where moral weight lands — on the player or the character. Yes
LO-12.2 Evaluate Evaluate the AI audit report against GDD design intent, identifying where the structural analysis is correct, where it is correct but misses the human experience dimension, and where it is wrong. All three categories must be present. Yes
LO-12.3 Analyze Compare the AI audit findings with the Week 5 first-pass VE diagnostic, identifying whether analytical ability improved across the semester — and specifying what changed in the student's approach. Yes
Bridge to Ch. 13 "The AI audit is complete. The human playtesting data is in hand. Week 13 is where the two are compared — the course's central question made answerable."
Chapter 13 · The Gap Analysis · Act Three · Week 13
The student constructs the gap analysis — tracing specific design decisions to specific divergences between AI audit findings and human playtester experience — and produces the course's central deliverable.
VE Thread + Week 1 Journal Return

Week 1 response journals returned to students. The gap analysis begins with the felt response from Week 1 — before vocabulary — and ends with the AI audit findings from Week 12. The arc: felt it without naming it → named it with five frameworks → built it → the AI evaluated the structure → now name the gap between the structure and the feeling.

Opening Strategy

Distribute the Week 1 response journals at the start of class. Students read their own words from twelve weeks earlier. Ask: what did you feel then that you can now name? What did you feel then that you still cannot name — and is that gap the answer to the course's question?

Learning Outcomes
ID · Bloom's · Learning outcome · Assessable
LO-13.1 · Create · Construct a gap analysis tracing at least three specific design decisions to specific divergences between AI audit findings and human playtester experience — naming the human variable the structural analysis could not reach. Three decisions, three divergences, one named human variable per divergence. · Yes
LO-13.2 · Evaluate · Identify the one judgment call in the semester project that required values, domain knowledge, or accountability that an AI could not have made — with specific reference to the design decision, the alternative the AI would have generated, and why that alternative would have failed the player experience goal. · Yes
LO-13.3 · Create · Propose one design revision that would close the largest gap between architectural legibility and felt moral weight — specifying the design decision and the predicted change in player experience. The prediction must be falsifiable. · Yes
⚑ Assessment spine made explicit — LO-13.2: This is the moment the recurring deliverable question — "name one judgment call an AI could not have made" — is answered about the entire semester's work, not just a single deliverable.
⚑ Journal return infrastructure: The Week 1 response journal must be physically or digitally returned to students in this session. If it has not been stored and cannot be returned, the gap analysis loses its most important data point.
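The LO-13.1/LO-13.3 deliverable shape ("three decisions, three divergences, one named human variable per divergence, one falsifiable prediction") is concrete enough to be machine-checkable. A minimal sketch in Python of what a submission validator might look like; the record type `GapEntry`, the function `validate_gap_analysis`, and all field names are illustrative assumptions, not part of the course specification:

```python
from dataclasses import dataclass

@dataclass
class GapEntry:
    """One divergence between the AI audit and playtester experience (LO-13.1)."""
    design_decision: str    # the specific design decision traced
    ai_finding: str         # what the structural audit reported
    player_experience: str  # what playtesters actually felt
    human_variable: str     # the named variable the audit could not reach

def validate_gap_analysis(entries, revision, prediction):
    """Check the submission shape: at least three fully traced divergences
    (LO-13.1) plus one revision with a falsifiable prediction (LO-13.3)."""
    if len(entries) < 3:
        return False, "LO-13.1 requires at least three traced divergences"
    for e in entries:
        if not all([e.design_decision, e.ai_finding,
                    e.player_experience, e.human_variable]):
            return False, "incomplete entry: " + repr(e.design_decision)
    if not revision or not prediction:
        return False, "LO-13.3 requires one revision with a falsifiable prediction"
    return True, "submission shape OK"
```

A check like this verifies only the shape of the deliverable; whether the named human variable is genuine, and whether the prediction is actually falsifiable, remains instructor judgment.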
Bridge to Ch. 14–15 "The gap analysis closes the student's own work. Weeks 14 and 15 apply the same analytical framework to published games — asking whether the gap exists in systems other designers built, and whether it could have been closed."
Chapter 14 · Published Games I — Reading as a Designer · Act Three · Week 14
The student applies the course's full analytical framework to Papers, Please and Spec Ops: The Line, reading as a designer rather than a player.
VE Thread

Students now apply to published games the same analysis they applied to their own builds. Papers, Please and Spec Ops: The Line enter as the external calibration — games built by professionals with full production resources, evaluated against the same standards.

Opening Strategy

Play 15 minutes of Papers, Please in class. Ask: at what moment did you feel implicated? Locate that moment in a specific mechanic. Then ask: was that the moment the designer intended? How would you know?

Learning Outcomes
ID · Bloom's · Learning outcome · Assessable
LO-14.1 · Analyze · Identify the ethical framework embedded in Papers, Please's consequence structure from mechanical evidence alone — specifying the mechanics that encode it and the design decisions that make it operable rather than described. · Yes
LO-14.2 · Evaluate · Evaluate whether Spec Ops: The Line achieves player implication or produces a different moral response — specifically whether the structural subversion mechanic produces felt weight or produces commentary about felt weight in another medium. Position required, defended with specific design decisions. · Yes
LO-14.3 · Apply · Apply the Ethical Auditor framework to Papers, Please, producing a structural audit report in the format used in Week 12 — and compare the result against documented player experience data from published criticism and player reports. · Yes
⚑ Content gap — OQ-L1-A: A fourth published game is needed that attempted player implication and produced smugness instead. The trap game fills this role for original design work. A published example of the same failure would strengthen the Week 14–15 analysis. Selection pending.
Bridge to Ch. 15 "Two games analyzed. One succeeded at implication (Papers, Please). The evaluation of Spec Ops is genuinely contested — bring the contested reading into class. Week 15 completes the published games analysis and asks for the general principle."
Chapter 15 · Published Games II + Synthesis · Act Three · Week 15
The student completes the published games analysis and articulates the course's central claim as a general design principle — with evidence from the semester's own work and the published cases.
VE Thread — Optional Final Encounter

If a playable version of VE exists by Week 15, students play it for the first time and compare their felt response to their semester-long architectural analysis of the GDD. The gap between analyzing a system and experiencing it is the course's final argument demonstrated in real time.

Opening Strategy

Open with the question the course has been building toward: if you had to write one sentence that a designer could use to distinguish a system that produces moral weight from one that describes it — what would that sentence be? Students write it before the lecture. The lecture is the attempt to make the sentence precise enough to be useful.

Learning Outcomes
ID · Bloom's · Learning outcome · Assessable
LO-15.1 · Analyze · Identify the moral architecture of Disco Elysium — specifying whether it embodies one of the five frameworks studied, a combination, or requires a new category — defending the classification with reference to specific design decisions. 'Requires a new category' is acceptable if the student can specify what the new category is and why none of the five frameworks captures it. · Yes
LO-15.2 · Create · Produce a comparative analysis of two published games' moral architectures — specifying which design decisions produce moral weight and which produce description of moral weight — and connect the comparison to the student's own gap analysis from Week 13. · Yes
LO-15.3 · Evaluate · Articulate the gap between AI-auditable architecture and human-felt moral weight as a general design principle — with specific evidence from the Ethical Auditor session (Week 12), the gap analysis (Week 13), and one published game. The principle must be stated in one sentence. If it cannot be stated in one sentence, it is not yet a principle. · Yes
Terminal outcome standard — LO-15.3: The standard for passing is a sentence that could appear in the preface of the book, with evidence. "AI can't feel things" fails. A sentence that names the specific design variable that creates the gap — with evidence from the student's own build and a published game — passes.
No next chapter. The principle the student articulates in LO-15.3 is the sentence they carry out of the course. It should be the sentence they could not have written in Week 1 — and if it is, the course worked.
Section 7

Open Design Questions

The items below are design decisions that must be resolved before the course runs. They are not administrative questions.

OQ-L2-A · Critical · Trap game must be built before course runs
The primary sequencing model (concrete to abstract) depends on a designed failure case in Week 1. VE is aspirational architecture only. Without the trap game, Week 1 has one game and the sequencing model is compromised. This is a prerequisite, not a nice-to-have.
OQ-L1-A · High · Week 14–15 fourth published game — selection pending
A published game that attempted player implication and produced smugness instead. The trap game fills this role for original design work but a published example would strengthen the Week 14–15 analysis. Selection pending.
OQ-L1-B · High · Operational definition of implication vs. smugness
The distinction does evaluative work from Week 1 onward but has no formal operational definition. Students applying it impressionistically will produce inconsistent peer feedback in Week 11. Definition needed by Week 3.
OQ-L1-C · High · Evidence requirement for LO-7.2
The AI-incoherence identification outcome is gameable by a student who writes a convincing description of an incoherence they didn't actually encounter. Consider requiring a code diff or build artifact as evidence.
OQ-L2-B · High · Week 1 response journal infrastructure
The journal must be collected, stored, and returned in Week 13. The course management system must support timed document return. If not, the gap analysis loses its most important data point.
OQ-L2-C · High · Instructor policy on failed Week 5 transition tests
Option 1: revise until passing — no build begins on incomplete architecture. Option 2: proceed with flagged specification — gap tracked through Week 13. Must be stated before the course runs. This is a design decision, not an administrative one.
OQ-General · Medium · Ethical Auditor prompt architecture
How much structural scaffolding does Claude receive before auditing? Too much = the audit tests prompt design, not game design. Too little = inconsistent results across cohorts. Requires a controlled prompt specification before Week 12.
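One way to control the scaffolding variable described above is to pin the auditor prompt as a single versioned template with named slots, so every cohort receives identical scaffolding and the only variation is the student's submitted material. A minimal sketch in Python; the template name `AUDITOR_PROMPT_V1`, the slot names, and the scaffolding wording are all illustrative assumptions, not a specified prompt:

```python
# Pinned, versioned auditor prompt: same scaffolding for every cohort.
AUDITOR_PROMPT_V1 = """\
You are the Ethical Auditor for a student game build.
Framework reference: {framework_list}
Audit scope: structural analysis only. Report (1) the ethical framework
encoded in the consequence structure, (2) the mechanics that encode it,
and (3) where moral weight lands: on the player or on the character.
Do not evaluate player feelings; flag them as out of scope.

Student materials:
{gdd_excerpt}
{preparation_document}
"""

def build_audit_prompt(framework_list, gdd_excerpt, preparation_document):
    """Fill the pinned template; every slot is explicit, so any missing
    material fails loudly at prompt-construction time."""
    return AUDITOR_PROMPT_V1.format(
        framework_list=framework_list,
        gdd_excerpt=gdd_excerpt,
        preparation_document=preparation_document,
    )
```

Versioning the template (V1, V2, ...) also gives the course a record of exactly which scaffolding each cohort's audit results were produced under, which matters for the cross-cohort consistency concern raised above.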
Irreducibly Human: Ethical Play — TIC TOC v0.1 · March 2026
Sections completed: Book Concept, Learning Outcomes, Sequencing Logic. Sections pending: Book Type, Audience, Three-Act Arc, Prerequisite Map, Chapter Architecture, Production. Open questions log is the highest-priority action item before the course runs.