Irreducibly Human Series · Northeastern University · College of Engineering

Embodied Teaching: what AI handles and what requires a body

What AI can and can't do — applied to teaching practice in embodied domains

Course [XXXX 5XXX]  ·  4 Credit Hours  ·  Fall [Year]  ·  In-person
Instructor: Nik Bear Brown  ·  ni.brown@neu.edu
Version 1.0  ·  [Distribution Date]  ·  Reviewed by Dev the Dev

Contents

  1. Welcome
  2. The Irreducibly Human Series
  3. Course Information
  4. Learning Outcomes
  5. Required Materials
  6. Assessment and Grading
  7. Course Schedule
  8. Course Policies
Section 1

Welcome

I've spent years watching administrators hand AI integration mandates to teachers who have spent their careers developing expertise in things AI cannot do. Not because the administrators were wrong that AI matters — because the mandate missed the question entirely. The question was never "can you integrate AI?" It was always "do you know what you have that AI cannot touch?"

That is the question this course answers.

Embodied instruction — teaching that requires a body, hands, presence, and the relational intelligence that develops through years of watching students move and make and fail and try again — is the most specific form of teaching intelligence that exists. It is also the form AI integration discourse most consistently ignores. Every resource produced for teachers about AI in education was designed for content delivery. It has nothing to say to the ceramics teacher trying to understand what AI can actually do in a studio, or the nursing simulation instructor wondering whether case generation is the same thing as clinical judgment. The discourse keeps arriving at the wrong address.

This course is for the teachers at the right address.

AI handles the parts of teaching that don't require a body. This course tells you, for your domain, exactly which parts those are — and which parts it cannot touch. You will read fifteen domain analyses as analytical cases and your own domain as a design problem. You will build an AI integration plan specific to your teaching context, specifying what you hand off and what you protect. You will field-test that plan with colleagues in your domain. And then you will write the gap analysis — the document that names, precisely, where the integration plan works and where it requires a body to close the distance.

The course is demanding in a specific way. It will not ask you to survey AI tools. It will ask you to defend a claim about your domain: that this capacity, developed in the body through this kind of practice, cannot be simulated — and here is why, and here is what AI does instead. That claim is harder to make rigorously than it looks. Most teachers feel it. Few can demonstrate it. This course builds the demonstration.

What you will leave with: a complete, field-tested AI integration plan for your embodied domain, a gap analysis connecting your integration decisions to practitioner feedback, and the one sentence that names what AI cannot do in your teaching that only your presence can accomplish.

Here is what to do before we meet: come to Week 1 having read nothing. Come ready to describe your domain to someone who has never been in your classroom — in two minutes, without jargon. That description, before any vocabulary from this course, is the most important document you will produce all semester.

— Nik Bear Brown | ni.brown@neu.edu
Section 2

The Irreducibly Human Series

We are in the early years of the most powerful cognitive tools ever built. AI systems are superhuman at pattern recognition, fact retrieval, arithmetic, and syntactic correctness. They are genuinely poor at constructing problem formulations, auditing their own outputs for plausibility, reasoning causally about what they are measuring, and knowing when not to proceed.

The Irreducibly Human series develops exactly those capacities — the forms of reasoning that AI tools require humans to supply, and that your competitors who only learned to use the tools will not have.

This course — Embodied Teaching — is the domain-application volume of the series. It takes the full Irreducibly Human framework and applies it to teaching practice in domains where learning happens in the body: woodshop, physical education, nursing simulation, surgical training, music and voice, dance, culinary arts, theater, physical therapy, early childhood education, special education, lab science, architecture studio, and the trades. The course's central question is not whether AI can help embodied instruction. It is exactly where, in a specific domain's teaching practice, the machine's administrative and analytical competence ends and the teacher's irreplaceable physical and relational presence begins.

The companion courses build the capabilities this course applies. Botspeak develops the complete AI collaboration framework across five modes. Conducting AI builds the Tier 4 supervisory toolkit — problem formulation, plausibility auditing, interpretive judgment. Ethical Play applies the Irreducibly Human framework to game design systems that encode ethical positions as mechanical consequence structures. Any of these can be taken before this course; none is required.

Section 3

Course Information

Course Identifiers

Course Title: Irreducibly Human: What AI Can and Can't Do — Embodied Teaching
Course Number: [XXXX 5XXX — assigned at CourseLeaf submission]
Credit Hours: 4
Term: Fall [Year]
Mode of Delivery: In-person
Components: Lecture/Seminar (1× weekly) + TA-led Domain Lab (1× weekly in-class lab)
Department: College of Engineering

Meeting Information

Lecture/Seminar: [TBD]  ·  Location: [Building, Room]
Domain Lab (TA-led): [TBD]  ·  Location: [Building, Room]

The Domain Lab is a required course component, not an optional recitation. It is where the framework becomes domain-specific design decisions and those decisions become a testable integration plan. The Week 11 peer domain review session cannot be replicated outside the lab — the feedback data collected there is the primary input to your gap analysis. Missing the lab is not equivalent to missing a lecture — it is missing the part of the course where the course's central question becomes answerable in your specific domain.

Instructor

Name: Nik Bear Brown
Email: ni.brown@neu.edu
Response time: Within 48 hours on weekdays. Put URGENT in the subject line for time-sensitive questions.
Office / Zoom: [TBD]
Student hours: [Days, times, location] — booking link TBD
Preferred contact: Email for logistics. Student hours for anything that takes more than two sentences to answer well.

I hold student hours for you — not only for students with emergencies. Come because you're uncertain about where your domain's protected core actually lies, because you want to think through your integration plan before committing to a specification, because you want to know whether the gap you found in your field test is the gap you designed or a different one, or simply because you want to understand where this field is going. The most useful conversations I have with students in this course happen outside scheduled sessions.

Teaching Assistant

Name: TBA
Email: TBA
Domain Lab hours: TBA

The TA runs the weekly Domain Lab — facilitating domain analysis exercises, running peer review sessions, and returning written feedback on lab submissions. For questions about integration plan architecture, protected core specification, and field application design, the TA is your first resource. Domain analysis and integration plan questions go to the TA first; if unresolved, the TA will forward to the professor.

Prerequisites

Official prerequisites: Graduate standing in Engineering or related field (exact CourseLeaf string TBD)

What this course assumes you know

You have substantial experience in at least one embodied domain — as a practitioner, teacher, or designer of learning environments. You have used AI tools in some capacity. You have been asked by someone to "integrate AI" into something you teach or design. You do not yet have a principled framework for deciding what to hand off and what to protect.

What this course does not assume

Prior education coursework or teacher preparation. Advanced AI fluency — the course builds the analytical framework, not the tool literacy. Prior Irreducibly Human series courses are not required.

A note for students with strong technical backgrounds

Students who arrive most confident in their AI fluency sometimes find the Act One weeks the most disorienting. That disorientation is the course working as intended. Evaluating what AI cannot do in an embodied domain is not a harder version of evaluating what AI can do — it is a different cognitive operation that requires domain knowledge the tools do not have. Students who approach their embodied domain as a new analytical object — not an extension of their technical fluency — will get the most from it.

If you are not currently teaching or working in an embodied domain, contact the instructor before the first week. This course requires field access for the Week 9 pilot application and Week 11 peer review. Students without a domain context will need to establish one before Week 6.

Section 4

Learning Outcomes

By the end of this course, students will be able to:

  1. Distinguish between AI's analytical competence in a teaching domain and the irreplaceable physical and relational intelligence of the teacher working in that domain — using a specific embodied domain as the diagnostic case
  2. Apply the Tier 2 framework to at least three embodied domains — identifying, for each, which teaching tasks AI handles well and which require the teacher's body, sensory expertise, or relational attunement
  3. Specify the "protected core" of their primary embodied domain: the capacity that develops only through physical practice, only through the teacher's embodied presence, and that no AI integration plan should attempt to replace
  4. Construct an AI integration plan whose documentation, case generation, and assessment scaffolding functions are operationally specified — naming the tool, the workflow, and the specific time returned to embodied instruction
  5. Implement at least two specific AI applications in their domain: one for documentation or administrative reduction and one for case generation or assessment scaffolding
  6. Document instances where AI-generated materials for their domain are analytically correct but pedagogically incoherent — specifying the incoherence and the domain expertise required to correct it
  7. Apply the handoff/protect evaluative distinction as a structured feedback instrument during peer domain review, producing specific, located analysis of integration decisions rather than general impressions
  8. Submit an integration plan to an AI Integration Auditor and evaluate the audit against both design intent and practitioner field-test data, identifying where the structural analysis is accurate, where it misses the embodied dimension, and where it is wrong
  9. Construct a gap analysis tracing specific integration decisions to specific divergences between AI audit findings and practitioner field-test experience — naming the embodied variable the structural analysis could not reach
  10. Identify the Tier 2 boundary in a published AI-in-education deployment from evidence alone, and evaluate whether it correctly locates what requires a body
  11. Articulate the gap between AI-auditable integration architecture and irreplaceable embodied teaching as a general design principle — with evidence from the AI Integration Audit session, the gap analysis, and at least one published deployment case
Section 5

Required Materials

Textbook

Title: Irreducibly Human: What AI Can and Can't Do — Embodied Teaching
Author: Nik Bear Brown
Publisher: Bear Brown & Company / Kindle Direct Publishing, 2026
Availability: [Amazon Kindle / print link — TBD at publication]
Cost: [TBD]
Edition: First edition. No prior edition exists.

Supplementary Materials

Distributed via Canvas throughout the semester at no cost. Required readings are marked [Required] in the weekly schedule; optional readings are marked [Recommended] and are genuinely optional.

Required Technology

Integration plan production (no prior setup required)

Field application and documentation (free, browser-based)

AI Integration Auditor (free)

Course platforms

Section 6

Assessment and Grading

Point Summary

Assessment · Points · Quality/Portfolio
Reading Responses (5 × 30 pts) · 150 · ✓ 20 pts each
Weekly Domain Lab Assignments (8 × 25 pts, drop lowest of 9) · 200 · ✓ 20 pts each
Domain Lab Participation · 100 · ✓ 20 pts component
Midterm — Preliminary Integration Specification · 100
Final Project — Integration Plan Lock Checkpoint · 100 · ✓ 20 pts
Final Project — Beta Integration Plan Checkpoint · 100 · ✓ 20 pts
Final Project — Final Submission · 250 · ✓ 20 pts
Total · 1000

AI-Based Grading Approach

800+ points — relative scale
Top 25%: A
Next 25%: A–
Next 25%: B+
Final 25%: B

Below 800 — absolute scale
780–799: C+
730–779: C
700–729: C–
600–699: D
Below 600: F
Students below 800 points cannot earn a grade higher than B–, even if the relative curve would otherwise place them higher. The instructor reserves the right to make minor adjustments for fairness.
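For concreteness, the two-scale policy above can be sketched as a small function. This is an illustrative sketch only: the band boundaries come from the tables above, but the function name and the quartile computation (rank within the 800+ group) are assumptions about how the curve would be operationalized, not the official grading code.

```python
def letter_grade(points, all_scores):
    """Map a student's total points to a letter grade.

    Students at or above 800 points are graded on a relative curve
    (quartiles within the 800+ group); students below 800 fall to the
    absolute scale and are never placed on the curve.
    Assumed interpretation -- not the official course implementation.
    """
    if points >= 800:
        curve = [s for s in all_scores if s >= 800]
        # Fraction of the 800+ group scoring at or below this student.
        rank = sum(s <= points for s in curve) / len(curve)
        if rank > 0.75:
            return "A"
        if rank > 0.50:
            return "A-"
        if rank > 0.25:
            return "B+"
        return "B"
    # Absolute scale for students below 800 points.
    if points >= 780:
        return "C+"
    if points >= 730:
        return "C"
    if points >= 700:
        return "C-"
    if points >= 600:
        return "D"
    return "F"
```

Note that a student's letter grade at 800+ depends on the whole class distribution, while below 800 it depends only on the student's own total.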

Quality/Portfolio Score (20 points — on all qualifying assignments)

Every assignment carrying the Quality/Portfolio component is evaluated on a relative 20-point scale comparing your work to peers, emphasizing depth of domain analysis, quality of gap reasoning, and evidence that the integration decisions and protected core judgments were made by you — not delegated to a tool.

Percentile Band · Score
Bottom 25%: 5 pts
26–50th percentile: 10 pts
51–75th percentile: 15 pts
Top 25%: 20 pts

AI Use in Assignments

You are encouraged to use generative AI tools on every assignment. Citation is required. Undisclosed AI use is an academic integrity violation. Disclosed AI use is not.

Every submission must include an AI Use Disclosure block:

AI USE DISCLOSURE
Tool(s) used:
Portions assisted:
How used:
What I changed:
What the AI could not do: [name at least one domain judgment that required
your embodied expertise, your knowledge of what bodies need to develop,
or your accountability for a practitioner's teaching capacity — not optional]

The last field is the Irreducibly Human declaration. A disclosure that cannot name one thing the AI could not do has not demonstrated that the student performed the irreducibly human analytical layer. This is not a formality — it is the assessment spine of the entire course, made explicit.

Drop Policy

The lowest-scoring Domain Lab Assignment is dropped. Eight of nine assignments count toward the final grade. This absorbs one week where the domain didn't map cleanly to the framework or the field access didn't cooperate. It does not absorb a pattern of non-engagement.

Section 7

Course Schedule

The schedule maps each week to a chapter in Irreducibly Human: Embodied Teaching. Read the assigned chapter before Session A. Come to Session A with the domain case in your head. Come to Session B ready to use the concept in your own domain. Come to the Domain Lab ready to analyze or apply.

Reading time per chapter: approximately 45–75 minutes  ·  ⚑ = graded deliverable due  ·  ★ = transition week

Act One · Weeks 1–5 · Chapters 1–5

The Framework

What AI handles and what requires a body: the Tier 2 argument built from domain cases

Week 1 ★ · The Tier 2 Problem · Chapter 1
By the end of this week: Distinguish what AI can handle in a teaching domain from what the teacher's body and presence must supply — and produce the Week 1 domain description before any Tier 2 vocabulary is introduced.
Session A: Both documents read or reviewed in sequence — the Boyle System documentation package and the pre-Boyle mentor session transcript. No framework vocabulary introduced. One question only: what changed between the two teaching contexts, and what stayed the same? Week 1 domain descriptions collected before the lecture begins.
Session B: The Tier 2 distinction named. What AI handles in a teaching domain. What a teacher's physical and relational presence provides that the tool cannot reach. The handoff/protect distinction introduced.
Domain Lab: Domain Lab Assignment #1 — handoff/protect distinction exercise. (Ungraded warm-up portion in lab; graded submission due Week 2.)
⚠ The Week 1 domain description is load-bearing. It is collected, stored, and returned in Week 13 as the primary record of pre-vocabulary domain understanding. Do not skip it. Do not use Tier 2 vocabulary you have not yet encountered. Two minutes of description. What you teach that a body must do. That is all.
⚑ Reading Response #1 — Week 1 Domain Description (30 pts)
Describe your domain to someone who has never been in your classroom. What do you teach that a body must do? No Tier 2 vocabulary. Two minutes of description. This document will be returned to you in Week 13. Due before Session A, Week 1.
⚑ Domain Lab Assignment #1 (25 pts)
Apply the handoff/protect distinction to five teaching tasks in the Lab Science domain (provided). For each task: classify as AI-handleable or requiring a body/presence, and state in one sentence the specific reason for your classification. Then: compare the pre-Boyle and post-Boyle mentor session — which teaching tasks shifted to AI, and which remained with the mentor? Due before Domain Lab, Week 2.
Week 2 · Legible Documentation: What the Boyle System Demonstrates · Chapter 2
By the end of this week: Trace a documentation handoff through a four-stage consequence chain — and evaluate whether each stage represents a genuine return of teaching capacity or only a reduction in administrative load.
Session A: The Boyle System consequence chain: a four-stage sequence from adopting MVAL documentation to mentor meeting quality. Is each stage caused by the one before it, or does it only follow plausibly? The difference between those two descriptions is the chapter.
Session B: The Legible Handoff standard. What genuine capacity return looks like vs. administrative reduction that does not free embodied instruction. First structured spiral return: what does the Boyle System's consequence chain predict for your domain?
Domain Lab: Domain Lab Assignment #2 — 4-stage consequence chain construction for your domain.
⚑ Reading Response #2 (30 pts)
Identify the stage in the Boyle System consequence chain where the causal logic is weakest — where the connection is plausible rather than caused. What integration decision would make the connection genuinely causal? One paragraph, specific to one stage. Due before Session A, Week 3.
⚑ Domain Lab Assignment #2 (25 pts)
Construct a 4-stage consequence chain for one documentation handoff in your chosen project domain. Each stage must be causally connected to the prior stage; each connection statable in one sentence. Then: identify where the Boyle System's documented capacity return (60% → 20% mentor meeting time on gap review) would and would not transfer to your domain — and name the specific teaching task that explains the difference. Due before Domain Lab, Week 3.
Week 3 · What Lives in the Hands: Woodshop, Trades, and the Grain of the Material · Chapter 3
By the end of this week: Specify a protected core for a domain not your own — and identify the structural features that distinguish "AI cannot do this" from "AI has not yet done this."
Session A: The woodshop teacher standing next to a student hearing the blade catch wrong. What information is in that moment? What form does it take? Can it be transmitted without physical presence? Ch. 3 reading note: stop at "What the Hands Know." Come ready to feel the claim before evaluating it.
Session B: The structural definition: what makes a teaching capacity irreplaceable vs. currently unreplicated. The woodshop protected core specified rigorously. The ambiguity: does a high-fidelity haptic simulation change the answer? Second spiral return on the Boyle System — does it have a woodshop equivalent?
Domain Lab: Domain Lab Assignment #3 — protected core specification for an assigned domain.
⚠ Highest analytical risk chapter. The claim "AI cannot do X" is harder to defend than it appears. The woodshop domain is the required opening case because its protected core is among the most specific and least contestable in the book. Read the chapter through "What the Hands Know" before Session A. Read the rest after Session A.
⚑ Domain Lab Assignment #3 (25 pts)
Specify the protected core for an assigned domain (not your own). Required: the specific teaching capacity, the form it takes (tactile/auditory/visual/relational), the specific reason it requires physical presence, and one specific AI application that approaches but does not reach it. Then: evaluate the woodshop case — does the protected core claim hold if haptic simulation fidelity increases by a factor of ten? Take a defended position. Due before Domain Lab, Week 4.
Week 4 · Kinesthetic Intelligence: What the Body Knows · Chapter 4
By the end of this week: Distinguish between AI applications that develop a student's embodied capacity and those that document or analyze it — and apply this distinction as a design constraint in your domain.
Session A: The movement analysis tool in PE: it identifies what the student's body is doing. Does it develop what the student's body needs to develop? The AI application/AI assessment distinction made concrete.
Session B: The teacher's ear in music and voice instruction — the resonance adjustment, the physical discovery that the recording cannot produce. Kinesthetic intelligence in dance — the spot, the hands that are there and then gone. Third spiral return: where does kinesthetic intelligence appear in your domain?
Domain Lab: Domain Lab Assignment #4 — AI application/AI assessment distinction exercise.
⚑ Reading Response #3 (30 pts)
In your chosen project domain, who bears the cost if AI is integrated where it does not belong? Describe them specifically — not as a category, but as a learner in a specific moment of development. What integration decision in your current thinking most risks that learner's embodied development, and what would the protective redesign require? Due before Session A, Week 5.
⚑ Domain Lab Assignment #4 (25 pts)
In your domain, identify three teaching tasks that AI can analyze or document, and distinguish each from the embodied capacity development it accompanies. For each: name the task, name the AI application that reaches it, and name the specific embodied capacity the AI application cannot develop. Then: return to the Boyle System — what does MVAL documentation analyze, and what embodied scientific capacity does it accompany without developing? Due before Domain Lab, Week 5.
Week 5 ★⚑ · The Integration Map: Where the Line Is · Chapter 5
By the end of this week: Select your domain integration focus, defend the selection, and produce the preliminary integration specification that will govern your plan.
Session A: A designed overreach failure — an AI integration case from a domain covered in Act One where the tool was applied where it does not belong. Students now have vocabulary to name exactly what went wrong. The overreach failure is the last concept before the integration plan begins because it is the mistake most likely to appear in the first draft of every student's plan.
Session B: The Integration Map methodology: the complete framework for specifying handoff and protected core. First-pass Boyle System integration map produced as a class. Students are now writing a document of this type for their domain.
Domain Lab: Domain consultation — TA available for one-on-one feedback on preliminary specification before midterm submission.
⚠ Act One Gate. The preliminary integration specification is the midterm deliverable. A student whose specification cannot name the specific teaching capacity it is protecting — and the specific AI application it is handing off to — has not yet designed an integration architecture. The plan does not begin until this specification is approved.

⚑ Midterm — Preliminary Integration Specification (100 pts)

The complete specification governing the semester integration plan. Required:

  • Domain selected with defense against three criteria — specificity of protected core, availability of genuine AI handoff applications, and field-test access
  • Primary AI handoff application identified as a specific tool for a specific teaching task — stated as an integration decision, not a general category
  • Protected core specification — the specific teaching capacity being protected, the form it takes, and why it requires the teacher's physical or relational presence
  • One specific integration decision predicted to return genuine teaching capacity, with reasoning
  • First-pass Boyle System integration map — which teaching tasks in Lab Science map to AI handoff, which map to protected core, one paragraph each

This is the Act One gate. Due end of Week 5.

⚑ Reading Response #4 (30 pts)
Name the overreach failure in the Act One case — the exact integration decision that replaced embodied capacity rather than freeing time for it. Then name the equivalent risk in your own preliminary specification: where in your current integration design is the overreach failure most likely to appear? Due before Session A, Week 6.
Act Two · Weeks 6–11 · Chapters 6–11

Build

Students design, implement, and field-test their domain AI integration plan. You enter Act Two with an approved preliminary integration specification. You leave with a field-tested beta integration plan and the practitioner feedback data required for the gap analysis. The protected core is operationally specified as a boundary. The handoff applications are implemented as workflows.

Week 6 ★⚑ · Integration Plan Lock · Chapter 6
By the end of this week: Produce a complete integration plan that specifies your handoff applications and protected core with enough precision that an AI auditor could evaluate its architecture — without being told where the boundary is.
Session A: A redacted integration plan from a prior course (or purpose-written example). Class identifies the protected core from the integration decisions alone. Then: what would it take to make the protected core unidentifiable from the plan? That failure mode is the design problem for the week.
Session B: The Boyle System documentation package as structural model. Students are now writing a document of this type — an integration argument encoded in specific workflow decisions. The question shifts from "what is the Boyle System protecting?" to "can I build a document that does what the Boyle System does?"
Domain Lab: Integration plan drafting session — TA provides written feedback on handoff architecture and protected core legibility.

⚑ Final Project — Integration Plan Lock Checkpoint (100 pts)

Required: (1) the integration plan — protected core not named or described as a philosophical position anywhere in the document; (2) practitioner experience goal stated as the specific type of teaching capacity return the plan should produce — and the specific type of capacity displacement it actively avoids; (3) evaluation of the preliminary integration specification from Week 5 against the completed plan — where intended architecture and specified workflows diverged, and what revision closed each gap. The protected core must be identifiable from the plan's workflow decisions by a reader who does not know where the boundary was drawn. Due end of Week 6.

Week 7 · Build I: The Documentation Layer · Chapter 7
By the end of this week: Implement the documentation layer of your integration plan as a working workflow — and document one instance where AI-generated materials were analytically correct but pedagogically incoherent for your domain.
Session A: Live demonstration: a domain specification submitted to an AI tool for case generation. Output accepted. Class asked: is this pedagogically usable, or does it require domain expertise to correct? The demonstration is designed to produce at least one pedagogically incoherent output.
Session B: The Boyle System's MVAL protocol as the legibility standard for AI-generated documentation: does each element of the AI output correspond to a genuine teaching task, or does it only follow plausibly from the domain description? Tracking the domain judgments the tool cannot make.
Domain Lab: Domain Lab Assignment #5 — documentation layer implementation and AI-incoherence documentation.
⚑ Domain Lab Assignment #5 (25 pts)
Implement the documentation layer of your integration plan as a working workflow: select a tool, define the input specification, generate output, and evaluate it. Then: document one instance where the AI-generated output was analytically correct but pedagogically incoherent for your domain — specify the incoherence, provide the output as evidence, and state the domain judgment required to correct it. Due before Domain Lab, Week 8.
Week 8 · Build II: The Assessment Scaffold Layer · Chapter 8
By the end of this week: Implement the assessment scaffold layer and conduct a self-audit identifying the gaps between what you designed and what the AI-generated materials actually provide.
Session A: Structured self-audit exercise: students bring their integration plan and current implementation and identify the three largest gaps between them. The gaps are not failures — they are the data.
Session B: The Boyle System's mentor session structure as reference standard: the shift from gap-review to strategy as the measure of genuine capacity return. Does the student's assessment scaffold produce the equivalent shift in their domain?
Domain Lab: Domain Lab Assignment #6 — integration plan self-audit.
⚑ Domain Lab Assignment #6 (25 pts)
Implement the assessment scaffold layer of your integration plan (documentation layer + assessment scaffold). Produce a self-audit: where implementation matches the specification, where it has diverged, and the single workflow that carries the most capacity-return potential — where the handoff most directly returns time and presence to embodied instruction — with reasoning. Due before Domain Lab, Week 9.
Week 9 · Build III: Field-Testable Alpha · Chapter 9
By the end of this week: Reach a field-testable plan and run the first informal test of whether the integration produces genuine capacity return — or something else.
Session A: Structured debrief: one student presents informal field-test data, class applies the handoff/protect distinction to the reports. Models what Week 11 peer domain review will require at scale.
Session B: The Boyle System's practitioner experience goals as diagnostic frame. Students ask of their own field test: which capacity return goals did practitioners report? Which did they not? The Boyle System goals as vocabulary for informal field-test analysis.
Domain Lab: Domain Lab Assignment #7 — informal field-test data and revision specification.
⚑ Reading Response #5 (30 pts)
After your informal field test: describe the moment where a practitioner's response surprised you — where something landed differently than designed. What integration decision produced that moment? What does it tell you about the gap between your plan's intent and the practitioner's experienced teaching practice? Due before Session A, Week 10.
⚑ Domain Lab Assignment #7 (25 pts)
Conduct an informal field test with at least two practitioners in your domain (colleagues, cooperating teachers, or domain experts), collecting specific feedback on whether the integration produces genuine capacity return or administrative reduction using the handoff/protect distinction as a structured feedback instrument. Submit: (1) field-test data (anonymized); (2) assessment of the largest gap between intended and reported capacity return, located in a specific integration decision; (3) a plan revision specification naming at most three design changes, each defended against the primary capacity return goal. Three changes maximum is an acceptance criterion — prioritization is the skill. Due before Domain Lab, Week 10.
Week 10 ★⚑ · Build IV: Beta Integration Plan · Chapter 10
By the end of this week: Submit a beta integration plan for peer review and produce the AI Integration Audit preparation document.
Session A · The AI Integration Audit prompt architecture distributed. One example audit run using the Boyle System documentation as input — the class sees what the AI finds and what it misses. Then: what would you have to change in the integration plan to make the AI miss more of what actually matters?
Session B · The Boyle System documentation as the model for the AI Integration Audit preparation document — an architectural description that encodes the handoff/protect logic without naming the protected core as a philosophical claim.
Domain Lab · AI Integration Audit preparation document drafting — TA provides feedback on architectural completeness.

⚑ Final Project — Beta Integration Plan Checkpoint (100 pts)

Required: (1) beta integration plan incorporating revisions from Week 9 field testing — handoff applications implemented as workflows, protected core operationally specified, plan evaluable by a practitioner in a single review session; (2) AI Integration Audit preparation document — an architectural description enabling AI evaluation of the handoff/protect structure without revealing the protected core's philosophical basis anywhere; (3) a prediction, with specific reasoning, of where the AI Integration Auditor will correctly identify the embedded protected core and where it will fail — and the integration decision most likely to mislead the AI. The prediction document will be returned for comparison with actual audit results in Week 12. Due end of Week 10.

Week 11 ★ · Peer Domain Review · Chapter 11
By the end of this week: Function as both reviewer of another student's integration plan and designer receiving reviewer feedback — and produce specific, located analysis rather than general impressions.
Session A · The Act One overreach case reviewed by everyone using the formal feedback instrument. The overreach case's failures are designed to be visible — if students produce vague feedback on it, the instrument needs refinement before peer domain review begins.
Session B · Peer domain review sessions in Domain Lab format — the structured feedback instrument applied to each student's integration plan.
Domain Lab · Domain Lab Assignment #8 — peer domain review feedback analysis.
⚠ Act Two Gate. Students without a field-testable beta integration plan and practitioner field-test data cannot meaningfully complete the Week 12 AI Integration Audit session. The gap analysis will have only one side.
⚑ Domain Lab Assignment #8 (25 pts)
Produce: (1) formal reviewer feedback on one peer's integration plan using the handoff/protect feedback instrument — specific, located, referenced to integration decisions; (2) analysis of the feedback received on your own plan, distinguishing between reports of practitioner experience and reports of structural observation, identifying which type is more useful for the Week 13 gap analysis; (3) evaluation of the peer review data against the prediction from the Beta Integration Plan Checkpoint — where the prediction was accurate and where practitioner experience diverged from the expected audit result. Due before Domain Lab, Week 12.
Act Three · Weeks 12–15 · Chapters 12–15

Audit and Analysis

The AI Integration Audit session, the gap analysis, and the deployed cases analysis. You enter Act Three with a beta integration plan, a completed AI Integration Audit preparation document, and practitioner field-test data. You leave with the gap analysis — the document that names, precisely, where structural analysis diverges from embodied teaching reality, and why. Act Three does not give you new tools. It gives you the question the tools were built to answer.

Week 12 · The AI Integration Audit · Chapter 12
By the end of this week: Submit your integration plan to the AI Integration Auditor and evaluate the audit report against both design intent and practitioner field-test data.
Session A · The Boyle System documentation submitted to the AI Integration Auditor as a shared class exercise — live, visible to everyone. The class discusses the results before individual sessions begin. The AI's structural competence demonstrated at scale. Its limits become visible in the same session — specifically, what it cannot find because finding it requires a body.
Session B · Individual or small-group AI Integration Audit sessions. Students submit the beta integration plan and preparation document, receive structural analysis.
Domain Lab · AI Integration Audit sessions continued. Audit reports collected.
Note on Weeks 12–13: The AI Integration Audit session and gap analysis drafting feed directly into the Final Submission. They are structured work sessions, not separately graded deliverables. All work from these weeks is assessed as part of the Final Submission.
Week 13 ★ · The Gap Analysis · Chapter 13
By the end of this week: Construct the gap analysis — the course's central deliverable — tracing specific integration decisions to specific divergences between AI audit findings and practitioner field-test experience.
Session A · Week 1 domain descriptions distributed. Students read their own pre-vocabulary description of what their domain teaches. Ask: what can you name now that you could only feel then? What can you still only feel?
Session B · Gap analysis writing session. The arc made explicit: described it without naming it → named it with the Tier 2 framework → built an integration plan → the AI evaluated the structure → now name the gap between the structure and the embodied teaching reality.
Domain Lab · Gap analysis drafting with TA feedback.
⚠ The Week 1 domain descriptions are returned to students at the start of this session. Students read their own words from before they had vocabulary. The gap analysis begins there and ends with the AI audit findings from Week 12.
Week 14 · Deployed Cases I: Reading as a Designer · Chapter 14
By the end of this week: Apply the course's full analytical framework to two published AI integration deployments in embodied domains — reading as a domain analyst rather than a user.
Session A · Deployed Case A — an AI integration program in a clinical or simulation domain (case distributed Week 14). At what point in the integration did the program reach the protected core? Was that the line the designers intended? How would you know?
Session B · Deployed Case B — an AI integration program in a trades or CTE domain (case distributed Week 14). Does the program's claimed capacity return match the practitioner experience reported in evaluation data? The evaluation is genuinely contested. Bring the contested reading into class.
Domain Lab · Domain Lab Assignment #9 — deployed cases analysis.
⚑ Domain Lab Assignment #9 (25 pts)
Apply the AI Integration Audit framework to Deployed Case A, producing a structural audit report in the format used in Week 12 — and compare the result against documented practitioner experience data from published evaluation reports. Then: evaluate whether Deployed Case B achieves genuine practitioner capacity return or produces a different outcome. Position required, defended with specific integration decisions and practitioner feedback evidence. Due before Domain Lab, Week 15.
Week 15 ★⚑ · Deployed Cases II + Synthesis · Chapter 15
By the end of this week: Complete the deployed cases analysis and articulate the course's central claim as a general design principle — with evidence from your own integration plan and the deployed cases.
Session A · Deployed Case C — an AI integration program in early childhood or special education (case distributed Week 15). Does it correctly locate the protected core of its domain, or does it cross it? "Correctly locates the protected core" is acceptable if the student can specify how and why the designers made that judgment.
Session B · Final presentations or written submission with recorded walkthrough. The terminal outcome: one sentence that a designer could use to distinguish an AI integration plan that protects embodied teaching capacity from one that displaces it. Students write it before the lecture. The lecture is the attempt to make the sentence precise enough to be useful.
Domain Lab · Open session — TA available for final questions. No new material.

⚑ Final Project — Final Submission (250 pts)

Required sections:

  • Gap analysis — at least three specific integration decisions traced to specific divergences between AI audit findings and practitioner field-test experience; one named embodied variable per divergence that the structural analysis could not reach
  • One judgment call — the single integration decision in the semester project that required your domain knowledge, your understanding of what bodies need to develop, or your accountability for a practitioner's teaching capacity that an AI could not have made; specify the integration decision, the alternative the AI would have generated, and why that alternative would have failed the capacity return goal
  • One integration revision proposal — the specific change that would close the largest gap between structural legibility and genuine capacity return; the prediction must be falsifiable
  • Deployed cases comparative analysis — two deployed cases' integration architectures compared; which integration decisions protect embodied teaching capacity, which displace it; connected to your own gap analysis
  • The general principle — the gap between AI-auditable integration architecture and irreplaceable embodied teaching stated as a general design principle in one sentence, with evidence from the AI Integration Audit session, the gap analysis, and at least one deployed case. If it cannot be stated in one sentence, it is not yet a principle.
  • Week 1 domain description reflection — one paragraph connecting what you described in Week 1 before vocabulary to what you can name now; what you could only feel then that you can now demonstrate, and what you can still only feel
  • Peer review response — written response to peer feedback received during Week 11 domain review, submitted with the final project

Due end of Week 15.

Schedule at a Glance

Week | Chapter | Act | Major Deliverable | Points
1 | Ch. 1 — The Tier 2 Problem | One | RR #1 (domain description) + Domain Lab #1 | 30 + 25
2 | Ch. 2 — Legible Documentation | One | RR #2 + Domain Lab #2 | 30 + 25
3 | Ch. 3 — What Lives in the Hands | One | Domain Lab #3 | 25
4 | Ch. 4 — Kinesthetic Intelligence | One | RR #3 + Domain Lab #4 | 30 + 25
5 | Ch. 5 — The Integration Map | One | Midterm + RR #4 | 100 + 30
6 | Ch. 6 — Integration Plan Lock | Two | Integration Plan Lock Checkpoint | 100
7 | Ch. 7 — Documentation Layer | Two | Domain Lab #5 | 25
8 | Ch. 8 — Assessment Scaffold Layer | Two | Domain Lab #6 | 25
9 | Ch. 9 — The Field Test | Two | RR #5 + Domain Lab #7 | 30 + 25
10 | Ch. 10 — Beta Integration Plan | Two | Beta Integration Plan Checkpoint | 100
11 | Ch. 11 — Peer Domain Review | Two | Domain Lab #8 | 25
12 | Ch. 12 — AI Integration Audit | Three | (Audit session — feeds Final) | –
13 | Ch. 13 — Gap Analysis | Three | (Gap analysis drafting — feeds Final) | –
14 | Ch. 14 — Deployed Cases I | Three | Domain Lab #9 | 25
15 | Ch. 15 — Through-Line + Synthesis | Three | Final Submission | 250

Domain Lab participation (100 pts) assessed continuously across all 15 weeks. Lowest Domain Lab Assignment dropped — 8 of 9 count toward final grade.

Section 8

Course Policies

Attendance and Participation

This course has three weekly contact points: two lecture/seminar sessions and one TA-led Domain Lab. Each serves a different function; they are not interchangeable, and missing one is not equivalent to missing another.

Per College of Engineering MGEN policy, students are allowed a maximum of two absences per course; three or more absences result in an F. More than three unexcused Domain Lab absences will result in a failing participation grade regardless of Quality/Portfolio score. The Week 11 peer domain review session is not replicable outside the lab — its data is required for the gap analysis.

Students who do not attend during the first week risk being dropped from the course. Week 1 contains the domain description collection that is required for Week 13 — missing Week 1 creates a structural gap in the course's primary deliverable.

Please inform me of any anticipated absence before class. Participation means engagement — analyzing, specifying, field-testing, reviewing peer work, asking structural questions about integration architecture, and connecting today's framework concept to your domain's teaching practice. Physical presence without engagement does not count as participation.

Late Work

Domain Lab Assignments feed the following week's integration work. A late submission that arrives after the lab session has already met misses the feedback loop it was designed to feed.

Academic Integrity

What you submit is supposed to represent your domain knowledge, your integration decisions, your argument for why this workflow protects the teaching capacity only your presence can supply. Submitting borrowed integration analysis is not just an integrity violation — it is practicing the appearance of domain expertise rather than demonstrating it.

Violations include: submitting AI-generated work without citation; using another student's integration plan, gap analysis, or protected core specification without attribution; and submitting work substantially similar to a peer's submission. All violations will be reported to OSCCR. No exceptions.

Collaboration policy: You are encouraged to discuss concepts, frameworks, and integration strategies. You may not share integration plans, gap analyses, protected core specifications, or field-test data. Work you submit with your name on it must reflect your own domain reasoning in your own words. If you collaborated on ideas, list your collaborators clearly.

If you are unsure whether something crosses a line — ask. I would rather answer that question than navigate a violation.

Generative AI Policy

You are encouraged to use generative AI tools in this course. This is not a reluctant permission — it is the pedagogical thesis.

Use AI to generate case materials for your domain. Accept the output. Then ask: does this work analytically but fail pedagogically for my domain? That question — and your answer to it — is the irreducibly human domain judgment the course is building.

Use Claude as the AI Integration Auditor in Week 12. Read its structural analysis. Then ask: what did it find that was correct? What did it miss? What could it never find, because finding it requires a body and years of embodied practice in this domain? The gap between those two questions is the sentence you will write in Week 15.

Every submission requires the AI Use Disclosure block specified in Section 6. Undisclosed AI use is an academic integrity violation. The TA or instructor may ask you to walk through and explain any part of your submitted work.

Instructor disclosure: I use generative AI tools in developing this course — for drafting domain case scenarios, generating first-pass integration plan structures that I then evaluate and revise against domain expertise, and editing course materials. I document my own AI use in the same format I am asking of you.

Incomplete Grades

An incomplete grade may be reported when a student has failed to complete a major course component. Missing work must be submitted within 30 days of the term's end or the agreed-upon due date, or it receives no credit. Contact the instructor before the final week if circumstances warrant discussion.

Irreducibly Human: What AI Can and Can't Do — Embodied Teaching
Syllabus v1.0 · Nik Bear Brown · Northeastern University · Fall [Year]

This syllabus reflects course information as of the distribution date. Learning outcomes, assessment architecture, and policies are stable. If meeting times, room assignments, or textbook availability change, updates will be posted to Canvas and communicated by email with at least one week's notice.