Irreducibly Human Series · Northeastern University · College of Engineering

Causal Reasoning: the identification layer no algorithm supplies

What AI can and can't do — building a defensible model of what causes what

Course [XXXX 5XXX]  ·  4 Credit Hours  ·  Fall [Year]  ·  In-person
Instructor: Nik Bear Brown  ·  ni.brown@neu.edu
Version 1.0  ·  [Distribution Date]  ·  Reviewed by Dev the Dev

Contents

  1. Welcome
  2. The Irreducibly Human Series
  3. Course Information
  4. Learning Outcomes
  5. Required Materials
  6. Assessment and Grading
  7. Course Schedule
  8. Course Policies
Section 1

Welcome

I've spent years watching capable engineers make confident causal claims from data that couldn't support them — not because they weren't smart, but because no one had ever taught them the layer of reasoning that sits between the data and the claim. They knew how to build the model. They didn't know how to ask whether the model was measuring what they thought it was measuring.

That gap is what this course closes.

Causal AI tools are genuinely powerful. They can estimate effects, run sensitivity analyses, and produce clean output with narrow confidence intervals. What they cannot do is draw your causal graph, choose what to condition on, or defend the assumptions that make a result trustworthy. That layer — the identification layer — requires domain expertise that no algorithm supplies. It requires you to know your field well enough to argue, in writing, why the arrows in your causal model point the way they do. No training data replaces that. No model learns it. It is the irreducibly human part.

This course teaches you to perform it.

You will leave this course able to build a defensible causal model for a problem in your own engineering domain, hand it off to an estimation tool with confidence, and evaluate whether the result should be trusted. More importantly, you will be able to answer — clearly, in a job interview or a boardroom — the question that separates engineers who use AI well from engineers who use it confidently and incorrectly: "Is what your system is measuring actually causing the outcome, or just correlated with it?"

The course is demanding in a specific way. It will not ask you to memorize frameworks or reproduce procedures. It will ask you to make judgment calls, defend them to skeptical peers, and revise them when the argument doesn't hold. That is harder than a methods course — and more valuable.

I am looking forward to the reasoning you will bring from your domains. The cases you work through in your own field will teach your classmates things I cannot. That exchange is the course working as intended.

Here is what to do before we meet: read Chapter 1 of Irreducibly Human: Causal Reasoning. Come to Session A with the case in your head. You don't need to understand it yet. You just need to feel the problem.

— Nik Bear Brown | ni.brown@neu.edu
Section 2

The Irreducibly Human Series

We are in the early years of the most powerful cognitive tools ever built. AI systems are superhuman at pattern recognition, fact retrieval, arithmetic, and syntactic correctness. They are genuinely poor at constructing causal models, formulating the right questions, auditing the plausibility of their own outputs, and knowing when not to proceed.

The Irreducibly Human series develops exactly those capacities — the forms of reasoning that AI tools require humans to supply, and that your competitors who only learned to use the tools will not have.

This course — Causal Reasoning — develops one specific, high-value cognitive skill: the ability to build a defensible model of what causes what in your domain, and to know what that model can and cannot support. It is not a course about causal AI tools. It is a course about the thinking those tools cannot do, that you will need to do every time you use them.

The companion course — Conducting AI — builds the broader metacognitive and supervisory toolkit: problem formulation, plausibility auditing, interpretive judgment, and tool orchestration. The two courses can be taken in either order; they are designed to complement each other, not to require one another.

Section 3

Course Information

Course Identifiers

Course Title: Irreducibly Human: What AI Can and Can't Do — Causal Reasoning
Course Number: [XXXX 5XXX — assigned at CourseLeaf submission]
Credit Hours: 4
Term: Fall [Year]
Mode of Delivery: In-person
Components: Lecture/Seminar (1× weekly) + TA-led DAG Workshop (1× weekly in-class lab)
Department: College of Engineering

Meeting Information

Lecture/Seminar: [TBD]  ·  Location: [Building, Room]
DAG Workshop (TA-led): [TBD]  ·  Location: [Building, Room]

The DAG Workshop is a required course component, not an optional recitation. It is where concepts become skills. Missing the workshop is not equivalent to missing a lecture — it is missing the part of the course where learning consolidates.

Instructor

Name: Nik Bear Brown
Email: ni.brown@neu.edu
Response time: Within 48 hours on weekdays. Put URGENT in the subject line for time-sensitive questions.
Office / Zoom: [TBD]
Student hours: [Days, times, location] — booking link TBD
Preferred contact: Email for logistics. Student hours for anything that takes more than two sentences to answer well.

I hold student hours for you — not only for students with emergencies. Come because you're uncertain about your DAG, because you want to think through your domain problem before committing to it for the final project, because you want to understand where this field is going, or simply because you want to know what this work looks like in practice. The most productive conversations I have with students happen outside scheduled sessions.

Teaching Assistant

Name: TBA
Email: TBA
DAG Workshop hours: TBA

The TA runs the weekly DAG Workshop — designing critique sessions, facilitating peer review, and returning written feedback on workshop submissions. For questions about DAG construction, node identification, and backdoor criterion application, the TA is your first resource. Programming and tool questions go to the TA first; if unresolved, the TA will forward to the professor.

Prerequisites

Official prerequisites: Graduate standing in Engineering or related field (exact CourseLeaf string TBD)

What this course assumes you know

You have completed at least one applied statistics or machine learning course at the graduate level. You are comfortable reading and writing Python. You have encountered regression — you know what a coefficient means in practice, even if you could not derive it from first principles.

What this course does not assume

Prior knowledge of causal inference, DAGs, or graph theory. No econometrics. No measure-theoretic probability. If you have read Pearl's The Book of Why — useful background, not required.

A note for students with strong ML backgrounds

Students who arrive most confident in their modeling skills sometimes find the early weeks the most disorienting. That disorientation is the course working as intended. The identification layer is not a harder version of what you already know — it is a different cognitive operation. Students who treat it as an extension of their modeling fluency will struggle more than students who approach it as genuinely new terrain.

If you are missing a prerequisite, contact the instructor or your advisor before the first week. This course builds on applied quantitative fluency from Session 1 — there is no ramp.

Section 4

Learning Outcomes

By the end of this course, students will be able to:

  1. Distinguish statistical association from causal effect, naming the assumption required to move from one claim to the other
  2. Identify the identification layer within a causal analysis workflow and name decisions within it that require domain judgment
  3. Diagnose a causal claim as well-identified or under-identified, specifying which assumption is load-bearing and where it could fail
  4. Construct a directed acyclic graph (DAG) for a domain problem, correctly placing confounders, mediators, and colliders with every arrow stated as a causal claim
  5. Distinguish confounders, mediators, and colliders by structural position and predict the consequence of conditioning on each
  6. Apply the backdoor criterion to derive a valid adjustment set for a given DAG
  7. Defend the assumptions encoded in a DAG to a skeptical collaborator, with explicit plausibility rankings
  8. Translate a completed DAG into an estimation specification document for a causal tool
  9. Evaluate causal estimation tool output against the original DAG's assumptions
  10. Design a complete causal analysis plan for a novel domain problem, from DAG construction through output evaluation
  11. Assess whether a causal analysis should be attempted or reported as definitive given the available data and assumptions
Section 5

Required Materials

Textbook

Title: Irreducibly Human: What AI Can and Can't Do — A Practical Guide to Causal Inference for Domain Experts
Author: Nik Bear Brown
Publisher: Bear Brown & Company / Kindle Direct Publishing, 2026
Availability: [Amazon Kindle / print link — TBD at publication]
Cost: [TBD]
Edition: First edition. No prior edition exists.

Supplementary Readings

Distributed via Canvas throughout the semester at no cost. Required supplementary readings are marked [Required] in the weekly schedule; optional readings are marked [Recommended] and are genuinely optional.

Required Technology

DAG drawing tools (free, browser-based — no installation required)

Hand-drawn submissions accepted in workshop sessions only. All graded assignments require digital submission.

Causal estimation (Act Three — Weeks 12–15 only)

Course platforms

Section 6

Assessment and Grading

Point Summary

Assessment · Points · Quality/Portfolio
Reading Responses (5 × 30 pts) · 150 · ✓ 20 pts each
Weekly DAG Assignments (8 × 25 pts, drop lowest of 9) · 200 · ✓ 20 pts each
DAG Workshop Participation · 100 · ✓ 20 pts component
Midterm · 100
Final Project — DAG Draft Checkpoint · 100 · ✓ 20 pts
Final Project — Specification Checkpoint · 100 · ✓ 20 pts
Final Project — Final Submission · 250 · ✓ 20 pts
Total · 1000

AI-Based Grading Approach

800+ points — relative scale:
Top 25% · A
Next 25% · A–
Next 25% · B+
Final 25% · B

Below 800 — absolute scale:
780–799 · C+
730–779 · C
700–729 · C–
600–699 · D
Below 600 · F
Students below 800 points cannot earn a grade higher than B–, even if the relative curve would otherwise place them higher. The instructor reserves the right to make minor adjustments for fairness.

Quality/Portfolio Score (20 points — on all qualifying assignments)

Every assignment carrying the Quality/Portfolio component is evaluated on a relative 20-point scale comparing your work to peers, emphasizing depth of causal reasoning, quality of domain judgment, and evidence that the identification layer was performed by you — not delegated to a tool.

Percentile Band · Score
Bottom 25% · 5 pts
26th–50th percentile · 10 pts
51st–75th percentile · 15 pts
Top 25% · 20 pts

AI Use in Assignments

You are encouraged to use generative AI tools on every assignment. Citation is required. Undisclosed AI use is an academic integrity violation. Disclosed AI use is not.

Every submission must include an AI Use Disclosure block:

AI USE DISCLOSURE
Tool(s) used:
Portions assisted:
How used:
What I changed:
What the AI could not do: [name at least one identification decision
that required your domain knowledge — this field is not optional]
The last field is the Irreducibly Human declaration. A disclosure that cannot name one thing the AI could not do has not demonstrated that the student performed the identification layer.

Drop Policy

The lowest-scoring DAG assignment is dropped. Eight of nine assignments count. This absorbs one week where the concept didn't click. It does not absorb a pattern of non-engagement.

Section 7

Course Schedule

The schedule maps each week to a chapter in Irreducibly Human: Causal Reasoning. Read the assigned chapter before Session A. Come to Session A with the case in your head. Come to Session B ready to use the concept. Come to the workshop ready to draw.

Reading time per chapter: approximately 45–75 minutes  ·  ⚑ = graded deliverable due  ·  ★ = transition week

Act One · Weeks 1–4 · Chapters 1–4

Establish

What breaks when causal reasoning is absent — and why it matters for the work you already do

Week 1 · The Decision That Looked Right · Chapter 1
By the end of this week: Describe the difference between a pattern in data and a causal claim — using an example from your own domain.
Session A: In medias res — one complete causal failure, no definitions yet.
Session B: The hidden variable named; the three-node structure drawn together.
DAG Workshop: Draw the Ch. 1 case DAG; label every arrow as a claim.
⚑ Reading Response #1 (30 pts)
Describe a causal claim from your own engineering domain. What data supports it? What would have to be true for the data to be misleading? Due before Session A, Week 2.
Week 2 · Three Words for the Same Problem · Chapter 2 · Conditioning · Confounding · Controlling For
By the end of this week: Translate a causal claim from your domain into all three disciplinary registers.
Session A: Conditioning, confounding, controlling for — one operation, three registers; aspirin/headache case.
Session B: Why the vocabulary isn't neutral; mobile app/demographics case.
DAG Workshop: Draw the aspirin/headache DAG; label every arrow.
⚑ Reading Response #2 (30 pts)
Take the causal claim from RR1. Describe it in all three registers. What does each framing reveal that the others obscure? Due before Session A, Week 3.
Week 3 ★ · The Map Before the Territory · Chapter 3 · An Introduction to Directed Acyclic Graphs
By the end of this week: Draw a DAG for a known domain problem, label every arrow as a causal claim, and identify missing arrows as assumptions by omission.
Session A: What a DAG is and isn't; London Underground analogy; graph literacy primer.
Session B: What a DAG assumes by omission; the job training case.
DAG Workshop: Weekly DAG Assignment #1 — scaffolded DAG construction.
⚑ Weekly DAG Assignment #1 (25 pts)
Domain, variables, and causal direction provided. Draw the graph, label every arrow as an explicit causal claim, identify two missing arrows as assumptions by omission. Due before DAG Workshop, Week 4.
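For students who like seeing structure in code: a DAG can be encoded as a plain adjacency mapping, and "acyclic" can be checked mechanically. This is an illustrative sketch only — the node names are hypothetical (loosely echoing a job-training-style case), and no graded work requires this.

```python
# A DAG as a plain adjacency mapping: each key lists the nodes its arrows point to.
# Node names are hypothetical illustrations, not the textbook's exact case.
dag = {
    "Motivation": ["Training", "Earnings"],  # points into both treatment and outcome
    "Training": ["Earnings"],
    "Earnings": [],
}

def is_acyclic(graph):
    """Return True if the directed graph contains no cycle (depth-first search)."""
    visiting, done = set(), set()

    def visit(node):
        if node in done:
            return True
        if node in visiting:
            return False  # walked back into a node still on the stack: a cycle
        visiting.add(node)
        for child in graph.get(node, []):
            if not visit(child):
                return False
        visiting.remove(node)
        done.add(node)
        return True

    return all(visit(n) for n in graph)
```

Adding an arrow Earnings → Motivation would make `is_acyclic` return False — which is the point: a DAG forbids feedback loops by construction, and every arrow you do draw is a causal claim you must defend.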
Week 4 ★⚑ · The Identification Layer: What Only You Can Do · Chapter 4 · The Decisions That Require Domain Expertise
By the end of this week: Name the identification layer, distinguish it from the estimation layer, and describe the three identification failure types using an unseen case.
Session A: The vendor demo case — what the tool received and what it assumed.
Session B: The three identification failures; the irreducibly human layer named explicitly.
DAG Workshop: Annotate a given DAG: explicit claim / default assumption / unconsidered.

⚑ Midterm (100 pts)

Two unseen causal scenarios. For each: draw the implied DAG, diagnose the identification failure, name the load-bearing assumption, explain what domain knowledge would address it. No definitions asked. No recall tested.

⚑ Reading Response #3 (30 pts)
What is one decision in your final project domain problem that belongs to you and cannot be delegated to a tool? Name it specifically. Due before Session A, Week 5.
Act Two · Weeks 5–11 · Chapters 5–9

Build

The identification toolkit — built piece by piece through cases you recognize. You enter Act Two able to name the identification layer. You leave Act Two able to perform it.

Week 5 · Confounders: The Variable You Forgot · Chapter 5
By the end of this week: Find confounders in a DAG using the three questions, identify backdoor paths, and determine a valid adjustment set.
Session A: Structural definition — the three questions; backdoor paths introduced intuitively.
Session B: The confounder you can't measure; bias direction.
DAG Workshop: Weekly DAG Assignment #2 — confounder identification.
⚑ Weekly DAG Assignment #2 (25 pts)
Given domain and variable list. Apply the three questions, draw backdoor paths, determine adjustment set, name any unmeasured confounder with bias direction. Due before DAG Workshop, Week 6.
Week 6 · Mediators: The Variable You Shouldn't Touch · Chapter 6
By the end of this week: Identify mediators by structural position and make an explicit analytical choice between total and direct effect estimation.
Session A: What conditioning on a mediator does; workplace wellness case.
Session B: Total vs. direct effect; when you actually want the direct effect.
DAG Workshop: Weekly DAG Assignment #3 — confounder vs. mediator distinction.
⚑ Weekly DAG Assignment #3 (25 pts)
Given DAG with one confounder and one mediator. Identify each by structural position. Predict consequence of conditioning on each. State which effect to estimate and why. Due before DAG Workshop, Week 7.
Week 7 ★ · Colliders: The Variable That Breaks Everything (Part 1 of 2) · Chapter 7
By the end of this week: Recognize that a spurious association can be created — not merely revealed — by conditioning.
Session A: The hiring puzzle — case only, no definition. Session ends with the question open.
Session B: Structural definition emerges from the puzzle; path opening named.
DAG Workshop: Weekly DAG Assignment #4 — find the collider without being told where it is.
⚠ Hardest conceptual week in the course. The definition is withheld until Session B deliberately. If the case doesn't produce discomfort before the concept is named, the concept will not stick. Read Chapter 7 only through "The Structural Definition" before Session A. Read the rest after.
⚑ Weekly DAG Assignment #4 (25 pts)
Given domain scenario with a hidden collider. Identify it by structural reasoning. Draw the DAG. Label the path that conditioning opens. Explain in one paragraph why the spurious association is created by conditioning — not present in the underlying data. Due before DAG Workshop, Week 8.
Week 8 · Colliders: The Variable That Breaks Everything (Part 2 of 2) · Chapter 7 continued
By the end of this week: Recognize selection bias as a structural collider problem — and explain why a larger sample does not fix it.
Session A: The obesity paradox — second collider instance; hospitalization as collider.
Session B: Selection bias is collider bias; M-bias introduced; AI training data implication.
DAG Workshop: Weekly DAG Assignment #5 — selection bias as structural collider bias.
⚑ Weekly DAG Assignment #5 (25 pts)
Given study with defined sample restriction. Draw full DAG including sampling mechanism. Identify the collider. Explain bias direction. Explain why increasing sample size within the restricted population does not resolve the problem. Due before DAG Workshop, Week 9.
⚑ Reading Response #4 (30 pts)
Describe one place in your engineering domain where selection into a sample might be a collider. Name it and explain what would have to be true for it to be a problem. Due before Session A, Week 9.
Week 9 ★ · The Backdoor Criterion: Closing the Paths That Don't Belong (Part 1 of 2) · Chapter 8
By the end of this week: Trace all backdoor paths in a complex DAG and apply the first condition of the backdoor criterion.
The backdoor criterion is introduced here as relief — you have been building intuition about path blocking for four weeks. This is the procedure that replaces guesswork.
Session A: The problem Act Two created; two conditions in plain language; path-tracing introduced.
Session B: Path-tracing practiced on two cases; sensitivity seed planted — deferred explicitly to Week 14.
DAG Workshop: Weekly DAG Assignment #6 — path tracing only; listing is the deliverable.
⚑ Weekly DAG Assignment #6 (25 pts)
Given six-node DAG. List every backdoor path. For each: name every node, identify its type, state whether open or closed. Do not derive the adjustment set yet. Due before DAG Workshop, Week 10.
⚑ Reading Response #5 (30 pts)
Two-register writing exercise. Describe the key causal relationship in your final project domain in one technical sentence (for a statistician) and one plain-language sentence (for a product manager). The two must differ in framing and intellectual structure — not just vocabulary. Due before Session A, Week 10.
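Path listing — the Week 9 deliverable — is mechanical enough to sketch in code. The helper below enumerates every backdoor path (any path that begins with an arrow into the treatment) in a DAG given as an adjacency mapping. It is a teaching sketch with hypothetical node names; it lists paths but deliberately does not judge them open or closed, just like DAG Assignment #6.

```python
def backdoor_paths(dag, treatment, outcome):
    """List backdoor paths as node sequences. dag maps each node to its children."""
    # Build the undirected neighbor map so paths can traverse arrows either way.
    neighbors = {}
    for parent, children in dag.items():
        for child in children:
            neighbors.setdefault(parent, []).append(child)
            neighbors.setdefault(child, []).append(parent)

    paths = []

    def walk(node, path):
        if node == outcome:
            paths.append(path)
            return
        for nxt in neighbors.get(node, []):
            if nxt not in path:          # simple paths only: no repeated nodes
                walk(nxt, path + [nxt])

    # A backdoor path must start along an edge pointing INTO the treatment,
    # so begin the walk only at the treatment's parents.
    parents = [p for p, cs in dag.items() if treatment in cs]
    for p in parents:
        walk(p, [treatment, p])
    return paths

# Hypothetical three-node case: Z confounds X and Y.
dag = {"Z": ["X", "Y"], "X": ["Y"]}
```

Here `backdoor_paths(dag, "X", "Y")` finds the single path X ← Z → Y. Deciding whether each listed path is open or closed — and what to condition on — still requires classifying every node on it, which is where your Weeks 5–8 judgment comes back in.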
Week 10 · The Backdoor Criterion: Closing the Paths That Don't Belong (Part 2 of 2) · Chapter 8 continued
By the end of this week: Derive a minimal valid adjustment set — and recognize when no valid adjustment set exists.
Session A: Minimal adjustment set; why less is sometimes more; finance analyst case.
Session B: When no valid adjustment set exists; unidentifiable effects.
DAG Workshop: Weekly DAG Assignment #7 — full backdoor criterion application.
⚑ Weekly DAG Assignment #7 (25 pts)
Given complex DAG. Apply both conditions. Derive minimal valid adjustment set. Verify conditions (a) and (b). State whether the effect is identifiable and why. Due before DAG Workshop, Week 11.
Week 11 ★⚑ · Defending Your DAG: What You're Claiming, Assuming, and Leaving Open · Chapter 9
By the end of this week: Construct a three-part DAG defense in two registers — and submit your final project DAG draft for feedback.
Session A: The three-part defense structure — explicit claims, ranked assumptions, open questions.
Session B: Two-register translation; a failing example shown alongside the model.
DAG Workshop: Defense writing for final project DAG; TA provides written feedback.

⚑ Final Project — DAG Draft Checkpoint (100 pts)

Your domain problem DAG with three-part defense: (1) every arrow stated as a causal claim; (2) missing arrows listed and ranked by plausibility; (3) unmeasured confounders named with bias directions. One paragraph each in technical and plain-language registers. Due end of Week 11.

Act Three · Weeks 12–15 · Chapters 10–13

Apply

The identification toolkit deployed — answers get less clean, and that is the point. You enter Act Three with a defended DAG. You leave with a complete causal analysis you can put in a portfolio and discuss in a job interview. Act Three stops giving you well-formed problems and starts giving you the kind of problems you will actually encounter — cases that recombine earlier concepts rather than introducing new structural problems. That is not easier. It is harder in the way that matters.

Week 12 ★⚑ · From DAG to Data: What the Machine Needs · Chapter 10
By the end of this week: Translate your defended DAG into an estimation specification document that preserves every identification decision through the handoff to a tool.
Session A: The handoff failure — what gets lost in translation.
Session B: The specification document; three handoff failures named; DoWhy sidebar.
DAG Workshop: Write a specification document for a given DAG; identify what tool defaults would change.

⚑ Final Project — Specification Checkpoint (100 pts)

Revised DAG (incorporating Week 11 feedback) + complete estimation specification: treatment variable, outcome variable, adjustment set with justification, identification assumptions, "do not add" list. Due end of Week 12.

Week 13 · Reading the Output: What to Trust and What to Interrogate · Chapter 11
By the end of this week: Apply a three-question diagnostic to any causal estimation output and name four things the output cannot tell you.
Session A: The three questions; why clean formatting and narrow CIs don't address identification.
Session B: What the output cannot tell you — four things; paid search incrementality case.
DAG Workshop: Weekly DAG Assignment #8 — output evaluation.
⚑ Weekly DAG Assignment #8 (25 pts)
Given causal estimation output and original specification. Apply the three-question diagnostic. Identify at least one discrepancy between specification and implementation. State what the output can and cannot support as a causal claim. Due before DAG Workshop, Week 14.
Week 14 · When the Assumptions Don't Hold: Limits, Sensitivity, and Honesty · Chapter 12
By the end of this week: Calculate an E-value, interpret it as a comparator against domain knowledge, and write a qualified conclusion.
Session A: The E-value as comparator — the sensitivity question from Week 9 answered here.
Session B: When the analysis should not proceed; the honest qualified conclusion.
DAG Workshop: Weekly DAG Assignment #9 — sensitivity and qualified conclusion.
⚑ Weekly DAG Assignment #9 (25 pts)
Given domain scenario, causal estimate, and E-value. Compare against domain knowledge about likely confounders. Write qualified conclusion in two registers. State whether the analysis supports a definitive recommendation and why. Due before DAG Workshop, Week 15.
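The E-value itself is a one-line calculation — the standard VanderWeele-Ding formula for a risk ratio. The arithmetic is trivial; the interpretation against your domain knowledge is the week's actual work:

```python
import math

def e_value(rr):
    """E-value for a risk ratio (VanderWeele & Ding): the minimum strength of
    association, on the risk-ratio scale, that an unmeasured confounder would
    need with BOTH treatment and outcome to explain away the observed estimate."""
    if rr < 1:
        rr = 1 / rr  # for protective estimates, take the reciprocal first
    return rr + math.sqrt(rr * (rr - 1))
```

For an observed risk ratio of 2.0, the E-value is 2 + √2 ≈ 3.41: an unmeasured confounder would need associations of roughly 3.4 with both treatment and outcome to fully account for the result. Whether a confounder that strong is plausible in your domain is a question the formula cannot answer — that comparison is yours.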
Week 15 ★⚑ · The Full Analysis: One Problem, Every Decision · Chapter 13
By the end of this week: Present a complete causal analysis plan for your own domain problem — every identification decision made explicitly, every limit named honestly.
Session A: Structured peer review — each student reviews one peer's specification and defense using the three-question diagnostic.
Session B: Final presentations or written submission with recorded walkthrough.
DAG Workshop: Open session — TA available for final questions; no new material.

⚑ Final Project — Final Submission (250 pts)

Complete causal analysis plan: domain question and specification; DAG with three-part defense; identification decisions stated as decisions; estimation specification; output evaluation; sensitivity analysis with E-value and domain argument; qualified conclusion in two registers. Due end of Week 15.

Schedule at a Glance

Week · Chapter · Act · Major Deliverable · Points
1 · Ch. 1 — The Decision That Looked Right · One · Reading Response #1 · 30
2 · Ch. 2 — Three Words for the Same Problem · One · Reading Response #2 · 30
3 · Ch. 3 — The Map Before the Territory · One · Weekly DAG #1 · 25
4 · Ch. 4 — The Identification Layer · One · Midterm + RR #3 · 100 + 30
5 · Ch. 5 — Confounders · Two · Weekly DAG #2 · 25
6 · Ch. 6 — Mediators · Two · Weekly DAG #3 · 25
7 · Ch. 7 — Colliders (Part 1) · Two · Weekly DAG #4 · 25
8 · Ch. 7 — Colliders (Part 2) · Two · Weekly DAG #5 + RR #4 · 25 + 30
9 · Ch. 8 — Backdoor Criterion (Part 1) · Two · Weekly DAG #6 + RR #5 · 25 + 30
10 · Ch. 8 — Backdoor Criterion (Part 2) · Two · Weekly DAG #7 · 25
11 · Ch. 9 — Defending Your DAG · Two · DAG Draft Checkpoint · 100
12 · Ch. 10 — From DAG to Data · Three · Specification Checkpoint · 100
13 · Ch. 11 — Reading the Output · Three · Weekly DAG #8 · 25
14 · Ch. 12 — When Assumptions Don't Hold · Three · Weekly DAG #9 · 25
15 · Ch. 13 — The Full Analysis · Three · Final Project Submission · 250

DAG Workshop participation (100 pts) assessed continuously across all 15 weeks. Lowest DAG assignment dropped — 8 of 9 count toward final grade.

Section 8

Course Policies

Attendance and Participation

This course has three weekly contact points: two lecture/seminar sessions and one TA-led DAG Workshop. Each serves a different function. Missing any of them is not equivalent to missing the same thing twice.

Per College of Engineering MGEN policy, students are allowed a maximum of two absences per course; three or more absences result in an F. Separately, more than three unexcused DAG Workshop absences will result in a failing participation grade regardless of Quality/Portfolio score.

Students who do not attend during the first week risk being dropped from the course. Please inform me of any anticipated absence before class.

Participation means engagement — drawing, revising, critiquing, asking structural questions, and connecting today's concept to your domain. Physical presence without engagement does not count as participation.

Late Work

DAG assignments feed the following week's workshop. A late submission that arrives after the workshop has missed the feedback loop it was designed to produce.

Academic Integrity

What you submit is supposed to represent your domain judgment, your identification decisions, your argument for why the arrows in your DAG point the way they do. Submitting borrowed identification work is not just an integrity violation — it is practicing the appearance of the irreducibly human reasoning layer rather than performing it.

Violations include: submitting AI-generated work without citation, using another student's DAG or defense without attribution, submitting work substantially similar to a peer's submission. All violations will be reported to OSCCR. No exceptions.

Collaboration policy: You are encouraged to discuss concepts, cases, and strategies. You may not share DAGs, defenses, written analyses, or specifications. Work you submit with your name on it must reflect your own reasoning in your own words. If you collaborated on ideas, list your collaborators clearly.

If you are unsure whether something crosses a line — ask. I would rather answer that question than navigate a violation.

Generative AI Policy

You are encouraged to use generative AI tools in this course. This is not a reluctant permission — it is a deliberate pedagogical choice grounded in the course's thesis.

Use Claude to generate your first-pass DAG. Use ChatGPT to draft your assumption defense. Then ask: What did it get wrong? What did it assume that I would never assume, knowing what I know about this domain? That gap is the irreducibly human part. Finding it, naming it, and correcting it is the work of this course.

Every submission requires the AI Use Disclosure block specified in Section 6. Undisclosed AI use is an academic integrity violation. The TA or instructor may ask you to walk through and explain any part of your submitted work.

Instructor disclosure: I use generative AI tools in developing this course — for drafting case study scenarios, generating first-pass DAG structures that I then evaluate and revise, and editing course materials. I document my own AI use in the same format I am asking of you.

Incomplete Grades

An incomplete grade may be assigned when a student has been unable to complete a major course component. Missing work must be submitted within 30 days of the term's end or the agreed-upon due date, or it receives no credit. Contact the instructor before the final week if circumstances warrant discussion.

Irreducibly Human: What AI Can and Can't Do — Causal Reasoning
Syllabus v1.0 · Nik Bear Brown · Northeastern University · Fall [Year]

This syllabus reflects course information as of the distribution date. Learning outcomes, assessment architecture, and policies are stable. If meeting times, room assignments, or textbook availability change, updates will be posted to Canvas and communicated by email with at least one week's notice.