I've spent years watching capable engineers make confident causal claims from data that couldn't support them — not because they weren't smart, but because no one had ever taught them the layer of reasoning that sits between the data and the claim. They knew how to build the model. They didn't know how to ask whether the model was measuring what they thought it was measuring.
That gap is what this course closes.
Causal AI tools are genuinely powerful. They can estimate effects, run sensitivity analyses, and produce clean output with narrow confidence intervals. What they cannot do is draw your causal graph, choose what to condition on, or defend the assumptions that make a result trustworthy. That layer — the identification layer — requires domain expertise that no algorithm supplies. It requires you to know your field well enough to argue, in writing, why the arrows in your causal model point the way they do. No training data replaces that. No model learns it. It is the irreducibly human part.
This course teaches you to perform it.
You will leave this course able to build a defensible causal model for a problem in your own engineering domain, hand it off to an estimation tool with confidence, and evaluate whether the result should be trusted. More importantly, you will be able to answer — clearly, in a job interview or a boardroom — the question that separates engineers who use AI well from engineers who use it confidently and incorrectly: "Is what your system is measuring actually causing the outcome, or just correlated with it?"
The course is demanding in a specific way. It will not ask you to memorize frameworks or reproduce procedures. It will ask you to make judgment calls, defend them to skeptical peers, and revise them when the argument doesn't hold. That is harder than a methods course — and more valuable.
I am looking forward to the reasoning you will bring from your domains. The cases you work through in your own field will teach your classmates things I cannot. That exchange is the course working as intended.
Here is what to do before we meet: read Chapter 1 of Irreducibly Human: Causal Reasoning. Come to Session A with the case in your head. You don't need to understand it yet. You just need to feel the problem.
We are in the early years of the most powerful cognitive tools ever built. AI systems are superhuman at pattern recognition, fact retrieval, arithmetic, and syntactic correctness. They are genuinely poor at constructing causal models, formulating the right questions, auditing the plausibility of their own outputs, and knowing when not to proceed.
The Irreducibly Human series develops exactly those capacities — the forms of reasoning that AI tools require humans to supply, and that your competitors who only learned to use the tools will not have.
This course — Causal Reasoning — develops one specific, high-value cognitive skill: the ability to build a defensible model of what causes what in your domain, and to know what that model can and cannot support. It is not a course about causal AI tools. It is a course about the thinking those tools cannot do, that you will need to do every time you use them.
The companion course — Conducting AI — builds the broader metacognitive and supervisory toolkit: problem formulation, plausibility auditing, interpretive judgment, and tool orchestration. The two courses can be taken in either order; they are designed to complement, not to require each other.
| Field | Value |
|---|---|
| Course Title | Irreducibly Human: What AI Can and Can't Do — Causal Reasoning |
| Course Number | [XXXX 5XXX — assigned at CourseLeaf submission] |
| Credit Hours | 4 |
| Term | Fall [Year] |
| Mode of Delivery | In-person |
| Components | Lecture/Seminar (1× weekly) + TA-led DAG Workshop (1× weekly in-class lab) |
| Department | College of Engineering |
Lecture/Seminar: [TBD] · Location: [Building, Room]
DAG Workshop (TA-led): [TBD] · Location: [Building, Room]
| Field | Value |
|---|---|
| Name | Nik Bear Brown |
| Email | ni.brown@neu.edu |
| Response time | Within 48 hours on weekdays. Put URGENT in the subject line for time-sensitive questions. |
| Office / Zoom | [TBD] |
| Student hours | [Days, times, location] — booking link TBD |
| Preferred contact | Email for logistics. Student hours for anything that takes more than two sentences to answer well. |
I hold student hours for you — not only for students with emergencies. Come because you're uncertain about your DAG, because you want to think through your domain problem before committing to it for the final project, because you want to understand where this field is going, or simply because you want to know what this work looks like in practice. The most productive conversations I have with students happen outside scheduled sessions.
| Field | Value |
|---|---|
| Name | TBA |
| Email | TBA |
| DAG Workshop hours | TBA |
The TA runs the weekly DAG Workshop — designing critique sessions, facilitating peer review, and returning written feedback on workshop submissions. For questions about DAG construction, node identification, and backdoor criterion application, the TA is your first resource. Programming and tool questions go to the TA first; if unresolved, the TA will forward to the professor.
You have completed at least one applied statistics or machine learning course at the graduate level. You are comfortable reading and writing Python. You have encountered regression — you know what a coefficient means in practice, even if you could not derive it from first principles.
Not required: prior knowledge of causal inference, DAGs, or graph theory. No econometrics. No measure-theoretic probability. If you have read Pearl's The Book of Why, that is useful background, but it is not required.
Students who arrive most confident in their modeling skills sometimes find the early weeks the most disorienting. That disorientation is the course working as intended. The identification layer is not a harder version of what you already know — it is a different cognitive operation. Students who treat it as an extension of their modeling fluency will struggle more than students who approach it as genuinely new terrain.
If you are missing a prerequisite, contact the instructor or your advisor before the first week. This course builds on applied quantitative fluency from Session 1 — there is no ramp.
By the end of this course, students will be able to:
| Field | Value |
|---|---|
| Title | Irreducibly Human: What AI Can and Can't Do — A Practical Guide to Causal Inference for Domain Experts |
| Author | Nik Bear Brown |
| Publisher | Bear Brown & Company / Kindle Direct Publishing, 2026 |
| Availability | [Amazon Kindle / print link — TBD at publication] |
| Cost | [TBD] |
| Edition | First edition. No prior edition exists. |
Supplementary readings are distributed via Canvas throughout the semester at no cost. Required supplementary readings are marked [Required] in the weekly schedule; optional readings are marked [Recommended] and are genuinely optional.
Hand-drawn submissions accepted in workshop sessions only. All graded assignments require digital submission.
| Assessment | Points | Quality/Portfolio |
|---|---|---|
| Reading Responses (5 × 30 pts) | 150 | ✓ 20 pts each |
| Weekly DAG Assignments (8 × 25 pts, drop lowest of 9) | 200 | ✓ 20 pts each |
| DAG Workshop Participation | 100 | ✓ 20 pts component |
| Midterm | 100 | — |
| Final Project — DAG Draft Checkpoint | 100 | ✓ 20 pts |
| Final Project — Specification Checkpoint | 100 | ✓ 20 pts |
| Final Project — Final Submission | 250 | ✓ 20 pts |
| Total | 1000 | — |
| Score / Percentile Band | Grade |
|---|---|
| Top 25% | A |
| Next 25% | A– |
| Next 25% | B+ |
| Final 25% | B |
| 780–799 | C+ |
| 730–779 | C |
| 700–729 | C– |
| 600–699 | D |
| Below 600 | F |
Every assignment carrying the Quality/Portfolio component is evaluated on a relative 20-point scale comparing your work to peers, emphasizing depth of causal reasoning, quality of domain judgment, and evidence that the identification layer was performed by you — not delegated to a tool.
| Percentile Band | Score |
|---|---|
| Bottom 25% | 5 pts |
| 26–50th percentile | 10 pts |
| 51–75th percentile | 15 pts |
| Top 25% | 20 pts |
You are encouraged to use generative AI tools on every assignment. Citation is required. Undisclosed AI use is an academic integrity violation. Disclosed AI use is not.
Every submission must include an AI Use Disclosure block:
AI USE DISCLOSURE
- Tool(s) used:
- Portions assisted:
- How used:
- What I changed:
- What the AI could not do: [name at least one identification decision that required your domain knowledge — this field is not optional]
The lowest-scoring DAG assignment is dropped. Eight of nine assignments count. This absorbs one week where the concept didn't click. It does not absorb a pattern of non-engagement.
The schedule maps each week to a chapter in Irreducibly Human: Causal Reasoning. Read the assigned chapter before Session A. Come to Session A with the case in your head. Come to Session B ready to use the concept. Come to the workshop ready to draw.
Reading time per chapter: approximately 45–75 minutes · ⚑ = graded deliverable due · ★ = transition week
What breaks when causal reasoning is absent — and why it matters for the work you already do
Two unseen causal scenarios. For each: draw the implied DAG, diagnose the identification failure, name the load-bearing assumption, explain what domain knowledge would address it. No definitions asked. No recall tested.
The identification toolkit — built piece by piece through cases you recognize. You enter Act Two able to name the identification layer. You leave Act Two able to perform it.
Your domain problem DAG with three-part defense: (1) every arrow stated as a causal claim; (2) missing arrows listed and ranked by plausibility; (3) unmeasured confounders named with bias directions. One paragraph each in technical and plain-language registers. Due end of Week 11.
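The first two parts of the defense can be made mechanical before they are made rhetorical. As a minimal sketch (the domain, a software-engineering question about code review, and all variable names are hypothetical and not from the course), each arrow can be stored alongside its one-sentence causal claim, and the missing arrows can be enumerated so none is omitted by accident:

```python
# Hypothetical DAG: does review_speed causally affect defect_rate?
# Each edge maps to the causal claim you must defend in writing.
dag = {
    ("team_experience", "review_speed"): "experienced teams review faster",
    ("team_experience", "defect_rate"): "experience reduces defects (confounder)",
    ("review_speed", "defect_rate"): "the effect under study",
    ("codebase_size", "defect_rate"): "larger codebases accumulate defects",
}

nodes = sorted({n for edge in dag for n in edge})

# Missing arrows are claims too: every absent pair asserts "no direct effect,
# in either direction" — these are the pairs you rank by plausibility.
missing = [(a, b) for i, a in enumerate(nodes) for b in nodes[i + 1:]
           if (a, b) not in dag and (b, a) not in dag]
for pair in missing:
    print(pair)
```

The point of the sketch is the discipline, not the code: once every present arrow has a sentence attached and every absent arrow is listed, the written defense is an exercise in arguing, not in remembering.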
The identification toolkit deployed — answers get less clean, and that is the point. You enter Act Three with a defended DAG. You leave with a complete causal analysis you can put in a portfolio and discuss in a job interview. Act Three stops giving you well-formed problems and starts giving you the kind of problems you will actually encounter — cases that recombine earlier concepts rather than introducing new structural problems. That is not easier. It is harder in the way that matters.
Revised DAG (incorporating Week 11 feedback) + complete estimation specification: treatment variable, outcome variable, adjustment set with justification, identification assumptions, "do not add" list. Due end of Week 12.
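One way to keep the specification honest is to write it down as a single structured object and run sanity checks against it before estimation. The sketch below is illustrative only; the variable names continue the hypothetical code-review example and do not come from the course:

```python
# A hypothetical estimation specification mirroring the checkpoint's fields.
spec = {
    "treatment": "review_speed",
    "outcome": "defect_rate",
    "adjustment_set": {"team_experience"},   # blocks the backdoor path through experience
    "justification": "team_experience plausibly causes both treatment and outcome",
    "do_not_add": {"post_release_rework"},   # downstream of the outcome: adjusting induces bias
}

# Sanity checks any specification should pass before it reaches a tool.
assert spec["treatment"] != spec["outcome"]
assert spec["treatment"] not in spec["adjustment_set"]
assert spec["outcome"] not in spec["adjustment_set"]
assert not spec["adjustment_set"] & spec["do_not_add"], "a variable cannot be in both lists"
print("specification is internally consistent")
```

Note that the checks only catch internal contradictions; whether the adjustment set actually satisfies the backdoor criterion is an argument you make from your DAG, not from the code.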
Complete causal analysis plan: domain question and specification; DAG with three-part defense; identification decisions stated as decisions; estimation specification; output evaluation; sensitivity analysis with E-value and domain argument; qualified conclusion in two registers. Due end of Week 15.
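The E-value itself is a small computation. For a risk ratio RR ≥ 1, it is RR + sqrt(RR × (RR − 1)) (VanderWeele & Ding, 2017); protective estimates are inverted first. A minimal implementation:

```python
import math

def e_value(rr: float) -> float:
    """E-value for a point estimate on the risk-ratio scale: the minimum
    strength of association an unmeasured confounder would need with both
    treatment and outcome to fully explain away the observed estimate."""
    if rr < 1:          # protective effect: invert before applying the formula
        rr = 1 / rr
    return rr + math.sqrt(rr * (rr - 1))

# An observed RR of 2.0 yields an E-value of about 3.41: a confounder would
# need associations of RR >= 3.41 with both treatment and outcome.
print(round(e_value(2.0), 2))
```

The number is only half the deliverable; the domain argument is the other half — you must say whether a confounder of that strength is plausible in your field, which no formula can tell you.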
| Week | Chapter | Act | Major Deliverable | Points |
|---|---|---|---|---|
| 1 | Ch. 1 — The Decision That Looked Right | One | Reading Response #1 | 30 |
| 2 | Ch. 2 — Three Words for the Same Problem | One | Reading Response #2 | 30 |
| 3 | Ch. 3 — The Map Before the Territory | One | Weekly DAG #1 | 25 |
| 4 | Ch. 4 — The Identification Layer | One | Midterm + RR #3 | 100 + 30 |
| 5 | Ch. 5 — Confounders | Two | Weekly DAG #2 | 25 |
| 6 | Ch. 6 — Mediators | Two | Weekly DAG #3 | 25 |
| 7 | Ch. 7 — Colliders (Part 1) | Two | Weekly DAG #4 | 25 |
| 8 | Ch. 7 — Colliders (Part 2) | Two | Weekly DAG #5 + RR #4 | 25 + 30 |
| 9 | Ch. 8 — Backdoor Criterion (Part 1) | Two | Weekly DAG #6 + RR #5 | 25 + 30 |
| 10 | Ch. 8 — Backdoor Criterion (Part 2) | Two | Weekly DAG #7 | 25 |
| 11 | Ch. 9 — Defending Your DAG | Two | DAG Draft Checkpoint | 100 |
| 12 | Ch. 10 — From DAG to Data | Three | Specification Checkpoint | 100 |
| 13 | Ch. 11 — Reading the Output | Three | Weekly DAG #8 | 25 |
| 14 | Ch. 12 — When Assumptions Don't Hold | Three | Weekly DAG #9 | 25 |
| 15 | Ch. 13 — The Full Analysis | Three | Final Project Submission | 250 |
DAG Workshop participation (100 pts) assessed continuously across all 15 weeks. Lowest DAG assignment dropped — 8 of 9 count toward final grade.
This course has three weekly contact points: two lecture/seminar sessions and one TA-led DAG Workshop. Each serves a different function. Missing any of them is not equivalent to missing the same thing twice.
Per College of Engineering MGEN policy, students are allowed a maximum of two absences per course; three or more absences result in an F. Separately, more than three unexcused DAG Workshop absences will result in a failing participation grade regardless of Quality/Portfolio score.
Students who do not attend during the first week risk being dropped from the course. Please inform me of any anticipated absence before class.
Participation means engagement — drawing, revising, critiquing, asking structural questions, and connecting today's concept to your domain. Physical presence without engagement does not count as participation.
What you submit must represent your own domain judgment, your own identification decisions, and your own argument for why the arrows in your DAG point the way they do. Submitting borrowed identification work is not just an integrity violation — it is practicing the appearance of the irreducibly human reasoning layer rather than performing it.
Violations include: submitting AI-generated work without citation, using another student's DAG or defense without attribution, submitting work substantially similar to a peer's submission. All violations will be reported to OSCCR. No exceptions.
Collaboration policy: You are encouraged to discuss concepts, cases, and strategies. You may not share DAGs, defenses, written analyses, or specifications. Work you submit with your name on it must reflect your own reasoning in your own words. If you collaborated on ideas, list your collaborators clearly.
If you are unsure whether something crosses a line — ask. I would rather answer that question than navigate a violation.
You are encouraged to use generative AI tools in this course. This is not a reluctant permission — it is a deliberate pedagogical choice grounded in the course's thesis.
Use Claude to generate your first-pass DAG. Use ChatGPT to draft your assumption defense. Then ask: What did it get wrong? What did it assume that I would never assume, knowing what I know about this domain? That gap is the irreducibly human part. Finding it, naming it, and correcting it is the work of this course.
Every submission requires the AI Use Disclosure block specified in Section 6. Undisclosed AI use is an academic integrity violation. The TA or instructor may ask you to walk through and explain any part of your submitted work.
An incomplete grade may be reported when a student has failed to complete a major course component. Missing work must be submitted within 30 days of the term's end or the agreed-upon due date, or it receives no credit. Contact the instructor before the final week if circumstances warrant discussion.