Course 2 · Tier 5

Causal Reasoning and World Modeling

AI finds correlations. Humans build causal models. This course operates at Tier 5 — the deepest level in the series — because causal reasoning is the capacity that most decisively separates human cognition from statistical pattern matching.

DAG Construction

A Directed Acyclic Graph (DAG) is the formal representation of a causal model. Students learn to construct DAGs from domain knowledge — not from data. This is the critical distinction: data can tell you what happened, but only a human can propose why.

The course walks through the full process of building a DAG: identifying variables, specifying directed edges that represent causal claims, and — most importantly — defending the exclusion of edges. Every missing arrow is an assertion that two variables are not directly causally related, and every such assertion must be justified.

  • Variable selection from domain expertise, not data mining
  • Edge specification as causal claims requiring justification
  • Missing edges as testable assumptions
  • Iterative refinement through peer critique and empirical challenge
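The workflow above can be sketched in a few lines of Python. This is a minimal, dependency-free illustration under assumed variable names (Age, Exercise, Weight are hypothetical, not course material): the edges encode causal claims, acyclicity is enforced, and the absent arrows are enumerated so each one can be defended explicitly.

```python
# Hypothetical DAG: each (cause, effect) pair is a causal claim.
EDGES = [
    ("Age", "Exercise"),     # claim: age influences exercise habits
    ("Age", "Weight"),       # claim: age influences weight directly
    ("Exercise", "Weight"),  # claim: exercise influences weight
]

def is_acyclic(edges):
    # Kahn's algorithm: repeatedly remove nodes with no incoming
    # edges; if any node survives, the graph contains a cycle.
    nodes = {n for e in edges for n in e}
    indeg = {n: 0 for n in nodes}
    for _, v in edges:
        indeg[v] += 1
    frontier = [n for n in nodes if indeg[n] == 0]
    seen = 0
    while frontier:
        n = frontier.pop()
        seen += 1
        for u, v in edges:
            if u == n:
                indeg[v] -= 1
                if indeg[v] == 0:
                    frontier.append(v)
    return seen == len(nodes)

def missing_edges(edges):
    # Every absent arrow is itself an assertion of "no direct
    # causal effect" — list them so each can be justified.
    nodes = sorted({n for e in edges for n in e})
    present = set(edges)
    return [(u, v) for u in nodes for v in nodes
            if u != v and (u, v) not in present]

assert is_acyclic(EDGES)  # a variable cannot be its own cause
print(missing_edges(EDGES))
```

Printing the missing edges makes the model's negative claims — e.g. that Weight does not feed back into Exercise — as visible and contestable as the arrows that were drawn.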

The Backdoor Criterion and Identification

Once a DAG is constructed, the identification layer determines whether a causal effect can be estimated from observational data. The backdoor criterion provides a systematic method: if you can block all backdoor paths between treatment and outcome by conditioning on a set of variables that contains no descendant of the treatment, the causal effect is identified.
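As a concrete sketch, the criterion can be checked mechanically on small graphs. The code below is a self-contained, illustrative implementation (the graph and all variable names are assumptions, not the course's own examples): it enumerates paths into the treatment through a back door and applies the standard d-separation rules to each.

```python
def skeleton_paths(edges, src, dst):
    """All simple paths from src to dst, ignoring edge direction."""
    adj = {}
    for u, v in edges:
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)
    paths = []
    def walk(node, path):
        if node == dst:
            paths.append(path)
            return
        for nxt in adj.get(node, ()):
            if nxt not in path:
                walk(nxt, path + [nxt])
    walk(src, [src])
    return paths

def descendants(edges, node):
    out, frontier = set(), [node]
    while frontier:
        n = frontier.pop()
        for u, v in edges:
            if u == n and v not in out:
                out.add(v)
                frontier.append(v)
    return out

def blocked(edges, path, given):
    """One path is blocked if a non-collider on it is conditioned on,
    or a collider on it is NOT conditioned on (nor any descendant)."""
    for i in range(1, len(path) - 1):
        a, m, b = path[i - 1], path[i], path[i + 1]
        is_collider = (a, m) in edges and (b, m) in edges
        if is_collider:
            if m not in given and not (descendants(edges, m) & given):
                return True
        elif m in given:
            return True
    return False

def backdoor_ok(edges, treatment, outcome, given):
    """Backdoor criterion: `given` holds no descendant of treatment
    and blocks every path that enters treatment through a back door."""
    if descendants(edges, treatment) & given:
        return False
    backdoor = [p for p in skeleton_paths(edges, treatment, outcome)
                if len(p) > 2 and (p[1], p[0]) in edges]
    return all(blocked(edges, p, given) for p in backdoor)

# Classic confounding triangle: Z -> T, Z -> Y, T -> Y.
EDGES = {("Z", "T"), ("Z", "Y"), ("T", "Y")}
print(backdoor_ok(EDGES, "T", "Y", given={"Z"}))  # True: {Z} blocks T <- Z -> Y
print(backdoor_ok(EDGES, "T", "Y", given=set()))  # False: backdoor left open
```

On the confounding triangle, conditioning on Z closes the single backdoor path, so the effect of T on Y is identified; with an empty conditioning set it is not.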

This is where the course reaches Tier 5. Students must reason about confounding, collider bias, and mediation — concepts that require genuine counterfactual thinking. No current AI system can construct a defensible causal model from scratch, because doing so requires knowledge of the world that is not contained in the data.

Confounding

A common cause of both treatment and outcome creates a spurious association. Students learn to identify confounders from the DAG structure and condition on them appropriately — or recognize when conditioning introduces new bias.
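A short simulation makes the bias tangible. This is a hypothetical data-generating process (assuming numpy), not course material: Z causes both T and Y, the true effect of T on Y is 2.0, and the naive regression absorbs the backdoor path while the adjusted one recovers the truth.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Structural model: Z confounds T and Y; true causal effect of T on Y is 2.0.
z = rng.normal(size=n)
t = z + rng.normal(size=n)                   # Z -> T
y = 2.0 * t + 3.0 * z + rng.normal(size=n)   # T -> Y and Z -> Y

# Naive slope of Y on T mixes the causal effect with the
# spurious association along T <- Z -> Y.
naive = np.polyfit(t, y, 1)[0]

# Conditioning on the confounder: regress Y on both T and Z.
X = np.column_stack([t, z, np.ones(n)])
adjusted = np.linalg.lstsq(X, y, rcond=None)[0][0]

print(f"naive slope:    {naive:.2f}")     # ~3.5, biased upward
print(f"adjusted slope: {adjusted:.2f}")  # ~2.0, the true effect
```

Note that the DAG, not the data, told us Z was the right variable to condition on; the same arithmetic applied to the wrong variable can create bias rather than remove it.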

Collider Bias

Conditioning on a common effect of two variables opens a non-causal path between them. This is counterintuitive and routinely missed by automated analyses. Recognizing collider structures requires understanding the causal story, not just the statistical associations.
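The same kind of simulation shows the effect directly. In this hypothetical setup (assuming numpy), X and Y are causally unrelated, C is their common effect, and conditioning on C manufactures a negative association out of nothing.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

# X and Y are independent; C is their common effect (a collider).
x = rng.normal(size=n)
y = rng.normal(size=n)
c = x + y + rng.normal(size=n)

# Marginally, X and Y are (nearly) uncorrelated.
marginal = np.corrcoef(x, y)[0, 1]

# Conditioning on the collider: correlate the parts of X and Y
# not explained by C. A spurious negative association appears.
rx = x - np.polyfit(c, x, 1)[0] * c
ry = y - np.polyfit(c, y, 1)[0] * c
partial = np.corrcoef(rx, ry)[0, 1]

print(f"corr(X, Y):     {marginal:+.2f}")  # ~ 0.00
print(f"corr(X, Y | C): {partial:+.2f}")   # ~ -0.50
```

Nothing in the statistics flags the second number as spurious; only the causal story — that C is an effect of X and Y, not a cause — tells you it should never have been conditioned on.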
