Training programs that build tool competence without building supervisory metacognition are producing performers when they need conductors.
Graduate engineers learn to use AI tools. They learn to prompt, delegate, and verify. Then they encounter the situation every employer now recognizes: something feels wrong and they cannot say why. The problem they have been handed is the wrong problem. The AI output is accurate, efficient, and pointed in the wrong direction.
The conductor metaphor unifies all five capacities: a conductor does not play any instrument — they hold the whole performance in their head, hear the wrong note before the score confirms it, and decide which piece is worth performing. The performance collapses without them even though they produce no sound themselves. As AI capability scales, the conductor role becomes more consequential, not less.
| Field | Value |
|---|---|
| Series position | Course 4 of 6 · Irreducibly Human: What AI Can and Can't Do |
| Deployment context | 15-week graduate course · College of Engineering · Northeastern University |
| Also appropriate for | Professional development; executive education |
| Prerequisite | AI tool competence — Course 1 of the series (AI Literacy, Fluency, and Trust) or equivalent professional experience |
This reader can describe what went wrong after the fact. She cannot yet name it in the moment — which means she cannot stop it. The five capacities this course builds are the ones that turn the felt wrongness into a located, correctable judgment before the performance goes off the rails.
Each capacity is developed across two weeks: a framework week (peaks at Analyze) and an application week (peaks at Create or Evaluate). No student goes more than one week without an Apply-or-above deliverable.
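The pacing rule above is mechanically checkable once each week's peak Bloom's level is mapped. A minimal sketch of such a check, using a hypothetical week-by-week schedule (the real distribution map is pending, per the verification notes later in this document):

```python
# Illustrative check of the pacing constraint: no student goes more than
# one week without a deliverable at Apply or above. The week labels and
# schedule below are placeholders, not the actual course map.
BLOOM_ORDER = ["Remember", "Understand", "Apply", "Analyze", "Evaluate", "Create"]
APPLY_INDEX = BLOOM_ORDER.index("Apply")

def pacing_holds(weekly_peaks, max_gap=1):
    """Return True if no run of consecutive below-Apply weeks exceeds max_gap."""
    gap = 0
    for level in weekly_peaks:
        if BLOOM_ORDER.index(level) >= APPLY_INDEX:
            gap = 0  # an Apply-or-above deliverable resets the clock
        else:
            gap += 1
            if gap > max_gap:
                return False
    return True

# Five capacity pairs: framework week peaks at Analyze, application week
# at Create. Both sit at Apply-or-above, so the constraint holds.
capacity_weeks = ["Analyze", "Create"] * 5
print(pacing_holds(capacity_weeks))                              # True
print(pacing_holds(["Understand", "Understand", "Analyze"]))     # False
```

A check like this is where the pending verification for the orientation chapters would plug in: if any orientation week peaks below Apply, two such weeks in a row break the rule.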
- **Hears the wrong note before the score confirms it.** Evaluates AI output for coherence, domain fit, and hidden assumptions — before the output is acted on.
- **Decides which problem is worth solving.** Reframes the brief before generating. Identifies when the AI is solving the stated problem rather than the real one.
- **Knows when to bring in the brass.** Selects, sequences, and combines AI tools based on what the problem actually requires — not habit or availability.
- **Supplies the meaning the output cannot supply for itself.** Translates AI findings into consequential decisions — knowing what the data shows and what it cannot show.
- **Holds the whole performance in their head.** Synthesizes outputs across tools, timelines, and stakeholders into a coherent course of action the professional can defend.
Three cases run across multiple chapters, producing a through-line that connects every capacity to a single arc of professional development. Case content is pending specification.
| Case | Chapters | Status |
|---|---|---|
| Biomedical engineering analysis | 2, 4, 5, 11 | Named — content pending |
| Supervisory analysis | 13, 14, 15 | Named — content pending |
| Personal case inventory | 1, 15 | Named — content pending |
Four structural features are identified as essential to the course. Each is named and described at the concept level. Prompt specifications, rubrics, and implementation protocols are pending.
Claude functions as an adversarial assessor in the final two weeks. The student's grade depends in part on what the auditor cannot find, demonstrating development rather than relying on self-report. The audit prompt architecture must be specified before the course runs: with too much structural scaffolding, the audit tests prompt design rather than supervisory capacity; with too little, results are inconsistent across cohorts.
Pending: controlled prompt specification; rubric for what constitutes a passing audit; policy on what happens when the auditor finds nothing the student named.
Every deliverable requires the student to name, in writing, one judgment call, grounded in their values, their domain knowledge, or their accountability, that an AI could not have made on their behalf. Fifteen instances across the semester, non-optional. A deliverable that cannot answer this question has not cleared the course's minimum threshold, regardless of technical quality.
Pending: instance mapping across all 15 deliverables; rubric for what constitutes a specific vs. generic declaration.
The three longitudinal cases (the biomedical engineering analysis, the supervisory analysis, and the personal case inventory) run across multiple chapters, producing a through-line that connects every capacity to a single arc of professional development. All three are named but not yet written.
Pending: case content for all three longitudinal threads; worked examples per chapter; connection between case data and chapter deliverables.
Every capacity is developed across two weeks — a framework week that peaks at Analyze and an application week that peaks at Create or Evaluate. No student goes more than one week without an Apply-or-above deliverable. This pacing constraint must hold across all ten capacity chapters.
Pending: Bloom's distribution map confirming the pacing constraint holds across all chapters; verification that Act One (three orientation chapters) does not break the one-week Apply-or-above rule.