Irreducibly Human Series

AI in the classroom: a practical workshop series for business school faculty

What AI can and can't do — and what that means for how you teach

4 Sessions · Once Weekly · 90–120 minutes per session · April 2026 · In-person
Executive Education
Version 1.0  ·  [Distribution Date]  ·  Reviewed by Dev the Dev

Contents

  1. Welcome
  2. The Irreducibly Human Series
  3. Workshop Information
  4. Learning Outcomes
  5. Required Materials
  6. Workshop Structure and Session Schedule
Section 1

Welcome

I've spent years watching capable professionals use AI tools confidently and badly — not because they weren't intelligent, but because no one had given them a framework for understanding what they were actually working with. They could generate outputs. They couldn't evaluate them. They could prompt. They couldn't specify. They could iterate. They couldn't tell when they had stopped thinking and started deferring.

That gap is what this workshop series closes.

AI fluency is not knowing which tools to use. It is understanding the cognitive nature of the entity you are collaborating with — what it does at superhuman level, what it cannot do at all, and what remains irreducibly yours. The faculty member who uses AI to generate a syllabus policy without understanding why AI outputs fail is not using AI well. They are using it optimistically. Those are different practices with very different outcomes.

This series is built around one organizing principle: you are not the user of this tool — you are the supervisor of it. Every session builds one layer of that supervisory capacity, applied immediately to real work in your own courses.

The series is demanding in a specific way. It will not ask you to learn a new platform or memorize prompt templates. It will ask you to make judgment calls about AI outputs, apply those judgments to your own teaching, and leave each session with a working document you can use next week. That is more durable than a demonstration. It is also harder.

What you will leave with: four working artifacts built from real courses you are teaching, and the framework for continuing to develop your practice after the series ends.

Here is what to do before Session 1: bring a laptop and a course you are currently teaching. Nothing else is required.

— [Lead Instructor] | [email]
— [DMSB Co-Instructor, TBD] | [email]
Section 2

The Irreducibly Human Series

We are in the early years of the most powerful cognitive tools ever built. AI systems are superhuman at pattern recognition, fact retrieval, arithmetic, and syntactic correctness. They are genuinely poor at formulating the right question, auditing their own outputs for plausibility, and knowing when not to proceed.

The Irreducibly Human series develops exactly those capacities — the forms of reasoning that AI tools require humans to supply.

This workshop — AI in the Classroom — is the series entry point for faculty practitioners. It builds the foundational framework for AI collaboration across four sessions, develops the supervisory practices that make AI a professional strength rather than a professional liability, and applies that framework directly to teaching, assessment design, and classroom policy.

Aligned with the Schofield & Zhou AI Literacy Framework, the series moves participants from foundational understanding to applied classroom practice. Faculty will not study AI — they will practice it.

Section 3

Workshop Information

Workshop Identifiers

Workshop Title: AI in the Classroom: A Practical Workshop Series for Business School Faculty
Sessions: 4
Format: Once weekly, April 2026
Duration per session: 90–120 minutes
Mode of Delivery: In-person
Department: Executive Education

Meeting Information

Day and time: [TBD]  ·  Location: [Building, Room]

Each session has two components: a facilitated concept segment and a hands-on workshop segment. The workshop segment is where learning consolidates. A session attended only for the first half is a session where no artifact gets built.

Instructor Information

Lead Instructor: [TBD] · Email: [TBD] · Responds within 48 hours on weekdays
DMSB Co-Instructor: [TBD] · Email: [TBD] · Responds within 48 hours on weekdays

Preferred contact: email for logistics. Office hours for anything that takes more than two sentences to answer well.

Session Materials

All session materials, artifact templates, and reference documents will be distributed via [platform TBD] before each session. Sessions will be recorded for participants who wish to review material covered. Recordings are available to enrolled participants only.

Prerequisites

What this workshop assumes: At least one prior use of an AI tool (ChatGPT, Claude, Microsoft Copilot, or equivalent). A laptop with browser access. One course currently being taught or in development.

What this workshop does not assume: Prior coursework in AI, machine learning, or data science. Technical background of any kind. Prior prompting experience or tool fluency.

Section 4

Learning Outcomes

By the end of this workshop series, participants will be able to:

  1. Explain how AI systems generate outputs through pattern completion rather than knowledge retrieval, and predict the most likely failure type before it occurs
  2. Apply a reliability framework to AI outputs, distinguishing tasks that are safe to use as-is, tasks that require verification, and tasks that should not be delegated
  3. Write a complete, testable instruction for an AI tool — specifying task, audience, inclusions, exclusions, and success standard — and predict its failure mode before running it
  4. Apply a four-layer verification protocol to an AI output, identifying the layer most likely to fail in their specific discipline
  5. Redesign one existing assessment to remain meaningful in an AI environment, shifting from product to process and requiring the judgment AI cannot supply
  6. Draft a syllabus AI policy statement that specifies permitted and prohibited uses — not a prohibition, and not open permission
  7. Articulate, in writing, at least one judgment call in their teaching practice that requires their values, domain knowledge, or accountability — and that an AI cannot make on their behalf
Section 5

Required Materials

No textbook. No advance reading. No specialized software. No installation required.

AI tools (all free — no purchase required)

Claude: claude.ai
ChatGPT: chatgpt.com
Microsoft Copilot: copilot.microsoft.com

Participants may use any combination. No single tool is required.

Documentation tools (free, browser-based)

Google Docs or equivalent — for artifact production during workshop segments.

Bring to every session

A laptop with browser access, and materials for one course you are currently teaching or developing.

Section 6

Workshop Structure and Session Schedule

Session structure

Every session follows the same three-part structure:

Open
20 min

A real failure case. Not a definition. A specific situation where a faculty member or professional used AI and something went wrong — or nearly wrong. Participants encounter the problem before they encounter the framework.

Build
40–50 min

The concept that explains the failure, demonstrated live using tools participants already have. Hands-on in the room.

Make
30–40 min

Participants apply the concept to their own real course and leave with a completed artifact. No hypotheticals. No homework. Built in the room.

Session 1: What you're actually working with · Week 1 · April 2026 · 90–120 min
Why does AI give me wrong answers so confidently — and what do I do about that?
Open 20 min

A faculty member submits a course proposal with three AI-generated citations. Two of the studies do not exist. The journals are real. The authors are real. The papers are not. She had no framework for why it happened and no way to have caught it. This case is not unusual. It is the default failure mode of a confident tool used without a mental model.

Build 40 min

How AI systems generate output: pattern completion, not knowledge retrieval. The distinction matters because it predicts the failure type before it occurs. AI does not know things — it completes patterns based on training data. The more plausible something sounds, the more confidently it will say it, whether or not it is true. This is not a bug being fixed. It is the architecture.

Why confident expression is structurally uncorrelated with accuracy. The reliability question: which tasks are safe to delegate, which require verification, which should never leave your hands.

Live demonstration: the same question asked three ways, showing how framing changes output — and a live hallucination in a business domain participants recognize.

Make 30–40 min

Participants map three tasks from their own teaching onto a simple framework: Use as-is / Verify before using / Never delegate. Pairs compare and discuss. Full group debrief on what surprised them.

Session 1 Artifact — Personal AI Reliability Map
A one-page document showing which tasks in their specific teaching and research workflow they will trust AI to handle, which they will verify before using, and which they will not hand off. Built in the session. Specific to their discipline and their courses.
Session 2: How to direct it well · Week 2 · April 2026 · 90–120 min
Why do I keep getting mediocre output even when I try — and how do I fix that?
Open 20 min

A Strategy faculty member asks AI to generate a case discussion for her MBA class. The output is smooth, generic, uses no real companies, and could apply to any industry in any decade. She concludes AI isn't useful for her teaching. The problem wasn't the tool. It was the instruction. The tool did exactly what it was asked to do — which was not what she needed.

Build 40 min

The five components of an instruction that produces usable output:

  • The task — what you want
  • The audience — who it's for, their level, their prior knowledge
  • Inclusions — what it must contain
  • Exclusions — what it must not contain
  • Success standard — what "good" looks like for this specific use

Live demonstration: one weak prompt rebuilt through all five components. Output comparison shown live. Business school applications demonstrated: case discussion questions for a specific company and concept, assignment rubrics for specific learning outcomes, exam questions at calibrated difficulty levels, research summaries for course prep.
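As one illustration only (not part of the workshop materials), the five components can be assembled into a reusable instruction. The function name, field names, and the example content below are all hypothetical; this is a sketch of the structure, not a prescribed template.

```python
# Illustrative sketch: assembling the five instruction components
# (task, audience, inclusions, exclusions, success standard) into
# one prompt string. All names and example content are hypothetical.

def build_prompt(task, audience, inclusions, exclusions, success_standard):
    """Combine the five instruction components into a single prompt."""
    return "\n".join([
        f"Task: {task}",
        f"Audience: {audience}",
        "Must include:\n" + "\n".join(f"- {item}" for item in inclusions),
        "Must not include:\n" + "\n".join(f"- {item}" for item in exclusions),
        f"Success standard: {success_standard}",
    ])

prompt = build_prompt(
    task="Draft three case discussion questions on switching costs",
    audience="First-year MBA students who have read the assigned case",
    inclusions=["One real company from the case",
                "One question requiring a numeric estimate"],
    exclusions=["Generic questions that fit any industry",
                "Answers or hints"],
    success_standard="Each question is answerable only by someone who read the case",
)
print(prompt)
```

The point of the structure is that each missing component is visible as a missing line, which is what the partner review in the Make segment checks for.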

Make 30–40 min

Each participant writes a complete instruction for one real task they currently do manually. Partner reviews against the five components and identifies what is missing. Participant revises. Both run their prompts live and evaluate the output.

Session 2 Artifact — Working Prompt Specification
A tested, working instruction set for one real task in their course — written, run, revised, and producing output they evaluated in the room. Not a template. An actual prompt that works for their actual course.
Session 3: How to evaluate what it gives you · Week 3 · April 2026 · 90–120 min
How do I know when to trust AI output — and how do I push back on it when I shouldn't?
Open 20 min

An Organizational Behavior faculty member uses AI to summarize current research on psychological safety for a lecture update. The summary is well-organized, smooth, and cites three papers that have been substantially challenged since publication. She uses it without checking. A student who has read the field more recently raises it in class. The faculty member had no verification practice. She had no framework for knowing which layer of the output to check.

Build 40 min

Four layers every AI output contains — and which layer is most likely to fail in which disciplines:

  • Facts — Are the specific claims accurate?
  • Reasoning — Does the logic hold?
  • Framing — Is a partial picture being presented as complete?
  • Omissions — What important thing is not here?
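
Purely as an illustration of how the four layers might be operationalized (the function and its flagging behavior are hypothetical, not workshop material), the check can be written as a walk through the layers with the discipline's highest-risk layer flagged first:

```python
# Illustrative sketch only: the four verification layers as a checklist
# a reviewer walks through. The questions are paraphrased from the
# session outline; the function and flag text are hypothetical.

LAYERS = [
    ("Facts", "Are the specific claims accurate?"),
    ("Reasoning", "Does the logic hold?"),
    ("Framing", "Is a partial picture being presented as complete?"),
    ("Omissions", "What important thing is not here?"),
]

def verification_checklist(highest_risk_layer):
    """Render the four-layer check, flagging the riskiest layer."""
    lines = []
    for name, question in LAYERS:
        flag = "  <-- check this layer first" if name == highest_risk_layer else ""
        lines.append(f"[ ] {name}: {question}{flag}")
    return "\n".join(lines)

# e.g. a field where citations to superseded studies are the common
# failure mode would flag the Facts layer first
print(verification_checklist("Facts"))
```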

Calibrating verification depth: how much checking does this output actually need? The three variables — stakes, reversibility, and domain reliability — applied to real business school use cases.

Using AI to interrogate AI: the adversarial prompt. How to ask the tool to find the weaknesses in what it just told you.

Live demonstration: a piece of AI-generated course content run through all four layers. The adversarial prompt applied. What it found that ordinary review missed.

Make 30–40 min

Participants bring or generate one piece of AI output from their own domain. Apply the four-layer check. Identify which layer is highest-risk in their specific discipline and why.

Session 3 Artifact — Discipline-Specific Verification Checklist
A one-page verification protocol for AI outputs in their field — what to check, how to check it, and what "good enough to use" looks like for their specific teaching context. Built from their own hands-on work in the session.
Session 4: AI in your classroom: assessment and policy · Week 4 · April 2026 · 90–120 min
What do I actually say to my students about AI, and how do I redesign my assessments so they hold up?
Open 20 min

A Finance faculty member bans AI in her course. Her students use it anyway, undisclosed. Her assessments are unchanged. She is grading AI-generated work without knowing it, and her students are not developing the skills she believes she is developing. The policy failed not because students are dishonest — but because it was designed for a world that no longer exists. A prohibition without redesign is not a policy. It is a wish.

Build 40 min

There is no AI-proof assessment. There is only AI-aware assessment design. The right question is not "did they use AI?" It is "what did they have to do that AI cannot do for them?"

What AI cannot do for your students:

  • Apply your disciplinary judgment to their specific situation
  • Defend a position under live questioning
  • Connect a concept to their own professional experience
  • Make an accountable professional judgment with their name on it

Assessment redesign principles for a business school context: shift from product to process — require the thinking, not just the answer. Add a defense or reflection component. Require domain specificity AI cannot fake — their company, their data, their decision. Make AI use explicit, disclosed, and part of the assessment.

Syllabus policy language that works: not prohibition, not open permission, but specification. What AI may be used for, what it may not be used for, and what the student must supply regardless of tool use.

Live demonstration: one existing assignment from a volunteer participant redesigned in the room. Before and after shown.

Make 30–40 min

Each participant identifies their highest-risk assessment — the one most vulnerable to AI substitution. Redesign one component using the session principles. Draft one paragraph of AI policy language for their syllabus.

Session 4 Artifact — Redesigned Assessment + AI Policy Statement
A revised version of one real assessment from their course, plus a draft AI policy statement ready to drop into their syllabus. Specific to their course. Defensible to their students.

Cumulative deliverable — AI Teaching Portfolio

Participants complete the series with four working documents:

Session 1 · Personal AI Reliability Map · Which tasks to delegate, verify, or retain — specific to their discipline
Session 2 · Working Prompt Specification · A tested prompt for one real course task, ready to use
Session 3 · Verification Checklist · A discipline-specific protocol for evaluating AI output
Session 4 · Redesigned Assessment + AI Policy Statement · A revised assignment and syllabus policy statement

These are not templates. They are working documents built from real courses in the room.

Schedule at a glance

Session 1 · What you're actually working with · Personal AI Reliability Map
Session 2 · How to direct it well · Working Prompt Specification
Session 3 · How to evaluate what it gives you · Verification Checklist
Session 4 · AI in your classroom: assessment and policy · Redesigned Assessment + AI Policy Statement