I've spent years watching capable professionals use AI tools confidently and badly — not because they weren't intelligent, but because no one had given them a framework for understanding what they were actually working with. They could generate outputs. They couldn't evaluate them. They could prompt. They couldn't specify. They could iterate. They couldn't tell when they had stopped thinking and started deferring.
That gap is what this workshop series closes.
AI fluency is not knowing which tools to use. It is understanding the cognitive nature of the entity you are collaborating with — what it does at superhuman level, what it cannot do at all, and what remains irreducibly yours. The faculty member who uses AI to generate a syllabus policy without understanding why AI outputs fail is not using AI well. They are using it optimistically. Those are different practices with very different outcomes.
This series is built around one organizing principle: you are not the user of this tool — you are the supervisor of it. Every session builds one layer of that supervisory capacity, applied immediately to real work in your own courses.
The series is demanding in a specific way. It will not ask you to learn a new platform or memorize prompt templates. It will ask you to make judgment calls about AI outputs, apply those judgments to your own teaching, and leave each session with a working document you can use next week. That is more durable than a demonstration. It is also harder.
What you will leave with: four working artifacts built from real courses you are teaching, and the framework for continuing to develop your practice after the series ends.
Here is what to do before Session 1: bring a laptop and a course you are currently teaching. Nothing else is required.
We are in the early years of the most powerful cognitive tools ever built. AI systems are superhuman at pattern recognition, fluent drafting, summarization, and syntactic correctness. They are genuinely poor at distinguishing the true from the merely plausible, formulating the right question, auditing their own outputs for accuracy, and knowing when not to proceed.
The Irreducibly Human series develops exactly those capacities — the forms of reasoning that AI tools require humans to supply.
This workshop — AI in the Classroom — is the series entry point for faculty practitioners. It builds the foundational framework for AI collaboration across four sessions, develops the supervisory practices that make AI a professional strength rather than a professional liability, and applies that framework directly to teaching, assessment design, and classroom policy.
Aligned with the Schofield & Zhou AI Literacy Framework, the series moves participants from foundational understanding to applied classroom practice. Faculty will not study AI — they will practice it.
| Field | Value |
|---|---|
| Workshop Title | AI in the Classroom: A Practical Workshop Series for Business School Faculty |
| Sessions | 4 |
| Schedule | Once weekly, April 2026 |
| Duration per session | 90–120 minutes |
| Mode of Delivery | In-person |
| Department | Executive Education |
Day and time: [TBD] · Location: [Building, Room]
| Role | Name | Contact | Response time |
|---|---|---|---|
| Lead Instructor | [TBD] | [TBD] | Within 48 hours on weekdays |
| DMSB Co-Instructor | [TBD] | [TBD] | Within 48 hours on weekdays |
Preferred contact: email for logistics. Office hours for anything that takes more than two sentences to answer well.
All session materials, artifact templates, and reference documents will be distributed via [platform TBD] before each session. Sessions will be recorded for participants who wish to review material covered. Recordings are available to enrolled participants only.
What this workshop does not assume: Prior coursework in AI, machine learning, or data science. Technical background of any kind. Prior prompting experience or tool fluency.
By the end of this workshop series, participants will be able to judge which of their tasks AI can be trusted with, direct it with instructions precise enough to produce usable output, verify what it returns before relying on it, and redesign assessments and syllabus policy for classrooms where students already have these tools.

Sessions use whichever of the following tools participants already have access to:
| Tool | URL |
|---|---|
| Claude | claude.ai |
| ChatGPT | chatgpt.com |
| Microsoft Copilot | copilot.microsoft.com |
Participants may use any combination. No single tool is required.
Google Docs or equivalent — for artifact production during workshop segments.
Every session follows the same three-part structure:
1. A real failure case. Not a definition. A specific situation where a faculty member or professional used AI and something went wrong, or nearly wrong. Participants encounter the problem before they encounter the framework.
2. The concept that explains the failure, demonstrated live using tools participants already have. Hands-on in the room.
3. Application to a real course. Participants apply the concept to their own teaching and leave with a completed artifact. No hypotheticals. No homework. Built in the room.
Session 1 · What you're actually working with

A faculty member submits a course proposal with three AI-generated citations. Two of the studies do not exist. The journals are real. The authors are real. The papers are not. She had no framework for understanding why it happened and no verification habit that would have caught it. This case is not unusual. It is the default failure mode of a confident tool used without a mental model.
How AI systems generate output: pattern completion, not knowledge retrieval. The distinction matters because it predicts the failure type before it occurs. AI does not know things — it completes patterns based on training data. The more plausible something sounds, the more confidently it will say it, whether or not it is true. This is not a bug being fixed. It is the architecture.
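A schematic of the mechanism may help; the bracketed items below are placeholders, not a real exchange:

```
Prompt:  "Give me three peer-reviewed studies supporting [claim]."

What the model does: complete the shape of a citation.
  [plausible authors] ([plausible year]). "[plausible title]."
  [real journal name], [volume]([issue]), [pages].

What the model does not do: look anything up.
The output is citation-shaped text, fluent and correctly
formatted whether or not the study exists.
```

The fabricated citations in the failure case above are this mechanism working exactly as designed.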
Why confident expression is structurally uncorrelated with accuracy. The reliability question: which tasks are safe to delegate, which require verification, which should never leave your hands.
Live demonstration: the same question asked three ways, showing how framing changes output — and a live hallucination in a business domain participants recognize.
Participants map three tasks from their own teaching onto a simple framework: Use as-is / Verify before using / Never delegate. Pairs compare and discuss. Full group debrief on what surprised them.
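One illustrative slice of a finished map, with placeholder tasks; every discipline's map will look different:

```
Use as-is:            reformatting a reading list; first drafts of
                      routine announcements you will read anyway
Verify before using:  summaries of research; anything containing
                      names, numbers, dates, or citations
Never delegate:       grades; feedback that depends on knowing the
                      student; any judgment you must own as yours
```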
Session 2 · How to direct it well

A Strategy faculty member asks AI to generate a case discussion for her MBA class. The output is smooth, generic, uses no real companies, and could apply to any industry in any decade. She concludes AI isn't useful for her teaching. The problem wasn't the tool. It was the instruction. The tool did exactly what it was asked to do, which was not what she needed.
The five components of an instruction that produces usable output.
Live demonstration: one weak prompt rebuilt through all five components. Output comparison shown live. Business school applications demonstrated: case discussion questions for a specific company and concept, assignment rubrics for specific learning outcomes, exam questions at calibrated difficulty levels, research summaries for course prep.
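To make the contrast concrete, here is a hedged sketch of one weak prompt and a rebuilt version. The bracketed labels are illustrative stand-ins, not the session's official component names, and the company and concept are placeholders:

```
Weak:     "Write discussion questions about pricing strategy."

Rebuilt:  "You are helping prepare an 80-minute MBA case       [role]
           discussion. Write six discussion questions on        [task]
           [company]'s [year] pricing decision, applying        [context]
           [course concept]. Order them from warm-up to         [format]
           synthesis, and exclude anything answerable           [constraints]
           straight from the case exhibits."
```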
Each participant writes a complete instruction for one real task they currently do manually. Partner reviews against the five components and identifies what is missing. Participant revises. Both run their prompts live and evaluate the output.
Session 3 · How to evaluate what it gives you

An Organizational Behavior faculty member uses AI to summarize current research on psychological safety for a lecture update. The summary is well-organized, smooth, and cites three papers that have been substantially challenged since publication. She uses it without checking. A student who has read the field more recently raises it in class. The faculty member had no verification practice. She had no framework for knowing which layer of the output to check.
Four layers every AI output contains, and which layer is most likely to fail in which disciplines.
Calibrating verification depth: how much checking does this output actually need? The three variables — stakes, reversibility, and domain reliability — applied to real business school use cases.
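A rough illustration of how the three variables combine, using hypothetical cases:

```
Citations in a course proposal:  high stakes, hard to reverse once
                                 submitted, low domain reliability
                                 -> verify every claim at the source

Wording for a routine course     low stakes, trivially reversible,
announcement:                    high reliability for prose style
                                 -> a quick read-through is enough
```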
Using AI to interrogate AI: the adversarial prompt. How to ask the tool to find the weaknesses in what it just told you.
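One possible wording of such a prompt; treat it as a starting point rather than a fixed template:

```
"Act as a skeptical reviewer of the answer you just gave me.
List every claim that could be wrong, every source that might
not exist, and every place where you filled a gap with a
plausible-sounding guess. Do not defend the answer."
```

The critique is produced by the same pattern-completion machinery as the original answer, so it narrows the search for weaknesses; it does not certify their absence.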
Live demonstration: a piece of AI-generated course content run through all four layers. The adversarial prompt applied. What it found that ordinary review missed.
Participants bring or generate one piece of AI output from their own domain. Apply the four-layer check. Identify which layer is highest-risk in their specific discipline and why.
Session 4 · AI in your classroom: assessment and policy

A Finance faculty member bans AI in her course. Her students use it anyway, undisclosed. Her assessments are unchanged. She is grading AI-generated work without knowing it, and her students are not developing the skills she believes she is developing. The policy failed not because students are dishonest but because it was designed for a world that no longer exists. A prohibition without redesign is not a policy. It is a wish.
There is no AI-proof assessment. There is only AI-aware assessment design. The right question is not "did they use AI?" It is "what did they have to do that AI cannot do for them?"
What AI cannot do for your students.
Assessment redesign principles for a business school context (a before-and-after sketch follows the list):

- Shift from product to process: require the thinking, not just the answer.
- Add a defense or reflection component.
- Require domain specificity AI cannot fake: their company, their data, their decision.
- Make AI use explicit, disclosed, and part of the assessment.
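A compressed before-and-after sketch of these principles in action; the assignment details are invented for illustration:

```
Before:  "Analyze the strategic position of a company of your
          choice using the frameworks from class."
          (AI produces a passable submission in one prompt)

After:   "Analyze the strategic position of an organization you
          have worked in or interviewed someone at; attach your
          interview notes or a one-page account of your access.
          You may use AI for structure and drafting, and must
          disclose how. Be prepared to defend one recommendation
          in a five-minute cold call."
```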
Syllabus policy language that works: not prohibition, not open permission, but specification. What AI may be used for, what it may not be used for, and what the student must supply regardless of tool use.
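A hedged sample of what specification-style language can look like; every course will calibrate the three lists differently:

```
In this course you may use AI tools to brainstorm, outline, and
critique your drafts. You may not submit AI-generated analysis,
calculations, or citations as your own work. Regardless of tool
use, you must supply the framing of the problem, sources you
have verified yourself, and a brief note disclosing what you
used AI for. Undisclosed use is an integrity violation;
disclosed use within these bounds is not penalized.
```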
Live demonstration: one existing assignment from a volunteer participant redesigned in the room. Before and after shown.
Each participant identifies their highest-risk assessment (the one most vulnerable to AI substitution), redesigns one component using the session principles, and drafts one paragraph of AI policy language for their syllabus.
Participants complete the series with four working documents:
| Session | Artifact | What it is |
|---|---|---|
| 1 | Personal AI Reliability Map | Which tasks to delegate, verify, or retain — specific to their discipline |
| 2 | Working Prompt Specification | A tested prompt for one real course task, ready to use |
| 3 | Verification Checklist | A discipline-specific protocol for evaluating AI output |
| 4 | Redesigned Assessment + AI Policy Statement | A revised assignment and syllabus policy statement |
The series at a glance:

| Session | Topic | Artifact |
|---|---|---|
| 1 | What you're actually working with | Personal AI Reliability Map |
| 2 | How to direct it well | Working Prompt Specification |
| 3 | How to evaluate what it gives you | Verification Checklist |
| 4 | AI in your classroom: assessment and policy | Redesigned Assessment + AI Policy Statement |