Irreducibly Human Series · Series Entry Point · Northeastern University · College of Engineering
Botspeak: the nine pillars of AI fluency
What AI can and can't do — the complete framework for AI collaboration
Version 1.0 · [Distribution Date] · Reviewed by Dev the Dev
Section 1
Welcome
I've spent years watching capable professionals use AI tools confidently and badly — not because they weren't intelligent, but because no one had ever given them a framework for understanding what they were actually working with. They could generate outputs. They couldn't evaluate them. They could prompt. They couldn't specify. They could iterate. They couldn't tell when they'd stopped thinking and started deferring.
The failure mode I've seen most consistently isn't a broken tool. It's a competent professional who has handed a decision to a machine without knowing it — and produced work that looks polished, sounds authoritative, and is wrong in ways neither they nor the machine can see.
That gap is what this course closes.
AI fluency is not knowing which tools to use. It is understanding the cognitive nature of the entity you are collaborating with — what it does at superhuman level, what it cannot do at all, and what remains irreducibly yours. The gap between those two engineers in Chapter 1 — the one who sees the problem and the one who cannot — is not a gap in technical skill. It is a gap in fluency. One of them understands what they are working with. The other is using a very sophisticated autocomplete.
This course gives you the framework. The Five Modes — Specification, Delegation, Conversation, Discernment, Diligence — are not prompting tips. They are a complete architecture for AI collaboration that works across every tool you will encounter, including ones that don't exist yet. The Nine Pillars are the cognitive capacities the framework develops. The tier taxonomy is the map.
The course is demanding in a specific way. It will not ask you to memorize tool features or reproduce prompting templates. It will ask you to make judgment calls, defend them, and — most importantly — name the judgment that was yours and could not have been the machine's. That is harder than a technical course. It is more durable.
What you will leave with: a complete AI fluency framework applied to real work in your own domain, in writing, with explicit reasoning you can defend anywhere — including in front of the person who commissioned the work.
Here is what to do before we meet: read Chapter 1 of Irreducibly Human: Botspeak. Come to Session A with the fabricated citation case in your head. You don't need to know why it happened yet. You just need to feel how a professional with no framework for AI failure looks exactly like one who has been competent and unlucky.
— Nik Bear Brown | ni.brown@neu.edu
Section 2
The Irreducibly Human Series
We are in the early years of the most powerful cognitive tools ever built. AI systems are superhuman at pattern recognition, fact retrieval, arithmetic, and syntactic correctness. They are genuinely poor at formulating the right question, auditing their own outputs for plausibility, reasoning causally about what they are measuring, and knowing when not to proceed.
The Irreducibly Human series develops exactly those capacities — the forms of reasoning that AI tools require humans to supply, and that your competitors who only learned to use the tools will not have.
This course — Botspeak — is the series entry point. It builds the complete architecture for AI collaboration across all five modes and nine pillars, teaches you to locate any cognitive task on the tier taxonomy, and develops the supervisory and metacognitive practices that make AI collaboration a professional strength rather than a professional liability.
The companion courses go deeper on specific layers. Conducting AI builds the full Tier 4 supervisory toolkit — problem formulation, plausibility auditing, interpretive judgment, and override decision-making — for engineers who deploy AI systems. Causal Reasoning builds Tier 5: the ability to construct a defensible model of what causes what in your domain, and to know what that model can and cannot support. The three courses can be taken in any order; each is self-contained while pointing toward the others.
Section 3
Course Information
Course Identifiers
| Field | Value |
| Course Title | Irreducibly Human: What AI Can and Can't Do — Botspeak: The Nine Pillars of AI Fluency |
| Course Number | [XXXX 5XXX — assigned at CourseLeaf submission] |
| Credit Hours | 4 |
| Term | Fall [Year] |
| Mode of Delivery | In-person |
| Components | Lecture/Seminar (1× weekly) + TA-led Mode Lab (1× weekly in-class lab) |
| Department | College of Engineering |
Meeting Information
Lecture/Seminar: [TBD] · Location: [Building, Room]
Mode Lab (TA-led): [TBD] · Location: [Building, Room]
The Mode Lab is a required course component, not an optional recitation. It is where concepts become skills. Chapter 6's adversarial conversation exercise cannot be performed by reading alone — it requires supervised practice with annotated transcripts and peer feedback. Missing the lab is not equivalent to missing a lecture — it is missing the part of the course where learning consolidates.
Instructor
| Field | Value |
| Name | Nik Bear Brown |
| Email | ni.brown@neu.edu |
| Response time | Within 48 hours on weekdays. Put URGENT in subject line for time-sensitive questions. |
| Office / Zoom | [TBD] |
| Student hours | [Days, times, location] — booking link TBD |
| Preferred contact | Email for logistics. Student hours for anything that takes more than two sentences to answer well. |
I hold student hours for you — not only for students with emergencies. Come because you want to think through your capstone research question before committing to it, because a Mode Lab exercise surfaced something you want to unpack, because you want to understand where this field is going, or simply because you want to know what fluent AI collaboration looks like in professional practice. The most productive conversations I have with students happen outside scheduled sessions.
Teaching Assistant
| Field | Value |
| Name | TBA |
| Email | TBA |
| Mode Lab hours | TBA |
The TA runs the weekly Mode Lab — designing exercises, facilitating adversarial conversation sessions, running peer critique, and returning written feedback on lab submissions. For questions about the Five Modes in practice, prompt pattern application, and weekly exercise work, the TA is your first resource. Tool and platform questions go to the TA first; if unresolved, the TA will forward to the professor.
Prerequisites
Official prerequisites: Graduate standing in Engineering or related field (exact CourseLeaf string TBD)
What this course assumes you know
You have access to a laptop and at least one AI tool (Claude, ChatGPT, Gemini, or equivalent). You have used an AI tool at least once. Nothing else is required.
What this course does not assume
Prior coursework in AI, machine learning, computer science, or data science. No technical background in AI systems. No philosophy or ethics background. No prior prompting experience. This course is designed to be accessible to any graduate-level professional who uses or will use AI in their work.
A note for students with strong technical AI backgrounds
Students who arrive most technically fluent in AI systems sometimes find the early weeks the most disorienting. That disorientation is the course working as intended. Fluency is not a harder version of technical literacy — it is a different cognitive operation. Students who treat the Five Modes as prompting optimization will produce technically correct exercises and miss the course. Students who approach the framework as genuinely new terrain — regardless of their technical background — will get the most from it.
If you do not have access to an AI tool, contact the instructor before the first week — options are available at no cost.
Section 4
Learning Outcomes
By the end of this course, students will be able to:
- Explain how AI systems generate outputs through pattern completion rather than knowledge retrieval, and predict failure types from this mechanism
- Apply proportional skepticism to AI outputs, calibrating verification depth to stakes, reliability zone, and reversibility
- Locate any cognitive task on the seven-tier Irreducibly Human taxonomy, identifying what the AI can perform and what the human must supply
- Write a complete five-component specification — intent, constraints, success criteria, exclusions, output format — and predict its failure modes before any prompt is written
- Produce a delegation map for a complex task with explicit tier-location and boundary rationale for each component
- Conduct adversarial AI conversation using at least two adversarial strategies, producing annotated transcripts that demonstrate intellectual ownership
- Apply a tiered verification protocol to an AI output, returning a structured verdict with domain-specific findings by verification layer
- Design a Diligence protocol for a deployed AI workflow, specifying monitoring cadence, drift indicators, escalation conditions, and shutdown criteria
- Produce a trust calibration map for a multi-step AI-assisted workflow, identifying the highest-risk compounding step
- Apply the PARU cycle diagnostic to classify an AI system, identify missing elements, and evaluate the Human Decision Node design
- Apply adversarial validation to an AI output, identifying distributional shift, framing failure, or assumption invisibility not detectable by ordinary review
- Use AI-assisted rapid prototyping to develop a defensible research question with documented iteration log and explicit identification of irreducibly human judgment calls
- Execute original research in AI fluency, demonstrating the Five Modes under real conditions and naming, specifically and honestly, the judgment calls that required human values, domain knowledge, or accountability
Section 5
Required Materials
Textbook
| Field | Value |
| Title | Irreducibly Human: What AI Can and Can't Do — Botspeak: The Nine Pillars of AI Fluency |
| Author | Nik Bear Brown |
| Publisher | Bear Brown & Company / Kindle Direct Publishing, 2026 |
| Availability | [Amazon Kindle / print link — TBD at publication] |
| Cost | [TBD] |
| Edition | First edition. No prior edition exists. |
Supplementary Readings
Distributed via Canvas throughout the semester at no cost. Required supplementary readings are marked [Required] in the weekly schedule; optional readings are marked [Recommended] and are genuinely optional.
Required Technology
AI tools (all free — no purchase required)
- Claude (Anthropic)
- ChatGPT (OpenAI)
- Gemini (Google)
Students may use any combination. Appendices A, B, and C of the textbook provide tool-specific guidance. You are not required to use all three — you are required to understand that the framework applies across all of them.
Documentation tools (free, browser-based)
- Google Docs or equivalent — for iteration logs, specification documents, and capstone deliverables
Course platforms
- Canvas: supplementary readings, assignment submission, and announcements
Section 6
Assessment and Grading
Point Summary
| Assessment | Points | Quality/Portfolio |
| Reading Responses (5 × 30 pts) | 150 | ✓ 20 pts each |
| Weekly Mode Exercises (8 × 25 pts, drop lowest of 9) | 200 | ✓ 20 pts each |
| Mode Lab Participation | 100 | ✓ 20 pts component |
| Midterm | 100 | — |
| Final Project — Research Protocol Checkpoint | 100 | ✓ 20 pts |
| Final Project — Peer Review Checkpoint | 100 | ✓ 20 pts |
| Final Project — Final Capstone Submission | 250 | ✓ 20 pts |
| Total | 1000 | |
AI-Based Grading Approach
800+ points — relative scale
| Top 25% | A |
| Next 25% | A– |
| Next 25% | B+ |
| Final 25% | B |
Below 800 — absolute scale
| 780–799 | C+ |
| 730–779 | C |
| 700–729 | C– |
| 600–699 | D |
| Below 600 | F |
Students below 800 points cannot earn a grade higher than B–, even if the relative curve would otherwise place them higher. The instructor reserves the right to make minor adjustments for fairness.
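Read as code, the hybrid scale above works like a small decision rule. The sketch below is illustrative only: the function name, the percentile computation, and the class-distribution argument are assumptions for the sake of the example; the syllabus tables remain authoritative.

```python
def letter_grade(points: int, scores_800_plus: list[int]) -> str:
    """Illustrative sketch of the hybrid grading scheme.

    800+ points: relative scale (quartile rank within the 800+ pool).
    Below 800: absolute scale from the syllabus table.
    The percentile method here is an assumption, not official policy.
    """
    if points >= 800:
        # Fraction of the 800+ pool scoring strictly below this student.
        below = sum(1 for s in scores_800_plus if s < points)
        pct = below / len(scores_800_plus)
        if pct >= 0.75:
            return "A"      # top 25%
        if pct >= 0.50:
            return "A-"     # next 25%
        if pct >= 0.25:
            return "B+"     # next 25%
        return "B"          # final 25%
    # Absolute scale below the 800-point gate.
    if points >= 780:
        return "C+"
    if points >= 730:
        return "C"
    if points >= 700:
        return "C-"
    if points >= 600:
        return "D"
    return "F"
```

Note the gate: a student at 799 points falls to the absolute scale and receives a C+ regardless of class rank, which is exactly what the cap in the paragraph above enforces.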
Quality/Portfolio Score (20 points — on all qualifying assignments)
Every assignment carrying the Quality/Portfolio component is evaluated on a relative 20-point scale comparing your work to peers, emphasizing depth of AI fluency reasoning, quality of domain judgment, and evidence that the irreducibly human judgment calls were made by you — not delegated to a tool.
| Percentile Band | Score |
| Bottom 25% | 5 pts |
| 26–50th percentile | 10 pts |
| 51–75th percentile | 15 pts |
| Top 25% | 20 pts |
AI Use in Assignments
You are encouraged to use generative AI tools on every assignment. Disclosure is required. Undisclosed AI use is an academic integrity violation. Disclosed AI use is not.
Every submission must include an AI Use Disclosure block:
AI USE DISCLOSURE
Tool(s) used:
Portions assisted:
How used:
What I changed:
What the AI could not do: [name at least one judgment call that required
your values, domain knowledge, or accountability — this field is not optional]
The last field is the Irreducibly Human declaration. A disclosure that cannot name one thing the AI could not do has not demonstrated that the student performed the irreducibly human reasoning layer. This is not a formality. It is the assessment.
Drop Policy
The lowest-scoring Mode Exercise is dropped. Eight of nine exercises count toward the final grade. This absorbs one week where the concept didn't click. It does not absorb a pattern of non-engagement.
Section 7
Course Schedule
The schedule maps each week to a chapter in Irreducibly Human: Botspeak. Read the assigned chapter before Session A. Come to Session A with the case in your head. Come to Session B ready to use the concept. Come to the Mode Lab ready to apply it.
Reading time per chapter: approximately 45–75 minutes · ⚑ = graded deliverable due · ★ = transition week
Week 1 · Ch. 1: What You're Actually Talking To
By the end of this week: Explain the difference between pattern completion and knowledge retrieval — using an AI output from your own professional domain as the example.
Session A: The fabricated citation. A professional submits a report with an AI-generated citation to a study that does not exist. The client's researcher finds it. The professional has no framework for why it happened. No definitions yet.
Session B: Pattern prediction vs. knowledge retrieval. Why confidence of expression is structurally uncorrelated with accuracy of content. Training distribution and reliability zones — center vs. edge.
Mode Lab: Reliability zone mapping — given five AI outputs, classify each by reliability zone and justify the distinction. (Ungraded — prepares for Exercise #1.)
⚑ Reading Response #1 (30 pts)
Describe an AI output you have used or encountered in your professional or academic domain. Identify the claim in that output you would be least willing to use without verification. What reliability zone does it occupy — and what would you need to check? Due before Session A, Week 2.
Week 2 · Ch. 2: The Confidence Trap
By the end of this week: Apply the proportional skepticism protocol to AI outputs at three different stakes levels — using cases from your own domain.
Session A: A second high-stakes failure case in a different domain. Two cases across two chapters establish the pattern as structural, not incidental. Hallucination, confabulation, and automation bias — precise definitions that distinguish the three.
Session B: Why confidence is a style learned from training data, not an accuracy signal. Proportional skepticism — stakes × reliability zone × reversibility. Symmetric failure modes: over-trust and under-trust.
Mode Lab: Proportional skepticism exercise — three scenarios at different stakes levels, apply the protocol to each. (Ungraded — prepares for Exercise #1.)
⚑ Reading Response #2 (30 pts)
Return to the output from RR1. Apply the full proportional skepticism protocol: state the stakes level, the reliability zone, the reversibility of the use case, and the verification depth the protocol recommends. Then state whether you followed that protocol when you originally used the output. Due before Session A, Week 3.
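For students who think in code, the Week 2 protocol can be pictured as a small lookup: verification depth scales with stakes, distance from the center of the training distribution, and irreversibility. This is a hypothetical sketch for intuition only; the 0–2 scales, the additive scoring, and the intermediate tier labels are illustrative assumptions, not the book's definitions.

```python
def verification_depth(stakes: int, zone_edge: int, irreversible: int) -> str:
    """Hypothetical sketch of proportional skepticism.

    Each input is 0 (low), 1 (medium), or 2 (high):
      stakes:       cost of acting on a wrong output
      zone_edge:    how far the task sits from the training-distribution center
      irreversible: how hard the action is to undo

    The additive score and cutoffs are illustrative, not canonical.
    """
    score = stakes + zone_edge + irreversible  # ranges 0..6
    if score <= 1:
        return "Tier 0: scan"          # quick plausibility read
    if score <= 3:
        return "Tier 1: spot-check"    # verify key claims against sources
    if score <= 5:
        return "Tier 2: full review"   # check facts, reasoning, framing, omission
    return "Tier 3: adversarial"       # actively try to break the output
```

The design point is the protocol's, not the code's: verification effort is not a fixed habit but a function of the situation, and a low-stakes reversible task legitimately gets a lighter check than a high-stakes irreversible one.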
Week 3 · Ch. 3: The Map and the Language
By the end of this week: Map a current AI collaboration task onto the full Irreducibly Human framework — locating it on the tier taxonomy, identifying the relevant modes, and naming the irreducibly human judgment it requires.
Session A: Move 37 — AlphaGo, March 10, 2016. Fan Hui goes silent. "It's not a human move." A different cognitive architecture searches different territory. The Irreducibly Human tier taxonomy introduced.
Session B: The Five Modes as a pre/during/post temporal architecture. The literacy/fluency distinction. "Botspeak" named as a language, not a toolset. The explicit bridge to Part II: you are about to stop studying Botspeak and start speaking it.
Mode Lab: Mode Exercise #1 — Map a current AI collaboration task onto the full framework: tier location, modes required, irreducibly human judgment named.
⚑ Mode Exercise #1 (25 pts)
Choose a real AI collaboration task from your work or study. Map it fully: (1) locate it on the tier taxonomy with justification; (2) identify which of the Five Modes the task requires and which are currently absent from your practice; (3) name the specific irreducibly human judgment the task requires that AI cannot make on your behalf. Due before Mode Lab, Week 4.
Week 4 · Ch. 4: Specification
By the end of this week: Write a complete five-component specification for a described task and predict its failure modes before any prompt is written.
Session A: API documentation failure — AI documents a slightly different API than was built because ambiguities in the specification were resolved using training distribution patterns. Every ambiguity in a specification is a decision the AI makes without the human.
Session B: The five-component specification framework: intent, constraints, success criteria, exclusions, output format. The full prompt pattern toolkit — Recipe, Persona, Template, Audience Persona, Semantic Filter, Fact Check List, Constraint. Specification for agentic contexts.
Mode Lab: Mode Exercise #2 — five-component specification exercise.
⚑ Mode Exercise #2 (25 pts)
Write a complete five-component specification for the structural engineering load analysis case. For each component: state it explicitly and name the failure mode its absence would produce. Select and apply at least two prompt patterns. Audit the final specification for autonomous decision risk. Due before Mode Lab, Week 5.
⚑ Reading Response #3 (30 pts)
What is the most important specification decision for your capstone project domain? Write a one-paragraph intent statement for a task in your domain. Then identify the single constraint whose absence would most likely produce a wrong-but-plausible output. Due before Session A, Week 5.
Week 5 · Ch. 5: Delegation
By the end of this week: Produce a delegation map for a complex task — with explicit tier-location and boundary rationale for every component — that addresses the performance paradox.
Session A: Strategy consultant delegates all research synthesis to AI. The output is comprehensive. She misses the critical competitive dynamic she had the domain intuition to catch — but didn't, because she wasn't in the process.
Session B: The four delegation questions: tier location, stakes/reversibility, context only the human has, capability being preserved. The performance paradox — better short-term outputs, long-term capability degradation. Cognitive offloading: when it amplifies, when it atrophies.
Mode Lab: Mode Exercise #3 — delegation map for the civil engineering project management case.
⚑ Mode Exercise #3 (25 pts)
Produce a delegation map for the civil engineering project management risk assessment case. For each task component: state the tier location, the delegation decision (delegate / human-in-the-loop / do not delegate), and the explicit boundary rationale. Identify where the performance paradox risk is highest and propose a mitigation. Due before Mode Lab, Week 6.
Week 6 · Ch. 6: Conversation
By the end of this week: Conduct an adversarial AI conversation using at least two adversarial strategies — with annotated transcript showing before and after intellectual positions.
Session A: Policy researcher develops a polished, internally consistent argument over 45 minutes of AI conversation. The strongest counterargument in her field never appeared. She never asked. The peer reviewer found it immediately. Ch. 6 reading note: read through "Sycophantic Drift" before Session A. Read "The Four Adversarial Strategies" after.
Session B: Four adversarial strategies: steelman the opposition, edge case probe, assumption surface, devil's advocate role assignment. The Flipped Interaction Pattern. Closure criteria. The ownership test.
Mode Lab: Mode Exercise #4 — supervised adversarial conversation workshop. Annotated transcript required.
⚠ Mandatory supervised Mode Lab session this week. Reading alone is insufficient. The adversarial conversation exercise must be performed in supervised practice with peer feedback. A course that delivers Chapter 6 as reading-only will produce students who can describe Conversation mode but cannot use it.
⚑ Mode Exercise #4 (25 pts)
Conduct an adversarial conversation with an AI tool on a topic in your domain. Apply at least two adversarial strategies. Submit: (1) annotated transcript with strategy labels; (2) before/after comparison of intellectual position; (3) ownership test applied to the final output; (4) one paragraph on what the adversarial conversation found that a standard conversation would have missed. Due before Mode Lab, Week 7.
Week 7 · Ch. 7: Discernment
By the end of this week: Apply a tiered verification protocol to an AI output — returning a structured verdict with domain-specific findings by verification layer.
Session A: The pharmacist at the Human Decision Node — 14-medication patient, compromised renal function, AI interaction check that returned "no significant interactions." Three options: accept, reject, or discern. This chapter teaches Option 3. Human-in-the-loop vs. human-on-the-loop verification architectures.
Session B: The four verification layers: fact, reasoning, framing, omission. Tiered verification protocol: Tier 0 (scan) through Tier 3 (adversarial). Calibration variables: stakes × reliability zone × reversibility. Source triangulation as a practiced skill.
Mode Lab: Mode Exercise #5 — four-layer verification on the environmental engineering contamination risk case.
⚠ Densest chapter in the course. Full week recommended. Chapters 9 and 11 both build directly on this chapter's foundation.
⚑ Mode Exercise #5 (25 pts)
Apply the tiered verification protocol to the environmental engineering contamination risk AI output. For each of the four verification layers: state what you checked, how you checked it, and what you found. Return a structured verdict: trust / modify / reject, with explicit reasoning and any domain-specific findings. Due before Mode Lab, Week 8.
Week 8 · Ch. 8: Diligence
By the end of this week: Design a Diligence protocol for a described AI deployment — specifying monitoring cadence, drift indicators, escalation conditions, and shutdown criteria.
Session A: The Amazon recruiting case — not why the system was biased, but why no one caught it for a year. The accountability chain was intact on paper and absent in practice. Intellectual ownership as a professional obligation. Three forms of AI degradation: model drift, context drift, use case drift.
Session B: Three ways accountability gets obscured: process laundering, tool diffusion, verification gap. The four-component Diligence protocol. The LLM memory trap.
Mode Lab: Mode Exercise #6 — Diligence protocol for the structural health monitoring case.
⚑ Mode Exercise #6 (25 pts)
Design a complete Diligence protocol for the structural health monitoring sensor upgrade case. Required: monitoring cadence with rationale; at least three drift indicators with detection method; escalation conditions; shutdown criteria; analysis of where in the Amazon case the Diligence protocol failed and which component would have caught it. Due before Mode Lab, Week 9.
⚑ Reading Response #4 (30 pts)
Describe one AI-assisted workflow in your domain — existing, planned, or hypothetical — where Diligence has not been designed in. Name the specific drift type most likely to occur and what you would need to detect it. Due before Session A, Week 9.
⚠ Midterm (Week 8/9 — flex) · 100 pts
Multi-mode case analysis. A novel AI collaboration scenario is provided with no annotation about which modes apply. Demonstrate all five modes as practices: write the specification this situation required; identify the delegation decisions and their rationale; name the adversarial conversation move most needed; apply the verification tier appropriate to the stakes; design the one Diligence component most critical for this deployment.
No mode recitation. No framework description. Application only. This is the Act Two → Act Three gate.
Week 9 · Ch. 9
By the end of this week: Produce a trust calibration map for a multi-step AI-assisted workflow — identifying the highest-risk compounding step and the trust level appropriate to each stage.
Session A: Financial model with four AI-assisted stages. A rounding convention in Stage 2 compounds through Stage 3 into a trend line wrong by 7% — above the committee's 5% decision threshold. No single step failed enough to trigger the verification protocol. System-trust vs. output-trust.
Session B: Error compounding — how trust miscalibrations at earlier steps constrain the reliability ceiling of later steps. The trust cascade. Symmetric failure modes at the system level.
Mode Lab: Mode Exercise #7 — trust calibration map for the environmental impact assessment case.
⚑ Mode Exercise #7 (25 pts)
Produce a trust calibration map for the six-step environmental impact assessment workflow. For each step: state the appropriate trust level, the calibration rationale, and the compounding risk if miscalibrated. Identify the highest-risk step and explain why miscalibration there propagates furthest. Due before Mode Lab, Week 10.
⚑ Reading Response #5 (30 pts)
Describe a multi-step AI-assisted workflow in your domain. Identify the step where a miscalibration would compound most severely — and describe what the failure would look like downstream before anyone caught it. Due before Session A, Week 10.
Week 10 · Ch. 10
By the end of this week: Apply the three diagnostic questions to a described AI system — classifying its architecture, evaluating its Human Decision Node, and identifying what genuine oversight requires.
Session A: Move 37 revisited — not "how remarkable" but "how did it find it?" The PARU cycle as the answer. Then immediately: Amazon's recruiting tool. Same surface behavior, structurally different architectures, fundamentally different oversight requirements.
Session B: The PARU cycle: Perceive, Act, Reward, Update. The reward signal problem. The epistemic invisible constraint. The Human Decision Node — genuine judgment vs. rubber-stamp approval. Five domain examples. The three diagnostic questions.
Mode Lab: Mode Exercise #8 — PARU diagnostic and node redesign for the infrastructure maintenance scheduling case.
⚑ Mode Exercise #8 (25 pts)
Apply the three diagnostic questions to the infrastructure maintenance scheduling AI system: (1) Is the PARU cycle complete? (2) Is the reward signal measuring what we actually care about? (3) Is the Human Decision Node designed for genuine judgment? Then: analyze the Amazon recruiting case using the PARU cycle and redesign the Human Decision Node with a specific proposal. Due before Mode Lab, Week 11.
Week 11 · Ch. 11
By the end of this week: Apply adversarial validation to an AI output — identifying at least one failure that ordinary verification is not designed to find.
Session A: Phase II clinical trial succeeds; Phase III fails. The Phase II analysis was technically correct for its data. The patient population was systematically unrepresentative. No facts were hallucinated. The framing failure was invisible to ordinary verification. The three failure modes adversarial validation targets.
Session B: The four adversarial moves: assumption surface, distributional probe, framing challenge, counterfactual stress test. Plausibility auditing directed where domain knowledge says it matters most. Professional accountability for adversarial verification in high-stakes contexts.
Mode Lab: Mode Exercise #9 — adversarial validation on the urban planning intersection safety case.
⚑ Mode Exercise #9 (25 pts)
Apply the full adversarial validation protocol to the urban planning intersection safety AI output. Apply all four adversarial moves. Produce a formal report: for each move, state what you probed, what you found, and whether it constitutes a distributional shift, framing failure, or assumption invisibility problem. Return a structured verdict with explicit reasoning. Due before Mode Lab, Week 12.
Week 12 · Ch. 12
By the end of this week: Use AI-assisted rapid prototyping to develop a defensible research question — with documented iteration log and explicit identification of the judgment calls that required your values, domain knowledge, or accountability.
Session A: Graduate student develops a beautifully structured research proposal over three weeks of AI-assisted work. Her advisor asks one question: "Why is this gap worth filling?" She cannot answer. The AI found the gap efficiently. It had no basis for evaluating whether the gap was worth filling.
Session B: The gap vs. worth-pursuing distinction. Three rapid prototyping principles. AI-assisted literature synthesis — uses and limits. The iteration log as an accountability document. The four capstone tracks. Go/no-go gate for research protocols.
Mode Lab: Research protocol development session. TA provides written feedback.
⚑ Final Project — Research Protocol Checkpoint (100 pts)
Your capstone research protocol. Required: (1) research question stated precisely; (2) gap identification with explicit worth-pursuing justification — why answering this question would advance understanding in your domain; (3) iteration log documenting at least three rounds of AI-assisted development with your judgment calls annotated; (4) capstone track selected with rationale; (5) one paragraph identifying the single most important irreducibly human judgment in the research design that AI could not have made. Go/no-go reviewed before Week 13. Due end of Week 12.
Week 13 · Ch. 13
What this chapter is: not instruction — execution. The demonstration.
Session A: In-class research work session. Instructor role: ask the questions the AI won't ask. ("Why is this question worth answering?" "What would have to be true for this finding to not matter?")
Session B: Research work continues. Peer consultation pairs.
Mode Lab: Open consultation — TA available for iteration log review and specification feedback.
No reading assigned this week. Research work in progress.
Capstone tracks (confirmed at protocol checkpoint)
AI Validation
Original empirical investigation of an AI system's reliability in a specific domain context
Bias Detection
Original investigation of systematic bias in AI outputs for a domain-relevant application
Explainability
Original investigation of how AI systems communicate (or fail to communicate) the basis for their recommendations
Trust Calibration
Original empirical investigation of how professionals should calibrate trust in AI outputs in a specific domain
Adaptation Track (optional): Rebuild one book chapter's framework for a specified non-engineering domain, with explicit load-bearing vs. swappable analysis.
Week 14
By the end of this week: Review a peer's capstone project against the Five Modes rubric — and receive written feedback on your own work.
Session A: Structured peer review — each student reviews one peer's project using the Five Modes rubric. Feedback is written and submitted, not just spoken.
Session B: Revision work session based on peer feedback.
Mode Lab: Open session — instructor and TA available for final questions. No new material.
⚑ Final Project — Peer Review Checkpoint (100 pts)
Written peer review of one classmate's capstone project. Required: Five Modes rubric applied; at least one specific finding per mode; one overall recommendation; one question the reviewer cannot answer from the submitted materials alone. Due end of Week 14.
By the end of this week: Present a complete original research project demonstrating AI fluency — with the Irreducibly Human section completed honestly and specifically.
Session A: Final presentations (8–10 minutes each) or written submission with recorded walkthrough.
Session B: Final presentations continued.
Mode Lab: Open session — no new material.
⚑ Final Project — Final Capstone Submission (250 pts)
Complete original research project in AI fluency. Required sections:
- Research question and specification — precise statement, gap identification, worth-pursuing justification
- Methodology — Five Modes applied to the research process, with documentation
- Findings — original empirical or analytical results with verification record
- Iteration log — full documentation of AI-assisted development with judgment calls annotated
- The Irreducibly Human section (required; weighted at 50% of the capstone grade):
  - Three specific judgment calls from the research that required your values, domain knowledge, or accountability — each stated specifically, with reasoning, and with its consequence named
  - One judgment call that was tried as delegation and then reclaimed — what happened when you delegated it, and what you found when you reclaimed it
  - An honest assessment of the collaboration quality — where the AI was genuinely useful, where it produced confident-sounding noise, and what you would do differently
- Peer review response — written response to the Week 14 peer review, submitted with the final project
Due end of Week 15.
Schedule at a Glance
| Week | Chapter | Act | Major Deliverable | Points |
| --- | --- | --- | --- | --- |
| 1 | Ch. 1 — What You're Actually Talking To | One | Reading Response #1 | 30 |
| 2 | Ch. 2 — The Confidence Trap | One | Reading Response #2 | 30 |
| 3 | Ch. 3 — The Map and the Language | One | Mode Exercise #1 | 25 |
| 4 | Ch. 4 — Specification | Two | Mode Exercise #2 + RR #3 | 25 + 30 |
| 5 | Ch. 5 — Delegation | Two | Mode Exercise #3 | 25 |
| 6 | Ch. 6 — Conversation | Two | Mode Exercise #4 (workshop) | 25 |
| 7 | Ch. 7 — Discernment | Two | Mode Exercise #5 | 25 |
| 8 | Ch. 8 — Diligence | Two | Mode Exercise #6 + RR #4 | 25 + 30 |
| — | Midterm / Flex | Act Two gate | Multi-mode case analysis | 100 |
| 9 | Ch. 9 — Trust Calibration | Three | Mode Exercise #7 + RR #5 | 25 + 30 |
| 10 | Ch. 10 — Automation, Agency, Human Decision Node | Three | Mode Exercise #8 | 25 |
| 11 | Ch. 11 — Verification Under Adversarial Conditions | Three | Mode Exercise #9 | 25 |
| 12 | Ch. 12 — Rapid Prototyping as Research | Three | Research Protocol Checkpoint | 100 |
| 13 | Ch. 13 — Capstone work session | Three | Draft + in-class consultation | — |
| 14 | Ch. 13 — Peer review session | Three | Peer Review Checkpoint | 100 |
| 15 | Ch. 13 — Presentations + final submission | Three | Final Capstone Submission | 250 |
Mode Lab participation (100 pts) assessed continuously across all 15 weeks. Lowest Mode Exercise dropped — 8 of 9 count toward final grade.
Section 8
Course Policies
Attendance and Participation
This course has three weekly contact points: two lecture/seminar sessions and one TA-led Mode Lab. Each serves a different function, so missing one is not interchangeable with missing another.
Per College of Engineering MGEN policy, students are allowed a maximum of two absences per course; three or more absences result in an F. Separately, more than three unexcused Mode Lab absences will result in a failing participation grade regardless of your Quality/Portfolio score. Chapter 6's adversarial conversation workshop, in particular, cannot be made up by reading alone.
Students who do not attend during the first week risk being dropped from the course. Please inform me of any anticipated absence before class.
Participation means engagement — applying modes, critiquing AI outputs, running adversarial conversations, revising specifications, and connecting today's concept to your domain. Physical presence without engagement does not count as participation.
Late Work
- Assignments due by 11:59 PM on the due date
- 5% deduction per day late, with partial days rounded up to full days
- No credit after solutions are posted (posted the Monday after the due date)
- Extensions via email before the deadline with a specific proposed new due date
- Work submitted late without prior communication will not be graded
- The midterm cannot be made up without prior arrangement
- Exceptions for long-term illness or family emergencies must be approved by the professor — reach out early
Mode Exercises feed the following week's lab. A late submission that arrives after the lab has missed the feedback loop it was designed to produce.
Academic Integrity
What you submit is supposed to represent your judgment calls, your adversarial moves, your argument for why this verification finding matters in your domain, your honest account of what the AI could not do. Submitting borrowed reasoning is not just an integrity violation — it is practicing the appearance of AI fluency rather than developing it.
Violations include: submitting AI-generated work without disclosure; using another student's specification, delegation map, verification record, or capstone section without attribution; and submitting work substantially similar to a peer's submission. All violations will be reported to OSCCR. No exceptions.
Collaboration policy: You are encouraged to discuss concepts, cases, and reasoning strategies. You may not share specifications, delegation maps, verification records, iteration logs, or any section of the capstone deliverable. Work you submit with your name on it must reflect your own reasoning in your own words. If you collaborated on ideas, list your collaborators clearly.
If you are unsure whether something crosses a line — ask. I would rather answer that question than navigate a violation.
Generative AI Policy
You are encouraged to use generative AI tools in this course. This is not a reluctant permission — it is the entire point of the course.
Use Claude to generate your first-pass specification. Use ChatGPT to draft your delegation map. Then ask: What decision did it make that I hadn't made yet? What constraint did it omit that I know, from my own domain, is load-bearing? What did it get confidently wrong in a way I would only see because I know this field? That gap — the space between what the tool produced and what you know — is the irreducibly human part. Finding it, naming it, and correcting it is not a side exercise in this course. It is the primary assessment.
Every submission requires the AI Use Disclosure block specified in Section 6. Undisclosed AI use is an academic integrity violation. The TA or instructor may ask you to walk through and explain any part of your submitted work.
Instructor disclosure: I use generative AI tools in developing this course — for drafting case study scenarios, generating first-pass specifications and delegation maps that I then evaluate and revise, and editing course materials. I document my own AI use in the same format I am asking of you.
Incomplete Grades
An incomplete grade may be assigned when a student has been unable to complete a major course component. Missing work must be submitted within 30 days of the term's end or by the agreed-upon due date, or it receives no credit. Contact the instructor before the final week if circumstances warrant discussion.
Irreducibly Human: What AI Can and Can't Do — Botspeak: The Nine Pillars of AI Fluency
Syllabus v1.0 · Nik Bear Brown · Northeastern University · Fall [Year]
This syllabus reflects course information as of the distribution date. Learning outcomes, assessment architecture, and policies are stable. If meeting times, room assignments, or textbook availability change, updates will be posted to Canvas and communicated by email with at least one week's notice.