Every AI integration mandate issued to teachers in the last three years was written for the wrong teacher.
The mandates assume that teaching is content delivery — that the primary barrier to learning is the speed and personalization of information access, and that AI, which is superhuman at both, therefore improves teaching. That assumption holds for some domains. It fails completely for others. It fails for the woodshop teacher who hears the blade catch wrong before the student does. It fails for the surgical training instructor whose hand on a resident's wrist carries information no motion-tracking system has yet captured. It fails for the nursing simulation faculty member who understands that an AI-driven virtual patient can train clinical reasoning but cannot provide the emotional presence that determines whether a frightened patient trusts the person in front of them.
For these teachers, the integration discourse keeps arriving at the wrong address. This course is for the teachers at the right address.
Embodied instruction — teaching that requires a body, hands, presence, and the relational intelligence that develops through years of watching students move and make and fail and try again — is the most specific form of teaching intelligence that exists. It is also the form AI integration discourse most consistently ignores.
The problem is not that AI is useless in these domains. It is that nobody has specified, rigorously and domain by domain, exactly where its usefulness ends. That boundary — between what the machine handles and what requires the teacher's irreplaceable physical and relational presence — is the most important design decision in any embodied instruction context. It is also the decision most integration plans never make.
Without that boundary, two things happen. Either the machine is kept out entirely — wasted capacity, administrative overload intact — or it is brought in without a principled stop condition, and it gradually displaces the thing it was supposed to protect.
Course information
| Course title | Irreducibly Human: What AI Can and Can't Do — Embodied Teaching |
| Credit hours | 4 |
| Delivery | In-person · Lecture/Seminar (weekly) + TA-led Domain Lab (weekly) |
| Level | Graduate |
| Prerequisites | No formal prerequisites and no prior Irreducibly Human series courses. Substantial experience in at least one embodied domain is required. |
| Instructor | Nik Bear Brown · ni.brown@neu.edu |
| Series | Part of the Irreducibly Human series at Northeastern University — College of Engineering. This is the domain-application volume. Companion courses (Botspeak, Conducting AI, Ethical Play) can be taken before or after; none is required. |
Who this course is for
This course is for practitioners, teachers, and designers of learning environments who work in embodied domains and have been asked — by an administrator, a department, an institution — to integrate AI into what they do, without being given a principled framework for deciding what to hand off and what to protect.
What this course assumes
You have substantial experience in at least one embodied domain. You have used AI tools in some capacity. You do not yet have a framework for drawing the line.
What this course does not assume
- Prior education coursework.
- Advanced AI fluency — the course builds the analytical framework, not the tool literacy.
- Prior Irreducibly Human series courses.
What you will leave with
- A complete, field-tested AI integration plan for your embodied domain — specifying the tools, the workflows, and the specific teaching time returned to embodied instruction.
- A gap analysis connecting your integration decisions to practitioner field-test experience — tracing the specific points where structural analysis could not reach what only a body can do.
- One sentence: the general design principle that names, with enough precision to be useful, what AI cannot do in your teaching that only your presence can accomplish. If it cannot be stated in one sentence, it is not yet a principle. The course ends when you can state it.
What this course builds
By the end of this course, students can:
- Distinguish between AI's analytical competence in a teaching domain and the irreplaceable physical and relational intelligence of the teacher — using a specific embodied domain as the diagnostic case
- Specify the protected core of their primary domain: the teaching capacity that develops only through physical practice, only through the teacher's embodied presence, and that no integration plan should attempt to replace
- Construct an AI integration plan whose handoff functions are operationally specified — naming the tool, the workflow, and the specific capacity returned to embodied instruction
- Document instances where AI-generated materials are analytically correct but pedagogically incoherent — and specify the domain expertise required to correct them
- Submit an integration plan to an AI Integration Auditor, evaluate the structural analysis against practitioner field-test data, and name precisely where the audit found what it could find and missed what it could not
- State the gap between AI-auditable integration architecture and irreplaceable embodied teaching as a general design principle — with evidence, not assertion
How the course is assessed
Every assignment requires an AI Use Disclosure — not as compliance, but as the course's central analytical act. Students document what they used, how they used it, what they changed, and — this field is not optional — what the AI could not do. A disclosure that cannot name one thing the AI could not do has not demonstrated that the student performed the irreducibly human analytical layer. That declaration is the spine of every graded submission.
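To make the disclosure's structure concrete, here is a minimal sketch of its four fields as a data structure. This is illustrative Python, not the course's official form; the field names are assumptions, but the validity rule is the one stated above: the final field cannot be empty.

```python
from dataclasses import dataclass


@dataclass
class AIUseDisclosure:
    """Hypothetical shape of an AI Use Disclosure; field names are illustrative."""
    tools_used: list[str]       # what was used
    how_used: str               # how each tool entered the workflow
    what_changed: str           # what the student revised in the AI's output
    what_ai_could_not_do: str   # the non-optional field: the human analytical layer

    def is_complete(self) -> bool:
        # A disclosure that cannot name one thing the AI could not do
        # has not demonstrated the irreducibly human analytical layer.
        return bool(self.what_ai_could_not_do.strip())


d = AIUseDisclosure(
    tools_used=["a general-purpose LLM"],
    how_used="Drafted first-pass workflow documentation.",
    what_changed="Rewrote steps to match how the shop actually runs.",
    what_ai_could_not_do="",  # left empty: this disclosure does not pass
)
assert not d.is_complete()
```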
The grade reflects depth of domain analysis, quality of gap reasoning, and evidence that the integration decisions and protected core judgments were made by the student — not delegated to a tool. Relative grading applies at the top of the scale. Absolute grading applies below the threshold, ensuring a floor for demonstrated competence.
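One way to read that hybrid scheme, as a sketch. The threshold, the cutoffs, and the percentile below are invented for illustration and are not course policy.

```python
def assign_grade(score: float, class_scores: list[float], floor: float = 70.0) -> str:
    """Hypothetical hybrid grading: absolute below a threshold, relative above it.
    All numbers here are illustrative, not course policy."""
    # Absolute below the threshold: the competence floor does not move with the class.
    if score < floor:
        return "below the competence floor"
    # Relative at the top of the scale: standing depends on the class distribution.
    above_floor = [s for s in class_scores if s >= floor]
    standing = sum(s <= score for s in above_floor) / len(above_floor)
    return "A" if standing >= 0.8 else "B"


print(assign_grade(85.0, [60.0, 72.0, 85.0, 91.0, 95.0]))  # relative: "B"
print(assign_grade(60.0, [60.0, 72.0, 85.0, 91.0, 95.0]))  # absolute: below the floor
```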
How the course is structured
The course runs in three acts.
Act One builds the analytical vocabulary from domain cases. Students learn to distinguish between administrative efficiency — doing the same teaching with less overhead — and genuine capacity return: recovering time and presence for the high-value embodied work only a human can do. The act closes with the midterm, a preliminary integration specification described under The Integration Map below.
The Tier 2 Problem
The course opens before any framework vocabulary is introduced. Students describe their own domain to someone who has never been in their classroom — two minutes, no jargon. That description is collected and returned in Week 13 as the record of what the student could feel before they could name it. The Tier 2 distinction is then introduced: what AI handles in a teaching domain, and what the teacher's physical and relational presence must supply that the tool cannot reach.
Domain description — 30 pts · Domain Lab #1 — 25 pts
Legible Documentation
The Boyle System — a real AI-assisted documentation framework deployed in lab science mentorship — is the first case. Students trace a four-stage consequence chain from documentation handoff to mentor meeting quality, learning to distinguish connections that are genuinely causal from connections that only follow plausibly. This is the analytical distinction the course applies all semester: administrative reduction is not the same as capacity return.
Reading Response #2 — 30 pts · Domain Lab #2 — 25 pts
What Lives in the Hands
The woodshop domain is the required opening case for the protected core argument because its boundary is among the least contestable in the course: the knowledge that lives in the hands, the auditory and tactile feedback through a tool, the sound of a blade catching wrong. Students specify a protected core for a domain not their own — learning to distinguish "AI cannot do this" from "AI has not yet done this." The analytical gap between those two claims is where most integration plans fail.
Domain Lab #3 — 25 pts
Kinesthetic Intelligence
Movement analysis tools in PE can identify what a student's body is doing. They cannot develop what the student's body needs to develop. This week draws the distinction between AI applications that document or analyze embodied capacity and those that build it — and requires students to apply that distinction as a design constraint in their own domain. Cases include the teacher's ear in music and voice instruction, and kinesthetic cueing in dance.
Reading Response #3 — 30 pts · Domain Lab #4 — 25 pts
The Integration Map
Act One closes with the midterm: a preliminary integration specification that names the specific teaching capacity being protected and the specific AI application being handed off. A specification that cannot name both has not yet designed an integration architecture. Before submitting, students analyze a designed overreach failure — a real case where AI was applied where it does not belong — so they have vocabulary to name the mistake most likely to appear in the first draft of their own plan.
Midterm / Preliminary Integration Specification — 100 pts · Reading Response #4 — 30 pts
Act Two is where the plan becomes real. Students design, implement, and field-test an AI integration plan for their own embodied domain. The plan must be specific enough that a practitioner could evaluate it in a single review session. The protected core is not described as a philosophical statement — it is operationally specified as a boundary. Students then take the plan into the field, collect practitioner feedback, and discover where design intent and the practitioner's experienced teaching reality diverge. That divergence is the data.
Integration Plan Lock
Students produce a complete integration plan specific enough that an AI auditor could evaluate its architecture without being told where the boundary is. The protected core is not named or described as a philosophical position anywhere in the document — it must be identifiable from the workflow decisions alone. This is the standard that makes the Week 12 audit meaningful.
Integration Plan Lock Checkpoint — 100 pts
The Documentation Layer
The documentation layer of the integration plan is implemented as a working workflow. A live demonstration in class is designed to produce at least one AI-generated output that is analytically correct but pedagogically incoherent for the domain — and students are required to document an instance of this in their own implementation, specifying the incoherence and the domain judgment required to correct it. That documentation is direct evidence of the irreducibly human analytical layer.
Domain Lab #5 — 25 pts
The Assessment Scaffold Layer
The assessment scaffold layer is added to the implementation. Students conduct a structured self-audit — identifying where the implementation matches the specification, where it has diverged, and which single workflow carries the most capacity-return potential. The measure throughout is the same: does this handoff return time and presence to embodied instruction, or does it only reduce overhead?
Domain Lab #6 — 25 pts
The Field Test
The plan goes into the domain. Students conduct an informal field test with at least two practitioners, collecting structured feedback on whether the integration produces genuine capacity return or something else. At most three revision changes are permitted in response — prioritization is a graded skill. The Reading Response asks students to identify the moment a practitioner's response surprised them: what integration decision produced that moment, and what does it reveal about the gap between design intent and practitioner experience?
Reading Response #5 — 30 pts · Domain Lab #7 — 25 pts
Beta Integration Plan
The beta integration plan incorporates field-test revisions and is prepared for peer review and AI audit. Students also produce a prediction document: where will the AI Integration Auditor correctly identify the protected core, and where will it fail? Which integration decision is most likely to mislead it? That prediction is returned in Week 12 for comparison with the actual audit results.
Beta Integration Plan Checkpoint — 100 pts
Peer Domain Review
Students function as both reviewer and designer receiving review. The feedback instrument is the handoff/protect distinction — applied specifically and with reference to integration decisions, not general impressions. The peer review data is then evaluated against the Week 10 prediction: where was the prediction accurate, and where did practitioner experience diverge from the expected audit result?
Domain Lab #8 — 25 pts
Act Three closes the loop. Students submit their integration plan to an AI Integration Auditor — Claude, running a structured architectural analysis — and compare what the structural analysis finds against what practitioners reported. The gap between those two accounts is the course's central analytical object: the embodied variable the structural analysis could not reach. Naming it, tracing it to a specific integration decision, and stating it as a general design principle is the final deliverable.
The AI Integration Audit
Students submit their integration plan to Claude, running a structured architectural audit. The session begins as a shared class exercise — the Boyle System documentation submitted live, results visible to everyone — so students see both the AI's structural competence and its limits before their own audits begin. The limits are specific: the AI finds what it can find from the architecture. It cannot find what requires a body. That gap is now data.
Work feeds Final Submission
The Gap Analysis
Week 1 domain descriptions are returned. Students read their own pre-vocabulary words and ask: what can I name now that I could only feel then? What can I still only feel? The gap analysis is constructed from that question through to the AI audit findings — tracing specific integration decisions to specific divergences between structural analysis and practitioner field-test experience, and naming the embodied variable the audit could not reach.
Work feeds Final Submission
Deployed Cases
Students apply the course's full analytical framework to two published AI integration deployments — reading as domain analysts, not users. One case is from a clinical or simulation domain; one is from the trades or CTE. For each: where did the integration reach the protected core? Was that the line the designers intended? Does the claimed capacity return match what practitioners actually reported?
Domain Lab #9 — 25 pts
Through-Line and Synthesis
The course closes on the terminal deliverable: one sentence stating the general design principle that distinguishes an AI integration plan that protects embodied teaching capacity from one that displaces it. Students write it before the final session. The session is the attempt to make it precise enough to be useful. The final submission collects the gap analysis, the judgment call, the integration revision proposal, the deployed cases analysis, the general principle, the Week 1 reflection, and the peer review response.
Final Submission — 250 pts
Domain Lab participation (100 pts) is assessed continuously across all 15 weeks. The lowest-scoring Domain Lab assignment is dropped — 8 of 9 count toward the final grade.
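The drop rule as arithmetic, with hypothetical scores:

```python
# Nine Domain Labs at 25 pts each; the single lowest score is dropped,
# so 8 of 9 count, for a maximum of 200 pts. Scores below are hypothetical.
lab_scores = [25, 22, 25, 18, 25, 23, 25, 25, 20]
counted = sorted(lab_scores)[1:]   # drop the lowest (here, 18)
print(sum(counted), "/", 8 * 25)   # 190 / 200
```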