SALC Extension

AI Response & Reporting Protocol

Standards for how AI dialogue systems should respond to unexpected, inappropriate, or distressing student input in K-8 educational settings, and how such events should be communicated to teachers.

Published by The Fulcra Institute · March 2026 · CC BY-SA 4.0

Preamble: The Gap This Extension Fills

The SALC establishes seven domains governing how AI should interact with students: cognitive engagement, emotional safety, identity respect, transparency, data sovereignty, teacher authority, and institutional accountability. These domains define what AI should do. They do not define what AI should do when a student does something the system did not expect.

Any AI system that invites students to think honestly will encounter input that falls outside the boundaries of the intended task. A student frustrated with a text may express that frustration toward the AI. A student testing boundaries may use language designed to provoke. A student in distress may disclose something that requires adult intervention.

These moments are not failures of the system. They are evidence that students are treating the AI as a real interlocutor, which is exactly what dialogical learning requires. The question is not how to prevent these moments but how AI systems should handle them in ways that protect students, preserve trust, and maintain the teacher's role as the responsible adult in the room.

"These moments are not failures of the system. They are evidence that students are treating the AI as a real interlocutor."

Three Governing Principles

This extension establishes three principles and the operational standards that follow from them.

Standards for AI Response in Student Dialogue

The following standards apply to any AI system that engages K-8 students in open-ended dialogue. They extend the seven SALC domains with specific requirements for response behavior and teacher reporting.

Relationship to the SALC Domains

This extension maps to existing SALC domains as follows:

Cognitive Engagement: Tier 1 redirects preserve the dialogue's cognitive purpose rather than shutting it down. The AI stays in its role as a thinking companion even when the student's input is off-task.

Emotional Safety: All three tiers are designed to protect the student's emotional experience. No tier involves shaming, lecturing, or punishing. Tier 2 and 3 templates are written to communicate care.

Identity Respect: Tier 2 specifically addresses identity-based language, ensuring that slurs or targeting are not ignored by the system while also not subjecting the student to an AI-delivered lecture on identity.

Transparency: Standard 5 requires that students know their teacher can see their sessions. Tier 2 and 3 responses explicitly name the teacher, maintaining transparency at the moment it matters most.

Data Sovereignty: Flagged content is visible only to the student's teacher, never to administrators or external parties through the AI system.

Teacher Authority: Every tier reinforces the teacher's role as the responsible adult. The AI never substitutes its own judgment for the teacher's. It detects, it responds, and it reports. The teacher decides what happens next.

Institutional Accountability: Standard 6 requires professional review of blocklists, versioning, and update protocols. Institutions deploying AI dialogue systems are accountable for the quality and currency of their detection logic.

Implementation Guidance for EdTech Developers

Start with pre-defined responses

Before building detection logic, write the templates students will see for Tier 2 and Tier 3 events. Get them reviewed by educators and counselors. The response the student sees is more important than the detection accuracy.
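A minimal sketch of what "templates first" can look like, assuming a Python deployment. The tier keys follow this extension's Tier 2 / Tier 3 structure; the wording is placeholder text, not the educator- and counselor-reviewed language the standard requires:

```python
# Pre-defined responses keyed by tier. Placeholder wording only: the
# actual templates must be written and reviewed by educators and
# counselors before any detection logic is built.
RESPONSE_TEMPLATES = {
    2: ("I can't keep going with that language, but I do want to keep "
        "thinking with you. Remember that your teacher can see this "
        "session. Let's get back to the text."),
    3: ("Thank you for trusting me with that. This is something your "
        "teacher should know about, and they can see this session. "
        "You're not in trouble."),
}

def fixed_response(tier: int) -> str:
    """Return the fixed, pre-reviewed response for a Tier 2 or 3 event."""
    return RESPONSE_TEMPLATES[tier]
```

Note that both placeholder templates name the teacher, as the Transparency mapping above requires.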

Build detection before generation

The server pipeline must classify input before it reaches the language model. This is a non-negotiable architectural requirement, not a nice-to-have. Retrofitting detection after launch creates a window where student safety depends on language model behavior.
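The architectural requirement above can be sketched as a pipeline in which flagged input never reaches the language model. This is a hedged illustration assuming a Python backend; the tier numbers follow this extension, but the trigger phrases are invented placeholders, not the reviewed detection logic Standard 6 calls for:

```python
from dataclasses import dataclass

@dataclass
class Classification:
    tier: int    # 1 = in-bounds, 2 = flagged language, 3 = disclosure
    reason: str

FLAG_LOG: list = []  # stand-in for the teacher-facing reporting dashboard

def classify(text: str) -> Classification:
    """Placeholder classifier; real deployments use reviewed blocklists
    and detection logic per Standard 6."""
    lowered = text.lower()
    if "hurt myself" in lowered:                     # placeholder trigger
        return Classification(3, "possible disclosure")
    if "stupid bot" in lowered:                      # placeholder trigger
        return Classification(2, "provocative language")
    return Classification(1, "in bounds")

def handle_student_input(text, model_call, fixed_response):
    """Classify BEFORE generation: Tier 2/3 input never reaches the model."""
    c = classify(text)
    if c.tier >= 2:
        FLAG_LOG.append((c.tier, c.reason, text))  # surface to the teacher
        return fixed_response(c.tier)              # pre-reviewed template
    return model_call(text)                        # only Tier 1 is generated on
```

Because classification happens server-side and first, a Tier 2 or Tier 3 event produces a fixed, pre-reviewed response regardless of what the language model would have generated.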

Test with real student language

Synthetic test data does not capture how students actually write. If possible, test blocklists against transcripts from existing classroom technology (with appropriate consent and de-identification). If not, work with teachers to generate realistic test inputs.
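A small evaluation harness can make this concrete. The sketch below assumes Python; the blocklist entries and test cases are invented stand-ins for teacher-authored inputs, and the third case shows the kind of spelling variation that exact-match blocklists miss:

```python
# Illustrative blocklist entries only; real lists require professional
# review and versioning (Standard 6).
BLOCKLIST = {"stupid bot", "shut up"}

def is_flagged(text: str) -> bool:
    lowered = text.lower()
    return any(term in lowered for term in BLOCKLIST)

# Teacher-authored test cases: (input, expected_flag)
CASES = [
    ("this book is so boring", False),  # frustration, but in bounds
    ("ur a stupid bot lol", True),      # provocation, should flag
    ("shutup bot", True),               # spacing evasion an exact-match
]                                       # list fails to catch

def evaluate(cases):
    """Return the cases where the detector disagrees with the teacher."""
    return [(text, expected) for text, expected in cases
            if is_flagged(text) != expected]
```

Running `evaluate(CASES)` surfaces the evasion case as a miss, which is exactly the kind of gap realistic student language exposes and synthetic data tends not to.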

Teacher training is required

The reporting dashboard is only useful if teachers know it exists, know what the tiers mean, and know what to do when a Tier 3 notification appears. System deployment without teacher orientation is a liability, not a feature launch.

Mandatory reporting obligations are the teacher's, not the AI system's

The AI system flags. The teacher decides. If the teacher determines that a Tier 3 disclosure requires a mandatory report, that process follows the school's existing protocols. The AI system does not file reports, contact authorities, or make determinations about legal obligations.
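This boundary can be made visible in the data model itself. A hedged sketch, assuming a Python backend with hypothetical field names: the notification record deliberately has no "file report" or "contact authorities" action, because those decisions belong to the teacher under the school's existing protocols.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass(frozen=True)
class Tier3Notification:
    # Hypothetical field names for illustration.
    session_id: str
    flagged_text: str
    detected_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc))
    visible_to: str = "teacher"  # never administrators or external parties

def notify_teacher(dashboard_queue: list, note: Tier3Notification) -> None:
    """Append to the teacher's dashboard queue. Nothing else happens
    automatically: no report filed, no authorities contacted, no
    determination made about legal obligations."""
    dashboard_queue.append(note)
```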

About This Document

This extension to the Student AI Learning Compact was developed through The Fulcra Institute, a 501(c)(3) research organization focused on institutional plurality. The SALC and its extensions are published at teachingwithai.org.

Published under Creative Commons Attribution-ShareAlike 4.0 International (CC BY-SA 4.0). For questions or implementation support, contact [email protected].