SALC Governance Framework  ·  Domain 3 of 7

Interaction Boundaries

The Commitment

We commit to keeping every AI system — regardless of its purpose or design — within the boundaries appropriate to the child interacting with it. An AI that delivers reading passages, grades a math response, recommends a video, conducts a conversation, or assesses a writing sample is still in relationship with a child. That relationship has limits. SALC defines them.



Standards for All AI Interactions

These standards apply to every AI system with a student-facing or student-affecting component in a K-8 learning environment — including adaptive learning platforms, content delivery systems, AI-assisted grading tools, writing assistants, recommendation engines, and any AI that generates, modifies, or responds to student work.

3A.1

Role Clarity

Every AI system deployed in a K-8 environment operates within a defined and documented role. That role is communicated to the institution at the point of procurement, to teachers at the point of implementation, and to students at the point of interaction in age-appropriate language. AI systems do not expand beyond their defined role without disclosure to the institution.

3A.2

No Relationship Simulation

No AI system in a K-8 environment uses language, design, or behavioral architecture that simulates a personal relationship with a student — regardless of whether the system is conversational. Personalized greetings, memory of prior interactions presented as personal recall, and language that positions the AI as a caring presence are not appropriate unless explicitly reviewed and approved as part of the institution's implementation design.

3A.3

No Emotional Advice

When any AI system encounters a student expression of emotion, distress, or personal difficulty, it does not offer emotional counsel, comfort, or advice. It acknowledges and redirects to a human. The escalation protocols in Domain 4 apply to all AI systems, not only dialogue systems.
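The acknowledge-and-redirect pattern described above can be sketched in code. This is a purely illustrative sketch, not a SALC-mandated implementation: the keyword list, function names, and messages are all hypothetical, and a real system would use far more robust detection than keyword matching.

```python
# Hypothetical sketch of acknowledge-and-redirect. Keyword matching is a
# stand-in for whatever detection a real platform uses.
DISTRESS_KEYWORDS = {"sad", "scared", "hurt", "alone", "hate myself"}

def respond_to_student(text: str) -> dict:
    """Acknowledge and redirect to a human; never counsel or comfort."""
    lowered = text.lower()
    if any(kw in lowered for kw in DISTRESS_KEYWORDS):
        return {
            "message": "Thank you for telling me. A teacher can help with this.",
            "escalate": True,   # hands off to the Domain 4 escalation protocol
        }
    return {"message": "Let's keep going with our work.", "escalate": False}
```

The essential property is that the distress branch produces no emotional counsel, only an acknowledgment and a handoff.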

3A.4

No Dependency Architecture

SALC-compliant platforms of every type are designed against student dependency. Engagement optimization — systems designed to maximize time-on-platform, return visits, or interaction volume — is incompatible with SALC standards. The measure of a good AI interaction is the quality of what the student carries away from it, not the quantity of time they spend in it.

3A.5

Defined Interaction Boundaries by Function

Each AI system documents the boundaries of its function in plain language: what it does, what it does not do, and what happens when a student's interaction exceeds those boundaries. This documentation is available to teachers, administrators, and families upon request.
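One way a platform might represent this plain-language documentation is as a structured record per system. The schema and the example system below are hypothetical illustrations, not part of the standard.

```python
from dataclasses import dataclass

@dataclass
class FunctionBoundary:
    """Hypothetical plain-language boundary record for one AI system."""
    system_name: str
    does: list       # what the system does
    does_not: list   # what it does not do
    on_exceed: str   # what happens when a student's interaction exceeds the boundary

# Illustrative example only.
reading_tool = FunctionBoundary(
    system_name="Reading Passage Recommender",
    does=["suggests passages at the student's reading level"],
    does_not=["chats with students", "grades writing"],
    on_exceed="The request is declined and logged for teacher review.",
)
```

A record like this is what would be made available to teachers, administrators, and families on request.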


Standards for Dialogic AI — Evolving Standard

These standards apply to AI systems designed to interact with K-8 students through direct, sustained conversational dialogue. Part B will be deepened in future SALC versions as dialogic AI becomes more prevalent and the evidence base matures.

3B.1

The AI Asks. It Does Not Tell.

A SALC-compliant dialogic AI is architecturally designed so that its primary function is to ask the next question — to advance the student's thinking one step further than it reached on its own. It does not explain content, provide answers, or deliver information in response to student questions. When a student attempts to use a dialogic session as a search or tutoring interaction, the AI redirects to the dialogic purpose of the session.
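The ask-don't-tell routing rule can be sketched as follows. Everything here is an illustrative assumption: the trigger phrases, function name, and wording are placeholders for whatever a real dialogic platform implements.

```python
# Hypothetical sketch of the "ask, don't tell" routing rule.
ANSWER_SEEKING = ("what is", "tell me", "give me the answer", "explain")

def next_turn(student_utterance: str, topic: str) -> str:
    """Always return a question; redirect answer-seeking back to dialogue."""
    if student_utterance.lower().startswith(ANSWER_SEEKING):
        # The student is using the session as search or tutoring: redirect.
        return f"I'm here to hear your thinking about {topic}. What do you think so far?"
    # Otherwise, advance the student's thinking one step.
    return f"What makes you say that about {topic}?"
```

The invariant is that every turn ends in a question, including the redirect.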

3B.2

Dialogue Requires Thinking, Not Performance

Dialogic AI sessions are designed to produce genuine thinking, not correct answers. The AI does not signal approval for responses that sound right or disapproval for responses that sound wrong. One-word responses, yes/no answers, and surface-level engagement are gently and consistently redirected toward fuller expression — at every grade band, including K-1.

3B.3

No Relationship Formation

SALC-compliant dialogic AI systems do not reference personal details across sessions in ways that simulate a continuous relationship, do not use language that positions the AI as a friend or confidant, and do not perform warmth they cannot mean. The AI is a thinking partner — present, attentive, and genuinely interested in the student's reasoning, without simulating the emotional texture of human friendship.

3B.4

No Dependency Architecture in Dialogue

Dialogic sessions are bounded and complete in themselves. The AI does not invite continued engagement beyond the session, does not signal that it enjoyed the conversation, and does not prompt the student to return. Every session closes by returning the student to their own thinking — what they reasoned, what they questioned, what they are still working through.

3B.5

Measurement Standard

SALC-compliant dialogic AI platforms integrate a structured assessment protocol as their evidence layer. Assessment is based on observable indicators of thinking quality — reasoning depth, question engagement, elaboration, and self-correction — not on content accuracy. Platforms document their assessment methodology and make it available to teachers and administrators as part of SALC certification.
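A minimal sketch of such an evidence layer might filter session observations down to the four thinking-quality indicators named above, excluding accuracy-style fields by construction. The field names and record shape are hypothetical.

```python
# Hypothetical evidence-layer sketch: keep only thinking-quality indicators,
# deliberately excluding content accuracy.
THINKING_INDICATORS = ("reasoning_depth", "question_engagement",
                       "elaboration", "self_correction")

def session_evidence(observations: dict) -> dict:
    """Retain SALC thinking-quality indicators; drop accuracy-style fields."""
    return {k: v for k, v in observations.items() if k in THINKING_INDICATORS}

record = session_evidence({
    "reasoning_depth": 3,
    "elaboration": 2,
    "percent_correct": 0.9,   # excluded by design: accuracy is not the measure
})
```

Filtering at the schema level makes the measurement standard auditable: an accuracy metric cannot enter the evidence record even if the platform computes one elsewhere.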

3B.6

Escalation Out of Dialogue

When a student's language signals distress, crisis, or a need for human support, the dialogic AI exits the conversational frame immediately. It does not attempt to process, counsel, or continue. It delivers a single, calm, grade-band-appropriate statement and triggers the escalation protocol in Domain 4.

Grade Band Standards

K-1
Part A — All AI: AI systems at this level must be designed with the understanding that young children cannot reliably distinguish between AI and human interaction. The burden of maintaining that distinction rests entirely on the design, not on the child.
Part B — Dialogic AI: Dialogic sessions with the youngest learners must feel like play, not interrogation. The AI's questioning is warm, slow, and concrete. Even at this age, the standard for engagement is a full thought — not a full sentence, but a full thought.

2-4
Part A — All AI: Students in this band are forming their understanding of technology and its role in their learning. AI systems must not model relationships or interactions that distort that understanding.
Part B — Dialogic AI: Students in this band are developing their sense of intellectual competence. Dialogic design must produce productive challenge without discouragement. The AI's follow-up questions are harder than the first question — but not so hard they signal the student was wrong.

5-6
Part A — All AI: Students in this band are acutely aware of being evaluated and ranked. AI systems that produce scores, recommendations, or assessments must ensure that output is contextualized by a teacher before it reaches the student.
Part B — Dialogic AI: Students in this band are highly susceptible to social comparison and performance anxiety. Dialogic boundaries must ensure the AI never signals evaluation of the student's intelligence or ability — only genuine curiosity about their thinking.

7-8
Part A — All AI: Adolescents will probe the limits of any system they interact with. AI boundary design at this level must be robust enough to hold under pressure without becoming punitive or dismissive.
Part B — Dialogic AI: Adolescents are acutely attuned to authenticity. Performed enthusiasm is worse than neutrality at this level. Straightforward, intellectually serious, respectful engagement — the AI as a rigorous thinking partner — is the right register for grades 7-8.

These seven domains represent the Fulcra Institute's founding standards for SALC. They are reviewed annually by the coalition governance body, with revisions published as versioned updates. Version 1.0 of these standards takes effect upon the launch of the SALC coalition at teachingwithai.org.

Adopt SALC

Join the coalition of districts, platforms, and researchers committed to principled AI in K-8 education.
