District adviser flags data‑privacy and model‑training concerns around classroom AI tools

5735143 · July 30, 2025

AI-Generated Content: All content on this page was generated by AI to highlight key points from the meeting. For complete details and context, we recommend watching the full video.

Summary

A district adviser reviewed AI educational tools and warned that even when vendors promise not to train models on submitted student data, that data may still be accessible to company personnel and may not be encrypted; district staff said they will keep the topic under review.

A participant with policy research experience briefed the board on artificial intelligence tools and the privacy risks they pose for schools, urging caution as the district evaluates classroom AI products. The participant explained that some companies market education versions of AI software with a promise not to use submitted data to train their public models, but that such assurances do not necessarily mean the data are private or encrypted. He told the board that engineers at large‑model companies can access data users send into these systems, and that school communities should not assume text entered into educational chatbots is protected the way end‑to‑end encrypted messaging is. Board members and staff discussed the need to review vendor contracts and to seek tools that provide appropriate data‑use guarantees for students and teachers. Staff said the AI topic will remain under review and flagged the need to consider model selection, data handling, teacher training, and subscription‑level differences that may affect access to higher‑capacity tools. Staff closed by saying they will continue to evaluate AI products and their privacy guarantees before recommending classroom adoption.