
Key Points
- Orange County Public Schools integrates AI tools in classrooms and district operations to support teaching and data management.
- OCPS plans to develop formal AI policies, expand teacher training, and create resources to ensure responsible AI use and protect privacy.
- School board members express concerns about AI risks such as cheating, loss of critical thinking, mental health impacts and data privacy threats.
As Orange County Public Schools (OCPS) seeks to move forward with using artificial intelligence (AI) in several aspects of the district, leaders said they will manage the technology carefully to ensure it enhances, rather than replaces, human judgment and critical thinking.
Maurice Draggon, chief information officer for OCPS, presented the strategic plan update on AI integration into OCPS and the technology’s role in educational processes at the Dec. 9 Orange County School Board meeting in Orlando.
“Our AI initiatives are tailored to align with the OCPS’s 2030 strategic plan and departmental priorities and not as standalone efforts,” he said.
According to Draggon, OCPS is already using AI in its district operations and school classrooms.
Currently, OCPS uses AI to organize and search district information, such as board policies, management directives and YouTube board meeting recordings, inside Google NotebookLM, and to support educators with lesson planning, data analysis, translations, drafting communications and creating instructional media.
OCPS-approved GenAI platforms are ChatGPT, Google Gemini and Microsoft Copilot for staff use; and Adobe Firefly and Khanmigo for staff and student use.
In his presentation, Draggon said OCPS’ next steps on AI include drafting a formal AI policy with clear guardrails on privacy and academic integrity, expanding teacher training so staff can use approved tools responsibly, and developing resources for students and parents.
“Training and building capacity are critical components of our continued focus on AI,” Draggon said. “We continue to focus on what I call the ‘AI sandwich’ process that begins with a human using AI and ends with a human making the final decision and application.”
Draggon provided the definition of AI that, ironically, he said he got from Google Gemini: “Artificial intelligence refers to the simulation of human intelligence in machines that are programmed to think like humans and mimic their actions. It can range from simple automation to complex machine learning algorithms that allow systems to learn from data, identify patterns, make decisions, and even understand natural language.”
Board members praised AI’s potential to assist people and better streamline operations and processes in offices and classrooms but also expressed concerns about the technology’s impact and ramifications.
Such worries include plagiarism and cheating, erosion of writing and critical thinking skills, student mental health risks from chatbots, and serious data-privacy threats if staff upload student information into public AI tools.
Board members urged tight guardrails, in-class writing without AI, and necessary training on responsible use and privacy before the district expands usage.
District 5 member Vicki-Elaine Felder said she sees some value in AI for visuals and relieving teachers of paperwork but said she “[doesn’t] trust AI” and “[doesn’t] like it at all.”
“The pedagogy, the art of teaching it, I don’t want that to be lost,” she said. “I think if we’re not careful, it will get lost, and children will not learn the mechanics of good writing, of critical thinking, they’ll just go to Gemini.”
District 1 member Angie Gallo raised concerns about AI’s effects on student mental health and about AI hallucinations but acknowledged that the technology is here to stay and can ease workloads and support instruction if used carefully.
“People, we can be afraid of it all we want, but it’s not going anywhere,” she said. “It’s going to continue to grow, and we’ve got to stay ahead of it to ensure that we are creating a safe environment for our students.”
District 7 board member Melissa Byrd, who was at the meeting, said in a Monday interview with the Chief that Draggon “did a good job” with his presentation, and she praised him as the “right” person to lead OCPS’s continued integration of AI into its goals and support services.
“He’s very strict on the security around OCPS [facilities] and the data and protecting student data,” she said. “I think he did a good job of sharing all of that [in his presentation] and listening to the board members and board members’ concerns.”
This month, Byrd attended the Florida School Boards Association’s annual winter conference, where the topic was AI and its use in education. Like Gallo, Byrd acknowledged that AI is not going anywhere.
“It is part of all of our lives,” she said. “It’s definitely going to be part of this generation’s future in the workforce. So, it’s our responsibility as educators who are preparing kids for the workforce and their futures, to prepare them in every way, and that includes on the technology that they’re going to be using in their workplace.”
Byrd mentioned that there are available resources about AI, such as the Florida K-12 AI Education Task Force. The University of Florida’s CS Everyone Center leads this initiative, which develops policy, curriculum and training for implementing AI in the state’s K-12 schools.
To Byrd, data privacy is the single most critical ethical or privacy guardrail that OCPS should have in place before doing a full-scale AI rollout.
Byrd mentioned the Family Educational Rights and Privacy Act (FERPA), a federal law that protects student education records and gives parents certain rights related to them. Under the law, schools generally must obtain written consent before they can release a student’s records or the information contained in them.
OCPS has implemented FERPA policies to protect students’ privacy. However, AI could pose a challenge to upholding those policies if employees enter sensitive student information into public AI models, which could leak that data or incorporate it into their training.
“I think that what’s most attractive about AI is the administrative tasks that it could help ease,” Byrd said. “Things like writing a student’s IEP [individualized education program] or reports like emails or reports to parents. If you have 150 students, you’ve got a lot of communication. You’re always doing that kind of thing. It would be really helpful, but it has to be done on secure networks. We have to have agreements with the AI companies to where it’s a closed AI so that anything that is entered stays within OCPS and doesn’t go outside our data system.”
Editor’s note: This story has been updated to include reflections from Melissa Byrd.



I’m glad to see this issue being covered, but respectfully, this coverage falls far short of what a subject of this magnitude demands. The integration of generative AI into K–12 classrooms is not a routine technology update — it is one of the most consequential decisions a district can make regarding student development, privacy, and safety. This should be front-page, in-depth reporting for every parent in the county.
Where are the hard questions? How will the district defend against prompt injection attacks? Will there be governance over the prompts students and staff can enter? Is there real-time inspection of AI outputs before they reach minors? Who is responsible for filtering inappropriate, biased, or harmful content? What safeguards exist against hallucinated research being presented as fact? What oversight exists when vendors silently push new model updates or features?
Globally, AI is causing market volatility, regulatory upheaval, executive resignations, and growing public safety debates. Governments are tightening controls. Industry safety leaders have publicly warned about unresolved risks. And yet locally, adoption appears to be moving forward while policies are still being drafted. That is not cautious innovation — it is reckless.
I have spent 30 years in technology and the last several deeply immersed in AI/ML. I have also founded a nonprofit specifically focused on addressing AI safety, transparency, and responsible deployment. This is not anti-technology advocacy. It is pro-accountability. If districts across Florida move toward classroom AI without rigorous safeguards, independent validation, and transparent governance, parents deserve to understand the risks and organize accordingly.
If this publication is willing to dig deeper, I am more than willing to share detailed insight into the technical, psychological, and governance concerns that deserve public scrutiny. Parents need more than optimism — they need facts, safeguards, and accountability before this moves any further.