Are We Talking Ourselves Out of a Generation-Defining Opportunity?
The debate around AI in early childhood education is loud, and the questions being asked are exactly the right ones. Will AI de-skill educators? Is children's data safe? Does it replace the educator's authentic voice?
The problem isn't the questions. It's that the answers people have formed are based on AI in its most generic form.
AI represents the biggest technological shift since the Industrial Revolution, and it's not going anywhere. The real challenge is how we use it with intention, demand better from the tools we adopt, and ensure the technology serving our sector is worthy of the trust we place in it.
AI isn't the problem — how we design it is
When people worry that AI will de-skill educators, they're identifying something real. But the culprit isn't AI itself — it's how AI tools are designed.
Research makes this distinction clear. A 2025 MIT study found that when students used ChatGPT too early in the learning process — simply typing in a question and accepting whatever came back — they retained less information and engaged less deeply with the task. AI, used passively, can shortcut the very thinking it should be supporting.
Perhaps the strongest evidence for why design is the real variable comes from OpenAI's Study Mode, developed with teachers and pedagogy experts and studied by researchers at Stanford and the University of Tartu. Rather than just providing answers, Study Mode uses scaffolding and built-in checks for understanding to actively support learning. Early results from a randomised controlled trial with 300+ college students found that those with access to Study Mode scored roughly 15% higher on exams than the no-AI control group.
That framing matters. The right question isn't "should children or educators use AI?" It's "which AI, built how, for what purpose?"
What we will see across sectors is purpose-built AI designed by people from within those sectors. At Mana, this is the approach we're taking: our AI 'Coach Sue' uses Socratic questioning and scaffolding to push educators into deeper thinking before any documentation can be generated. In a 2024 survey of over 10,000 educators, 89% said Coach Sue improved their critical reflection. At the heart of Montessori is the credo "help me to do it myself", and that principle shapes how we design our AI.
Figure 1: Mana's AI 'Coach Sue' guiding a cycle of planning
The answer isn't to reject AI. It's to demand better design, and to test rigorously that these tools produce better outcomes for educators and children.
"Is storing data with AI safe?"
When we talk about children's data and safety, the key issue isn't AI itself. It's how information is stored, who can access it, and what safeguards are in place to prevent misuse. That is a data-storage and cybersecurity question, not an AI one.
We see this distinction every day. Companies like Apple and Samsung use AI extensively across their products, yet most of us are comfortable storing photos and videos of our own children on our phones. That trust comes from confidence in the security systems they've built — not from an absence of AI.
The same principle applies in early childhood education, but right now the sector isn't meeting that bar. We know educators are already using generic AI tools that frequently store data on overseas servers and train models on customer data. Exposing children's information in this way is not acceptable. This is precisely why we must advocate for safe, ethical AI built into the technology products our sector relies on.
A recent industry-wide review by Aram Advisory revealed just how much room there is to improve. At Mana, we're currently the only provider in the sector with an independent SOC 2 attestation, meaning our systems have been externally audited against strict standards for data security, privacy, and access controls. We're proud of that, but we shouldn't be the only ones. We're strong advocates for lifting this standard across the whole sector.
When it comes to children's data, the right question isn't "Does this platform use AI?" It's "How well is the data protected?"
"But AI is not their authentic voice - it doesn't reflect their own words."
Getting help with writing is nothing new.
World leaders often work with speechwriters. Many authors use ghostwriters or editors. Executives regularly rely on graduates or assistants to draft reports and emails. Even Martin Luther King Jr. worked with two lesser-known speechwriters, Clarence B. Jones and Stanley Levison. Yet the words — and the responsibility for them — were ultimately his.
When designed with intention, the right AI tools can support educators in the same way: they draw out the educator's deep knowledge and insight and turn those ideas into clear, professional language. The educator reviews, edits, and owns the final words.
This isn't about replacing professional judgement or an educator's voice — it's about supporting it, like a trusted co-writer.
As a profession, educators will need to adapt to this new way of working, learning how to interact with AI thoughtfully and purposefully. The often-cited fear that AI will take jobs misses the point. AI will never take an educator's job; the future of education and care is human and always will be. But an educator who leans into exploring AI tools will have an edge over one who doesn't.
"But won't they lose the ability to write documentation truly on their own?"
There was a time when people navigated by the stars. Today, we use GPS. We once solved long division by hand. Now, we use calculators. Throughout history, new tools have changed how work gets done — and each time, they forced us to ask the same important question: what is the high-value human work we must preserve?
For educators, the answer is clear. The high-value work is critical thinking and quality time spent with children.
AI tools need to be built with that in mind — using coaching to support deeper reflection, voice-to-text to speed up writing, and instant language translation to improve accessibility. The goal is to design AI that maximises time spent on thinking, not time spent on mechanics. The educator's judgement, their relationship with each child, their professional insight — those are irreplaceable. The hours spent perfecting sentences are not.
For parents, it's also worth understanding how much time documentation can take away from direct engagement with children. The right tools help lighten that load, giving educators more energy and attention for what truly counts: the children themselves.
Figure 2: Long division replaced by calculators. Progress changes how we work.
The early childhood sector has an opportunity. The conversation around AI isn't going away — nor should it. These are important questions about the future of a profession that shapes children's earliest experiences. But the quality of that conversation matters. When we ask "is AI good or bad?", we get nowhere. When we ask "is this tool well-designed, rigorously tested, and built for this context?" — we start making real progress. The sector deserves technology that's been held to a high standard: designed with educators, tested for outcomes, and built on a foundation of trust and security. That's the bar worth setting. And it's a conversation worth having.
