As education systems around the world lean into artificial intelligence (AI), a fundamental question emerges: Is AI enriching learning—or edging humans to the side? The promise of personalized, efficient education is irresistible. Yet the greatest promise lies not in substituting learners with algorithms, but in ensuring that every child remains the author of their own journey—empowered, inspired, and free to choose.
The stakes are global — and unequal
The baseline is stark and well documented. UNESCO’s most recent aggregated estimates show roughly 244 million children and youth (aged 6 to 18) out of school, a figure that has remained unacceptably high in the post-pandemic era.
Learning quality is the other half of the crisis. The World Bank-led global “learning poverty” analysis warns that roughly 70 percent of children in low- and middle-income countries may now be unable to read and understand a simple text by age 10 — a crisis that predates the pandemic but was deepened in many places by COVID-19 school closures.
It is into that fragile context that AI has arrived. For under-resourced classrooms where teacher shortages and large classes are routine, AI-infused tools—adaptive tutors, auto-grading engines, and personalised practice apps—are presented as a fast route to scale. But scale without stewardship can hollow out the very qualities education must protect: critical thinking, creative collaboration and the formation of values. The empirical question for policy-makers, donors and educators is therefore not only whether AI can raise scores, but whether it can do so while building human agency, civic capacities and inclusion.
A global tapestry: deeper regional perspectives
North America — experimentation with guardrails
In the United States, districts and universities are piloting AI tutors and assessment tools alongside established platforms (Google Classroom, Canvas and Microsoft Teams). Khan Academy’s Khanmigo — an AI tutoring assistant — was piloted with a selection of districts and seen as an adjunct to classroom practice rather than a stand-alone replacement. Districts deploying these tools emphasise teacher training, curricular alignment and strict privacy controls as preconditions to meaningful use.
Policy tension remains fierce: some states adopt permissive, innovation-friendly postures; others restrict classroom uses pending stronger evidence and privacy rules. The practical lesson is clear — North American experience tends to validate augmentation models (AI + teacher) when classroom outcomes and equity are the objectives.
Europe — ethics, literacy and public stewardship
European policy has emphasised governance and teacher empowerment. The European Commission’s ethical guidelines on AI in education and related educator guidance position transparency, data protection and pedagogical literacy as non-negotiable. Pilot studies in several EU countries show gains when adaptive software frees class time for conceptual discussion, but the EU insistence on explainability and teacher professional development reflects a deeper conviction: technology without public guardrails risks widening divides.
Asia — scale and the hybrid imperative
Asia presents a mixed picture of scale, private capital and rapid innovation. China and India are home to powerful adaptive systems and AI tutoring platforms (Squirrel AI, Yuanfudao, Zuoyebang, BYJU’S) that serve millions; many use detailed diagnostic mapping and large datasets to personalise learning. However, civil-society and government reviews also underscore risks—commercialisation, uneven regulation and over-reliance on test-prep models. UNESCO and regional bodies have hosted roundtables warning against “technological solutionism” and calling for teacher-centred hybrid designs.
Latin America & the Caribbean — policy frameworks and capacity gaps
Regional development banks and education ministries are advancing AI and EdTech strategies, and the Inter-American Development Bank has published frameworks for responsible deployment. Yet governments stress capacity gaps—teacher training, connectivity and local content—as the primary constraints. In practice, LAC experiments suggest AI products can complement classroom practice, but sustainability requires public investment in teacher professional development and infrastructure.
Africa — innovation at the margins, equity at the centre
Africa’s story is not one of absence but of innovation in constraint. SMS and low-bandwidth adaptive services such as M-Shule and Eneza show how AI-informed algorithms can operate over feature phones to reach learners who lack smartphones or constant internet. Early research on conversational tutors (for example, a WhatsApp-based AI math tutor trialled in Ghana) reports promising learning gains when AI is paired with school-level support and human mentorship; but researchers emphasise the need for teacher coaching, curriculum alignment and data-governance arrangements to lock in benefits.
Platforms in use — and how teachers are collaborating with them
Across jurisdictions, a consistent pattern emerges: teachers use platforms that reduce routine workload and free time for higher-order instruction. Common platforms and roles include:
- Khanmigo / Khan Academy (AI tutor + content library). District pilots show use as an in-class assistant and homework tutor; teachers curate prompts, scaffold conversation and turn AI suggestions into classroom tasks.
- DreamBox (adaptive mathematics): research on the platform shows efficacy when its use is blended with teacher coaching.
- BYJU’S / BYJU’S WIZ and China’s Squirrel AI / Yuanfudao / Zuoyebang: large-scale, adaptive commercial services that partner with schools but often require careful regulation to ensure alignment with public goals.
- M-Shule and Eneza — SMS and low-data services used in Kenya, Ghana and beyond; teachers receive analytics and use those signals to target in-class remediation. These models illustrate how simple, low-cost tech + teacher action can reach marginalised learners.
- Google Classroom, Microsoft Teams, Canvas — ubiquitous classroom management and collaboration platforms that increasingly embed AI features (insights, auto-grading, content generation); teachers mediate and contextualise outputs.
Crucially, successful deployments show teachers are not passive recipients of technology; they are curators and mediators: designing tasks that ask students to critique AI outputs, using analytics to group learners, and converting machine-generated feedback into human coaching.
Myths and the real counters (narrative form)
One persistent myth holds that “AI will replace teachers.” That misconception is dangerous because it obscures where learning is most human — norms, judgement, motivation and mentorship. The evidence from trials and systematic reviews is emphatic: AI lifts outcomes most when it augments teacher practice; it rarely performs as a substitute for relational teaching. UNESCO and the World Bank both warn that scaling AI without strengthening teacher capacity amplifies risk rather than reducing it.
Another common claim is that “algorithms are neutral.” In reality, algorithms encode choices: training data, language coverage, and assessment priorities. Without inclusive design and audit mechanisms, AI can reproduce biases (gender, linguistic, socio-economic) and amplify marginalisation. The European Commission, UNESCO and other bodies now emphasise transparency, explainability and bias auditing as central obligations for education AI developers.
A final myth is that “automated teaching is cheaper and therefore better for low-income contexts.” Cost is necessary but insufficient. Experience from SMS models and low-bandwidth pilots shows that unit cost advantages evaporate when software is deployed without teacher training, local content or reliable data-privacy safeguards. The World Bank’s EdTech guidance rests on five principles for effective investment: ask why, design for scale, empower teachers, engage the ecosystem, and be data-driven. When those principles are respected, technology can be a powerful amplifier; when they are not, it can deepen learning poverty.
Anchors of agency — an actionable way forward
- Design for augmentation, not automation. Procurement and funding criteria must favour tools that make teachers more effective (diagnostics, scaffolding prompts, time savings), not those that aim to supplant judgement. International guidance from UNESCO and the World Bank centres this approach.
- Invest in teacher AI literacy and professional learning. Countries and donors must fund sustained professional development so teachers can read model outputs, detect error or bias, and convert analytics into human-centred instruction. Regional roundtables in Asia-Pacific and Africa emphasise teacher training as the single largest determinant of success.
- Prioritise low-tech, culturally relevant entry points for inclusion. SMS, WhatsApp bots and offline packages (the models behind M-Shule, Eneza and Rori) show how to reach households without robust internet. Scale requires partnerships with telcos, ministries and community organisations.
- Embed robust governance: data protection, bias audits and transparency. Public authorities must adopt procurement clauses that require explainability, local-language capability and third-party audits — a theme central to the European approach and UNESCO guidance.
- Measure the right outcomes. Beyond short-term test gains, systems should track learner empowerment, agency, collaboration and long-term retention. Pilot evaluation frameworks by IDB and the World Bank emphasise mixed methods and human-centred indicators.
How to include low-income communities for the long term
Long-term inclusion is less a technology problem than a systems problem. Strategies that have shown promise:
- Choose offline-first delivery (SMS, USSD, WhatsApp) when connectivity is spotty. M-Shule and Eneza offer strong precedents.
- Pair tech with local mentors and community facilitators. Evidence from field pilots shows that human supervision increases usage and learning retention.
- Leverage public procurement and donor funding to subsidise access while building local capacity. The World Bank and IDB frameworks for EdTech deployment emphasise blended financing and cost-sharing mechanisms.
- Localise content and language; protect data and privacy. Locally relevant curricula increase uptake; data rules build trust. UNESCO’s AI guidance and regional policy dialogues stress local content and protections.
- Sustain teacher professional development and embed feedback loops. Training, in-service coaching and data feedback for teachers make the difference between transient pilots and durable systems.
Closing reflection
On this International Day of Education, the imperative is plain: technology must extend our humanity, not erode it. The evidence and the pilots converge on a single practical truth — AI delivers when it is an instrument of teachers and communities, not a substitute for them.
If CLIP’s mission is to bridge educational gaps and inspire STEM participation, the pathway is to adopt AI in ways that build teacher capacity, invest in low-tech inclusion models, and insist on governance that protects learners’ rights. That strategy honours both the promise of modern tools and the enduring fact that education is a human endeavour: a craft of relationship, judgement and shared aspiration.
Let AI be our tool, not our master; let agency remain our guiding star.