How Do We Teach in the Age of Superintelligence?

8 min read · May 6, 2025

By Roozbeh Aliabadi, Ph.D.

I remember the first time I watched a student in one of our AI classrooms ask ChatGPT for help writing a poem. The student — a sixth-grader from an AI boot camp — typed in a few lines, hit send, and watched as the screen filled with metaphors and rhymes in seconds. His eyes lit up. “It’s like it knows what I’m trying to say,” he told me.

That moment has stayed with me. Not because the poem was particularly profound (though it was pretty good), but because it pointed to something deeper — a shift in how we understand intelligence, expression, even what it means to learn.

We educators stand at a curious junction in human history. I believe we are no longer alone as the most intelligent species on Earth. For the first time, we are sharing the planet with agents — nonhuman intelligences — that can generate ideas, stories, decisions, and even values. This isn’t science fiction. This is a classroom in Pittsburgh, a school in Riyadh, a coding club in Nairobi.

The question I keep returning to is this: How do we share the planet with this new superintelligence?

Let’s unpack it together.

The Myth of More Information

Remember when the internet was supposed to democratize knowledge? When educators and technologists alike heralded it as the great equalizer? Information for all. Knowledge at our fingertips. The truth would be set free.

I was one of those hopefuls. But I’ve come to see that in a completely free information market, truth doesn’t necessarily rise to the top. Fiction, illusion, and convenient lies often do. Why? Because truth is costly, complex, and often painful. Fiction is cheap, simple, and sweet.

This insight hits home every time I talk with a teacher overwhelmed by misinformation in their students’ social media feeds. One educator in Berlin told me, “They come to class already certain about what they think they know — from TikTok. It’s not a blank slate anymore. It’s a scrambled one.”

The information age didn’t necessarily make us smarter. It connected us faster. But connection isn’t understanding. Most of the information that connects us — stories, memes, slogans — isn’t true. It’s functional. The Bible, money, national identity — none of these are true in the scientific sense. But they’ve held civilizations together.

The classroom, then, becomes one of the last remaining spaces where truth still gets a fair shot — if we’re careful, if we’re courageous.

From Tools to Agents

Here’s where it gets tricky.

Every previous information revolution — writing, printing, television — amplified human voices. We were still in control. Even Google, for all its algorithmic wizardry, is still just a tool. It doesn’t generate new ideas. It indexes ours.

But AI? AI creates. It writes, decides, and increasingly self-corrects. It doesn’t just connect people; it reshapes how we think and what we value. It doesn’t just organize information; it fabricates it.

This shift from tools to agents is more than a change in terminology. It is a warning.

Imagine this: A student submits a beautiful essay. It’s original, thoughtful, well-researched. Except she didn’t write it — her AI assistant did. Is this cheating? Or collaboration? Is the purpose of the assignment to produce good writing or to develop thinking?

The answer, of course, depends on what we value. And AI is forcing us to clarify those values faster than we’re ready.

AI as Storyteller

I often say that humans dominate the planet not because we are individually smarter or stronger, but because we can cooperate in large groups based on shared stories. Religion, law, science — all are built on narrative scaffolds.

Now we live with machines that can build those scaffolds better than we can.

Think about that. The most powerful educational technologies of the next decade may not be those that deliver curriculum, but those that create culture. A chatbot that helps with math is impressive. A chatbot that influences a child’s worldview? That’s transformative — and terrifying.

We used to ask: “What story are we telling our students about the world?” Now we must ask: “Whose stories are they hearing — and do they know who is telling them?”

As an educator, you’re not just a conveyor of facts anymore. You are a curator of stories — your own, your community’s, and yes, even AI’s. The battle for student attention is no longer between books and phones. It’s between truth and illusion.

The Middle Path

I try to walk a middle path between utopia and doom.

Yes, AI might cure diseases, personalize education, and boost productivity. But it might also erode democracy, distort reality, and deepen inequality.

The Industrial Revolution was powered by human decisions. Machines didn’t choose to reshape society; we did. But today’s revolution may be driven in part by machines that don’t just execute decisions — they make them.

For educators, this presents a paradox.

We want to prepare our students for the AI era. But how do we prepare them for a world whose future is being written, quite literally, by nonhuman authors?

The answer, I think, is to do what we’ve always done: help students become wiser, not just smarter. Critical thinkers. Ethical decision-makers. Compassionate collaborators. And now, more than ever, mindful participants in a shared social reality that is increasingly mediated by invisible algorithms.

The Trust Crisis

If there is one currency more valuable than data in the age of AI, it is trust. And right now, we’re bankrupting it.

We are rushing to build superintelligent systems that we don’t trust — because we don’t trust each other. Nations and corporations alike are stuck in an AI arms race, each afraid to slow down for fear of being left behind.

And yet many of these same actors insist that the AI itself can be trusted.

This is irrational. We have millennia of experience dealing with human corruption, violence, and ambition. We’ve built systems — imperfect, but real — to constrain them: courts, constitutions, curricula.

But we have zero experience dealing with nonhuman intelligence. We don’t know how to regulate it, educate it, or align it with human values. And yet, we’re letting it into our schools, governments, and markets faster than we can understand it.

If AI is to become a partner in education, we must first teach our students — and ourselves — how to trust wisely.

Lessons from the Classroom

In our work with thousands of students across more than 150 countries through WAICY, the World Artificial Intelligence Competition for Youth, I’ve seen how young people already intuitively grasp both the wonder and the danger of AI. They ask questions like:

  • “What if AI makes rules that are unfair?”
  • “How do we know if it’s lying?”
  • “Can AI have feelings?”

These aren’t just cute questions. They’re the questions.

At a WAICY event in Saudi Arabia, a team of girls designed an AI model to reduce food waste in school cafeterias. Their solution wasn’t just technical — it was deeply human, rooted in empathy and community values.

At another event in India, a group of 12-year-olds trained a model to identify local dialects in emergency calls — a project born from watching their parents struggle during the pandemic.

These kids are not asking whether AI is good or bad. They’re asking how to make it useful. They’re not racing toward the singularity. They’re walking toward solidarity.

Self-Correcting Systems

Progress doesn’t happen because we get everything right. It happens because we learn to correct our mistakes before they become fatal.

Democracy, science, and education are all systems with built-in feedback loops. When something goes wrong — a bad law, a failed experiment, a misguided lesson plan — we have mechanisms to adjust course.

The problem is, AI systems don’t always come with those loops. Or if they do, we don’t understand how they work.

Imagine a future where education policy is written by models we don’t fully comprehend. Where disciplinary decisions are made by opaque systems. Where lesson plans are optimized not for learning, but for engagement metrics. That’s not science fiction. It’s already happening in some districts.

We need to demand transparency. Insist on explainability. Teach students not just how to use AI, but how to question it.

Escaping the Cocoon

One of the biggest dangers I see is the “cocoon” effect — AI creating personalized bubbles of information that insulate us from reality. This is already visible in political discourse, media consumption, and yes, education.

We used to worry about echo chambers. Now we must worry about simulation chambers — realities so immersive, curated, and persuasive that they feel more authentic than life.

As educators, we must reckon with what the philosopher Bernard Stiegler called “pharmaka” — remedies that are also poisons. AI is such a pharmakon. It can deceive, but it can also awaken. The same technology that feeds illusion can also be used to puncture it.

The classroom is the needle.

Embodied Intelligence

What makes AI different from us is not just how it thinks, but the fact that it has no body. It doesn’t bleed. It doesn’t age. It doesn’t fear death. This difference matters.

As teachers, we teach embodied learners. Children who get hungry, distracted, heartbroken. Education is not just about uploading content. It’s about tending to bodies, emotions, contexts.

AI can assist. It can even inspire. But it cannot replace the sacred encounter of human presence.

We must resist the urge to treat students as data points or optimize them like algorithms. They are not machines to be calibrated. They are stories to be heard.

What Now?

So what do we do?

First, slow down. Resist the pressure to “AI-ify” your classroom overnight. Ask why before how.

Second, create space for dialogue. Bring students into the conversation about AI. Let them question it, challenge it, co-create with it.

Third, reclaim storytelling. Teach students to recognize narrative patterns, to spot manipulation, to write their own truths.

Fourth, build trust. In each other. In the learning process. In the messy, beautiful work of thinking together.

And finally, model humility. We don’t have all the answers. That’s okay. The best educators I know are the ones who say, “Let’s figure this out together.”

A New Curriculum for an Ancient Task

I believe we are entering an era where the very fabric of society will be reshaped by nonhuman intelligence. But the task before us is not new.

It is the oldest task there is: to be human, together.

To teach not just how to code or prompt or cite sources, but how to live with wonder, with courage, with curiosity.

To ask not just what AI can do, but what it should do — and who decides.

To remind our students, and ourselves, that the greatest intelligence is not the one that computes the fastest, but the one that connects with care.

And maybe, just maybe, that’s a superintelligence we still have a shot at mastering.

The Invitation

So here is my closing invitation to every educator reading this:

Be brave. The AI era isn’t waiting for us to catch up. But our students still are.

Be present. They don’t just need tools; they need people who listen, who show up, who care.

Be hopeful. Not blindly optimistic, but courageously hopeful. Hope, not as a prediction, but as a discipline. A choice.

AI might someday write better essays, calculate faster, or generate more impressive ideas. But it cannot love a child. It cannot notice the quiet kid in the back row. It cannot offer grace, or laughter, or the shared silence of deep understanding.

Only we can do that.

And if that is the final lesson of this new age — then maybe education still has a shot at leading us home.

This article was written by Rooz Aliabadi, Ph.D. (rooz@readyai.org). Rooz is the CEO (Chief Troublemaker) at ReadyAI.org. He is also the Director of Compassion in AI at Stanford University (CCARE).

To learn more about ReadyAI, visit www.readyai.org or email us at info@readyai.org.