20 Mar 2026

Moving from “Will AI shape education?” to “How can we shape AI to support learning?”

By OER Project staff, featuring Aaron Cuny


When AI in the classroom first became a topic, the question was “Do you use it?” Now, the question is more often “How do you use it?” As states and districts scramble to write AI policy that evolves with the technology, it’s clear that the focus needs to move from whether we allow AI in classrooms to what education and learning can and should look like in an AI-immersed world. Organizations across the educational landscape—from the Center on Reinventing Public Education to the American Historical Association—seek to shape meaningful use of AI to support learning. At OER Project, we’ve compiled resources and strategies that help teachers incorporate AI in ways that encourage critical thinking and keep teaching and learning at the forefront. This winter, we surveyed teachers in our community about how they use AI, and learned that teachers want to see real examples of lessons, activities, and strategies that don’t shortcut student thinking. Teachers want to do more than minimize harm; they’re hungry for AI uses that actually strengthen historical thinking skills.

We believe that the first step to supporting classroom usage of AI is to reflect on what learning should look like—with or without AI—and then to figure out if and how AI can support that. But it’s not just about decisions at the teacher level; schools and districts have the challenging task of ensuring they have systems and policies in place to provide guidance and frameworks. We wanted to gather the insights of someone deeply immersed in thinking about AI’s place in education, so we turned to OER Project Advisory Board member and AI for Equity founder Aaron Cuny, who works with school systems to thoughtfully integrate AI in the classroom. 

What originally drew you into working with AI, and what keeps you engaged in the field today?

I was drawn to AI work out of an early concern that its benefits would not be distributed evenly—even before there was data to confirm that risk. I worried that AI’s benefits would concentrate in wealthier districts and higher-income families, following the familiar contours of existing inequality. That concern compelled me to found AI for Equity because I believe the education sector has a profound responsibility to ensure AI’s benefits reach every student, especially those historically underserved. Unfortunately, the early data has validated those fears: AI usage and training are indeed more prevalent in higher-income contexts, and the equity gap is already emerging.

What keeps me engaged is the urgency of closing that gap before it widens further. We see this as a moral imperative.

If you could design the ideal classroom of the future, how would AI meaningfully enhance teaching and learning?

The ideal classroom of the future wouldn’t use AI to bypass thinking; it would use AI to make thinking visible and develop it more intentionally. Right now, we face a critical risk: AI can produce sophisticated work without students doing the cognitive work that makes learning happen. The classroom I envision flips this entirely. Students would engage AI as a thinking partner within what we call an HAH—a human-AI-human cycle. In an HAH, students do substantial initial thinking, then use AI to extend or challenge that thinking, and then return to human reflection with teachers and peers. The key shift is that AI amplifies rather than replaces cognitive work. Teachers would have curriculum materials that build this approach directly into daily instruction, making it not an add-on but an integrated part of how students learn to think critically.

What risks does AI pose to student learning, and how can schools mitigate them?

The core risk is what I would call cognitive bypass: AI can produce sophisticated work without students doing the thinking that makes learning happen. A student can generate a polished essay, a well-structured argument, or a nuanced analysis without developing the underlying skills those products are supposed to represent. This is not cheating in the traditional sense; it is a subtler erosion where the visible output looks like learning, but the cognitive work never occurred. As I said earlier, the mitigation necessitates an evolution in our thinking about instructional pedagogy: We must redesign instruction around an HAH where students do substantial thinking first, engage AI as a questioning partner rather than a content generator, and return to human judgment to evaluate and integrate what AI offered. [Check out a full HAH example for causation.] Schools that embed this approach directly into curriculum, rather than treating AI use as a separate policy question, position students to develop genuine capability and content knowledge alongside AI fluency.

We hear more and more that the critical thinking skills taught in a history classroom will be essential in a world where AI is prevalent. What does truly effective critical thinking with AI look like in practice?

Truly effective critical thinking with AI looks like students interrogating AI the way they’d interrogate any historical source. After analyzing a source collection and developing their own interpretation of, say, the causes and consequences of the Cold War, a student might prompt AI to challenge their reasoning: What evidence might contradict my argument? What perspectives am I not considering? Then comes the critical work: evaluating whether AI’s challenges hold up against the primary sources, asking what assumptions AI might be making, and deciding what to incorporate or reject. Teachers can model this by explicitly showing how to question AI’s historical reasoning: What sources might AI be drawing from? Whose voices might be missing from this response? The goal is students who treat AI as one more voice to evaluate, not an authority to defer to.

What emerging developments will define the next major frontier for AI in education?

The next frontier is not a technology breakthrough; it’s an institutional one. AI capabilities are already advancing on a timescale of weeks and months, while school-system governance operates on a timescale of years. That velocity mismatch is widening, not closing. The defining development will not be what AI can do; it will be whether school systems develop the absorption capacity to use it well.

What has surprised you most in your work at the intersection of AI and education?

A crucial realization we’ve come to is that current accountability systems aren’t designed for this moment. State, district, and charter-authorizer accountability structures don’t yet incentivize focus on the evolving skills and competencies students will need to flourish in an age of accelerating AI integration. This raises critical equity questions: The schools most tightly bound to these accountability systems are often those serving students furthest from opportunity. Schools serving privileged student populations can experiment and iterate, while schools under intense accountability pressure face a rational calculus that favors caution over innovation. The gap between what AI makes possible and what schools are positioned to practice is widening fastest for the students who can least afford to wait. Changing this is not about exhorting leaders to be bolder; it’s about changing the current conditions that make boldness risky.

One piece of advice for teachers navigating AI in their classrooms?

Don’t start with AI policy—start with learning goals. The question isn’t “Should students use AI?” but “What thinking do I need students to do, and where might AI help or hinder that?” When you’re clear on the cognitive work that matters, decisions about AI use become instructional decisions, not technology decisions.


Want to learn more about how we can use AI to support high-quality instruction? Join us on April 22 for History in the Age of AI: Teaching Critical Thinkers. Burning questions about what other teachers are doing? Want to share success stories, or learning moments, from piloting AI lessons? Head to the community to ask away and share!

About the author

Aaron is the Founder and CEO of AI for Equity, where he equips leaders at innovative school networks nationwide to lead boldly on AI, in service of a more just world for all students. Previously, he co-founded and led Ingenuity Prep Public Charter School, where at-risk student achievement ranked near the top of all school networks in Washington, D.C. He holds an M.A. in Education Leadership from Teachers College, Columbia University, and a B.A. from the University of North Florida.

Header image: Students in a science class put the final touches on their class presentation. Photo by Allison Shelley/The Verbatim Agency for EDUimages, CC BY-NC 4.0.