The age of artificial intelligence has dawned, and it’s a lot to take in. In eSpark’s “AI in Education” series, we’ll help get you up to speed, one issue at a time. In this installment, we’ll look at what it means to keep humans in the loop and always center educators.
In May of 2023, the US Department of Education’s Office of Educational Technology released a highly anticipated policy report titled Artificial Intelligence and the Future of Teaching and Learning: Insights and Recommendations. This thorough examination of the AI landscape and the most immediate applications and concerns in education immediately became a must-read for interested educators, developers, and policymakers alike.
One common thread that emerged throughout the report was the strong recommendation for humans to remain in the loop at every phase of AI evaluation, adoption, and implementation. This theme aligns closely with what eSpark has been hearing from administrators in conversations about Choice Texts, our popular, AI-powered reading comprehension experience.
Most educators are excited about the personalized learning potential of the technology, but that excitement is tempered by an undercurrent of concern about the role teachers will play in an AI-supported learning environment.
ACE: Always Center Educators
The Office of EdTech’s report used a phrase that should become commonplace as the impact of AI continues to grow: always center educators. As noted in the report, “practically speaking, practicing ‘ACE in AI’ means keeping a humanistic view of teaching front and center.”
That sounds nice, but what does it mean in practice? Let’s take a closer look at some concrete examples of where that recommendation might come into play:
1) Getting to know your students
Personalization of lesson content is generally considered one of the low-hanging fruit of artificial intelligence. In the past, the idea of “personalized learning” had more to do with using algorithms to serve up content at each student’s level. This approach took many forms, but it was really more about differentiation than true personalization. With generative AI, we can imagine a world where the lessons themselves are tailored to each student’s interests in the moment—a world where student agency finally means something more than just choosing which program to use or which subject to work on.
In a well-implemented ACE model, the information collected on student interests should be readily accessible to teachers with minimal effort. Understanding what motivates students is a key part of the role, especially early in the school year when teachers are getting to know a whole new group. Teachers can use this knowledge to frame their instruction and feedback, develop relevant lessons and activities, and connect with students on a more personal level.
2) Making real-time instructional decisions
The Office of EdTech refers to this as “the loop in which teachers make moment-to-moment decisions as they do the immediate work of teaching.” Much has been made of the potential application for AI to act as a kind of “personal tutor” for students, serving as an extension of the teacher when the teacher is not available to work 1:1 with a student. The benefits of this application of AI are obvious, but there is a delicate balance between trusting the AI to make the right decisions and empowering the teacher to do the same thing.
It’s easy to imagine an ACE model in which AI resources are asked to analyze the data and provide recommendations to teachers, e.g., “Jerome and Maria could use additional support on phonemic awareness; here are some recommended lessons for your consideration.” We should also think about this use case as a broad spectrum that could easily vary depending on teacher capacity and student needs.
Should teachers have control over how much of the decision-making is delegated to the AI tool? Can the AI tool be configured to make certain decisions, while monitoring and alerting the teacher for others? Do teachers have visibility into decisions that have been made and the rationale behind them? This kind of flexibility and transparency will be key to future product development.
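To make the delegation spectrum concrete, here is a minimal sketch of what teacher-configurable decision routing might look like. Everything here is hypothetical—the decision types, delegation levels, and class names are illustrative assumptions, not a description of any real product:

```python
from dataclasses import dataclass
from enum import Enum

# Hypothetical delegation levels a teacher might configure per decision type.
class Delegation(Enum):
    AUTO = "auto"      # AI acts on its own
    NOTIFY = "notify"  # AI acts, but alerts the teacher
    REVIEW = "review"  # AI only recommends; the teacher must approve

@dataclass
class Recommendation:
    student: str
    action: str
    rationale: str  # kept visible so the decision stays explainable

@dataclass
class DecisionPolicy:
    # Teacher-configured mapping of decision type -> delegation level
    levels: dict

    def route(self, decision_type: str, rec: Recommendation) -> str:
        # Unconfigured decision types default to human review.
        level = self.levels.get(decision_type, Delegation.REVIEW)
        if level is Delegation.AUTO:
            return f"applied: {rec.action} for {rec.student}"
        if level is Delegation.NOTIFY:
            return (f"applied, teacher alerted: {rec.action} "
                    f"for {rec.student} ({rec.rationale})")
        return (f"queued for teacher review: {rec.action} "
                f"for {rec.student} ({rec.rationale})")

# A teacher might delegate low-stakes choices while keeping
# high-stakes ones for themselves.
policy = DecisionPolicy(levels={
    "next_lesson": Delegation.AUTO,
    "skill_grouping": Delegation.NOTIFY,
    "intervention_referral": Delegation.REVIEW,
})

rec = Recommendation("Jerome", "extra phonemic-awareness practice",
                     "below benchmark on the last three blending checks")
print(policy.route("intervention_referral", rec))
```

The key design choice is the default: when a decision type hasn’t been explicitly delegated, the sketch falls back to human review rather than autonomous action—one way to keep the educator centered by default.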
3) Recognizing patterns and diagnosing students
For those who remain resistant to the idea of AI in the classroom, it’s hard to move past the more dystopian visions of what the future might hold: students staring blankly at screens while the software labels, groups, and diagnoses them along the way. Nobody wants that, but it is true that AI’s unmatched ability to recognize patterns can enable educators to identify and flag potential problems much earlier than has been possible through traditional methods.
Applications of this type of pattern recognition could include flagging students for dyslexia screening, referring students for different intervention tiers or IEPs, flagging behavior concerns, or recommending students for accelerated and gifted programs. Each of these possibilities carries significant weight for a student’s educational needs, priorities, and required supports. Decisions like these might also be heavily influenced by algorithmic bias, which is among the most problematic of AI’s known flaws.
In an ACE-driven process, AI supports educators by recognizing these kinds of patterns and providing clear, actionable information about why certain courses of action are being recommended. Teachers and administrators would still be responsible for reviewing that information, filtering it through all the additional human context that goes into each individual scenario, and ultimately making the right decision for each student.
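One way to picture this review step: the AI produces a flag that carries its evidence but takes no effect until an educator signs off. This is a hypothetical sketch—the class, field names, and sample data are invented for illustration:

```python
from dataclasses import dataclass, field
from typing import Optional

# Hypothetical flag produced by pattern recognition. It records its
# evidence (inspectable) and its reasoning (explainable), but changes
# nothing until an educator reviews it (overridable).
@dataclass
class Flag:
    student: str
    pattern: str
    evidence: list = field(default_factory=list)
    reviewed_by: Optional[str] = None
    decision: Optional[str] = None

    def review(self, educator: str, decision: str) -> None:
        # The educator's judgment is the final word.
        self.reviewed_by = educator
        self.decision = decision

    @property
    def actionable(self) -> bool:
        # A flag only becomes actionable once a human has confirmed it.
        return self.decision is not None and self.decision.startswith("confirmed")

flag = Flag(
    student="Maria",
    pattern="possible need for Tier 2 reading intervention",
    evidence=[
        "three consecutive below-benchmark fluency scores",
        "declining comprehension trend since October",
    ],
)

# The teacher brings in context the model can't see.
flag.review(educator="Ms. Rivera",
            decision="overridden: recent absences explain the dip")
print(flag.actionable)
```

Notice that an overridden flag simply never becomes actionable—the human context (here, absences) wins without any change to the underlying model.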
Feedback Loops for Everyone
We’ll be hearing a lot about feedback loops over the next few years of AI adoption. These loops have always existed; AI just adds new branches and layers that we need to account for. Until the recommendations outlined here become policy, educators at every level are encouraged to identify, examine, and evaluate the loops most relevant to any AI tool they consider for adoption, including:
- Students receiving real-time feedback directly from a program
- Students receiving asynchronous feedback from their teachers
- Teachers receiving feedback on which scaffolding and support strategies have worked best for a given student
- Teachers receiving feedback on what their students are working on and engaging with
- Teachers receiving feedback on student strengths, weaknesses, and instructional recommendations
- Administrators receiving feedback on the technology they’ve implemented and their academic return on investment
- Parents receiving feedback on what their children are working on
- Policy makers receiving feedback on how the technology is evolving, what new issues are arising, and how concerns like these are being addressed
- App developers receiving feedback from educators at every stage of the product life cycle
There are many more of these to consider, some specific to AI and some that apply to any kind of technology or instructional strategy. The one thing most of them will have in common is the need for humans to play a significant, decision-making role.
AI still lacks nuance, context, and common sense. It still hallucinates and gets things wrong. It is still inconsistent and sometimes ill-informed. It’s absolutely vital that any data flowing into or out of AI apps remains, in the words of the Office of EdTech, “inspectable, explainable, severable, and overridable.”
There’s so much to love about the potential for AI to help kids learn, but any technological revolution on this scale comes with growing pains. By keeping humans in the loop, we can mitigate risk and keep everybody focused on the same goals and outcomes.