“Why is this showing up in my newsfeed?”
It’s getting harder and harder to find the things you actually want to see online. Open up Instagram, and instead of seeing your best friend’s baby pictures, you’ll find videos of people you’ve never met pushing products you’ve never heard of. Google something, and the first thing you’ll see is an AI-generated overview that might be relevant to your search, but might very well not be. It can be frustrating, confusing, and at times, anxiety-inducing.
The term “algorithmic anxiety” was coined in a 2018 study. At the time, companies like Airbnb were beginning to use AI algorithms to rank and recommend listings to potential guests. Researchers found that Airbnb hosts exhibited anxiety over “a perceived lack of control and uncertainty” about how the site’s algorithmic evaluation worked for or against them. And so, algorithmic anxiety was born.
Writer Kyle Chayka puts it perfectly in an article for The New Yorker: “Besieged by automated recommendations, we are left to guess exactly how they are influencing us, feeling in some moments misperceived or misled and in other moments clocked with eerie precision. At times, the computer seems more in control of our choices than we are.”
For students and educators, this anxiety can be particularly pronounced, given the growing use of AI for grading, personalized learning, and even behavioral predictions. Understanding what algorithmic anxiety is, recognizing its signs, and learning how to mitigate its effects will be crucial steps for fostering a healthier educational environment in the age of AI.
What Causes Algorithmic Anxiety?
Algorithmic anxiety stems from a fear of being judged or evaluated by an opaque system. Unlike traditional assessments, where the criteria and the evaluators are known, algorithmic decisions can seem mysterious and arbitrary. This anxiety can surface when students are unsure how their grades are calculated, or when educators feel pressured to defer to algorithm-driven decisions.
Recognizing the signs in students
- Increased Stress and Frustration: Students may exhibit heightened stress levels, especially after receiving grades or feedback from AI systems. They might feel powerless or confused about how to improve.
- Avoidance Behavior: Students might avoid engaging with AI-based learning tools, skipping assignments or showing reluctance to participate in AI-facilitated activities.
- Erosion of Trust: A general distrust in the fairness of AI-driven assessments can develop, leading to skepticism about the value of schooling as a whole.
Recognizing the signs in educators
- Resistance to Technology: Teachers might resist using AI tools, fearing that these systems will undermine their professional autonomy or make their roles redundant.
- Professional Burnout: The pressure to constantly adapt to new technologies, coupled with the uncertainty of their impact, can contribute to professional burnout.
- Communication Breakdown: A lack of buy-in, or difficulty understanding and explaining AI-driven decisions to students and parents, can create additional tension.
How to Prevent Algorithmic Anxiety
Fortunately, there are a number of ways educators can work to prevent this anxiety from creeping into the classroom. As with most technology-related concerns, the right balance of socialization, empathy, and direct instruction can alleviate much of the pain students might otherwise feel.
Increase transparency
- Demystify the Algorithms: Educators and institutions should strive to make AI systems as transparent as possible. Explaining how algorithms work and how decisions are made can alleviate fears and build trust.
- Incorporate AI Literacy: Integrating AI literacy into the curriculum can empower students with the knowledge to understand and critically evaluate the technology they interact with.
Emphasize human-AI collaboration
- Keep Humans in the Loop: While AI can be a powerful tool, it should complement rather than replace human judgment. Following the ACE (Always Center Educators) model, teachers should continue to play a central role in decision-making processes, providing a human touch that AI simply cannot replicate.
- Provide Support Systems: Establish support systems for students and educators to address concerns and questions about AI. This can include AI-focused professional development or even dedicated tech support teams.
Promote ethical AI use
- Adopt Ethical AI Practices: Issues like hallucinations and inherent biases are known risks of AI use and can contribute to mistrust of AI tools. Schools should prioritize AI systems that are built with these risks in mind and adhere to ethical guidelines, ensuring fairness, accountability, and transparency.
- Regular Reviews and Updates: AI systems should undergo regular reviews to ensure they remain fair and effective. Feedback from students and educators should be integral to these reviews. EdTech Evolved offers a variety of resources for administrators to support these efforts, including: an AI impact assessment template for K-12 schools, an AI vendor questionnaire template, and recommendations for setting up a cross-functional AI advisory committee.
As symptoms of anxiety, depression, and other mental health disorders have increased among young people in recent years, it’s especially vital to get ahead of any additional stressors that might negatively affect learning environments. Fostering an atmosphere of trust and collaboration between technology and human educators will be essential in ensuring that students and teachers can thrive without the looming shadow of algorithmic uncertainty.
EdTech Evolved is here to bring you up to speed on AI in K-12 education. Subscribe to receive updates featuring trending topics, best practices, and tips to help you get the most out of your technology.
Brought to you by eSpark, a leading provider of highly personalized math, reading, and writing curriculum for grades K-8. eSpark’s AI-driven approach to personalized instruction has been featured in EdSurge, AP, and more.