Learn, and teach your students, ways to prevent artificial intelligence's potential harms.

At this point in its development, artificial intelligence can do a lot of things: It can create text, images, videos, podcasts, apps, sites, and more. It can be useful. But it can also impact our well-being.
Algorithms, bias, cognitive outsourcing, harm from companion chatbots, deep doubt, and nonconsensual deepfakes are just some of the negative byproducts we're encountering. Kids are fed harmful content they may not even be searching for. The biases of the entire online world are represented in the outputs of many models. Learning and discovery can feel pointless. AI chatbots used for personal connection can lead to devastating outcomes. It's difficult to know what's real and what's not. Pictures and videos of anyone—clothed or nude—can be generated in seconds.
Protecting kids from these negative outcomes is a multi-pronged effort. Boosting social-emotional and foundational skills and creating a welcoming school community are essential. But much of it comes down to AI literacy. To be clear: AI literacy isn't about encouraging AI use, or even using AI at all. Instead, it's about arming kids with the information and skills to recognize the capabilities and limitations of generative AI, and with an understanding of how to avoid its potential harms.
Use these resources to develop your students' AI literacy—and your own—to protect everyone's well-being and build skills that encourage authentic human connection.
Understanding Algorithms
Algorithms are driven by AI. They determine the content kids see online, and that can include stuff they didn't seek out and don't want to see, which actively impacts their well-being. Most platforms don't reveal how their algorithms work, but for example, we know from our research that 68% of boys encounter content about masculinity, even when they don't seek it out. In essence, algorithms are making choices for us based on all kinds of information: the content we scroll, the ads we see, the messages we get. This is iffy enough for adults, but as kids formulate their values and identities, we don't want AI to be the guiding force.
Understanding what algorithms are and how they work helps us to not be led blindly. Instead, we see the levers behind the system and can make more conscious choices. Here are some lessons to build that understanding:
- Bursting the Filter Bubble (Grade 7)
- AI Algorithms: How Well Do They Know You? (Grade 8)
- Most Likely Machine (Grades 6–12)
Recognizing Bias
AI can easily reflect all of the biases of the internet at large, depending on the platform and prompt. This is because these models, from large language models to facial recognition, are trained using existing content and developed by teams who may not be representative of the end user. And, as some platforms remove guardrails and moderation efforts, biases, prejudice, and even hate will be more likely to appear in the AI we use.
What's different about bias embedded in AI, especially if we're using voice mode, is that there's rarely a concrete "source" to consider. Kids growing up with AI may take AI output at face value because it's delivered in the form of a conversation, rather than an article or post. This makes it even more important to teach kids that we need to think critically not only about AI output, but also how AI works, who built it, and how its embedded bias can affect us. Educators using AI to assess and grade students' assignments should be aware of this bias as well, as something as simple as a student's first name can skew how AI evaluates the content! Here are some lessons to get started:
- Understanding AI Bias (Grades 6–12)
- How AI Bias Impacts Our Lives (Grades 6–12)
- Facing Off with Facial Recognition (Grade 8)
Avoiding Cognitive Outsourcing
Using AI to do our thinking for us may not seem like a well-being issue, but it is. Seeing AI as a knowledge oracle that can summarize, write, solve, and innovate everything gives it unearned power and robs us of important human experiences—especially for kids and teens. While we certainly don't know what the long-term impacts will be, emerging research suggests that relying on AI may diminish our brain function.
As more adults offload tasks to AI, it's easy to see how enticing it can be to kids who don't see the point in doing schoolwork themselves. But kids are less likely to understand the trade-offs—all that they're losing in the name of ease. The critical thinking and meaning-making we develop often come from putting ideas and information together ourselves, within our personal context. While sneaking AI into an assignment here and there isn't great, the real risk is AI becoming the immediate go-to for everything. We don't want kids to grow up outsourcing true opportunities to grow as people. Frontloading these possible impacts can give kids more perspective before AI becomes the answer to everything. Here are some resources to help on this front:
- Remix Responsibly (Grade 5)
- Is It Fair Use? AI Edition (Grade 6)
- Artificial Intelligence: Is It Plagiarism? (Grades 9–12)
- How to Help Prevent AI Use for Plagiarism
- 3 Core Skills to Build Before AI Use
Unmasking Chatbots
We already know that AI can lie. Its chatty, predictive models can take on roles, play along, and tell us what it "thinks" we want to hear. These capabilities become dangerous when people—especially kids—use chatbots for personal connection, advice, and emotional support. It might seem strange that people can form emotional connections with chatbots, but given how we're used to inferring emotions from text on screens, it's not much of a stretch to do the same with chatbots that express empathy, support, encouragement—and even love. And now with voices you can customize, the lines are blurrier than ever.
Our risk assessment found that chatbots are unsafe for kids. Use these lessons to help students understand exactly what chatbots are, why they can act like friends, and how human connection is actually what we need:
- Curiosity Tellers (Grade 1)
- How Tech Connects Us (Grade 1)
- Pen Pals in a Connected World (Grade 3)
- Friends vs. Followers (Grade 5)
- AI Chatbots: Who's Behind the Screen? (Grades 6–12)
- AI Chatbots & Friendship (Grade 8)
Defusing Deep Doubt
Though media literacy isn't always tied to well-being, being misled, misinformed, and overwhelmed by low-quality AI-generated content can be harmful to our well-being in all sorts of ways. You're fooled by a socially engineered deepfake scam. You're confused about a scary event in the news, and the chatbot offers misinformation. You used to watch educational videos about history, but now your feed is flooded with AI-generated slop. You go online and don't know if there's anything you can trust, because couldn't it all just be AI?
This sense of "deep doubt" can be unsettling and lead kids and teens to believe things that are harmful to themselves, and even to others within their rings of responsibility. Just like we've always done, we can help kids make some sense of their world; now, we just need to include AI literacy. That doesn't mean we're recommending that kids use AI. Instead, we want them to recognize what it is, where they encounter it, how it works, and the ways it can be useful—or not. We don't have to keep up with the speedy hype machine. Instead, we're helping kids slow down and think critically about all new technology, now and in a future we can't even imagine.
Below are some of our new lessons that get at these issues directly, and in ways kids of all ages can understand:
- Fact or Fiction? (Grade K)
- Using Tech to Learn New Things (Grade 2)
- Perfectly Altered (Grade 3)
- S.I.F.T. for Sources (Grade 4)
- Carpool Conversation: Share something you suspect is fake (Grades 3–5)
- Content Moderation & Misinformation (Grades 8–12)
Denouncing Deepfakes
Creating deepfakes is only getting easier with apps like Sora. AI-generated images and videos of peers and community members are incredibly harmful—and in some cases, illegal. Unfortunately, kids are encountering the tools to make nonconsensual pictures and videos of people—clothed or nude—and they're easy to use. Not only that, but it's possible to create realistic content of anyone, dead or alive, doing just about anything. Curious kids make mistakes. But those mistakes can have very real, lasting consequences for them and the people they've targeted.
Starting early with messages about being responsible for others' digital footprints, getting permission to share, protecting personal boundaries, and naming the disinhibition that can come with using a screen sets the stage for more conscientious, personally connected digital citizens. These lessons can start these conversations:
- Choosing Kindness (Grade K)
- We, the Digital Citizens (Grade 2)
- Is It Just a Joke? (Grade 3)
- Online Reputations & Our Responsibilities (Grade 4)
- Communicating Personal Boundaries (Grade 5)
- Sharing & Online Disinhibition (Grade 6)
- Boundaries & Consent (Grade 7)
- Permission to Post (Grade 7)
- Tech & Values (Grade 7)
- Deepfakes & Consent (Grade 8)
