I’ve spent much of my career working with school system leaders to ensure technology is used safely, responsibly, and in service of student learning in classrooms.
Lately, I’ve been watching policy conversations drift in a concerning direction. Well-intentioned worries about social media and personal devices are increasingly being applied to school-based technology. And in that shift, critical nuance is getting lost.
“Screentime” has become a catch-all term, but it’s imprecise. What students experience on personal devices (unfiltered content, algorithm-driven distraction, and social media engineered to capture attention) is fundamentally different from what they experience with technology in schools. In classrooms, educational technology is intentional: instructionally designed, filtered, supervised, and aligned to learning goals.
When policy treats these experiences as interchangeable, it risks solving the wrong problem.
Instead of asking “how much screentime is too much?” here are the questions I believe leaders should be asking:
Is the technology instructionally designed? Not all technology is created equal. High-quality instructional tools are intentionally designed to support teaching and learning. Passive screen exposure or digitized worksheets are not the same as tools built to deepen engagement, personalize instruction, or provide teachers with actionable insight.
Is it safe, supervised, and compliant with existing protections? Schools already operate within a strong framework of student protections, from FERPA to content-filtering requirements. While no system is perfect, students are almost always safer on a school network than on personal devices at home—a distinction that’s often lost in public debate.
Is there evidence that it actually improves learning? Districts shouldn’t chase the next shiny object. Procurement should be the first line of defense, requiring proof that tools meet privacy and cybersecurity standards and demonstrably strengthen teaching and learning in K–12 classrooms.
Is it developmentally appropriate and flexible for different learners? Broad, one-size-fits-all restrictions can unintentionally harm students, especially those with special needs who may rely on technology as a primary access point for learning. Age, context, and learner needs matter.
This conversation becomes even more important as we think about AI.
> “Banning technology doesn’t eliminate its influence; it simply pushes learning into less supervised spaces.”
>
> CEO, Consortium for School Networking (CoSN)
Even as policymakers call for students to be “AI competitive,” some proposals would sharply limit technology use in schools. Yet if we want students to learn responsible, ethical uses of AI, the best place for that learning to happen is in classrooms, guided by educators, with guardrails in place. Banning technology doesn’t eliminate its influence; it simply pushes learning into less supervised spaces.
None of this is to say that all technology is good or that students should be in front of screens all day. Especially in early grades, play, collaboration, and human connection are essential. But oversimplified policies based solely on minutes and mandates miss the bigger picture.
The real focus should be on making sure schools are using fewer, better, more connected tools, supported by thoughtful procurement, strong professional learning, and a clear understanding of instructional purpose.