Why your AI policies should read more like guidelines

School leaders looking to integrate AI into their classrooms successfully should take a guidelines approach, rather than enacting rigid policies, says Ken Shelton, an edtech thought leader and featured speaker at the 2026 Future of Education Technology Conference.

“The most effective guidelines encourage critical thinking, center human decision-making and allow for nuance,” Shelton says.

Shelton answered questions about his upcoming FETC appearance and how educators can make AI a force for good.

What guidance can you offer K12 leaders who are developing or updating AI policies?

The main guidance I share with leaders is to shift from a definitive policy to more of a guidelines approach. This is for several reasons. If you have a set of policies that are likely to be interpreted in a binary way, you’ll spend more time than you want constantly updating them.

For example: "Students are only permitted to use AI if the teacher provides consent." My follow-up question is always: Is this AI in general or generative AI? How can you enforce this when a student is no longer within the geographical and IT infrastructure boundaries of the school?

Guidelines provide a larger degree of framing that encourages ethical leadership, making them dynamic and more sustainable. In addition to this, guidelines ideally are drafted in collaboration with classroom educators, students and leadership.

The most effective guidelines align with your profile of a learner, profile of a graduate, strategic plan and single-site plan. This not only provides contextual framing but delineates an intentionality around the use of AI for a specific purpose.

Your AI guidelines should, at some point, merge with your existing digital responsible use guidelines and not be a wholly separate thing.

What are some examples of effective and ethical AI guidelines?

The most effective guidelines encourage critical thinking, center human decision-making and allow for nuance. For example, in some cases, I have seen the following language associated with a policy or guideline: “Students and educators are expected to consider how their use of artificial intelligence enhances teaching and learning, not replaces thinking.”

Another example is guidelines that encourage reflective thought along with conscious considerations, such as, “I can confidently identify where AI has supported my learning, not just helped me complete tasks faster.” I do think it is critical to point out that effective and ethical policies are specific within the context of the classroom, school, district and system.

While there are many frameworks available, it is crucial to adapt them to the relevant context; otherwise, there is a risk of abstract interpretation, which is both impractical and unsustainable.

How does AI empower educators and learners?

When used in ways that are both ethical and responsible, AI can most definitely streamline tasks, support idea generation, automate many things and expand learning opportunities. The key here is to know when to use it for these purposes and to identify which platform works best.

For example, if an educator is looking to make learning more culturally responsive, a large language model or education equivalent may help with ideas on how to do this more effectively. The key here is for the educator to know their learners first and foremost, then utilize the AI platform to generate or expand existing ideas.

Another example, and this is one of my favorites, is to utilize an interactive chatbot to support student learning while providing a teacher with a window into their thinking.

Recently, I did this with a group of English language arts teachers where we considered all of the following:

  • How might a chatbot support students in generating or expanding ideas around a writing project?
  • What are the barriers to class participation the chatbot could remove?
  • What language must we include in the chatbot's design so it amplifies the kinds of questions a teacher would ask and shapes how it responds to student questions?

Lastly, we included in the prompt, “Do not write their essay for them.”
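As a hypothetical sketch of the approach the teachers describe (the names, wording and structure here are illustrative, not the actual prompt used in that workshop), guardrail language like "Do not write their essay for them" is typically embedded in the chatbot's system prompt alongside the behaviors the teacher wants amplified:

```python
# Hypothetical sketch: embedding teacher-defined guardrails in a
# tutoring chatbot's system prompt. All names and wording are
# illustrative assumptions, not a specific platform's API.

GUARDRAILS = [
    "Ask guiding questions instead of giving finished answers.",
    "Help the student generate and expand their own ideas.",
    "Do not write their essay for them.",
]

def build_system_prompt(subject: str) -> str:
    """Assemble a system prompt for a student-facing writing chatbot."""
    rules = "\n".join(f"- {rule}" for rule in GUARDRAILS)
    return (
        f"You are a brainstorming partner for a {subject} writing project.\n"
        "Follow these rules in every response:\n"
        f"{rules}"
    )

print(build_system_prompt("English language arts"))
```

Keeping the guardrails in a plain list like this also gives teachers a single place to review and revise the chatbot's boundaries as a group, which mirrors the collaborative drafting Shelton recommends for guidelines generally.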

What does AI literacy mean?

I have a working definition of AI literacy that is the third iteration for me over the past 12 months. Far too often this terminology is used without definition and without context.

I always ask: Should the definition be the same in an 11th-grade science class as it is in a 2nd-grade class? This is precisely why I encourage educators to take ownership of and expand their understanding of AI so they can support student thinking around what AI literacy looks like in action.

My definition is: AI Literacy means knowing, understanding and using AI in smart and safe ways. It helps us ask good questions about how AI works, how it helps or hurts and how it can change or impact the world around us.

To further this definition, I apply a lens of:

  • Acumen: knowing how AI systems work
  • Fluency: the convergence of our comfort, confidence and competence in using these systems
  • Bias: understanding that all AI systems are human-constructed with human-generated data and are thus susceptible to the biases of both their designers and the data that drive them

How can educators improve their AI literacy? How do they help students become more AI literate?

This starts with rejecting the oversimplification of AI use and the distillation of our learning to mere tips and tricks. It requires educators to ask more in-depth questions, apply their learning to the complexity of our learning environments and support students in developing their own critical lens.

All of this connects to ethical leadership. Educators need to be supported in engaging in higher degrees of play to understand how different systems work, using these systems with higher degrees of transparency to model ethical leadership for students, and taking the time to analyze how they may work well and ways in which they may not.

I understand the allure of things like time savings and automation, but we must consider the following: What are we doing with the time we have? How might we regain some of that time outside of automating things? What is the line between efficiency and efficacy?

By centering our approach this way, we directly improve our understanding of AI literacy, while helping students become more AI literate. We have to go beyond the functional skills alone and balance that with our critical thinking skills.

What will educators learn in your presentation on visual AI storytelling?

This presentation is built on the core design principles of play and authentic learning. So referring back to my AI literacy definition, that is exactly how we will look at things like image and video generators specifically.

Educators will develop mechanisms to discern the differences between creativity and simple media generation. They will then be introduced to several core visual storytelling elements and how those can be augmented within the context of using AI platforms that generate images and media.

We will examine the bias in these platforms and learn the writing and creativity skills necessary to use them ethically and effectively. The creativity outlined in the session is not limited to image or video generation, but also creative writing, which is essential in creating prompts that yield intended and useful results.

The presentation answers the following questions:

  • Which platforms work to unleash and empower creativity in students and educators?
  • How do we use these platforms in safe and responsible ways?
  • How do we ensure the platforms amplify our own creativity without replacing our thinking?
Matt Zalaznick
Matt Zalaznick is the managing editor of District Administration and a lifelong journalist. Prior to writing for District Administration, he worked in daily news all over the country, from the NYC suburbs to the Rocky Mountains, Silicon Valley and the U.S. Virgin Islands. He's also in a band.
