Ask an Expert: Associate Professor Shea Kerkhoff discusses AI in the classroom

Apr 6, 2026


Shea Kerkhoff is an associate professor of literacy at the University of Missouri–St. Louis. Kerkhoff’s research on disciplinary and digital literacy has been featured in esteemed academic journals such as Reading and Writing: An Interdisciplinary Journal and Reading Research Quarterly. (Photo by Derik Holtmann)

When they began facilitating a monthly study group for Missouri educators, Shea Kerkhoff and Sam von Gillern heard one concern above all else: artificial intelligence and its role in the classroom.

Global adoption of generative AI tools is estimated to have reached 16.3% of the world’s population in 2025, and corporate investment in the emerging technology reached $252.3 billion in 2024. Tools such as OpenAI’s ChatGPT have become commonplace in the home, and AI technology has inevitably made its way to schools across the country.

Now teachers are grappling with how to use AI appropriately to support teaching and learning. Kerkhoff, an associate professor of literacy in the College of Education at the University of Missouri–St. Louis, and von Gillern, an assistant professor of literacy at the University of Missouri–Columbia, are trying to help educators navigate this new frontier.

The pair has been conducting the monthly study groups via Zoom since the start of the academic year. The meetings serve as a space for educators and education researchers to talk about the practical and ethical considerations surrounding the use of AI. Their work is supported by a multi-million-dollar grant from the Missouri Department of Elementary and Secondary Education to promote evidence-based literacy strategies in the state. The funds are part of the U.S. Department of Education’s larger Comprehensive Literacy State Development program.

In addition to being a co-principal investigator on the grant, Kerkhoff has published research on disciplinary and digital literacy in esteemed academic journals such as Reading and Writing: An Interdisciplinary Journal and Reading Research Quarterly.

In the latest installment of the Ask an Expert series, she joined UMSL Daily to discuss AI’s relationship to literacy, challenges and opportunities related to the emerging technology, and how the College of Education is preparing future teachers for the new status quo.

First, in your view, how is AI connected to literacy?

I believe that what people need today is AI literacy: the ability to have agency over when, how and for what purpose they are using AI in their reading, writing and communication lives. AI, with chatbots and generative AI, has the ability to write for us or to support our writing. Having agency over that means having the literacies to not just accept what AI has produced but to be able to look at it critically and say, “OK, does this match my audience? Does this meet my purpose?”

One thing that AI does not have is the writer’s own voice. That’s a literacy – understanding voice and being able to insert one’s own voice into one’s writing – and then there’s the reading side of that as well. Because chatbots are generative, they are creating an answer to your question. They’re not just spitting out an answer from a database. That means there can be hallucinations – the term for when the AI generates an answer that is factually incorrect. That requires a critical reading of what the AI produces.

What questions/concerns are you hearing from teachers in these discussions about AI?

The first one is, what is a reasonable, acceptable use policy for AI with students, and what’s acceptable at different grade levels? What’s acceptable for elementary, middle and high school, and how do we go about making those policies? That’s one of the top questions, and we talk about actually inviting students into those discussions so that students can be part of the conversation. The students are the ones who are using it, so they have information about what is helpful for what purpose. That is important information, and their different lived experiences with it bring perspectives that teachers might not have.

Secondly, when students are part of the conversation and feel that their voice is heard and have an opportunity to hear the concerns or the reasoning of their educators and their parents, they’re more likely to follow the policy than if it’s, “This is the rule. Follow the rule because I said so.” That’s one of our strategies: leading conversations at the classroom level with teachers and students to develop classroom-level policies and then making sure that those policies align with the school and district policies.

The other question is, what do we do when we suspect a student has used AI for an assignment that they weren’t supposed to use AI for, or in a way that they weren’t supposed to use AI for? Right now, there’s no 100% effective AI detection tool, so there is a risk of false positives. What that means is that a student could have written something themselves and be flagged by a detection tool as having used AI when they didn’t. So, we can’t rely on those tools, because then we would be treating innocent students as guilty. That’s not what teachers want to do. On the other hand, we can’t just look away, because we want students to be doing the thinking that we’re asking them to do. We’ve talked about going back to paper and pencil, bringing out those composition books we used to have. We’ve also talked about having students reflect on their own writing. When they are reflecting on what is good or needs improvement in the writing – whether they wrote it or AI did – they’re still engaging those critical thinking processes of critiquing what’s there.

We also want to give students the information around AI literacy, so that they are part of it. What I see with students is a lack of confidence. They think AI can do a better job than they can, and they want to be successful. It’s really from a place of fear that they’re turning to AI. When they learn that perhaps it isn’t necessarily a better product, or is not perfect, or that it’s missing the voice – and what the teacher really wants to hear is their voice – then that’s an ounce of prevention, rather than trying to go back and police it.

Are there any other challenges teachers face with AI?

There are so many AI tools out there. There’s actually a page that is dedicated to “What AI should I use?” There’s an AI for that. There are thousands of AI tools out there. So, what tools are useful? It’s kind of overloaded right now, because it’s new and it’s already saturated. That’s one of the challenges: sifting through all the tools to see what is actually helpful. Then the other tech challenge is using it effectively. That comes with experimentation but also hearing from other people and how they’re using it.

On the other hand, are there opportunities for teachers to positively incorporate AI into the classroom in a way that it augments instruction?

Yes, one of the ways that AI can be effective is for quick feedback on student writing. Research shows it’s not necessarily as high quality as the feedback a teacher would give, but it’s quick. Let’s say a student wants some feedback on their writing that they can use immediately. There are some AI tools that are accessible in schools, so they put the writing through there, get the feedback and then move forward without having to wait for the teacher to read it, give the feedback and get it back to them. We know that the amount of time that elapses between when a student writes and when they receive feedback affects how impactful that feedback is. AI can help in that way.

Then there are text-to-speech apps, which were already out there on the market. Now there are AI text-to-speech apps that are much better – they have a little bit more intonation and sound more human-like because of AI technology. That’s especially helpful for students who have that as a special need.

How do you see instruction evolving to account for AI in digital literacy?

In the same way that literacy is across the day, across the curriculum, and different teachers are working with their students for different disciplinary literacies – science literacy and using reading and writing to study history, etc. – I think that’s the way it’s going to be with digital literacy and AI literacy. Students are going to need to learn about this throughout the day, not just in a computer class or a technology class. Now, all those classes also offer an opportunity for students to begin understanding how the AI works, what goes into building AI. But it’s using AI that’s the literacy part. When they’re using it, that will be across the curriculum.

You already touched on this briefly, but how can AI be used to promote critical thinking?

Well, the fear is that people will use AI and then not engage in critical thinking, because the machine is doing the thinking. What we promote is agentic AI use, where the person is strategically thinking at the forefront of how they’re going to use AI for the particular purpose, and which AI tool will best meet that purpose, and then critically reflecting on the output of the AI and not just accepting whatever is produced. The more that people understand that AI is fallible, the more that they will realize that they have to engage critical thinking when using AI.

What do students need to know to use AI ethically and safely?

Students need to be aware of what they are putting into AI. They don’t want to be putting their personal information or other people’s information into AI. That’s for safety. Ethically, they also need to think about putting other people’s writing into AI. For example, if a teacher assigns a short story and says, “Summarize this story” or “Create a literary analysis of the story,” and a student puts that short story into the AI, they’re violating an ethical line there. This is a creation from a human, and you’re now putting it into the AI, where the AI can use that as data for how it will create and generate writing.

The College of Education is training the next generation of teachers. How are UMSL faculty members talking with teacher candidates about AI?

One of the things that UMSL faculty are doing is having conversations with their students. In the College of Education, we have to talk about both sides of the coin: What is appropriate use of AI when you’re a student in my class and you’re learning, and what is appropriate when you’re a teacher? We have conversations about appropriate use of AI for assignments. We have conversations about the affordances and limitations of AI for different tasks that go along with teaching. We also have students tinker and play with different AI tools, and then compare the outputs to products that we made ourselves as teachers, or that they made for previous assignments, so that they can start to construct their own knowledge about what is and isn’t appropriate to use as a teacher.