Prompting Progress: Teaching AI Literacy in the Classroom

By The Educator Collaborative Fellow Troy Hicks

With a recent article from The Hill suggesting that “[e]xperts predict that 2025 will be the year artificial intelligence (AI) truly gets off the ground in K-12 schools,” the question this trend raises is not so much When will AI come into play? as Why and how will teachers use AI in their planning, instruction, and assessment?

Returning to the first post in this two-part series, as well as to MLA/CCCC’s “Student Guide to AI Literacy,” I want to suggest a few activities that educators might try related to the third, fourth, and fifth skill sets in the guide. In doing so, my intent is to demonstrate the deeper, more substantive ways in which AI literacy and the English language arts can be intertwined.

MLA/CCCC Skillset 3: “You know how to prompt GenAI to produce useful outputs.”

This skillset speaks to the need for what has previously been described as “prompt engineering” (i.e., thinking clearly about the components of a prompt and how they fit together into a cohesive set of instructions). That said, perhaps “prompting” alone is now the better term, if we consider it a genre in which students should be explicitly taught to write. Most students, I’ve learned, still seem to use AI as they would a search engine, asking for information about a topic (e.g., “What is climate change?”). Instead, teaching students how to ask more nuanced questions of the AI and to provide it with “guardrails” can be an interesting writing task in and of itself, leading them to more compelling interactions with the AI.

Consider the framing of these two prompts in ChatGPT, as well as the subsequent responses: 

Sequence #1:

I begin by simply asking ChatGPT, “What are Taylor Swift’s best songs?” I then provide it with more context and instructions: “You are a school counselor who is talking to a 13-year-old female who is struggling with a sense of identity. She says that she likes Taylor Swift. Identify a list of ten of Taylor Swift’s songs where the lyrics could be helpful for this student to ask deeper questions of her identity. Provide a reason why each song would be added to the list and one question that she could be thinking about while listening to each song.”

What are Taylor Swift's best songs?
ChatGPT said:
Taylor Swift has an extensive catalog of songs that have resonated with fans across her career, spanning genres like country, pop, and indie folk. Here are some of her best and most beloved songs:

Early Career (Country)
"Tim McGraw" (2006) – Her debut single, a nostalgic ballad.
"Teardrops on My Guitar" (2007) – A tale of unrequited love that defined early Swift.
"Love Story" (2008) – An iconic hit that blends Romeo and Juliet with a country-pop twist.
"You Belong with Me" (2008) – A quintessential teen love anthem.
Pop Transition Era
"We Are Never Ever Getting Back Together" (2012) – A catchy breakup anthem that marked her pop breakthrough.
"I Knew You Were Trouble" (2012) – A fusion of pop and dubstep that became a chart-topping hit.
Screen grab from Sequence #1

Here is the full transcript from that interaction. My suggestion to colleagues is to consider the ways in which we ask students to frame the context and audience for their arguments, and how ChatGPT (and other, similar generative AI tools) can help them be more specific.

  • For instance, in the first response, ChatGPT’s description of “Shake It Off” was limited to “An infectious pop tune about resilience,” whereas in the second response, it provided both a reason for inclusion in the list as well as an additional question:
    • “Reason: A fun, upbeat anthem about ignoring criticism and embracing who you are, which can encourage confidence in moments of self-doubt.”
    • “Question: How can I let go of other people’s judgments and focus on what makes me happy and confident?”
Sequence #2:

Consider ChatGPT’s response to my simple question, “What is climate change?” Here is ChatGPT’s general response to that question. Then, in this interaction, examine how I provided more context, with clearly defined roles (I am acting as a television meteorologist trying to share information with a skeptical audience) and expectations for the output.

You said:
I am a television meteorologist preparing to write an email about the recent heat wave. I want to understand some of the key ideas and controversial issues related to climate change. My goal is to convince my producer that I should tell our viewers that this recent heat wave was made more intense because of human-induced climate change. 1) provide an overview of the issue, 2) explain five key points worth consideration and suggest sources of evidence to substantiate those points, 3) suggest up to three potential counter arguments and how I might address them, and 4) provide a final, substantive point that would be suitable for the conclusion. 
ChatGPT said:
Overview of the Issue
Climate change, driven primarily by human activities such as burning fossil fuels and deforestation, has led to a significant increase in global temperatures. This phenomenon is causing more frequent and intense weather events, including heat waves. Scientists have established a strong link between human-induced climate change and the increasing severity of heat waves, making it a critical issue to communicate to the public.
Screen grab from Sequence #2, Part 2

Rather than simply answering “What is climate change?” directly, ChatGPT instead offered me a list of “Key Points and Evidence” as well as “Potential Counter Arguments and Responses,” ideas from which I could craft further writing.

  • For instance, as it relates to the “Natural Climate Variability” counter argument, ChatGPT summarized that perspective with the point that “[s]ome may argue that heat waves are a result of natural climate variability rather than human activities” and then offered the following:
    • “Response: While natural variability plays a role, the overwhelming consensus in the scientific community is that human activities have significantly increased the frequency and intensity of heat waves. Attribution studies provide robust evidence for this link.”

In each of these cases, I liken the act of writing the prompt to creating a kind of “kernel essay” for an argumentative essay, in which the writer conceptualizes key ideas that become guardrails and expectations for the AI as it produces “useful outputs.”

MLA/CCCC Skillset 4: “You evaluate the relevance, usefulness, and accuracy of GenAI outputs.”

While the problem of AI “hallucinations” is generally improving as these models become more advanced (and search the web in real time for relevant information), a sad fact remains: the world itself is still full of bias. This bias, as represented in the ways that computers encode the patterns of everything from language to facial recognition, is something we need to explore with students.

For instance, we can ask AI to review existing content and identify elements of that content that are potentially misleading or may have a particular political perspective. In my next interaction with ChatGPT, I asked it to review an article from a reputable news source that is generally considered to be slightly more conservative in its perspective. In doing so, I crafted the following prompt.

Sequence #3:

Review this article and identify at least five specific examples of descriptive words and phrases about climate-related policy decisions. After identifying these five specific examples of words and phrases, describe how someone with a conservative, free-market perspective on climate change and someone else with a more liberal, environmental perspective would interpret these words and phrases. Finally, return to the five specific examples and offer suggestions for where a reader could find out more information about these aspects of climate-related policy making.

In doing so, ChatGPT returned some relevant phrases like “slash emissions,” “honor the commitment,” and “rally around this goal,” phrases that could be read from different perspectives, especially as they relate to the “why” and “how” aspects of climate change. It identified two other examples that were more statements of fact about the who, what, when, and where of the announcement by the president’s representative. Still, it gave many examples of both “conservative” and “liberal” perspectives on the topics, including the “conservative” idea of “re-evaluating or rolling back policies” and a “liberal” concern that the decision could be a “setback for climate action, potentially undermining progress.” This would be a good opportunity for students to, in the MLA/CCCC’s terms, examine the “relevance, usefulness, and accuracy” of the AI’s output.

Or, as a teacher of writing might ask students to examine the mentor text of another writer, consider this interaction (Sequence #4) that I crafted to have ChatGPT analyze a piece of writing as a mentor text. While we certainly could ask students to engage in the kind of word counting and sentence-structure analysis that I tasked ChatGPT with, my main goal for a lesson would be to help students understand the importance of sentence variety; having ChatGPT do the analysis for us lets us see the varied patterns of simple, compound, complex, and compound-complex sentences within the narrative.

Percentages:

Simple Sentences: (65 / 470) * 100 ≈ 13.83%
Compound Sentences: (114 / 470) * 100 ≈ 24.26%
Complex Sentences: (183 / 470) * 100 ≈ 38.94%
Compound-Complex Sentences: (108 / 470) * 100 ≈ 22.98%

This should help your students understand the use of different sentence structures in the story.
Screen grab from Sequence #4

While one aspect of evaluating the accuracy of this output could be to confirm whether or not ChatGPT labeled the sentences correctly (for example, it inaccurately labeled one sentence as “simple” even though it has two independent clauses), the main point of the exercise is to demonstrate how ChatGPT can provide an effective start for this kind of analysis rather than an absolutely accurate tally.
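One small, concrete way to extend this evaluation is to have students recompute the arithmetic themselves. The short Python sketch below is my own illustration, not part of the original lesson: it takes the sentence counts from ChatGPT’s tally in Sequence #4 and rederives the percentages, checking the math (though not the labeling).

```python
# Recompute the Sequence #4 percentages from ChatGPT's own counts.
# Counts are copied from the transcript above; this verifies only
# the arithmetic, not whether each sentence was labeled correctly.
counts = {
    "Simple": 65,
    "Compound": 114,
    "Complex": 183,
    "Compound-Complex": 108,
}

total = sum(counts.values())  # 470 sentences in the narrative
for label, n in counts.items():
    print(f"{label} Sentences: ({n} / {total}) * 100 = {n / total * 100:.2f}%")
```

Students could compare each printed percentage against the screen grab, a quick, low-stakes way to practice verifying an AI output before trusting it.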

In each case, what counts as “relevant,” “useful,” and “accurate” varies slightly, as it should. Using AI for varied tasks that can spark the writer in new directions invites us to consider more possibilities for what the tool can do in these initial analyses, moving us toward more nuanced writing we can then create on our own.

MLA/CCCC Skillset 5: “You monitor your own learning as you use GenAI tools.”

Finally, as we all continue to wrestle with what it means, in Ethan Mollick’s terms, to be “co-intelligent” with AI tools, there are many ways that we can ask students to document their learning. As shown above in my own examples, we can ask students to create links to their AI interactions and outputs. Also, as it relates to academic integrity, we can have students craft their own AI disclosure statements. Neither of these tasks alone, however, will ensure that students are actively thinking about their uses of AI, that they are being metacognitive as they think through their thinking, or that they are reflecting as they analyze and evaluate those processes.

And so, perhaps this is the most difficult of all the tasks we ask students to do with AI. How might we, for instance, help them document what they are doing with AI through periodic screenshots, text annotations, or even screencast recordings in which they 1) describe what they did with AI, 2) identify what was compelling and useful about that interaction, and 3) consider how they might take a different approach in the future? This is all time-consuming work and, of course, would be impossible for any educator to review for each and every piece of writing that a student creates. Even if we wanted to examine all of our students’ thinking at all of these stages in the writing process (with or without AI), there would not be enough time in the day to do so.

Still, perhaps one way to have AI help students is to ask it to give focused, specific feedback. Yes, there is the possibility that students could upload an entire essay and ask for general feedback. Yet asking AI to provide feedback on smaller chunks of writing (e.g., an opening paragraph, a thesis statement, integration of a key quote from a source) might be even more useful. Asking students to use the language of writing when prompting the AI can be helpful, too, especially if students are working on a particular rhetorical technique. A prompt might be crafted in this manner:

  • “I am working on the rhetorical strategy of __. Examine my opening paragraph and evaluate how well I have used __. Offer at least one point of praise and one suggestion for improvement.”
  • Or, perhaps: “I am writing a thesis statement that should state a clear, concise, and nuanced claim that can then be expanded upon in my essay. Evaluate the claim below for each of these criteria — clarity, concision, and nuance — and rate how well I have done on each criterion using a scale of 0-10. For each of the three criteria, describe one strength in my current thesis statement as well as a specific suggestion for improvement. Here is my claim: ____.” Here is an example of how ChatGPT responded to this prompt.
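Prompts like these can also be treated as reusable templates. The Python sketch below is a hypothetical illustration (the helper name and wording are mine, not from any particular tool): it fills the first template in for a given rhetorical strategy and paragraph, producing text a student could paste into ChatGPT or a similar tool.

```python
# Hypothetical helper that fills in the blanked feedback-prompt template
# from the bulleted example above. The function name and exact wording
# are illustrative assumptions, not part of any published curriculum.
FEEDBACK_TEMPLATE = (
    "I am working on the rhetorical strategy of {strategy}. "
    "Examine my opening paragraph and evaluate how well I have used {strategy}. "
    "Offer at least one point of praise and one suggestion for improvement.\n\n"
    "{paragraph}"
)

def build_feedback_prompt(strategy: str, paragraph: str) -> str:
    """Return a focused feedback prompt ready to paste into a chatbot."""
    return FEEDBACK_TEMPLATE.format(strategy=strategy, paragraph=paragraph)

# Example: a student working on ethos supplies their opening paragraph.
print(build_feedback_prompt("ethos", "My opening paragraph goes here."))
```

A teacher could keep a small bank of such templates (one per rhetorical strategy or writing move) so that students practice prompting with the language of writing rather than vague requests for “feedback.”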

Based on ChatGPT’s feedback, students could reflect on the AI’s response and determine the extent to which they would want to make any further revisions, as well as identify any aspects of the response that were helpful.

A recent, and perhaps slightly cynical, interaction with a colleague summarized many of the challenges that we all still face with AI. “So, let me get this straight,” the conversation began. “A teacher can now create an AI-generated writing prompt, students can respond to that prompt with AI, the teacher can have AI provide feedback on that piece of writing, and the student will, most likely, continue to ignore any suggestions.” We both chuckled a bit, but the point was clear. If ever there was a sense of hopelessness in what ELA teachers do in order to guide student writers, the current era of AI made it all feel even more hopeless.

While there are no absolutely clear-cut solutions for how we might model effective AI use with our students — nor ensure that they are using AI in productive, ethical, and responsible ways — I believe that the kinds of approaches described above are a strong step in the right direction. In order to help students understand and capitalize on AI, and move beyond using it as a glorified web search or a shortcut for cheating, we need to show them how to prompt effectively, when to do so, and how to engage with AI outputs in a meaningful manner.

Education is the business of hope, and I am always hopeful that we can use technology to help students plan, draft, and revise, and ultimately, to become more metacognitive about their own writing processes using some of the strategies outlined above.


AI disclosure statement: I acknowledge the use of Google Gemini to summarize this blog post to create the social media post, Microsoft Copilot to generate potential titles, including the one I used for this post, “Prompting Progress: Teaching AI Literacy in the Classroom,” and Adobe Firefly to create the image based on the following prompt: “Create an image of students in a writing classroom who are using AI technology to write.” AI was not used directly in the composing process.