Tips for Clarifying ChatGPT Policies in the High School Classroom

by Diana Drake

Generative AI burst into classrooms last spring, and with the start of a new school year, educators are reconsidering how it might enhance both teaching and learning.

Many educators, resisting the urge to just ban it outright, spent the summer trying to figure out classroom guidelines for the use of OpenAI’s ChatGPT and other generative AI platforms, like Microsoft’s Bing and Google’s Bard. As Gerri Kimble, a high school business educator in Alabama, said in a recent Essential Educator blog: “I can’t defeat it. I can’t make it go away. So, I must find a way to help my students use it responsibly.”

Need some help with your AI roadmap for the year? The Wharton School of the University of Pennsylvania has a wealth of generative-AI resources developed for higher ed and ready to be applied to the high school classroom.

First, if you want a one-stop shop of resources, visit this Teaching with AI page, created by Wharton’s Ethan Mollick (more on him in a minute).

Brian Bushee, Wharton’s senior vice dean of teaching and learning, suggests that all educators should do the following to prepare for the school year:

✅ Test out your assignments and exam questions in ChatGPT.

✅ Think about whether you need to change any of your assignments as a result.

✅ Develop a course policy and clearly state it in your syllabus, during class, and on each assignment.

Here’s a sample policy, shared by Dr. Bushee from Penn’s Center for Teaching & Learning, for teachers who plan to allow moderate use of generative AI by their students:

You may use generative AI programs (e.g., tools like ChatGPT) to help generate ideas and brainstorm. However, you should note that the material generated by these programs may be inaccurate, incomplete, or otherwise problematic. Beware that use may also stifle your own independent thinking and creativity. You may not submit any work generated by an AI program as your own. If you include material generated by an AI program, it should be cited like any other reference material (with due consideration for the quality of the reference, which may be poor). Any plagiarism or other form of cheating will be dealt with severely under relevant [school] policies.

More sample policies can be found on the Penn CTL website.

The Ethan Mollick Effect

Few members of the Wharton community are more curious about the potential of generative AI than Ethan Mollick, an associate professor of management and academic director of Wharton Interactive, which develops business games and simulations in areas like entrepreneurship and leadership. He and Lilach Mollick, who works on interactive pedagogy and AI research at the Wharton School, have recorded a new “Practical AI for Teachers and Students” video series on YouTube, and Professor Mollick has written a number of generative AI-related articles on his Substack page.

Here is a summary of Ethan Mollick’s latest AI guidance for educators:

1️⃣ AI can probably do most of the homework assignments you give – something I (Dr. Mollick) have been calling The Homework Apocalypse. AI has improved severalfold since it first emerged. The newest Claude 2 models can also read a book-length PDF and answer questions about it. GPT-4 with Code Interpreter can conduct PhD-level data analysis in Python with very little help. Some things current AI can do:

🧐 Write text at the level of human college students or better. The AI currently scores in the 99th percentile on the GRE verbal exam, and it gets A’s when graded on Harvard essays.

🧐 Conduct online research and produce reports with accurate outside references. Hallucinations and errors still occur, but less frequently in the most recent models, and they are harder to spot because they are more subtle.

🧐 “See” images, such as photographs, diagrams, and written text, and interpret and work with that information.

🧐 Read documents and PDFs, and answer questions about the content. Yes, that includes cases. It can often “crack” a case and tell a student exactly what to say.

🧐 Produce realistic images.

🧐 Do math, write code, and solve complicated problems using either Python or Wolfram Alpha.

🧐 Accept hundreds of megabytes of data and do complex data analysis on it.

2️⃣ AI is undetectable. Don’t use Turnitin, because it doesn’t work and produces a high rate of false positives, especially for non-native English speakers. The bad writing you might have seen from early AI can be remedied simply by prompting the AI a few times to improve it. And, of course, cheating with AI is a relative term: is getting AI help with an outline cheating? Getting feedback on a paper? Using it to get help with a sentence you are stuck on? We don’t have any universal rules here.

3️⃣ AI can help you with teaching and help your students with learning. We have a paper on assigning AI to students (with prompts) here and one on how it can provide resources to instructors here. Expect your students to rely on AI explanations, and expect to see fewer hands raised in class.

4️⃣ AI isn’t a future threat; it is already happening. I think our students are going to want to understand what AI means for the things we teach.

Dr. Mollick also put together a guide on how to use AI that he shares with his students. He encourages educators to modify it as needed and share it with their students. If you haven’t spent time with ChatGPT, he suggests the guide as a good place to start. But also remember, he notes, this is the worst AI you will ever use. Systems are improving rapidly.

High school students are already beginning to integrate AI into everything they do – and they are thinking deeply about its power and impact. The long comment threads on recent Global Youth articles about AI and jobs and ChatGPT reflect both their excitement and concern. Educators need to be prepared.
