5 Things @TCUSchieffer: AI in the Classroom

Jordan Schonig, Ph.D., Broc Sears and Julie O’Neil, Ph.D.

In late 2022, OpenAI disrupted the lives and work of millions of people worldwide when it released ChatGPT, its generative artificial intelligence program. Since its introduction, ChatGPT and generative AI have changed the workforce and culture.

We spoke with faculty across the college to see how they are implementing AI in the classroom.

#1: AI is a tool. Just like with other tools, you must learn how to use it properly.

Jordan Schonig, Ph.D., assistant professor of Film, Television, and Digital Media, emphasizes the importance of viewing AI as a tool akin to other digital applications such as Adobe InDesign and Apple’s Final Cut Pro.

He uses ChatGPT to enhance students’ comprehension of intricate texts, like an automated version of SparkNotes.

“While ChatGPT is often discussed as a generator of new ideas, this activity is designed to use the tool as a reader of texts (by pasting passages into ChatGPT and asking precise questions about those passages),” Schonig explains. “ChatGPT often does a good job at explaining what an unfamiliar word, phrase, or sentence means in the context of a paragraph, and it can often simplify a difficult, jargon-ridden sentence.”

His aim is not for ChatGPT to replace the act of reading and deciphering difficult texts, but to offer a tool that encourages students not to skip over or skim difficult sentences when reading.

#2: The possibilities for AI usage are endless.

In his courses, Broc Sears, assistant professor of professional practice in Strategic Communication, integrates ChatGPT into assignments to help students grasp the tool’s depth and breadth. In problem-solving tasks, he asks students to set their own solutions against those generated by ChatGPT.

“Students then compare and evaluate both solution sets to determine which are the best and share if the student or AI results were more successful,” Sears said. “I would say that AI delivers better solutions about a third of the time, but AI tends to give more varied solutions if prompted properly.”

In his Sports Communication course, students use ChatGPT to draft children’s books on sports history, then act as editors, shaping the drafts into compelling products and smoothing out the robotic tone characteristic of AI-generated content.

#3: AI is fallible and often makes mistakes that students can learn from.

By encouraging students to incorporate AI into research assignments, Sears underscores the importance of discerning the tool’s limitations. Students in his Diversity course recently used AI to generate one of the three areas of a weekly research assignment.

“Students agree that results are good, but they have found that they have to prompt the AI in a way to have the content generated in AP Style,” Sears said. Students are also finding that while the grammar and spelling are usually correct, the AI can be factually inaccurate and will make things up to pad the word count.

“One student did not proof their AI entry well enough to catch that the AI text included references to diversity in the workplace after World War I, World War II and World War III.”

Schonig echoed this sentiment, pointing to a downside of AI: “Students may not have the ability to determine when an AI ‘translation’ is leading them astray. One of the biggest problems with ChatGPT is its utter confidence in everything it produces, whether that is entirely fabricated sources or, in my case, simplifications of a difficult text.”

#4: AI does not replace the human element.

While AI aims to replicate human intelligence and mannerisms, it cannot replicate human learning in the real world. Schonig grapples with this in his courses as he tries to approach student use of generative AI ethically.

“If you teach a student to understand a difficult text by ‘translating’ it into simpler prose with ChatGPT, you’re risking eliminating the student’s engagement with the text itself,” Schonig explains.

“It’s a bit like those mass-marketed editions of Shakespeare plays that ‘translated’ Shakespeare’s language into simpler speech on one side of the page. How many students would resist the temptation to only read the translated side instead of using the translation as a tool to illuminate or clarify the original?”

There is still learning that can happen in this space of uncertainty. But Schonig explained that AI poses a threat to these learning goals because it can so easily produce the kind of writing that assignments use to hone students’ critical faculties.

“I find that the more students think of AI as an interlocutor for their thought rather than a replacement for their thought, the more potential AI can have in the classroom,” says Schonig.

#5: AI is here to stay. Let’s use it ethically.

Julie O’Neil, Ph.D., professor in Strategic Communication, associate dean for graduate studies and interim director of the Schieffer Media Insights Lab, has seen AI used in the lab with technology such as Brandwatch and Sprinklr. AI has already made a name for itself in the professional world, and it isn’t going anywhere.

“Communicators are using generative AI to create social media posts, write emails, brainstorm creative ads and slide decks, to measure and evaluate data, identify influencers, discern issues and more,” O’Neil said.

“I believe that professors should help students learn how to use AI effectively and ethically. For example, we can ask our students to use AI to write a draft of a survey or interview questions, but students must double-check the accuracy of the questions generated and ensure that they are not sharing any proprietary client data in their AI prompts.”

O’Neil advises that students must be taught how to edit for voice, alignment with organizational values and accuracy.

“Students can use AI-fueled Brandwatch or Sprinklr to analyze sentiment and topic themes of social media conversation, but they must still review individual posts to determine whether they agree with how the machine coded the results. I’ve found that many students disagree with how AI is coding the sentiment of conversations.”

As AI continues to permeate academia and the professional world, the need to use it ethically grows only more pressing.

Join the discourse on AI, ethics and communication in “Ethical Intelligence: The Implications of AI,” a discussion with Premier Green Honors Chair Jonnie Penn, Ph.D., professor of AI ethics and society at the University of Cambridge.