When ChatGPT was released in 2022, predictions of how AI would affect our lives began to pop up in the media. And while no one knows exactly how our relationship with AI will look in another year, Earlham faculty are grappling with both the challenges and the opportunities this technology brings to the classroom — particularly large language models (LLMs) such as ChatGPT and Google’s Bard. Here’s how faculty members are engaging with these programs and AI-generated content — and how they see higher ed adapting now and in the future.
Asking AI the right questions
Nate Eastman, Director of First Year Success and Convener of the Honors Program
“The way I design assignments, I want students doing authentic, real-world tasks. If they’re aspiring teachers, I want them writing a lesson plan for Hamlet. If they’re aspiring writers, I want them looking at how Hamlet uses character webs or symbols so that they can learn how to use those same tools in their own writing. And so the target I’m trying to hit is technology-agnostic. If a student can use AI to write a better lesson plan, that’s what they should do. If they can use it to write a better story, they should do that.
I should add that there’s a pretty significant skill component to using an LLM in this way. As with most projects, half the work is asking the right questions and defining your criteria for success.
An AI is a collaborator. It’s a specific kind of collaborator that, for the most part, won’t ask you clarifying questions when your prompt is vague or ambiguous, and that may have different criteria for success, or different assumptions about pertinent constraints, than you do. And so, teaching writing to students who have access to generative AI is a lot like teaching them to manage a project with many contributors.”
Maintaining academic integrity through clear expectations
Leanna Barlow, Associate Academic Dean for Students
“At first, I was very worried that we were going to see the end of independently written work, since AI tools like ChatGPT are so accessible, sophisticated, and easy to use — which makes them very tempting. They’re a particularly accessible shortcut for students who are often busy or overwhelmed. Over time, though, I’ve seen how well faculty have adapted their assignments to either be difficult for AI alone to complete or to incorporate AI in specific ways — so I’m less worried.
My primary emphasis when communicating with faculty about AI is the importance of setting very clear guidelines for its use in the syllabus or in each assignment, so students know what to expect. If students shouldn’t use AI at all, even in the idea-generation phase, that should be clearly noted. If AI is acceptable with a citation or in certain circumstances, that should be noted as well. It’s important to give students the information they need to make good decisions about their work and how to do it independently.”
Coaching students to be better (human) writers
Elliot Ratzman, Chair in Jewish Studies and Visiting Assistant Professor in Religion
“This has been my disappointment: Some students are using it to cut corners, rather than develop real, critical skills and become better writers. My fear is that the systems will become more and more sophisticated, making it harder to discern what is student-created and what is AI-generated — and also that students’ writing could become less colorful.
I’ve always given a speech about plagiarism at the beginning of my classes. Now I’ve had to expand it to include ChatGPT. It goes something like this: ‘When reading the Hebrew Bible, I can tell when the Hebrew shifts from one century to another, from the Bronze Age to the Iron Age. And I can tell when an undergraduate’s writing shifts from the sentences expected of an undergrad to perfectly polished, encyclopedia-level prose. I can’t help you become a better writer or a better thinker unless I can see your writing. It does not please me to read the perfect but soulless prose of a machine when I could be coaching you into becoming a better writer than you are right now.’”
Rethinking the role of the professor
Peng Yu, Associate Professor of Politics
“I was initially a little worried about students using AI to write papers, but now I think that, used properly, ChatGPT can help students improve their writing. I encourage students to use AI as one source of information, but not to rely on it completely or use it to replace other sources. In writing papers, for example, students cannot use AI to replace their own effort and thinking in writing and revising the paper, but they may compare their work with what AI generates and critically reflect on the differences.
AI has prompted me to rethink my relationship with students. In the context of AI’s rising influence, I’ve been reevaluating the role I play in my students’ learning experience, the guidance I provide them, and the extent to which I trust their responsible use of the technology.”
Exploring the possibilities — and the limits
Marya Bower, Professor of Liberal Studies
“I’m at two levels in my thinking. I think AI in general is potentially a huge threat — the ability to create images and sounds that aren’t real is dangerous.
I think AI in the classroom is not as dangerous. It can be useful, particularly for students who don’t process the written word as easily as others. If it gets their ideas out in an easier manner — particularly if they’re facing any number of learning challenges — that’s amazingly wonderful.
I’m hoping to do a summer collaborative research project with students to explore the limits of AI-generated writing and art. I’m also interested in creating assignments where students take an initial idea, feed it into ChatGPT, and then critique what it creates. I think this could be a great way to examine writing and thinking and development.”
STORY BY JEN GOSE / PHOTOS BY JOSH SMITH