A teacher caught students using ChatGPT on their first assignment to introduce themselves. Her post about it sparked a debate.

Professor Megan Fritts caught several students using ChatGPT during the first week of the semester. picture alliance/Getty Images

  • A professor’s students used ChatGPT for a simple introductory assignment in an ethics and technology class.

  • Professor Megan Fritts raised her concerns on X, sparking a debate about the role of AI in education.

  • Educators are divided over the impact of AI, with some arguing that it undermines critical thinking.

The first assignment Professor Megan Fritts gave her students was, she says, an easy A: “Briefly introduce yourself and what you hope to achieve with this class.”

However, many students who enrolled in her Ethics and Technology course decided to introduce themselves to ChatGPT.

“They all admitted it, to their credit,” Fritts told Business Insider. “But it was just really surprising to me that — what should have been a freebie — they even felt obligated to generate with an LLM.”

When Fritts, an assistant professor of philosophy at the University of Arkansas at Little Rock, raised her concerns on X, formerly Twitter, in a post that has since been viewed 3.5 million times, some commenters suggested that students would naturally tackle “busywork assignments” with similarly low-effort, AI-generated answers.

Fritts said, however, that the assignment was intended not only to familiarize students with Blackboard’s online discussion forum — she was also “genuinely curious” about the introductory question.

“A lot of students who take philosophy classes, especially if they’re not majoring in it, don’t really know what philosophy is,” she said. “So I like to get a sense of what their expectations are so I know how to respond to them.”

The AI-written answers, however, did not reflect what the individual students expected from the course; instead, they offered repetitive descriptions of what a technology ethics course entails. From that, Fritts could tell the answers were generated by ChatGPT or a similar chatbot.

“When you’re a professor and you’ve read dozens of essays about AI, you just notice it,” she said.

The Calculator Argument: Why ChatGPT Isn’t Just a Problem-Solving Tool

Some commenters defended the students by comparing ChatGPT to a calculator for math problems. Fritts, however, said it is “wrong” to see LLMs as simply a problem-solving tool, especially in the context of the humanities.

Calculators reduce the time it takes to solve mechanical operations that students have already learned to produce a single correct solution. But Fritts said the goal of humanities education is not to create a product, but to “shape people” by “giving them the ability to think about things that they wouldn’t naturally think about.”

“The goal is to create liberated minds — liberated people — and by handing over thinking to a machine, by definition, you don’t achieve that,” she said.

Lasting impact on students

Beyond the cheating itself, Fritts says that students’ thinking skills are declining generally. And the students notice it.

“They’re like, ‘When I was younger, I loved to read, and now I can’t. I can’t even get through a chapter of a book,’” she said. “‘My attention span is so bad, and I know it’s because I’m always on my phone and I always have YouTube or TikTok on.’ And they’re sad about it.”

Fritts said that technology addiction has affected students’ overall agency when interacting with information. She cited a 2015 paper by Professor Charles Harvey, chair of the Department of Philosophy and Religion at the University of Central Arkansas, who examined the effects that interactions with technology may have had on human agency and concentration.

Harvey wrote that two different eye-tracking experiments showed that the vast majority of people quickly skim through online text, “jumping across the page” rather than reading line by line. Deep reading of printed text is broken up into “even smaller, disjointed” thoughts.

“The new generations are not experiencing this technology for the first time. They’ve grown up with it,” Fritts said. “I think we can expect a lot of changes in the really fundamental aspects of human behavior, and I’m not convinced that those changes are going to be good.”

Teachers are getting tired

Fritts acknowledges that educators have a certain obligation to teach students how to use AI in productive and educational ways. However, she said that placing the burden of stopping cheating on educators, by having them teach students AI literacy, is “naive to the point of implausibility.”

“Let’s not kid ourselves that students are using AI because they are so excited about the new technology and are not sure how to use it appropriately in the classroom,” Fritts said.

“And I’m not trying to put them down,” she added. “We’re all inclined to take measures to make it easier for ourselves.”

But Fritts is just as “pessimistic” about the alternative solution: teachers and institutions forming a “united front” to keep AI out of the classroom.

“That’s not going to happen, because so many faculty members are now taking their cues from university administrations,” Fritts said. “They’re being encouraged to build this into the curriculum.”

At least 22 state departments of education have published official guidelines for AI use in schools, The Information recently reported. A 2024 survey by the EdWeek Research Center found that 56% of more than 900 educators expected AI use to increase. And some are excited about it.

Curby Alexander, an associate professor of education at Texas Christian University, previously told BI that he uses AI to generate ideas and develop case studies “without taking up a lot of class time.”

Anna Cunningham, a Dean’s Fellow at ASU, and Joel Nishimura, an associate professor in the Department of Mathematics and Natural Sciences, wrote an op-ed encouraging students to tutor ChatGPT agents programmed with misconceptions.

“With this, we are on track to provide all students with as many opportunities as they want to learn through teaching,” they wrote.

OpenAI has even partnered with Arizona State University to offer students and teachers full access to ChatGPT Enterprise for tutoring, coursework, research, and more.

Many educators, however, remain skeptical. Some professors have even gone back to pen and paper to combat the use of ChatGPT, but Fritts said many are tired of trying to fight the seemingly inevitable. And students remain caught in the middle of the love-hate relationship between education and AI.

“I think it’s understandable that it causes a lot of confusion and that people feel that the professors who say, ‘Absolutely not,’ are perhaps narrow-minded, outdated, or unnecessarily strict,” Fritts said.

Fritts isn’t the only professor to voice concerns about the use of AI among students. In a Reddit thread titled “ChatGPT: It’s getting worse,” several users who identified themselves as professors lamented the increased use of AI in classrooms, particularly in online courses. One said, “This is one of the reasons I’m really considering leaving academia.”

A professor in another post that received over 600 upvotes said ChatGPT was “ruining” their love of teaching. “The students are no longer interpreting a text, they are just giving me this automated vocabulary,” they wrote. “When I grade it as if they wrote it themselves, I feel complicit. It really makes me despair.”

Read the original article on Business Insider