The Jerusalem Post

In the age of ChatGPT, how does it feel to be accused of plagiarism? - study

 
A keyboard is seen reflected on a computer screen displaying the website of ChatGPT, an AI chatbot from OpenAI, in this illustration picture taken Feb. 8, 2023. (photo credit: REUTERS/FLORENCE LO/ILLUSTRATION/FILE PHOTO)

Students who were wrongfully accused felt frustrated by the accusation, and many described a breakdown of the relationship between themselves and the institution.

With the widespread adoption of ChatGPT, the artificial intelligence-driven large language model, students and many others have welcomed it as an easy way to investigate and write about subjects and to present the resulting texts as their own in just seconds.

The technology has also triggered considerable concern that an AI program can instantaneously crank out a passable college-level essay, changing the future of teaching and learning, even for scientists and medical students. This unease has produced many detection programs of varying effectiveness and a commensurate increase in accusations of cheating.

But how do students feel about all of this? Assistant teaching Prof. Tim Gorichanaz of Drexel University in Philadelphia provides a first look at some of the reactions of college students who have been accused of using ChatGPT to cheat.

The study, published in the journal Learning: Research and Practice under the title “Accused: How students respond to allegations of using ChatGPT on assessments,” analyzed 49 posts on Reddit (an online network of communities where people discuss their hobbies and interests) by college students who had been accused of using ChatGPT on an assignment.

A smartphone with a displayed ChatGPT logo is placed on a computer motherboard in this illustration taken February 23, 2023. (credit: DADO RUVIC/REUTERS)

Frustration from the wrongfully accused

Gorichanaz identified a number of themes in these conversations, most notably frustration from wrongly accused students, anxiety about the possibility of being wrongly accused and how to avoid it, and creeping doubt and cynicism about the need for higher education in the age of generative artificial intelligence.

“As the world of higher education collectively scrambles to understand and develop best practices and policies around the use of tools like ChatGPT, it’s vital for us to understand how the fascination, anxiety and fear that comes with adopting any new educational technology also affects the students who are going through their own process of figuring out how to use it,” Gorichanaz said.

Of the 49 students who posted, 38 claimed they didn’t use ChatGPT, but detection programs like Turnitin or GPTZero had nonetheless flagged their assignments as AI-generated. Some students asked how they could present evidence to prove that they hadn’t cheated, while others advised continuing to deny having used the program, because the detectors are unreliable.

“Many of the students expressed concern over the possibility of being wrongly accused by an AI detector,” Gorichanaz said. “Some discussions went into great detail about how students could collect evidence to prove that they had written an essay without AI, including tracking draft versions and using screen recording software. Others suggested running a detector on their own writing until it came back without being incorrectly flagged.”




Some students viewed colleges and universities as “gatekeepers” to success, which raises the stakes of being wrongly accused of cheating. This led to questions about the institutions’ preparedness for the new technology and to concerns that professors would become too dependent on AI detectors whose accuracy remains in doubt.

“The conversations happening online evolved from specific doubts about the accuracy of AI detection and universities’ policies around the use of generative AI, to broadly questioning the role of higher education in society and suggesting that the technology will render institutions of higher education irrelevant in the near future,” Gorichanaz said.


The study also highlighted an erosion of trust among students, and between students and their professors, resulting from the students’ perception that they are persistently under suspicion of cheating. Generative AI technology has forced institutions of higher education to reconsider their educational assessment practices and their policies on cheating.

Gorichanaz noted that even the best AI detectors could produce enough false positives for professors to wrongly accuse dozens of students, which is clearly unacceptable considering the stakes. “Rather than attempting to use AI detectors to evaluate whether these assessments are genuine, instructors may be better off designing different kinds of assessments: those that emphasize process over product or more frequent, lower-stakes assessments,” he wrote. He also suggested that instructors could add modules on the appropriate use of generative AI technology rather than completely prohibiting its use.

Meanwhile, another study, from Ohio State University, advised people not to use ChatGPT to write a message to a friend, because the use of AI can make partners less satisfied and more uncertain. The study was published online recently in the Journal of Social and Personal Relationships under the title “Artificial intelligence and perceived effort in relationship maintenance: Effects on relationship satisfaction and uncertainty.”

Communication Prof. Bingjie Liu found that people in the study felt that a fictional friend who used AI assistance to write them a message didn’t put forth as much effort as a friend who wrote a message themselves. “After they get an AI-assisted message, people feel less satisfied with their relationship with their friend and feel more uncertain about where they stand,” she said. “People want their partners or friends to put forth the effort to come up with their own message without help – from AI or other people.” 

Dear Taylor

The study involved 208 adults who participated online. Participants were told that they had been good friends with someone named “Taylor” for years and were presented with one of three scenarios: They were experiencing burnout and needed support; were having a conflict with a colleague and needed advice; or their birthday was coming up. Participants were then told to write a short message to Taylor describing their current situation in a textbox on their computer screen.

All participants were told Taylor sent them a reply. In each scenario, Taylor wrote an initial draft. Some participants were told Taylor had used an AI system to help revise the message to achieve the proper tone, others were told a member of a writing community had helped make revisions, and a third group was told Taylor had made all the edits to the message. In every case, people in the study were told the same thing about Taylor’s reply, including that it was “thoughtful.”

Participants had different views about the message they had supposedly received from Taylor. Those who received a reply revised with the help of AI rated Taylor’s behavior as less appropriate than those who received a reply written by Taylor alone, and they expressed less satisfaction with the relationship, for example rating Taylor lower on meeting “my needs as a close friend.”
