AI Detection Tools Falsely Accuse International Students of Cheating
Turnitin, a plagiarism-detection tool widely used in academia, is drawing criticism for wrongly flagging papers written by non-native English speakers as AI-generated. A pattern has emerged in recent months: the software is more likely to label international students’ work as AI-authored. The reason is that the simpler sentence structures and more predictable word choices common in non-native English writing resemble the characteristics detectors associate with AI-generated text.
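Turnitin’s detector is proprietary, so its exact method is not public, but a minimal sketch of the general idea, assuming the commonly described approach of scoring how predictable a text is to a language model (its perplexity), might look like the following. The model choice (GPT-2) and the threshold are illustrative assumptions, not Turnitin’s actual method; the point is only that plainer, more predictable prose scores lower and is therefore more easily mistaken for machine output.

```python
# Illustrative sketch of perplexity-based AI-text scoring (assumed approach,
# not Turnitin's actual system). Lower perplexity = more predictable wording,
# which is what tends to get flagged as "AI-like".
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    """Return the language model's perplexity for `text`."""
    enc = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        # Passing the inputs as labels yields the average next-token loss;
        # exponentiating it gives perplexity.
        out = model(enc.input_ids, labels=enc.input_ids)
    return float(torch.exp(out.loss))

# Hypothetical cutoff purely for demonstration.
PPL_THRESHOLD = 60.0

sample = "The results of the experiment show that the method is good."
score = perplexity(sample)
verdict = "flagged as AI-like" if score < PPL_THRESHOLD else "treated as human-like"
print(f"perplexity = {score:.1f} -> {verdict}")
```

Under this kind of scoring, a student who writes in simple, careful English can land below the cutoff for the same statistical reasons that AI-generated text does, which is the bias the Stanford study measured.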
A group of Stanford computer scientists tested the reliability of AI detectors on non-native English speakers’ writing and found substantial bias. In their experiment, seven AI detectors falsely flagged non-native speakers’ writing as AI-generated 61 percent of the time, and 20 percent of the papers were unanimously misclassified by all seven tools. Such false flags could have serious consequences for international students, including academic misconduct charges and, in the worst case, the threat of deportation.
The issue has ignited a debate about whether the harm caused by these AI detectors outweighs their perceived benefits. International students are voicing concern over the risk that false positives pose to their academic standing and reputation, and some educators are beginning to question whether the tools are reliable enough to keep using in academia at all.
Turnitin, however, is standing by its claims of high accuracy. The company argues that its detector was trained on writing from both native and non-native English speakers and so should not be inherently biased, and it says it is conducting its own research into the matter, though the results have not yet been published. In the meantime, Turnitin has updated its software so that institutions can disable the AI writing indicator, leaving the decision in the hands of individual schools.
Source: AI Detection Tools Falsely Accuse International Students of Cheating – The Markup