The short answer: you cannot.
There is no reliable way to detect AI use.
The technology doesn’t work. A recent study of six major detectors (including Turnitin) found that they identified only 39% of AI-generated content, and that figure fell to 17% when simple evasion techniques were used (Perkins et al., 2024).
Our human senses also struggle. In another study, 200 teachers of varying experience levels correctly identified AI writing samples only 37% of the time (Fleckenstein et al., 2024). You may be able to detect some simple uses of AI; however, you will struggle to catch more sophisticated uses. Did you know that you can ask an AI to intentionally make mistakes in an output? Don’t believe us? Try these two tests to see how you do:
(Interactive detection quiz created by the WCU TLC — select Start Game on the original page to play.)
Detection tools are also discriminatory, as they “exhibit significant bias against non-native English authors” (Liang et al., 2023). They are more likely to falsely predict that writing from a non-native English speaker is AI-generated. This is a finding the TLC has anecdotally confirmed with multiple members of the WCU community.
The TLC has also encountered anecdotal cases in which Turnitin’s AI detection falsely reported high percentages of AI writing when students had used Grammarly only to improve grammatical aspects of their writing, such as word choice and conciseness. These students did not use AI to write the bulk of their submissions, yet the detector indicated that they did.
Why Doesn’t Detection Work?
AI detectors attempt to make a prediction based on patterns of what AI writing supposedly looks like. This differs from similarity detection, which looks for exact matches between student writing and other writing samples. AI-generated text doesn’t follow exact patterns that always appear, and as AI tools improve, the supposed AI ‘patterns’ the detectors look for are becoming weaker. To quote Stefan Bauschard, “Since there is no certain pattern, false positives (false identification of an AI writing pattern by a student) and false negatives (failure to detect AI-text) are often produced.”
Don’t assume that a reliable detector is coming soon. AI models are not static but constantly evolving. As a model improves, the writing styles it can generate become more varied, which forces detectors to improve in turn, leading to a detection arms race. Furze (2024) argues that the large corporations building AI models will have more resources than the organizations building detectors, meaning the models will likely maintain an edge over detectors for some time.
What about the Turnitin AI detector?
Turnitin has an ‘AI Detector’ tool to help faculty identify instances of AI generated writing. As we have established above, it is difficult for this or any tool to reliably detect AI-generated text.
There has been considerable concern expressed by faculty and institutions in the US and beyond about the potential impact of the AI Detector tool on courses, student learning, and the academic environment. Some institutions have made the decision to opt out of the tool because of these concerns.
- Lancaster University (UK)’s VP of Education released a statement calling on the university to turn off Turnitin AI detection because of the impact they have observed: students being unjustly reported for academic misconduct based on Turnitin reports.
- University of Michigan-Dearborn has opted out of the tool citing concerns about protecting students’ digital rights.
- Colorado State University paused the rollout of Turnitin AI Writing Detection Tool because of the potential impact of the tool on teaching and learning.
- University of Pittsburgh’s Teaching Center disabled the AI detection tool in Turnitin concluding that “use of the detection tool at this time is simply not supported by the data and does not represent a teaching practice that we can endorse or support.”
- UCLA opted out of Turnitin’s AI Writing Detection because of concerns and unanswered questions.
Other schools that have opted out of using Turnitin’s AI Writing Detector include:
- American University
- UC Berkeley
- DePaul University
- Georgetown University
- New York University
- Northwestern
- Oregon State University
- Saint Joseph’s University
- University of Maryland
These examples make clear that many institutions see serious academic, and potentially legal, ramifications in relying on the Turnitin AI Writing Detector.
It is risky to use Turnitin’s AI writing detection report as the sole basis for determining that a student has violated academic integrity, because it is not wholly reliable.
The AI Writing Detection tool is currently enabled at WCU. The Teaching and Learning Center (TLC) and Information Systems & Technology (IS&T) urge faculty who choose to use this tool to do so with extreme caution. Consider the following recommendations for using the tool ethically and responsibly:
- Err on the side of caution, because the Turnitin report is not completely accurate.
- Make comparisons to students’ previously submitted work.
- Talk with the student about the work that you are questioning and give them an opportunity to share their process for completing the assignment.
- If you suspect that the student used AI, consider giving them a chance to re-do the assignment.
- Keep in mind that as AI continues to advance, differentiating between AI-generated and human-generated content will only become harder over time.
Additional Reading: