GenAI in Higher Ed

A real-world test of artificial intelligence infiltration of a university examinations system: A “Turing Test” case study | Kelly Watcham, Alasdair Clarke, Etienne Roesch
The recent rise in artificial intelligence systems, such as ChatGPT, poses a fundamental problem for the educational sector. In universities and schools, many forms of assessment, such as coursework, are completed without invigilation. Therefore, students could hand in work as their own which is in fact completed by AI. Since the COVID pandemic, the sector has additionally accelerated its reliance on unsupervised ‘take home exams’. If students cheat using AI and this goes undetected, the integrity of the way in which students are assessed is threatened. We report a rigorous, blind study in which we injected 100% AI-written submissions into the examinations system in five undergraduate modules, across all years of study, for a BSc degree in Psychology at a reputable UK university. We found that 94% of our AI submissions were undetected. The grades awarded to our AI submissions were on average half a grade boundary higher than those achieved by real students. Across modules there was an 83.4% chance that the AI submissions on a module would outperform a random selection of the same number of real student submissions.
AI Detection in Education is a Dead End
When you live in a research/social media bubble like I do, it’s easy to take certain things for granted. For example, I always overestimate the number of people who are using generative AI re…
AI Detector Tools Teachers Guidelines | Vanier College
Guidance on AI Detection and Why We’re Disabling Turnitin’s AI Detector | Vanderbilt University
In April of this year, Turnitin released an update to their product that reviewed submitted papers and presented their determination of how much of each paper was written by AI. As we outlined at that time, many people had important concerns and questions about this new tool, namely how exactly the product works and how...
OpenAI just admitted it can't identify AI-generated text. That's bad for the internet and it could be really bad for AI models.
In January, OpenAI launched a system for identifying AI-generated text. This month, the company scrapped it.
Don’t Rely On AI Plagiarism Detection Tools, Warns OpenAI CEO Sam Altman
OpenAI CEO Issues Warning Against Relying on AI Plagiarism Detection