We have to rethink academic integrity in a ‘post-plagiarism era’

What is the future of plagiarism as a concept in the AI age, and what are the implications for assessment? This resource seeks to answer these questions, among others.
Plagiarism as a concept is relatively recent, and its definitions have evolved over time. It is widely known that Shakespeare, for example, took whole passages from earlier works and included them in his own writing. However, we do not generally accuse him of academic misconduct. 
AI tools are here to stay and will only become more sophisticated. It is futile to try to ban their use to protect academic integrity. We already accept the submission of some AI work, but now find it increasingly difficult to define what is acceptable and what is not. The concept of “post-plagiarism” first emerged in Sarah Eaton’s book Plagiarism in Higher Education and offers a way to move beyond defining plagiarism, focusing instead on ethical learning. As educators, we must therefore learn how to teach and assess in a world where descriptions of academic conduct have changed.
Hybrid human and AI writing will become normal: there is little, if any, writing being created now that does not have some input from AI. Every time I begin to type, an AI tool is there to “help” me, whether in the form of auto-correct, spelling and grammar check, or any other tool. Written content being created now is likely generated, at least in part, using AI tools.
Human creativity is enhanced: AI can be a stimulus to expand our thinking further. There are issues here, though. AI is “trained” using the creativity of others and its content often lacks attribution. Creative artists are concerned that their work is being hijacked, or that the tools can produce output that renders many creative roles obsolete. There is also a very real fear that the convenience of AI may make us lazy. 
Language barriers disappear: AI is the new Babel fish, the fictional creature in Douglas Adams’ The Hitchhiker’s Guide to the Galaxy that is inserted into the ear and translates any language into the wearer’s own. I delivered a session to international students recently as part of their induction week. One table had a tablet that was translating everything I said, as I said it. Last year, those students would probably not have understood me; I would have had to repeat myself and slow the session down. This time they were immediately up to speed with what I said and with the tasks at hand.
But who is checking the accuracy of the translations? With any translation there is potentially inherent bias, whether intentional or not. Much has been written about bias in AI, and we could be at risk of further entrenching it by relying on these tools.
Humans can relinquish control but not responsibility: when we allow AI to produce content for us, we relinquish control and the ideas cannot be called our own. However, when that piece of work is disseminated, in whatever way, we are still responsible for its production. We, as human “authors”, must be responsible for what is published in our name. 
Historical definitions of plagiarism no longer apply: as discussed above, plagiarism was a concept for its time, but that time is past. We need to develop a new way to assess our students’ work. The use of AI tools can be seen on a continuum, from “no AI input” through to “entirely AI generated”. While as a community we try to establish where ethical use sits on that continuum, the world moves on, and AI is everywhere. 
Attribution remains important: in the post-plagiarism era, we need to support students to both develop and demonstrate their academic integrity. Attribution is a key element of that process. GenAI is another source, and as such must be properly attributed. A declaration form can help with this.
Students will not simply wait for us to come up with an “acceptable” way to integrate AI into their work. Just as some of our colleagues, and many in other industries, are doing, they will already be using AI tools. 
When we attempt to “police” AI use, and punish students for its detected use, we are punishing the students we are committed to supporting: underrepresented students, those with less disposable income to subscribe to the best tools, and those whose language skills do not allow them to edit the AI output to make it less recognisable. A punitive strategy is not the way forward.
David Carless suggests that there are three purposes of assessment:
Develop assessment that supports academic integrity. This can be done by:
We need to be able to demonstrate that the assessment we deploy meets the intended learning outcomes of our module or programme, and that we have incorporated measures to ensure academic rigour. 
Karen Kenny is a senior educator developer at the University of Exeter. 
