“Artificial Intelligence (AI) can potentially pose a threat to academic integrity if it is used to create or generate academic work that is meant to be passed off as original, human-generated content. This is known as ‘contract cheating,’ where students use AI tools to write essays or complete other coursework. However, the use of AI tools also has the potential to detect and prevent academic misconduct by flagging instances of plagiarism and other forms of cheating. Overall, it depends on how AI is being used, but it can both pose a threat and also help to promote academic integrity.”
Interestingly enough, the entire first paragraph was copied and pasted from an AI chatbot called ChatGPT when I asked it, “Is AI a threat to academic integrity?” The chatbot, created by the artificial intelligence company OpenAI, is known for communicating in convincingly human language and providing information on demand. Its ability to produce in-depth responses opens up an opportunity for students to copy its output into essays and other assignments, which could be potentially problematic.
Since the answers ChatGPT gives its users are largely original text, this form of plagiarism would not be detected by sites like Turnitin. However, the chatbot does not tend to give lengthy responses, as most answers run about a paragraph. The information is also fact-based, and students would struggle to use its answers to build an argument for opinion-based papers. So, this type of artificial intelligence is not going to end creative essays or opinion writing anytime soon. Its command of factual-sounding information, however, may pose a threat to journalists and to trustworthy sources.
With ChatGPT’s ability to answer user questions and provide factual information, news journalism may be at risk. The technology could be used to create convincing fake news stories that sound as though they were written by real journalists. That misinformation could then spread widely and sow confusion, creating an untrustworthy news environment. And when the bot produces inaccuracies, it is hard to hold anyone accountable for the errors, since the text is generated without direct human authorship.
Despite these potential problems, the AI has seemingly already become favored over Google by many people. ChatGPT can explain complicated topics without requiring users to sift through unrelated and unnecessary sources the way a search engine does. This kind of technological progression may be a boon for those seeking quick information without the hassle of search results, but is it worth the potential risks?
I think the answer comes down to what the AI said itself: it largely depends on how it is being used. Technological progression is both convenient and helpful, so ChatGPT isn’t necessarily a bad thing. However, if it is used to promote false information while bringing an end to news journalism, the way we receive information may completely change in the future. I do not think plagiarism in academic settings should be the first concern when it comes to artificial intelligence, but rather its ability to infiltrate human work and human lives so easily.
Luke Lawson (he/him) is a sophomore intending to major in accounting. He enjoys discussing political events, hiking and watching films.