Generative artificial intelligence innovations, primarily natural language technologies such as ChatGPT, are rapidly transforming the lives of college students. With AI use now so widespread among students, choosing not to use it can feel like a disadvantage. Arthur “Barney” Maccabe, the executive director of the Institute for the Future of Data and Computing at the University of Arizona, seemed to share this opinion.
“It’s not just a disadvantage, I think we should be teaching students how to use these tools ethically and effectively,” he said.
Questions Surrounding AI Ethics in Academia
Maccabe explained that resources like ChatGPT have risks that extend beyond mere plagiarism. Academics are now faced with complicated decisions about the role of AI in research.
“Say you’ve done original research. Can you use these tools to write a paper around what the results mean and how they fit into the world?” Maccabe said.
Language models like ChatGPT aren’t always equipped to filter out bias or to identify whether a source is credible.
“AI amplifies bias by amplifying human bias that it encounters in the data it analyzes. You’re losing the ability to evaluate the credibility of sources. How trustworthy is it? How biased is it?” Maccabe said.
Workforce Disruption
Maccabe pointed out AI’s potential to disrupt the workforce on a larger scale than previous technological innovations, such as the automation of the textile industry.
“When we were able to make weaving automated, we lost weavers, but what quickly adopted was tailoring because people now had multiple sets of clothing,” Maccabe said. “But now we’re starting to automate intellectual and creative content. […] AI will change what the creative arts look like.”
Maccabe cited the recent writers’ strike in Hollywood as an example: “[AI] is a reason why writers are on strike right now. It’s because of these technologies and what they are going to mean for the future.”
What to Know About Academic Policy
Maccabe expressed the opinion that the campus community is not yet equipped to navigate the complexities and ethical implications of AI.
“We are not equipped because we’re not having the conversation,” he said.
The University of Arizona’s official website states that “[UA] does not currently have a policy on the use of ChatGPT. However, instructors may have policies on how ChatGPT may or may not be used in classroom assignments.”
The Arizona Board of Regents Policy Manual does not address AI technology, either.
UNESCO has stated that “the education sector needs to make these ‘qualifying’ determinations on its own terms. It cannot rely on the corporate creators of AI to do this work.”
While Maccabe stated that the UA has employed “working groups this summer who are trying to address some of [these] issues,” it is unclear how the UA and Arizona as a whole will proceed in the policymaking process, or if instructors will be left to develop their own rules.
Despite the foggy ethical boundaries ahead, Maccabe feels that the rewards of AI technology far outweigh the risks, but only under the condition that we don’t “go into it blind without having any conversation.”