Artificial intelligence is here to stay. Though the technology offers incredible benefits, it can feel like the prologue to a science fiction film, both exciting and unsettling. Quietly, over the past few years, it has crept into students' day-to-day lives, and we need to learn to regulate its proper use.
COVID-19 popularized technology-based learning through platforms such as Google Classroom and Canvas. That growing shift has made it nearly impossible for educators to put a halt to the use of AI.
The Center for Democracy and Technology, a nonprofit focused on technology policy, surveyed teachers about changing patterns within this space. “85 percent of teachers say that their school has a policy that either generally permits (subject to some conditions or limits) or bans the use of ChatGPT, or other generative AI tools, for schoolwork. And 71 percent of those teachers say that the current policy is the first their school has implemented,” Maddy Dwyer and Elizabeth Laird said, noting that institutions vary widely in how they handle AI.
The rise of tools like Turnitin, GPTZero and Copyleaks shows how institutions are urgently responding to perceived threats to academic integrity. These tools, rather than simply acting as surveillance, can foster academic honesty and encourage students to take ownership of their original ideas, ultimately promoting deeper engagement with their work.
Unfortunately, as the years fly by, it feels as though the education system is giving up.
Take Victoria Livingstone, a writer for Time Magazine. In her recent piece, “I Quit Teaching Because of ChatGPT,” Livingstone describes her many attempts to keep students engaged with the curriculum rather than reliant on AI, even looking for ways to incorporate the technology itself.
“I researched ways to incorporate generative AI in my lesson plans, and I designed activities to draw attention to its limitations. I reminded students that ChatGPT may alter the meaning of a text when prompted to revise, that it can yield biased and inaccurate information, that it does not generate stylistically strong writing and, for those grade-oriented students, that it does not result in A-level work. It did not matter. The students still used it,” Livingstone said.
Though many teachers choose to adapt, it seems that fewer are willing to remain firm in getting through to students. Students are expected to feel the discomfort of independent learning, but their willingness to remain ignorant of it has made several teachers, such as Livingstone, feel helpless.
Young students’ fascination with AI comes with a certain innocence, so it is inevitable that they will be tempted every now and then. But when faculty choose to look the other way, it only becomes harder for upcoming generations to truly master the balance of integrity.
For decades, learning was strictly based on a pen-to-paper, written coursework system. Then, with the introduction of technology, it slowly changed. This time, however, the uncertainty stems from the fact that AI is not just altering how assignments are completed but also raising questions about originality, integrity and the very purpose of education.
One way to truly evolve with this is for students to resist the urge to finish coursework faster. When students instead demand deeper, more meaningful assignments and call on educators and administrators to update outdated policies, the system can no longer ignore the greater problem at hand. Real change may take years at the institutional level, but even now, shifts can begin in individual classrooms as students and teachers work together to use technology as a tool for growth rather than a shortcut. This kind of advocacy can gradually create a more certain and empowered future for learning.