For journalists, the proliferation of artificial intelligence tools powered by capable large language models has, at best, given us ways to streamline work and eliminate busywork. At worst, these technologies are an existential threat to our ability to learn and evaluate the quality of media.
I say all of this from my experience as a student, a research assistant and an emerging journalist. The AI question is one I am confronted with on every front.
In a writing-intensive course this past year, the instructor emailed our small class to say she could tell students were using AI and that she had no interest in reading inauthentic, unoriginal writing. As she noted, using AI in a course meant to develop our writing voice is counterproductive.
In another course, so many students submitted ChatGPT-generated answers that the instructor removed the written portion of a test, making it multiple choice only.
In my experience, students are quickly getting wise to AI’s abilities. Its use is becoming normalized, but what counts as ethical use of AI in academia?
Misuse is certainly hard to prove and prevent.
Carleton’s AI policy, for example, does not accept AI detectors as proof that AI was used. Throw in rapidly improving models like GPT-4 and students at any level can offload much of the heavy lifting that comes with writing a research paper or interpreting data tables.
As a journalist, my fear of AI replacing me in a newsroom walks a step behind my fear of what it means for future postsecondary students. If generative AI can carry students through assignments and other coursework, what is the value of the degree they obtain?
A 2023 survey by the consulting firm KPMG found that more than half of respondents aged 18 or older reported using generative AI in their schooling, even though 60 per cent of them felt it was cheating. Of the 5,140 Canadians polled, however, 81 per cent also believed students should learn how to use generative AI as a tool.
My fear that AI will erode the value of curiosity, and of using writing to work through ideas and problems, is shared by Annette Vee, Composition Program Director at the University of Pittsburgh.
“I worry that students will lose a sense of curiosity and how to explore and develop their own ideas,” said Vee. “I think AI can be good for writing up ideas, for polishing things, for showing different styles and genres, interpreting data and a lot of other things. But it’s also a way to just get answers. How do you get good answers if you don’t have good questions? What are you learning?”
In her work, Vee has described a future in which interactions between student and instructor are mediated by generative AI as an “infinite ouroboros” — that is, a snake eating its own tail.
AI presents a grey area when it comes to learning and demonstrating acquired knowledge. How much use, and what kind, are universities willing to tolerate?
“The issue is where is this line between AI assisted and AI generated? So what if (students) have the AI write the basic paper and then they go through and massage it, fix up a few things, add in the discussions from class, add in the references. That’s pretty hard to detect. That would look just like a student paper,” said Donald Myles, a Teaching English as a Second Language (TESL) and English for Academic Purposes (EAP) instructor in Carleton University’s School of Linguistics and Language Studies.
Myles said AI’s increasing capabilities present a “moving target” for instructors trying to evaluate learning outcomes in their courses. He said that despite problems with AI bias and hallucinations, its abilities in language and structure are exceptional, and he defies any instructor to accurately determine whether a student has used AI in an assignment.
Jonah Grignon, a recent graduate of Carleton’s journalism school, said he believes generative AI is an “expedited way of doing research” but that he wouldn’t ask it to summarize or paraphrase anything.
“All that is, is a better Google search,” said Grignon.
When working on academic research papers, Grignon said he turned to large language models after finding himself frustrated by other research options such as Google Scholar and online databases. He said he doesn’t use them much anymore, but that he first became aware of their research potential while studying the use of AI in higher education.
For Myles, there is little to be done to catch or prove the use of AI, despite instructors’ assurances that they can spot it in assignments.
“The truth is there’s no way to secure your assessment or make it ‘AI proof’ unless it’s a proctored in-person pen and paper, no technology, no phone, you know, like an exam.”
These types of assessments are not appropriate ways to evaluate all kinds of learners or learning, Myles said.
Most of the conversations I’ve had about AI in postsecondary education end up agonizing over the question of authorship when AI was used in any way in a work’s production. As Myles told me, the red line between ethical and unethical uses of generative AI is hard to draw, especially as these tools grow more capable of producing work that intentionally includes minor errors to appear human-made.
So how do we cope?
Vee said there is no right answer or silver bullet. Instructors, she said, should consider what skills students need to develop for their field of study when setting an AI policy.
“A developing student may need to have a different relationship with AI than a practitioner in the field. It’s not good to just ban AI because a teacher isn’t adapting. But it’s also not beneficial to just embrace AI and neglect that student’s emergent processes,” said Vee.
It’s interesting to come to the question of AI toward the end of my training as a journalist. I have no doubt that something like ChatGPT could summarize and write the quick news stories I produce from press releases, but the skills that go into crunching those 300-word stories are the same ones that help me do investigative work. We can’t throw the baby out with the bathwater when it comes to offloading work to AI.
In thinking about how to move forward, Myles didn’t have an exact answer either. Instead, he referred to a nearly century-old quote from John Dewey: “If we teach today’s students as we taught yesterday’s, we rob them of tomorrow.”
“Are we going to be graduating students who know how to use AI ethically, critically? I mean, if we’re not preparing them for that future, what are we preparing them for?” said Myles.
For Grignon, that means drawing solid boundaries around what AI use is appropriate for a given discipline, and leaning into that rather than banning it outright.
“I’m optimistic in that I don’t believe that we’re 100 per cent doomed. I’m optimistic in the same way that I am about things like climate catastrophe where I’m like, there is a way forward and I think we know what the way forward is. I just think it’s a matter of whether or not we all decide to land on that stage at the same time,” said Grignon.
As my own research supervisor often says: “We don’t write writing, we write something.”
Victor Vigas Alvarez is a Bachelor of Journalism student at Carleton University.