A recently updated report from Carleton University on the use of Artificial Intelligence (AI) in teaching and learning points to a dilemma for educational institutions worldwide: how to embrace tools like ChatGPT in the classroom while curtailing uses that would be considered cheating.

The report, produced by the university’s Working Group on the Use of Artificial Intelligence (AI) in Teaching and Learning, examines the “challenges and opportunities” presented by the rapidly developing technology.

“We looked at academic integrity issues, but we also looked at what possibilities artificial intelligence offers in the classroom,” said David Hornsby, Carleton’s Vice-Provost and Associate Vice-President (Academic) and chair of the working group.

Instead of discouraging the use of AI across the board, the university says it is more concerned with capitalizing on the opportunity to teach students how to engage with these tools in an ethical and useful way.

“We want students to understand this technology well. We want students to know what its powers are, but also what its limitations are, because this technology maintains really significant limitations now.” 

Over the last couple of years, generative AI-powered tools like ChatGPT have become highly capable at all kinds of tasks, including writing text and computer code, a temptation for many students. According to a report released by KPMG in late August, 52 per cent of Canadian students use generative AI to assist them in their schoolwork.

Ian Stirling, a fourth-year computer science student at Carleton, says he uses ChatGPT on a regular basis to help him understand key concepts and content from his courses.

“I find it quite helpful as like a separate way of explaining things if I don’t originally get what the professor is trying to say,” Stirling said.

Although he uses it often in his work, Stirling is confident he understands when using AI can cross the line and constitute plagiarism.

“I’ve never worried about anything like that because I don’t do the copy and paste thing or use it for my assignments. I primarily use it as another perspective when I don’t understand a concept.”

Joshua Redstone, a Carleton professor who teaches a course on the philosophy of AI, says he is excited by the technology’s possible benefits but acknowledges that understanding how and when to use the technology is a necessary step for anyone who incorporates it into their work.

“There’s an idea in the cognitive sciences called forward engineering. So if we want to understand … anything that a mind can do, including being creative, we should try to make an artificial system that can do those things. And that might be a way to inspire new theories of the mental, which would be a massive benefit (of AI),” said Redstone, whose research focuses on the interaction between humans and computers. 

Of course, the flip side is that AI and tools like ChatGPT can and will be used by humans to do things that cross an ethical line, like cheating, and regulating that is difficult given how little we understand about how the technology works.

“Not only can we not predict the uses we’ll find for different technologies, but technologies like ChatGPT are kind of like black boxes in that we can’t really look inside and see how things are actually working… I can say what ChatGPT does … but how all of that is actually unfolding in the network is kind of a mystery.”

Carleton’s report does strike some hopeful notes: “Generative AI tools could provide students with personalized learning (e.g., give personalized feedback to students based on information provided by students), help post-secondary institutions with administrative processes (e.g., AI tools respond to questions from prospective students), and help instructors with their research tasks (e.g., generate ideas for research questions, suggest data sources, etc.).”

Meanwhile, many in academia are calling for more regulation. Canada’s minister of Innovation, Science and Economic Development announced a voluntary code of conduct for generative AI in September. But in an editorial shared by Universities Canada, Rhonda MacEwen, president of Victoria College at the University of Toronto, urged more regulation.

“The risk of not acting now is, as leading academics have already noted, taking us on a very precarious path across the broad spectrum of human life,” she wrote. “The iterative nature of AI means that without meaningful regulation, it will become easier for the average person to have the power to cause very serious public harm, should they so wish.”