The Instagram account seemed pretty convincing to Marie-Claude Fontaine. Comforting, even. So when the artist behind the account reached out to her via direct messages offering to pay her to be an art model, the 18-year-old student was too excited to be put off.
“I was like, ‘Oh my god, that’s really cool.’ I think (the message) said she was going to give me around $250 to $300 for one picture!”
After sending some of her photos, Fontaine received a cheque for $500, but was asked to send $250 back to the account.
“I asked, ‘That’s kind of weird. Why didn’t you just write it for $250?’ And she said that’s kind of just how she does things. I had already deposited the cheque.”
Fontaine made the payment. But a few days later the $500 cheque she had deposited bounced, leaving the first-year Durham College student tight on money and feeling helpless. Her name had been passed on to the scammer by a friend who had fallen for a similar scheme. She went to the bank right away, but she never got her money back.
“(The bank) can’t do anything because it’s your fault you got scammed. That’s basically how they handle it,” she says.
Fontaine is not alone. According to one study, 63 per cent of Canadians aged 18 to 34 admit to having been victims of fraud or scams, a sign that online scams now affect people of all ages — not only older demographics.
One email phishing scam, which promised a free iPad behind an innocent-seeming link, fooled 80 per cent of the people who received the email, according to a study in The Palgrave Handbook of International Cybercrime and Cyberdeviance, an e-book released in 2020.
But what if the iPad scam hadn’t been just another email? What if it had been a convincing phone call from a trusted friend — or even a FaceTime chat?
In Fontaine’s case, she says that in hindsight, she isn’t even sure whether it was a human messaging her. “It could go either way because I know for a fact that even if it’s automated messages, it can be like you’re talking to someone real, you know what I mean?” she said.
That is the kind of fraud people are up against now, as a major breakthrough in artificial intelligence has thrown open the floodgates to vastly more sophisticated scams. The “generative AI” behind ChatGPT and similar products can survey vast amounts of information and use it to create highly plausible text, images, video and audio — all in a matter of seconds. It can also write computer code, including malware.
This monumental change in AI capabilities could lead to major advances in everything from scientific research and medicine to law and even the arts. But it also puts unprecedented powers into the hands of cybercriminals around the world — and for free, in the case of ChatGPT.
Released in November 2022, ChatGPT was an instant sensation, reaching an estimated 100 million users within two months of its launch.
AI has reached a significant inflection point, says James Stewart, a former crime analyst and CEO of the AI security company TrojAI Inc. He says he hasn’t “seen this level of innovation since the World Wide Web in the early ’90s.”
After working for over 10 years in Internet security and 20 years as a police constable, Stewart knows a thing or two about the Internet landscape. And he’s worried.
“Generative AI is enhancing traditional cybercrime, while lowering the bar for criminals to engage,” he says. Based in Saint John, N.B., Stewart founded TrojAI in 2020 with the goal of contributing to the robustness and security of AI systems.
‘Generative AI is enhancing traditional cybercrime, while lowering the bar for criminals to engage.’
— James Stewart, former crime analyst, CEO of AI security company TrojAI Inc.
He is not alone in his concerns. In March, more than 1,000 experts and industry executives, including Tesla and Twitter owner Elon Musk and Apple co-founder Steve Wozniak, signed an open letter calling for a six-month pause in the development of the most powerful AI systems to allow researchers to develop and implement shared safety protocols.
There are three main kinds of AI-powered scams, according to the risk management company Fraud.net, based in New York City:
- Voice Cloning — This is one of the most worrisome developments in AI-powered scamming. AI technology now needs only a few spoken sentences, perhaps extracted from social media platforms such as YouTube, to create a convincing vocal duplicate of a person. In March 2022, one trusting older couple in Regina reportedly nearly sent $9,400 to scammers who had used voice cloning to impersonate their grandson.
- Deepfake scams — While deepfake scams are less common so far, there have already been a few notable instances. Take the famous Elon Musk video from 2022, in which a phony Musk urged his fans to invest in a sketchy cryptocurrency. Convincing deepfakes are still much harder to produce, and the Musk video was quickly debunked, but the technique is a startlingly powerful advancement.
- AI-assisted spear phishing — Spear phishing, which targets specific individuals to obtain sensitive data such as banking information, may call to mind the clumsy Nigerian prince emails of old, but scammers can now use free machine-learning software to create much more convincing, personalized scams. Picture a conversation with ChatGPT where the chatbot’s entire goal is to swindle you out of thousands of dollars. With AI technology at play, spear phishing no longer has the telltale spelling mistakes and bad grammar of pre-AI email scams.
AI-assisted phishing is a threat not only to individuals, but also to large institutions such as banks, experts warn. In a study published in the ABA Banking Journal in 2018, researchers reported that in a simulated attack on a single bank, 99.7 per cent of phishing URLs were blocked. But with the use of an AI URL generator, attack efficiency increased from 0.69 per cent to 21 per cent — a 3,000-per-cent increase.
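For readers who want to check the arithmetic behind that headline figure, here is a minimal sketch in Python using the percentages as quoted above (the underlying study’s methodology is not detailed here):

```python
# Relative increase in phishing "attack efficiency", using the figures
# quoted in the article (not the underlying study's raw data).
baseline_success = 0.69   # per cent of phishing URLs that succeeded without AI
ai_assisted_success = 21  # per cent that succeeded with an AI URL generator

relative_increase = (ai_assisted_success - baseline_success) / baseline_success * 100
print(f"{relative_increase:.0f} per cent")  # prints 2943, roughly the 3,000 per cent cited
```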
Maura R. Grossman, a computer science professor at the University of Waterloo who specializes in technology and the law, said she’s concerned about the impact of the new generation of AI systems on bank security.
“Right now, if I want to do a transaction involving my chequing account, an email instruction to my bank isn’t enough,” says Grossman, who is also an adjunct professor at Osgoode Hall Law School. “I also have to call to authorize the transfer. I worry whether that’s safe enough anymore because someone can take a few minutes of my voice from YouTube and use that to have me say whatever they want. I think my bank is going to have to reevaluate that process.”
In her classes, Grossman teaches aspiring AI developers what she calls “responsible AI.” This means ensuring data they use is accurate and fair and that the algorithms they create are not biased, she says. It also means ensuring that the creation of artificial intelligence benefits those it serves and does no harm.
Responsible AI is a concept that Nicolas Sabouret and Lizete de Assis describe concisely in their 2021 book, Understanding Artificial Intelligence. As they put it: “One mustn’t confuse a tool and its use. An AI can be misused. An AI cannot, however, become malicious spontaneously.”
Deval Pandya is another proponent of responsible AI. Director of AI engineering at the Vector Institute for AI in Toronto, he says there are important social and ethical questions that will have to be addressed — and soon.
“Preventing the malicious use of AI will require a very comprehensive strategy, with deep collaboration between governments, corporations, researchers, and users. Governments and organizations will have to develop and enforce ethical AI frameworks for building and using AI systems,” he argues.
Pandya says these ethical guardrails are urgently needed in response to the current situation, where products such as ChatGPT can be released into the wild with few constraints.
However, Grossman says there will always be workarounds to any restrictions put on AI misuse. She draws on the example of “DAN” (short for “Do Anything Now”), a jailbreak for the popular AI chatbot ChatGPT that allowed users to unlock otherwise unavailable, profanity-laced responses. Though it sounds like an episode of Black Mirror, users were able to get their hands on all types of banned information simply by asking ChatGPT to respond as its evil twin, DAN.
Preventing illegal or unethical uses of artificial intelligence is “like playing Whac-A-Mole,” Grossman adds. With cyber security constantly playing catch-up to a growing number of cyber criminals, taking down one criminal scheme only leads to another springing up elsewhere.
It doesn’t help that the law is falling further behind in the arms race against cybercrime, experts say.
Former crime analyst James Stewart agrees. He expects cybercrime rates to quadruple by 2027, due not just to the rising number of people with access to advanced scam techniques, but also to the plodding nature of the legal system, which has so far struggled to match the breakneck speed of innovation.
“Laws and regulations can’t keep pace with issues around AI security, privacy, robustness and bias issues,” Stewart says. He notes that on March 18, despite rising concerns about the misuse of AI, Microsoft — which touts itself as the ‘responsible AI company’ — disbanded its AI ethics and safety team.
“We can’t stop innovation,” says Stewart, “but we must put controls around it to protect against the near-limitless unintended consequences.”
Grossman says some steps have been taken to limit the most risky applications of artificial intelligence, such as the use of facial recognition by law enforcement. But overall, she believes the law has fallen short.
And she says there is another reason why regulators cannot get a handle on AI innovations: young tech whiz kids coming out of school prefer careers in innovation to jobs in regulation.
“They are taught to ‘move fast and break things,’ ” says Grossman. “The government has trouble attracting top technical talent because new grads prefer the high salaries and perks of Silicon Valley to the lower government wages.”
While the law can’t always protect users from scams, there are still ways people can protect themselves.
The best methods of online protection are good password management and, crucially, multi-factor authentication, also known as MFA, says Jeff Raymer, an innovative educational facilitator at the Durham District School Board. Often used by banks, MFA requires users to verify an online activity, such as logging into an account, with a second authentication method, for instance by responding to a text message.
With MFA, even if a scammer cracks an initial password, the second method of identification is usually enough to get them to move on to someone else, explains Raymer, who helps teachers and students negotiate issues such as digital citizenship, IT security, password management and online safety.
“In general, they’re just looking for low-lying fruit so that they can get their foot in the door,” says Raymer.
He adds that MFA methods are evolving to keep pace with advances in AI scamming.
“The first iteration of MFA was, ‘OK, I am going to send you a text message.’ But it’s so easy now to spoof a SIM card … (hackers) are able to reroute that text message.”
Cybersecurity companies have responded to this problem by producing MFA authenticator apps, which generate constantly changing one-time codes that hackers can’t crack with current AI technology, Raymer says.
However, many people are not aware of this option, he notes.
“There is a lack of education around what are the best methods out there and MFA authenticator apps are what people should be using.”
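For readers curious about what those apps are doing under the hood, here is a minimal sketch of the time-based one-time password (TOTP) scheme standardized in RFC 6238, which many authenticator apps implement; the shared secret below is an illustrative example, not tied to any real account:

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, digits: int = 6, period: int = 30) -> str:
    """Derive the current time-based one-time password (RFC 6238)."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // period                 # 30-second time step
    msg = struct.pack(">Q", counter)                     # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()   # HMAC-SHA1, as in the RFC
    offset = digest[-1] & 0x0F                           # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# Illustrative shared secret; a real app stores one like it on the device
# after you scan the enrolment QR code, so the code never travels by text message.
print(totp("JBSWY3DPEHPK3PXP"))
```

Because the code is derived on the device from a secret that never leaves it, rerouting a text message gains an attacker nothing.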
Phishing-protection suggestions from the Canadian Centre for Cyber Security include:
- Never give away passwords to callers
- Assume that if a prize is too good to be true, it probably isn’t
- Be cautious about messages that require urgent action
- Do not click links sent by untrusted sources
Ironically, even the Canadian Centre for Cyber Security isn’t always safe from fraudsters. An excerpt from its website reads: “We have been made aware of recent scams impersonating the Cyber Centre. We do not make unsolicited telephone calls to individual Canadians. Any unsolicited phone calls to a private citizen claiming to be from the Cyber Centre are fraudulent.”
‘Imagine the impact on the justice system when judges and juries are no longer in a position to assess the veracity of the evidence in a case.’
— Maura R. Grossman, computer science professor, University of Waterloo
Raymer suggests that, given concerns about the security of face and voice recognition in the age of deepfakes, touch identification on devices — a popular security feature around 10 years ago — may make a comeback.
Despite the ethical concerns, many observers say that, overall, the future of AI is bright. Pandya says he is very excited about the applications of this new technology, which is “critical to climate change, drug discovery, agriculture and health care.”
But others are wary. For Grossman, the future of machine learning raises a lot of social and ethical issues, ones with even more serious implications than Internet scams.
“As far as the future of artificial intelligence, I worry… more about things like bias and inequity, and a world where people can no longer trust their eyes and ears to know what is true,” says Grossman. “Imagine the impact on the justice system when judges and juries are no longer in a position to assess the veracity of the evidence in a case.”
Stewart says he is also deeply concerned about what the future may hold.
“Given the increasing power and accessibility of these tools, paired with the creativity of criminals who are already sharing capabilities on the dark web, we’re heading into the next Wild West in a big way.”