OpenAI Violated Canadian Privacy Law in ChatGPT Training, Probe Finds
OpenAI and Privacy Concerns in Canada
A joint investigation by Canadian privacy commissioners has found that OpenAI did not adhere to Canadian privacy laws when training its popular ChatGPT tool, leading to the collection and use of sensitive personal information.
The federal privacy commissioner, along with his counterparts in Quebec, British Columbia, and Alberta, released their findings on Wednesday morning regarding ChatGPT, a chatbot launched in 2022 that generates human-like responses to user inputs. The investigation began in 2023 after a complaint was filed alleging that OpenAI unlawfully collected, used, and disclosed personal information without consent.
According to the review, several concerns were identified, leading the watchdogs to conclude that the way OpenAI initially trained ChatGPT did not respect federal and provincial privacy laws. They found that OpenAI gathered vast amounts of personal information without safeguards to prevent its use in training models.
"This could include sensitive details such as individuals’ health conditions and political views, as well as information about children," said their report. It also found many users were unaware that their data was collected and used to train ChatGPT.
"OpenAI launched ChatGPT without having fully addressed known privacy issues. This exposed Canadians to potential risks of harm such as breaches and discrimination on the basis of information about them," said federal commissioner Philippe Dufresne's prepared remarks Wednesday.
Dufresne noted a "lack of accountability" from OpenAI regarding why it launched a product that didn't follow Canadian law. "We have some statements from leaders of the organization at the time saying, 'We felt we had to move, we knew that there were others out there and so we launched it,'" he said. "We found that problematic."
Need to Modernize Canada’s Laws
Although the company disagreed with the findings, saying it was compliant with various privacy acts "in most respects," the privacy watchdogs said OpenAI took steps to improve privacy protections and agreed to implement further measures to address their concerns.

The case highlights the need to modernize Canada's privacy laws, according to Dufresne. "As AI is increasingly being integrated into personal and professional applications, and while current laws apply to AI, updated laws would help further support the safe deployment of new technologies to protect Canadians' fundamental right to privacy," he said.
The investigation predates the fatal shooting in Tumbler Ridge, B.C., in February, but its release comes amid calls for the government to introduce regulations targeting AI chatbots. Seven lawsuits have been filed in California on behalf of those killed or injured in the rampage, accusing OpenAI and its co-founder Sam Altman of negligence.
Lawyers with the firm Rice Parsons Leoni & Elliott claim that prior to the February tragedy, the Tumbler Ridge shooter's ChatGPT account was banned for "disturbing content," which allegedly included planning violent scenarios. "However, despite some 12 different OpenAI employees imploring the company to notify Canadian law enforcement about the shooter's plans, nothing else was done," the firm said.
Late last month, Altman wrote an apology letter to the community for failing to alert RCMP about the account of the Tumbler Ridge shooter.
Balancing Protection and Access
Dufresne says a ban isn't the answer. The federal government has said it's reviewing whether the use of chatbots and social media should be age-restricted. Last year, Australia implemented a first-of-its-kind ban on youth under the age of 16 using major social media services including TikTok, X, Facebook, Instagram, YouTube, Snapchat, and Threads.
Asked if he would support a ban, Dufresne said a balance needs to be struck. "The first step need not necessarily be a ban. I think the first step should be, can we fix the underlying issue? Can we make it more privacy protective?" he said. "I think the goal is to reach this balance where you're protecting children, but you're also giving them the ability to evolve in this increasingly digital world."