CMOtech Canada - Technology news for CMOs & marketing decision-makers
Probe finds OpenAI violated privacy laws in ChatGPT development

Wed, 6th May 2026
Jake MacAndrew, Interview Editor

Regulators found that OpenAI's initial training of the GPT-3.5 and GPT-4 models did not comply with the privacy laws they enforce.

A joint investigation by the Privacy Commissioner of Canada (OPC) and provincial counterparts in Quebec, British Columbia, and Alberta identified the overcollection of personal information without valid consent or adequate transparency, factual inaccuracies involving personal information, and weak accountability for data under OpenAI's control.

The commissioner's office said Wednesday that during the investigation, OpenAI changed some of its practices and committed to further steps. It has significantly limited the personal and sensitive information used to train new ChatGPT models and said it would do more to inform Canadians about the implications of using the service.

"People are using ChatGPT in increasingly personal ways, including for questions and tasks that can touch sensitive parts of their lives. We recognize the deep responsibility that comes with that trust. We care deeply about the people who use ChatGPT," OpenAI said in a blog post published May 6.

At the federal level, the Privacy Commissioner found the complaint well-founded and conditionally resolved. In other words, the regulator found breaches but accepted that steps already taken, along with further commitments due in the coming months, would address the concerns identified under the Personal Information Protection and Electronic Documents Act.

The Office of the Information and Privacy Commissioner for British Columbia and the Office of the Information and Privacy Commissioner of Alberta found the complaint well-founded and unresolved, while the Commission d'accès à l'information du Québec found the complaint well-founded and conditionally resolved on the issues of appropriate purposes, individual rights and accountability. On the issue of consent, Quebec found the complaint well-founded but unresolved.

The case adds to broader scrutiny of how artificial intelligence systems are trained on large volumes of online material that may contain personal information. Privacy regulators have increasingly focused on whether companies can lawfully use that material without meaningful consent, and whether individuals have practical ways to challenge inaccurate outputs or remove data linked to them.

"The Offices found that OpenAI failed to obtain valid consent for its collection, use and disclosure of personal information for the purpose of developing and deploying the models," stated the OPC.

The federal regulator also used the case to argue for legislative reform. While current privacy law applies to artificial intelligence systems, the office said updated rules would better support the safe use of new technologies while protecting individuals' rights.

In a statement, Commissioner Philippe Dufresne described the investigation as an early intervention on emerging issues with AI systems.

"This milestone investigation highlights the importance of prioritizing privacy in the development, deployment and ongoing evolution of artificial intelligence so that Canadians are able to safely use and leverage the benefits of these technologies," he said.

In this case, the findings went beyond data collection. They also addressed how personal information appeared in ChatGPT responses, including concerns about factual inaccuracies and whether people could effectively exercise rights to access, correct or delete information relating to them.

The federal office said it would continue to monitor OpenAI's actions to ensure the company keeps limiting the impact of its AI tools on individuals' privacy. That signals ongoing oversight rather than a closed file, even though the complaint has been conditionally resolved at the federal level.

The finding comes as policymakers and regulators in several jurisdictions try to apply older privacy frameworks to systems that ingest vast datasets and generate conversational responses that can include personal details.

"I expect that the findings of this investigation will inform and advance the privacy-protective design of other AI-powered technologies. This investigation also further highlights the need to modernize Canada's privacy laws for the digital age," said Dufresne.