OpenAI has been in the headlines due to Elon Musk’s lawsuit, and the company’s problems do not appear to be going away anytime soon. Sam Altman’s company was reportedly found to be in violation of multiple Canadian privacy laws following a joint investigation by Canada’s Privacy Commissioner and regulators from Alberta, Quebec, and British Columbia. The investigation examined how the company collected and used personal data while training its AI models, including the technology that powers ChatGPT.
According to the findings, the regulators discovered that OpenAI collected large amounts of personal information without implementing adequate safeguards or obtaining meaningful user consent. The investigation also raised concerns about individuals’ limited control over data that could have been scraped or obtained from third-party datasets for AI training purposes.
The authorities have reportedly criticised the company’s approach to transparency and user rights, particularly in terms of accessing, correcting, or deleting personal information that may have been included in training data. Concerns were also raised about inaccuracies in AI-generated responses and a lack of clear mechanisms for users to challenge such results.
In response, OpenAI agreed to make several changes to better comply with Canadian privacy regulations. The company reportedly retired older AI models that did not meet the required standards and implemented filtering systems to detect and mask sensitive personal information in training datasets, such as names and phone numbers.
According to the report, OpenAI will provide clearer notices to ChatGPT users, particularly those who access the platform without signing in. These notices are said to explain how conversations can be used for AI training and to warn users against sharing sensitive information.
Furthermore, the company has promised to improve data export tools and protections for retired datasets, as well as test new safeguards to prevent AI systems from revealing sensitive information about minors associated with public figures.