Italy Slaps €15 Million Fine On OpenAI, Requires AI Awareness Campaign


Italy’s data protection authority has fined OpenAI €15 million over ChatGPT, citing an alleged data breach and the unlawful use of personal information.

According to the regulator, the probe was launched in March 2023. The Italian Data Protection Authority (IDPA), also known as the Garante, found several violations, including OpenAI’s failure to notify users of the breach and its training of AI models on personal data without proper permission.

The Fine Reflects OpenAI’s Partial Cooperation During The Investigation

The IDPA said these practices violated the transparency obligations of the General Data Protection Regulation (GDPR). The regulator also raised concerns about the lack of adequate age-verification mechanisms, which could expose children under 13 to responses inappropriate for their age.

The IDPA has also ordered OpenAI to run a six-month public awareness campaign across radio, newspapers, TV, and online platforms. The campaign is meant to educate the public about generative AI, the data it collects, and how users can exercise their GDPR rights, such as rectifying their data or objecting to its use.

The size of the fine reflects OpenAI’s partial cooperation during the investigation, which the IDPA acknowledged in its final report. In addition, because OpenAI established a European headquarters in Ireland during the probe, the GDPR’s “one-stop shop” mechanism now applies, meaning Ireland’s Data Protection Commission will handle further compliance oversight.

The regulator noted that ChatGPT users and non-users alike need to know how to opt out of having their data used by generative AI and how to exercise their GDPR rights.

In April, OpenAI invited top executives from Fortune 500 companies to events in San Francisco, New York, and London to pitch its AI services. The company showcased an enterprise version of its chatbot, designed to meet the specific needs of different industries.

OpenAI Promised To Keep Business Clients’ Data Safe

The company assured business clients that their data would remain secure and would not be used to train its models. The enterprise focus is intended to increase revenue and bring its services to new markets.

Last week, OpenAI announced it was testing new reasoning AI models, o3 and o3 mini, as it competes with rivals such as Google to build more capable models that can handle hard problems.

OpenAI Chief Executive Officer Sam Altman said the company plans to release o3 mini by the end of January, with the full o3 to follow. According to Altman, these more powerful language models could outperform their predecessors and attract more users and funding.

OpenAI, backed by Microsoft, rolled out its o1 models in September. These models are designed to spend more time reasoning through questions and solving difficult problems.

The company said in a blog post that the o1 models can reason through complex tasks in areas such as science, coding, and math. It added that the new o3 and o3 mini models, which are currently undergoing safety testing, will be more capable than the o1 models.

About Ali Raza

Ali is a professional journalist with experience in Web3 journalism and marketing. Ali holds a Master's degree in Finance and enjoys writing about cryptocurrencies and fintech. Ali’s work has been published on a number of leading cryptocurrency publications including Capital.com, CryptoSlate, Securities.io, Invezz.com, Business2Community, BeinCrypto, and more.