NIXsolutions: OpenAI Lobbied EU for Softer Approach to AI Regulation

OpenAI, the creator of the chatbot ChatGPT, has been lobbying the European Union over its approach to regulating generative AI, Time magazine reports, citing European Commission documents.

OpenAI’s focus is on how the proposed legislation would affect the use of artificial intelligence. The company approached European legislators with proposed amendments to the draft AI Act. Specifically, OpenAI proposed excluding general-purpose AI systems (such as ChatGPT and DALL-E) from the “high-risk” category, which would exempt them from the law’s strictest safety and transparency obligations. In a white paper sent to EU Commission and Council officials in September 2022, OpenAI stated: “GPT-3 itself is not a high-risk system, but has capabilities that could potentially be exploited in high-risk use cases.”


In June 2022, OpenAI representatives met with European Commission officials to discuss the risk categories proposed in the draft AI Act. According to the minutes of the meeting, OpenAI’s representatives expressed concern both about classifying general-purpose AI systems as high-risk and about proposals to add further systems to that category. A European Commission source confirmed that at the meeting OpenAI voiced the concern that excessive regulation could stifle innovation in artificial intelligence.

OpenAI’s lobbying efforts appear to have been partially successful. In the draft of the EU AI Act approved on June 14, general-purpose systems such as ChatGPT are not classified as high-risk. However, the law imposes stricter transparency requirements on “foundation models” – powerful AI systems such as GPT-3 that can be adapted to a variety of tasks. Companies will be required to conduct risk assessments and disclose whether copyrighted material was used in training their AI models.

An OpenAI spokesperson confirmed that the company supports treating “foundation models” as a separate category in the AI Act, even though OpenAI would prefer not to disclose the data sources used to train its models, fearing possible copyright infringement lawsuits, notes NIXsolutions.

The EU AI Act is still some way from coming into force. It is expected to be approved at the end of this year, and full implementation could take up to two years.