NIXsolutions: OpenAI Forms Preparedness Unit to Address AI Risks

OpenAI, the company responsible for ChatGPT, has established a new division named “Preparedness” to evaluate and research artificial intelligence models in the context of potential “catastrophic risks.”


Unit Leadership and Mission

Aleksander Madry, director of the Center for Deployable Machine Learning at the Massachusetts Institute of Technology, leads the Preparedness team. Its primary mission is to monitor, predict, and safeguard against the risks arising from emerging AI systems. These risks span a wide range of concerns, from AI’s persuasive capabilities, as seen in phishing emails, to the generation of malicious code. Notably, some potential threats, such as those in the chemical, biological, radiological, and nuclear domains, have not yet materialized in practice.

OpenAI’s Concerns and Public Involvement

OpenAI’s CEO, Sam Altman, has consistently expressed concern that AI could pose existential risks to humanity, and the establishment of the Preparedness division reflects this stance. The company is also committed to investigating “less obvious” areas of AI risk, as stated by Mr. Altman. OpenAI invites individuals to share their concerns: the ten most outstanding submissions can receive a $25,000 prize or the chance to work at OpenAI. Participants are challenged to envision how OpenAI’s advanced models could be misused in unique but potentially catastrophic ways.

Preparedness’ Responsibilities and Goals

The Preparedness team’s central task is to formulate a “threat-informed development policy.” This policy will outline OpenAI’s approach to building AI model assessment and monitoring tools, its efforts to mitigate threats, and a governance structure for overseeing model development. The team’s focus extends to both the pre- and post-deployment phases of AI models.

Looking Ahead

OpenAI’s leadership, including Sam Altman and Ilya Sutskever, the company’s Chief Scientist and Co-Founder, anticipates the emergence of AI with capabilities surpassing human intelligence within the next decade, notes NIXsolutions. They acknowledge the uncertainty over whether such AI will be benevolent and therefore emphasize the importance of studying methods to limit its potential risks.