At a US Senate hearing, OpenAI CEO Sam Altman proposed an agency to oversee AI models that exceed “a certain level of capability.” He stressed the need to issue licenses for advanced developments and to revoke them if established rules are violated.
Concerns about uncontrollable consequences
Altman’s proposal came in response to senators’ concerns about the tech industry’s rush to deploy AI, since rapid progress in development can lead to unpredictable consequences. Agreeing with the senators, Altman proposed creating an agency similar to the Nuclear Regulatory Commission, which issues licenses for nuclear power plants and strictly oversees their operation.
Prevent loss of control
Altman confirmed fears that artificial intelligence could indeed get out of control. He stated, “I think if something goes wrong with this technology, it could go very wrong. We want to speak up about it and work with the government to make sure that doesn’t happen.”
Growing anxiety and challenges
The AI language model from OpenAI that powers the ChatGPT chatbot has attracted considerable attention and become one of the most successful developments in the field of artificial intelligence, notes NIXsolutions. However, the initial excitement quickly gave way to apprehension: industry leaders and celebrities have expressed concern and signed an open letter calling for a temporary halt to AI development until appropriate regulations are in place.
The OpenAI CEO’s proposal to create an AI oversight agency and introduce a licensing system is thus in the spotlight, as part of broader efforts to prevent loss of control over emerging AI technology.