In early 2023, a hacker gained access to OpenAI’s internal messaging systems and stole information about the company’s artificial intelligence (AI) technologies, according to The New York Times. While the company reported the incident to its employees, it did not notify the public or law enforcement. The hacker obtained details from an internal employee forum where staff discussed the company’s latest technologies, but was unable to penetrate the systems where OpenAI’s GPT models are built and stored.
Two sources who provided this information to the NYT said that some OpenAI employees are concerned that such attacks could be used by countries like China to steal AI technology, potentially jeopardizing US national security. After learning about the incident, some employees questioned how seriously the company takes security, exposing internal disagreements about the risks associated with AI.
Internal Fallout and Security Concerns
In response to the incident, Leopold Aschenbrenner, then a technical program manager at OpenAI, wrote a memo to the company’s board of directors arguing that the company was not doing enough to prevent foreign adversaries from stealing its secrets. Aschenbrenner later mentioned the security breach on a podcast and was subsequently fired from OpenAI, a dismissal he claims was politically motivated. The discovery of the hack and the ensuing discord among employees add to the growing list of problems at the company. We’ll keep you updated on further developments.
More recently, several AI safety researchers left OpenAI over disagreements about superalignment, the company’s effort to develop methods for controlling super-intelligent AI. According to prominent figures in the field, such as Anthropic co-founder Daniela Amodei, the theft of today’s generative AI designs would not pose a serious threat to national security. However, as the technology becomes more capable, that assessment may change, notes NIX Solutions.
The incident highlights the challenges OpenAI faces in securing its technology and maintaining employee trust. How the company addresses these issues will be crucial to its long-term success and to the safe development of AI.