Meta CEO Mark Zuckerberg previously promised to make artificial general intelligence (AGI), or human-level AI, widely available. However, a recent company policy document outlines scenarios where Meta might halt the release of advanced AI systems.
AI Systems Considered Too Risky
In the Frontier AI Framework, Meta identifies two categories of AI systems it deems too dangerous to release: “high-risk” and “critical-risk.” These systems could help attackers bypass cybersecurity defenses or even conduct chemical and biological attacks. The key distinction is that “critical-risk” systems could cause a catastrophic outcome that cannot be mitigated, whereas “high-risk” systems would make such attacks easier but less reliably so.
Meta provides examples of such threats, including the complete automated compromise of an enterprise-scale environment and the proliferation of a highly effective biological weapon. While the list is not exhaustive, it highlights the most urgent risks that could arise from powerful AI. Notably, Meta classifies a system’s risk level based on input from internal and external researchers rather than on any single empirical test. The company also notes that the science of evaluation is not yet robust enough to provide definitive quantitative metrics for assessing AI threats.
Meta’s Approach to AI Security
If an AI system is classified as “high-risk,” Meta will restrict access within the company and delay its release until mitigation measures bring the risk down to a moderate level. If a system is deemed “critical-risk,” Meta will implement unspecified security measures and halt development until the system can be made safer. The company plans to update its policy as the AI industry evolves, and it is releasing the document ahead of this month’s France AI Action Summit, adds NIXSolutions.
Unlike other American tech giants, Meta openly releases its Llama AI models. This document may also serve as a way to differentiate itself from China’s DeepSeek, which likewise offers open AI models but has yet to address comparable security concerns.
We’ll keep you updated on any further developments in Meta’s AI policy.