Google has announced the Secure AI Framework (SAIF), a conceptual framework for securing artificial intelligence systems. The company highlighted the need for such a framework in both the public and private sectors to protect AI technology and make models secure by default. SAIF is designed to mitigate risks specific to AI systems, such as model theft, poisoning of training data, and malicious attacks.
The core principles of Google’s secure AI framework include:
- Extending security fundamentals to the AI ecosystem: protecting infrastructure by default.
- Extending detection and response: monitoring the inputs and outputs of AI systems to detect anomalies (see the sketch after this list) and using threat intelligence to anticipate attacks.
- Automating defenses: improving the scale and speed of response to security incidents.
- Harmonizing controls at the platform level: ensuring consistent security across the organization.
- Adapting controls: creating faster feedback loops for AI deployment and applying reinforcement learning based on incidents and user feedback.
- Contextualizing AI system risks: conducting comprehensive risk assessments when introducing artificial intelligence into business processes.
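
The monitoring principle is the most directly actionable item on the list. As a rough illustration of the idea, the sketch below shows what a first-pass input/output check for a text-generation service might look like. Everything here is a hypothetical placeholder, not anything prescribed by SAIF: the baseline statistics, the z-score threshold, and the output patterns would all be tuned per deployment.

```python
# Minimal sketch of monitoring AI inputs and outputs for anomalies.
# All names, thresholds, and patterns below are illustrative assumptions,
# not part of Google's SAIF.
import re
from dataclasses import dataclass


@dataclass
class Baseline:
    mean_len: float  # mean prompt length observed during normal operation
    std_len: float   # standard deviation of prompt length


# Example patterns for sensitive data leaking into model output
# (private key headers, 16-digit numbers resembling card numbers).
SUSPICIOUS_OUTPUT = re.compile(r"BEGIN (RSA|OPENSSH) PRIVATE KEY|\b\d{16}\b")


def input_anomaly_score(prompt: str, baseline: Baseline) -> float:
    """Z-score of prompt length against the baseline distribution."""
    if baseline.std_len == 0:
        return 0.0
    return abs(len(prompt) - baseline.mean_len) / baseline.std_len


def check_interaction(prompt: str, output: str, baseline: Baseline,
                      z_threshold: float = 4.0) -> list[str]:
    """Return alerts for one prompt/response pair."""
    alerts = []
    if input_anomaly_score(prompt, baseline) > z_threshold:
        alerts.append("input anomaly: prompt length far outside baseline")
    if SUSPICIOUS_OUTPUT.search(output):
        alerts.append("output anomaly: response matches a sensitive pattern")
    return alerts


if __name__ == "__main__":
    baseline = Baseline(mean_len=120.0, std_len=40.0)
    result = check_interaction("What is SAIF?",
                               "SAIF is Google's framework.", baseline)
    print(result or "no anomalies detected")
```

In practice such checks would feed into the same detection-and-response pipeline the framework describes, so that flagged interactions are triaged alongside other security telemetry rather than in isolation.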
Google also promises to release open-source tools to help organizations put SAIF elements into practice and advance AI security, notes NIXsolutions. The company plans to expand its bug bounty programs and encourage research into AI security.