"The potential of AI, especially generative AI, is immense. However, in the pursuit of progress within these new frontiers of innovation, there needs to be clear industry security standards for building and deploying this technology in a responsible manner," said Google in a blog post announcing its Secure AI Framework (SAIF) yesterday. The "conceptual framework to help collaboratively secure AI technology" aims to help public and private organisations deploy secure-by-default AI models. "SAIF is designed to help mitigate risks specific to AI systems like stealing the model, data poisoning of the training data, injecting malicious inputs through prompt injection, and extracting confidential information in the training data," the blog post added.

Why it matters: AI is taking the world by storm, and tech's top brass is already calling for introspection over its 'doomsday' deployment. As companies continue to push the innovation envelope in the absence of AI laws, guiding principles, whether from governments or the private sector, can provide a benchmark for trust and safety processes.

The six core elements of Google's SAIF:

- Strengthening the AI ecosystem's security foundations: This includes leveraging "secure-by-default infrastructure" protections to secure AI systems, users, and applications. Organisational expertise needs to be developed to keep up with advancements in AI and adapt infrastructure protections accordingly.

- Bringing AI into an organisation's "threat universe": Organisations should swiftly detect and respond to AI cyber-incidents by extending threat intelligence to them. This involves detecting anomalies by monitoring generative AI systems' inputs and outputs, and anticipating attacks using threat intelligence. This requires collaboration with an organisation's counter-abuse,…
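The anomaly-detection element described above, monitoring a generative system's inputs and outputs, can be sketched as a simple pattern-based filter. This is a minimal illustration with hypothetical patterns and function names, not anything from Google's framework; real deployments would rely on trained classifiers and threat-intelligence feeds rather than a static regex list.

```python
import re

# Illustrative patterns only; an actual monitor would use threat intelligence,
# not a hard-coded list.
INJECTION_PATTERNS = [
    r"ignore (all )?previous instructions",
    r"disregard (the )?system prompt",
    r"reveal (your )?(hidden|system) prompt",
]

def flag_prompt(text: str) -> bool:
    """Return True if the input matches a known prompt-injection pattern."""
    lowered = text.lower()
    return any(re.search(p, lowered) for p in INJECTION_PATTERNS)

def monitor(prompt: str, response: str) -> dict:
    """Produce a log-style record combining input and output checks.

    The output check here is a placeholder: it looks for a hypothetical
    marker that would indicate the system prompt leaked into the response.
    """
    return {
        "prompt_flagged": flag_prompt(prompt),
        "response_leaks_marker": "BEGIN SYSTEM PROMPT" in response,
    }
```

In practice such a monitor would feed flagged records into the same incident-response pipeline an organisation already uses for other cyber-threats, which is the point of SAIF's "threat universe" element.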
