About security
Welcome to the Security standards foundational group. This group is for people whose focus is on building a trusted, secure environment in which AI or ML systems can operate, such as:
- Building effective defences against attacks
- Demonstrating that your systems comply with UK laws and regulations, such as GDPR
- Building and retaining strong trust with your customers and investors
Key Resources in Security
ISO/IEC 42001 is a key standard for this foundational group: it provides guidance for establishing, implementing, maintaining and continually improving an AI management system within an organization.
Search for more security foundational standards in the Standards section.
BSI has conducted limited research on other standards produced by industry and proposes that the following may be of interest. We welcome your feedback on this selection and would like to know what else you may be using.
- NCSC Guidelines for Secure AI System Development
- OWASP Machine Learning Top 10
- NIST - Adversarial Machine Learning: A Taxonomy and Terminology of Attacks and Mitigations
Take part in the Security foundational group
This group is open to all. To see all of the resources in this space and participate in security-related discussions, click or tap the Join Group button above.
You will then also be able to see discussions and content from this area in Discussions, Blogs, Resources, and other areas on the main toolbar.
We are keen to understand what standards, best practices, and other guidance texts you currently use when developing and deploying AI systems, as well as any gaps in guidance you have found. Share your thoughts and experiences in the Security discussions space.