
Harness the power of AI through standards support

A community for UK businesses in the Innovate UK BridgeAI programme to connect and collaborate with BSI’s standards expertise, unlocking the full potential of AI in a responsible, ethical and trustworthy way.

Sign up
  • Community insight



    • Where to start with AI Assurance: A Q&A with Saket Mohan

      By @Tahira, AI project manager, BSI 
Having rigorous assurance in place for your AI deployments demonstrates to your stakeholders that your business can manage risks, operate safely, and achieve the potential AI can bring. Standards can guide you in implementing this secure foundation, but for many SMEs it can be hard to know where to start or which standards to use.
Our upcoming webinar on AI Assurance aims to help with this and to answer the questions SMEs may have. One of the panellists joining the event is Saket Mohan, an Innovate UK BridgeAI grant winner and founder of Secure Elements.
      Secure Elements is an SME which focuses on cybersecurity engineering in AI and provides automotive cybersecurity and safety analysis.
      In this blog we speak to Saket to get a better understanding of the AI assurance and standards journey, including the key questions an SME should ask when seeking an AI assurance provider. 
      What should SMEs and start-ups focus on when starting out on their AI journeys, particularly in the context of cybersecurity?
      The first thing to do is to establish a set of guidelines and process documents which include considerations for standards, local legislation, and regulations. You can usually find these easily in the public domain.
      Understanding and applying these guidelines is the best thing an organisation can do to show market compliance and become operational quickly. The next step is identifying or collecting high-quality data. You will then need to apply the established standards and regulations to this data. From here you can select the appropriate type of machine learning and guidance to train your model.
      Once your model generates the desired outputs, it should be rigorously stress-tested. Regarding cybersecurity, it’s vital to ensure the legitimacy and appropriateness of the data for training purposes and address any sensitive or critical information that could lead to reputational damage or legal breaches.
      What tools and resources are you using that have helped you?
For testing and monitoring performance, we’ve used the Department for Science, Innovation and Technology’s (DSIT) open-source AI testing framework. It provides a performance score, so we can evaluate our model’s effectiveness.
      We use a GDPR assessment to ensure the quality of our data. Good testing of any system considers various frameworks, such as the AI Risk Management Framework from NIST, especially when dealing with client data.
We also utilize OWASP standards for software bills of materials using CycloneDX, ensuring secure software exchange within the supply chain.
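To make the SBOM point concrete, here is a minimal sketch of a software bill of materials in the CycloneDX JSON format. The component listed is a made-up placeholder for illustration, not one of Secure Elements’ actual dependencies; real SBOMs are normally generated by build tooling rather than written by hand.

```python
import json

# A minimal SBOM following the CycloneDX JSON format (spec version 1.5).
# The single component below is a hypothetical example dependency.
sbom = {
    "bomFormat": "CycloneDX",
    "specVersion": "1.5",
    "version": 1,
    "components": [
        {
            "type": "library",
            "name": "example-crypto-lib",  # placeholder, not a real dependency
            "version": "2.1.0",
            "purl": "pkg:pypi/example-crypto-lib@2.1.0",
        }
    ],
}

# Serialize for exchange with suppliers or customers in the supply chain.
print(json.dumps(sbom, indent=2))
```

Exchanging a machine-readable document like this lets each party in the supply chain check components against known vulnerabilities automatically.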
      What standards does your organization use or plan to use for developing AI solutions? 
      Secure Elements is a cybersecurity company, so we put standards and best practices at the forefront of what we do. All our tools and software are developed using standards and codes.
      A few examples of this are:
• ISO 21434, a cybersecurity standard for road vehicles;
• ISO 24747, on systems and software engineering; and
• ISO 32675 (information technology: DevOps), which we are planning to adopt.
We also adhere to UN regulations R155 (cyber security and cyber security management systems) and R156 (software update and software update management systems). As we start incorporating AI algorithms into our software models, we are applying ISO 42001 too.
      How have adopting these standards benefitted you?
These standards are adopted across 58 countries, so our products will always align with industry mandates. We see compliance as important for selling into the supply chain, as non-compliance could prevent us from entering the market.
      A key reason we adopt these standards and regulations, even if not locally mandated, is because they represent a common approach. Regulations are often based on standards and adhering to standards prepares us for future regulation and safeguards our business model. 
      This strategy means we’re always up to date and can have business continuity despite the introduction of new regulations and controls.
      Are you considering certification or assurance in cybersecurity, and why?
      Yes, we are pursuing ISO 27001 certification and aiming for a cybersecurity certificate of compliance from the National Cyber Security Centre (NCSC). We’re relying on these certifications to protect our business and show our clients that we are a trusted supplier. 
Ultimately, our goal is to enter the market responsibly and with good cybersecurity practices. Because of that, we use standards as often as possible and expect to adopt more as our responsibilities grow.
      From a data and cybersecurity perspective, what do you look for when seeking out new companies to do business with, especially regarding the development and deployment of AI?
We prioritize the quality of data and training methods. Businesses should be asking questions about the model or method through which the AI has been trained. At Secure Elements, we ask for a verification and validation matrix from suppliers to ensure the security of the models we procure or sell.
      What kind of questions would you ask a prospective vendor or AI solution provider from a data and cybersecurity perspective?
      Our perfect list of questions looks something like this: 
• Can you demonstrate your processes and governance models for handling cybersecurity and AI data?
• What applicable standards, regulations, and best practices do you consider when developing?
• Do you incorporate feedback from customers and clients to improve your models?
• Do you have a team dedicated to assessing model efficacy and safety?
• Can you explain your model and provide model explainability?
Do you have questions you’d like to ask about AI assurance and cybersecurity? Join us at our upcoming AI Assurance & Cybersecurity webinar on Thursday, 25 July, where Saket and others on our expert panel will share further insights and answer your questions.
      Sign up for the webinar

    • Building an AI assurance ecosystem: tools and resources for SMEs

      In our upcoming webinar on building AI assurance, we have guest speaker Nuala from the Responsible Technology Adoption (RTA) unit delving into the tools and guidance available for SMEs to build AI assurance. Nuala shared some thoughts in advance of the webinar: 
      AI is transforming the way we work and live, with rapid developments in its capabilities creating exciting opportunities to support public services and improve lives. However, as AI becomes increasingly embedded across the economy, identifying, mitigating, and governing any potential risks will be key to developing and deploying trustworthy and responsible systems, giving organisations the confidence to use AI and drive future adoption. 
Appropriate governance measures are necessary if we are to maximize the benefits of these technologies while mitigating potential risks and harms.
      Building an AI assurance ecosystem
      Since 2021, the UK Government has been working to drive the development of a flourishing AI Assurance ecosystem, to build justified trust in AI systems. 
Assurance is the process of measuring, evaluating, and communicating about a system or process; in the case of AI, assurance measures, evaluates, and communicates whether AI systems are trustworthy.
      There are several different techniques for assuring AI systems that range from qualitative techniques, including impact assessments and evaluations, to more formal, quantitative techniques, including bias audits, performance testing, and formal verification.
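As a concrete illustration of the quantitative end of that spectrum, here is a minimal sketch of one common bias-audit metric, the demographic parity difference. The function, group labels, and decision data below are invented for illustration and are not part of any DSIT or RTA tool; a real audit would run over the system’s actual decisions.

```python
# Minimal sketch of a quantitative assurance technique: a bias audit
# measuring the demographic parity difference, i.e. the gap in
# positive-outcome rates between demographic groups.

def demographic_parity_difference(outcomes, groups):
    """Gap between the highest and lowest positive-outcome rate per group."""
    rates = {}
    for g in set(groups):
        group_outcomes = [o for o, grp in zip(outcomes, groups) if grp == g]
        rates[g] = sum(group_outcomes) / len(group_outcomes)
    return max(rates.values()) - min(rates.values())

# Hypothetical screening decisions (1 = positive outcome) for two groups.
outcomes = [1, 0, 1, 1, 0, 1, 0, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]

gap = demographic_parity_difference(outcomes, groups)
print(f"Demographic parity difference: {gap:.2f}")  # → 0.50
```

A gap of zero would mean both groups receive positive outcomes at the same rate; larger values flag a disparity that an assurance process would then investigate.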

      These assurance techniques are then underpinned by industry-led consensus-based technical standards, developed by Standards Development Organizations (SDOs). These standards help to create a shared set of expectations, or baseline, to ensure coherence and consistency across AI assurance service providers.
The Responsible Technology Adoption Unit (RTA): helping SMEs to implement tools for trustworthy AI
Over the last two years, the Responsible Technology Adoption Unit (RTA), a directorate of the UK Government’s Department for Science, Innovation and Technology (DSIT), has been developing a suite of tools and guidance to help drive demand for, and grow the supply of, AI assurance products and services in the UK.
      Introduction to AI Assurance
      In February 2024, we published the Introduction to AI Assurance, which aims to help organisations better understand how AI assurance can be implemented.
      The guide is designed to be accessible to a range of users, such as developers and product managers, who may not engage with assurance on a day-to-day basis. It introduces users to core assurance definitions and concepts and outlines how these can be applied to support the development and use of trustworthy AI. This guide aims to provide an accessible introduction to both assurance mechanisms and global technical standards, to help industry and regulators better understand how to build and deploy responsible AI systems.
      Portfolio of AI Assurance Techniques
      Alongside the Introduction to AI assurance, RTA has also been developing resources to help start-ups and SMEs better understand AI Assurance and how it can be applied practically, across a range of different sectors and use cases. 
In June 2023, in collaboration with techUK, RTA launched the Portfolio of AI Assurance Techniques. The Portfolio features real-world case studies of AI assurance mechanisms being applied by organizations across a range of sectors. It is designed to help organizations identify relevant assurance techniques and standards for their context of use. The Portfolio is a living resource of over 60 case studies that is regularly updated to ensure the case studies reflect current good practice.
      Industry guidance: Responsible AI in HR and recruitment
      The RTA has also been developing resources to help organisations implement assurance good practices in particular sectors and contexts of use. In March 2024, we published our updated guidance on Responsible AI in Recruitment. This guidance focuses on assurance good practice for the procurement and deployment of AI systems in HR and recruitment, with a specific focus on technologies used in the hiring process (e.g., sourcing, screening, interview and selection). It identifies key questions, considerations, and assurance mechanisms that may be used to ensure the safe and trustworthy use of AI in this domain.
      Next steps
      The Responsible Tech Adoption Unit will continue to develop a suite of tools and guidance to help start-ups and SMEs better understand and engage with assurance mechanisms and standards.
      If you’d like to learn more about our work or feed into the development of future products to ensure these meet your organization's needs, please get in touch at ai-assurance@dsit.gov.uk.
Nuala will be among the expert speakers at our upcoming webinar, AI Assurance: insights and practices for SMEs. Join us on Thursday, 25 July to hear more from her and the other speakers, and to ask any questions you may have about AI assurance.
       
  • Sign up for the BridgeAI Standards Community

The Innovate UK BridgeAI programme empowers SMEs in the high-potential sectors of Agriculture/Agrifood, Construction, Creative, and Transportation to bridge the gap to successful AI adoption, unlocking the potential for greater growth and productivity.

    The BridgeAI Standards Community supports SMEs in the programme to collaborate and learn together how to harness the power of AI in a safe and trustworthy way through the use of standards. Membership in the community is free.
