In our upcoming webinar on building AI assurance, we have guest speaker Nuala from the Responsible Technology Adoption (RTA) unit delving into the tools and guidance available for SMEs to build AI assurance. Nuala shared some thoughts in advance of the webinar:
AI is transforming the way we work and live, with rapid developments in its capabilities creating exciting opportunities to support public services and improve lives. However, as AI becomes increasingly embedded across the economy, identifying, mitigating, and governing any potential risks will be key to developing and deploying trustworthy and responsible systems, giving organisations the confidence to use AI and drive future adoption.
Appropriate governance measures are necessary if we are to maximise the benefits of these technologies while mitigating potential risks and harms.
Building an AI assurance ecosystem
Since 2021, the UK Government has been working to drive the development of a flourishing AI assurance ecosystem, to build justified trust in AI systems.
Assurance is the process of measuring, evaluating, and communicating about a system or process. In the case of AI, assurance measures, evaluates, and communicates whether AI systems are trustworthy.
There are several different techniques for assuring AI systems that range from qualitative techniques, including impact assessments and evaluations, to more formal, quantitative techniques, including bias audits, performance testing, and formal verification.
These assurance techniques are underpinned by industry-led, consensus-based technical standards, developed by Standards Development Organisations (SDOs). These standards help to create a shared set of expectations, or baseline, to ensure coherence and consistency across AI assurance service providers.
The Responsible Technology Adoption Unit (RTA): helping SMEs implement tools for trustworthy AI
Over the last two years, the Responsible Technology Adoption Unit (RTA), a directorate of the UK Government’s Department for Science, Innovation and Technology (DSIT), has been developing a suite of tools and guidance to help drive demand for, and grow the supply of, AI assurance products and services in the UK.
Introduction to AI Assurance
In February 2024, we published the Introduction to AI Assurance, which aims to help organisations better understand how AI assurance can be implemented.
The guide is designed to be accessible to a range of users, such as developers and product managers, who may not engage with assurance on a day-to-day basis. It introduces users to core assurance definitions and concepts and outlines how these can be applied to support the development and use of trustworthy AI. This guide aims to provide an accessible introduction to both assurance mechanisms and global technical standards, to help industry and regulators better understand how to build and deploy responsible AI systems.
Portfolio of AI Assurance Techniques
Alongside the Introduction to AI Assurance, the RTA has also been developing resources to help start-ups and SMEs better understand AI assurance and how it can be applied practically across a range of different sectors and use cases.
In June 2023, in collaboration with techUK, the RTA launched the Portfolio of AI Assurance Techniques. The Portfolio features real-world case studies of AI assurance mechanisms being applied by organisations across a range of sectors, and is designed to help organisations identify relevant assurance techniques and standards for their context of use. It is a living resource of over 60 case studies that is regularly updated to ensure the examples remain current and reflect good practice.
Industry guidance: Responsible AI in HR and recruitment
The RTA has also been developing resources to help organisations implement assurance good practices in particular sectors and contexts of use. In March 2024, we published our updated guidance on Responsible AI in Recruitment. This guidance focuses on assurance good practice for the procurement and deployment of AI systems in HR and recruitment, with a specific focus on technologies used in the hiring process (e.g., sourcing, screening, interview and selection). It identifies key questions, considerations, and assurance mechanisms that may be used to ensure the safe and trustworthy use of AI in this domain.
Next steps
The Responsible Tech Adoption Unit will continue to develop a suite of tools and guidance to help start-ups and SMEs better understand and engage with assurance mechanisms and standards.
If you’d like to learn more about our work or feed into the development of future products to ensure these meet your organisation's needs, please get in touch at ai-assurance@dsit.gov.uk.
Nuala will join other expert speakers at our upcoming webinar, AI Assurance: insights and practices for SMEs. Join us on Thursday, 25 July to hear more and ask any questions you may have about AI assurance.