Expert Q&A: How the UK Government is supporting the AI assurance market
15 December 2024
The Department for Science, Innovation and Technology (DSIT) recently published a Research and Analysis paper setting out the UK Government’s approach to proactively growing and strengthening the AI assurance market in the UK.
DSIT has collated a significant amount of primary source information on the AI assurance sector through a variety of means, ranging from reviewing recent survey results to identifying current and future opportunities. Alongside this, it has designed a toolset and roadmap to mitigate AI risk and support the adoption of safe AI.
We caught up with Gordon Baggott, 4most’s Director of AI, to learn more about how this paper impacts organisations in the UK looking to adopt AI-powered solutions.
Q. What type of organisation does this impact?
Firms within financial services looking to adopt AI or accelerate AI-driven productivity, specialised companies offering AI assurance, and regulatory bodies alike should take note of DSIT’s direction as outlined in the paper, and make themselves aware of the different initiatives available to support their AI assurance objectives.
Q. What does the paper imply about the UK Government’s current and future position on AI?
It brings a welcome focus to the critical, but frequently under-explored, world of AI assurance.
From my point of view as an AI developer and subject-matter expert responsible for assessing and remediating safety issues with AI systems designed by others, this spotlight really says to me that the UK is aiming to get ahead of the curve by supporting those at the sharp end of AI adoption.
Q. How will DSIT practically help firms within financial services deliver their AI strategies?
AI-powered solutions, such as Large Language Models (LLMs), have the potential to significantly boost productivity; however, it is imperative that firms take the appropriate precautions when adopting these powerful tools to optimise cost efficiency and minimise disruption to core business functions and processes.
DSIT’s AI Assurance Platform, outlined in the paper, aims to support the mitigation of risks relating to AI robustness, safety, and security. However, the number of safety tools in DSIT’s still-embryonic artificial intelligence assurance techniques library can be daunting, with many hundreds more becoming apparent on wider searching.
For financial businesses of all sizes embedding AI, the new AI Assurance Platform will provide helpful guidance on the general approaches and considerations that are invaluable when assessing and remediating your own AI safety concerns. Care will be needed to align current compliance approaches with AI assurance, to avoid duplicating effort or missing key risks, but with the right technical and safety experts it can be done cost-effectively.
Q. What can the rest of the paper tell us about the developing AI regulatory landscape in the UK?
When it comes to risk management in financial institutions, there is a distinct lack of regulator guidance and of wider organisational knowledge and skills, both in AI generally and in assurance specifically. This has been demonstrated in recent Bank of England communications and is evident from our own interactions with clients. It is understandable given the rapid evolution of generative AI, but it must be swiftly remedied if organisations big and small are to grasp competitive advantage from the very practical, and often simple, implementations of AI that bring the most value.
By centralising resources grounded in robust guidance such as ISO standards, DSIT provides a very good platform for cross-sector AI safety and gives the financial regulators a solid base on which to build their own approaches to managing these risks.
Q. How does this publication support UK financial companies that have an international footprint?
When considering the international nature of our own organisation as well as that of our clients, I am really pleased to see DSIT’s focus on international interoperability. Whether you are developing fraud detection models, cybersecurity threat hunting systems, capital adequacy predictions or hyper-personalised customer journeys, being able to communicate using harmonised terminology, domestically and across the globe, remains a basic tenet of realising efficiency and cost savings from AI approaches.
AI assurance, like much of the wider AI and machine learning lexicon, has suffered from inconsistent terminology and erroneous labelling of concepts for many years, and it is high time that ceased. This is a great step in the right direction.
Q. What are your overall thoughts on the implications of this paper?
This publication makes me optimistic about the rapid adoption of safe AI. Concerns over AI risks are warranted but have been a huge decelerator in terms of AI adoption in the highly regulated world of financial services.
I am confident that our clients’ trust that safe AI is achievable in the short term will build rapidly, as our own experience of adopting safe AI has shown.
My eyes are now fixed on the financial regulators, both domestically and internationally, in the hope that they will take the baton from DSIT and proactively create AI assurance rules and guidance tailored to our critical sector.
How can 4most help?
4most works hard to be one of the UK’s c.500 firms supplying AI assurance products and services to organisations adopting AI, in what is growing to be a huge market.
If you would like support understanding this paper in more detail or want to learn more about how we can support the curation and implementation of your AI strategy, please get in touch – info@4-most.co.uk.