The Alan Turing Institute, in coordination with the Ministry of Defence (MoD) and the Defence AI Centre, has published research, including a new workflow and system card, to support the assurance of safe and responsible AI in Defence. The study was produced in collaboration with Accenture and published on Monday.
The research represents the first examination of AI assurance from a UK Defence perspective. Drawing evidence from AI assurance success stories, the study aims to provide clarity on AI assurance and safety in defence contexts, addressing a gap in academic research, which predominantly focuses on non-defence AI use cases.
The Turing assembled an interdisciplinary defence-specialist team and convened UK and US government and military organisations, uncrewed systems engineers, testing and evaluation experts, international law experts, and AI safety researchers to undertake the research.
The report identifies specific bottlenecks slowing assured AI adoption for UK Defence, including accountability and liability issues, the absence of standardised performance metrics, and the lack of a single, clear list of guidelines for developers. Researchers also developed a commander's guide, using uncrewed systems as a use case, to illustrate the specific operational implications of computer vision errors.
The research provides a new workflow and system card enabling defence organisations to iteratively identify, document, and communicate risks affecting AI assurance and mission success. The study offers practical solutions for defence organisations to implement MoD AI assurance policy without halting operations.
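The report does not publish a machine-readable schema for its system card, but the idea of a living record that is iteratively updated with identified risks can be sketched in code. The following is a purely illustrative Python sketch; every class name, field, and example value here is hypothetical and does not come from the Turing/MoD research:

```python
from dataclasses import dataclass, field, asdict
import json


@dataclass
class Risk:
    """One identified risk affecting AI assurance or mission success."""
    description: str
    severity: str      # hypothetical scale, e.g. "low" / "medium" / "high"
    mitigation: str


@dataclass
class SystemCard:
    """Hypothetical system card: a living record of assurance evidence,
    appended to iteratively across the system's lifecycle."""
    system_name: str
    intended_use: str
    known_limitations: list[str] = field(default_factory=list)
    risks: list[Risk] = field(default_factory=list)

    def add_risk(self, description: str, severity: str, mitigation: str) -> None:
        # Iterative step: risks are documented as they are identified,
        # rather than only at a single sign-off point.
        self.risks.append(Risk(description, severity, mitigation))

    def to_json(self) -> str:
        # Serialise so the card can be communicated to reviewers
        # and partner organisations.
        return json.dumps(asdict(self), indent=2)


# Example use, echoing the report's uncrewed-systems / computer-vision case:
card = SystemCard(
    system_name="uncrewed-system-vision",
    intended_use="object detection for route survey",
)
card.add_risk(
    description="computer vision misclassification in low light",
    severity="high",
    mitigation="restrict operation to daylight; human review of detections",
)
print(card.to_json())
```

The design point is simply that a structured, append-only record makes the identify/document/communicate loop auditable, in the spirit the report describes, without dictating any particular format.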
The report recommends five strategic initiatives: testing the proposed AI assurance workflow and building a library of AI use cases; establishing long-term financial support for AI and data-centric capability enablers; implementing information sharing and peer reviewing processes; requiring contractual transparency from suppliers regarding system properties and training data; and building national AI testing and simulation facilities to address scaling challenges.
Anna Knack, Senior Research Associate and lead author, stated the workflow "documents the assurance case evidence for military AI systems, showing an auditable trail of developers' hard work to demonstrate compliance with the appropriate legal, regulatory and defence ethical standards."