Peter Spayne is a Chartered Engineer with academic qualifications at PhD, MSc and BEng level, specialising in autonomous systems, uncrewed underwater vehicles, artificial intelligence and lethal autonomous weapon systems.
His expertise was developed over a two-decade career in the Royal Navy, where he worked at the intersection of advanced technology, operations and safety in complex environments.
His consultancy focuses on the integration of artificial intelligence into complex military systems, including uncrewed platforms, weapon systems and AI-enabled decision support tools. His work spans system design, assurance, governance and operational integration, with particular emphasis on safety, trust and the credible deployment of autonomy in mission-critical contexts.
Peter is an active academic researcher and published author, with a body of influential work on AI safety, assurance and autonomous systems. He is the author of the Safe to Operate Itself Safely Framework, an assurance framework designed to inform the safe deployment of autonomous weapon systems.
Beyond research, Peter contributes widely across policy, industry and academia. He regularly guest lectures at universities and professional institutions, engages with think tanks and advisory bodies, and provides independent consultancy to government, defence organisations and industry.
He acts as an expert witness on artificial intelligence in weapon systems before select committees and contributes to international, ministerial-level discussions on the governance, assurance and responsible use of military AI.

