I'm also an affiliate at the Oxford Martin AI Governance Initiative, and, until recently, was a research scholar at the Centre for the Governance of AI (GovAI) and a technical advisor at the UK's AI Security Institute.
In previous lives I've studied mathematics at Durham University and computational science at Uppsala University; worked as an actor in northern Finland; completed the Prague Marathon in 3:57; solo backpacked from the Baltics to the Balkans; and earned a diploma (DipABRSM) on the violin.
I'm excited to be co-principal organiser for the first ICML workshop on Technical AI Governance (TAIG)!
Publications
Preprints
- Ben Bucknall, Robert F. Trager, and Michael A. Osborne
  Position: Ensuring mutual privacy is necessary for effective external evaluation of proprietary AI systems
  Under review at ICML, 2025
- Fazl Barez et al.
  Open Problems in Machine Unlearning for AI Safety
  Under review at ICML, 2025
Peer-Reviewed Publications
- Ben Bucknall*, Saad Siddiqui* et al.
  In Which Areas of Technical AI Safety Could Geopolitical Rivals Cooperate?
  FAccT, 2025
- Anka Reuel*, Ben Bucknall* et al.
  Open Problems in Technical AI Governance
  TMLR, 2025
- Edward Kembery, Ben Bucknall, and Morgan Simpson
  Position Paper: Model Access should be a Key Concern in AI Governance
  Socially Responsible Language Modelling Research Workshop at NeurIPS, 2024
- Anka Reuel, Lisa Soder, Benjamin Bucknall, and Trond Arne Undheim
  Position: Technical Research and Talent is Needed for Effective AI Governance
  ICML, 2024
  Oral Presentation (Top 5% of accepted papers)
- Stephen Casper, Carson Ezell et al.
  Black-Box Access is Insufficient for Rigorous AI Audits
  FAccT, 2024
- Alan Chan*, Ben Bucknall*, Herbie Bradley, and David Krueger
  Hazards from Increasingly Accessible Fine-Tuning of Downloadable Models
  Socially Responsible Language Modelling Research Workshop at NeurIPS, 2023
- Markus Anderljung et al.
  Towards Publicly Accountable Frontier LLMs: Building an external scrutiny ecosystem under the ASPIRE framework
  Socially Responsible Language Modelling Research Workshop at NeurIPS, 2023
- Benjamin S. Bucknall and Shiri Dori-Hacohen
  Current and Near-Term AI as a Potential Existential Risk Factor
  AIES, 2022
Whitepapers & Technical Reports
- Marie Davidsen Buhl, Ben Bucknall, and Tammy Masterson
  Emerging Practices in Frontier AI Safety Frameworks
  UK AI Security Institute, 2025
- Marta Ziosi et al.
  AISIs’ Roles in Domestic and International Governance
  Oxford Martin AI Governance Initiative, 2024
- Benjamin S. Bucknall and Robert F. Trager
  Structured Access for Third-Party Research on Frontier AI Models: Investigating researchers' model access requirements
  Oxford Martin AI Governance Initiative, 2023
- Elizabeth Seger, Noemi Dreksler, Richard Moulange, Emily Dardaman, Jonas Schuett, K. Wei et al.
  Open-Sourcing Highly Capable Foundation Models: An evaluation of risks, benefits, and alternative methods for pursuing open-source objectives
  Centre for the Governance of AI, 2023