We are an ambitious research group at Imperial College London studying the privacy and safety risks arising from Artificial Intelligence (AI) systems and mechanisms for using and sharing large scale datasets.
As AI systems become increasingly powerful, their integration into everyday life will bring substantial benefits -
but also raise pressing concerns around privacy, safety, and beyond. The Computational Privacy Group aims to
provide leadership, in the UK and internationally, in the privacy-preserving, safe, and ethical use of AI
systems, ranging from synthetic data and LLMs to agentic systems. Similarly, we study the responsible use and
sharing of large-scale datasets, derived from Internet of Things (IoT) devices, mobile phones, credit cards.
We primarily consider an adversarial perspective to identify and quantify vulnerabilities, which we believe to be a critical foundation
for developing safe, secure and privacy-preserving systems. Our research has studied the limits of anonymization, demonstrated how machine
learning models can leak sensitive data, and identified safety vulnerabilities in AI systems.
While technical in nature, our work has had significant public policy implications, informing for instance the International AI Safety Report,
reports of the United Nations, FTC, the European Commission as well as in briefs to the U.S. Supreme Court.
Personal data is increasingly used to train AI models, power intelligent agents, and generate synthetic data,
as well as to enable privacy-preserving mechanisms for statistical releases.
As these systems are deployed across sensitive domains like healthcare and finance, understanding and mitigating
the privacy risks they pose become critical.
We have studied membership inference attacks (MIAs) against synthetic data [1,2,3],
image classifiers [4],
and LLMs [5,6],
as well as employed evolutionary search to uncover weaknesses in query-based systems [7,8].
More recently, we have also studied more fundamental memorization in ML models, quantifying how
individual training examples affect memorization of other samples [9],
and have developed more effective ways to quantify and estimate privacy leakage [4].
We take an adversarial approach to explore the many safety and security challenges that come with AI systems. Our security research covers multiple angles of AI system
vulnerabilities, with an eye toward future risks as AI capabilities expand.
To audit safety risks of AI systems, we have used strong adversaries to develop and analyse threats
such as jailbreaks and prompt injection attacks [10].
We also studied the safety in perceptual hashing algorithms for client-side scanning [11,12]
and examined how LLMs memorize adversarially crafted training data [13].
A major focus is the emerging security challenges of agentic AI systems. We investigate new attack vectors and scenarios for misuse introduced by developments
like the Model Context Protocol (MCP), which enables AI assistants to connect with external data sources and tools. We are particularly interested in
how these risks evolve as agents become more capable and deploy at scale.
Our research is grounded in the belief that technical advances in AI and data science must be matched by careful consideration of their societal implications.
We hence regularly investigate how modern technologies influence broader systems of accountability, fairness, and trust.
For example, we have studied the potential for collusion in algorithmic markets [14] and examined how copyright traps can be used to detect
unauthorized use of protected content in AI training [13]. By exploring these intersections of technology and society, we aim to inform responsible
development and contribute to ongoing policy and regulatory discussions.
We were lucky enough to win the Best paper Award at SaTML 2025 with our Systemization of Knowledge (SoK) on Membership Inference Attacks (MIAs) against LLMs.
Yves-Alexandre de Montjoye will be presenting at the ELSA Workshop on Privacy-Preserving Machine Learning taking place on 17-21 March, 2025. The workshop brings together researchers and practitioners to discuss recent developments in privacy-preserving machine learning techniques …
Yves-Alexandre de Montjoye was invited to give a talk at the Dagstuhl Seminar "PETs and AI: Privacy Washing and the Need for a PETs Evaluation Framework."
More News
Email:X@Y
where X=demontjoye
, Y=imperial.ac.uk
.
Administrator (if urgent): Amandeep Bahia, +44 20 7594 8612
We are located at the Data Science Institute in the William
Penney Laboratory. The best entry point is via Exhibition road, through the Business school (see map
below). From there, just take the stairs towards the outdoor court. Enter the outdoor corridor after the
court and the institute will be on your right (please press the Data Science intercom button for access).
Please address mails to:
Department of Computing
Imperial College London
180 Queens Gate
London SW7 2AZ