Verifying differentially private Bayesian inference

Marco Gaboardi

School of Computing
University of Dundee

Thursday, 3 December 2015, 14:00
Cybernetica Bldg (Akadeemia tee 21), room B101


Slides from the talk [pdf]

Abstract: Differential privacy is a promising approach to the privacy-preserving release of data: it offers a strong, guaranteed bound on the increase in harm that a user incurs as a result of participating in a differentially private data analysis, even under worst-case assumptions. A standard way to ensure differential privacy is to add statistical noise to the result of a data analysis, and several mechanisms have been designed that ensure differential privacy while at the same time achieving good accuracy.
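
The abstract does not single out a particular noise-adding mechanism; as a minimal illustration only, the following Python sketch shows the standard Laplace mechanism, where the function name and parameters are illustrative rather than taken from the talk:

    import numpy as np

    def laplace_mechanism(true_answer: float, sensitivity: float, epsilon: float) -> float:
        """Release a numeric query answer with epsilon-differential privacy
        by adding Laplace noise with scale sensitivity/epsilon."""
        noise = np.random.laplace(loc=0.0, scale=sensitivity / epsilon)
        return true_answer + noise

    # Example: privately release a counting query whose sensitivity is 1
    # (adding or removing one individual changes the count by at most 1).
    private_count = laplace_mechanism(true_answer=42.0, sensitivity=1.0, epsilon=0.1)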

Most of the tools designed so far for verifying differential privacy are successful in proving programs differentially private when they use standard mechanisms. However, they offer limited support for reasoning about differential privacy in the presence of explicit conditional distributions, as in the case of Bayesian inference.

To overcome this limitation, we propose a system in which we can reason about both differential privacy and Bayesian inference. In this system, Bayesian inference is expressed in terms of observations and an inference engine, while differential privacy can be proved using a type system based on the notion of relational higher-order refinement types. Verified programs can then be executed using the Infer.NET framework.
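
For readers unfamiliar with inference driven by observations, the Python sketch below shows a conjugate Beta-Bernoulli posterior update; it only illustrates conditioning on observed data and does not model the relational refinement-type system or the Infer.NET backend described in the talk:

    # Minimal sketch (not the system from the talk): conditioning a Beta
    # prior on 0/1 observations, an explicit conditional distribution of
    # the kind the abstract refers to.
    def beta_bernoulli_posterior(observations, prior_alpha=1.0, prior_beta=1.0):
        """Return the (alpha, beta) parameters of the Beta posterior over a
        coin's bias after observing a list of 0/1 outcomes."""
        successes = sum(observations)
        failures = len(observations) - successes
        return prior_alpha + successes, prior_beta + failures

    # Observing four heads and two tails updates the prior Beta(1, 1)
    # to the posterior Beta(5, 3).
    alpha, beta = beta_bernoulli_posterior([1, 1, 0, 1, 0, 1])
    print(alpha, beta)  # 5.0 3.0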


Tarmo Uustalu
Last update 3 December 2015