Differentially private Bayesian learning on distributed data

Permalink

http://hdl.handle.net/10138/293043

Citation

Heikkila, M., Lagerspetz, E., Kaski, S., Shimizu, K., Tarkoma, S. & Honkela, A. 2017, Differentially private Bayesian learning on distributed data. In I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan & R. Garnett (eds), Advances in Neural Information Processing Systems 30 (NIPS 2017), Advances in Neural Information Processing Systems, vol. 30, NEURAL INFORMATION PROCESSING SYSTEMS (NIPS), Annual Conference on Neural Information Processing Systems, Long Beach, United States, 04/12/2017.

Title: Differentially private Bayesian learning on distributed data
Author: Heikkila, Mikko; Lagerspetz, Eemil; Kaski, Samuel; Shimizu, Kana; Tarkoma, Sasu; Honkela, Antti
Other contributor: University of Helsinki, Department of Mathematics and Statistics
University of Helsinki, Department of Computer Science
University of Helsinki, Aalto University
University of Helsinki, Content-Centric Structures and Networking research group / Sasu Tarkoma
University of Helsinki, Department of Mathematics and Statistics
Guyon, I.
Luxburg, U.V.
Bengio, S.
Wallach, H.
Fergus, R.
Vishwanathan, S.
Garnett, R.

Publisher: NEURAL INFORMATION PROCESSING SYSTEMS (NIPS)
Date: 2017
Language: eng
Number of pages: 10
Belongs to series: Advances in Neural Information Processing Systems 30 (NIPS 2017)
Belongs to series: Advances in Neural Information Processing Systems
URI: http://hdl.handle.net/10138/293043
Abstract: Many applications of machine learning, for example in health care, would benefit from methods that can guarantee privacy of data subjects. Differential privacy (DP) has become established as a standard for protecting learning results. Standard DP algorithms either require a single trusted party to have access to the entire dataset, which is a clear weakness, or add prohibitive amounts of noise. We consider DP Bayesian learning in a distributed setting, where each party only holds a single sample or a few samples of the data. We propose a learning strategy based on a secure multi-party sum function for aggregating summaries from data holders and the Gaussian mechanism for DP. Our method builds on asymptotically optimal and practically efficient DP Bayesian inference with rapidly diminishing extra cost.
Subject: NOISE
112 Statistics and probability
113 Computer and information sciences
Rights:
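
Note: The abstract above describes the aggregation step only at a high level: each data holder contributes a noisy summary through a secure multi-party sum, so that only the differentially private aggregate is revealed. The Python sketch below is a simplified illustration of that general idea, not the paper's actual protocol. The function name secure_sum_with_dp, the pairwise-mask construction, and the per-party noise scale sigma / sqrt(n) are illustrative assumptions, chosen so that the summed noise matches a standard Gaussian mechanism with scale sigma.

import numpy as np


def secure_sum_with_dp(local_summaries, sigma, rng):
    """Simulate a differentially private secure multi-party sum.

    Each party holds one summary vector. Every party adds (i) a share of
    Gaussian noise with standard deviation sigma / sqrt(n), so that the
    summed noise has standard deviation sigma, and (ii) pairwise random
    masks that cancel exactly when all contributions are added together.
    The aggregator therefore only ever sees the noisy sum.
    """
    n = len(local_summaries)
    d = local_summaries[0].shape[0]

    # Pairwise additive masks: party i adds m_ij and party j subtracts it,
    # so the masks cancel in the final sum.
    masks = np.zeros((n, d))
    for i in range(n):
        for j in range(i + 1, n):
            m = rng.normal(size=d)
            masks[i] += m
            masks[j] -= m

    # Each party releases only its masked, noise-perturbed contribution.
    contributions = [
        x + rng.normal(scale=sigma / np.sqrt(n), size=d) + masks[i]
        for i, x in enumerate(local_summaries)
    ]
    return np.sum(contributions, axis=0)


# Example: ten parties, each contributing a clipped 3-dimensional summary.
rng = np.random.default_rng(0)
summaries = [np.clip(rng.normal(size=3), -1.0, 1.0) for _ in range(10)]
dp_sum = secure_sum_with_dp(summaries, sigma=1.0, rng=rng)
print(dp_sum)

In a real distributed deployment the pairwise masks would be derived from secrets established between the parties (or via separate aggregating compute nodes) rather than drawn by a single simulator as in this sketch.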


Files in this item

File: 6915_differenti ... ng_on_distributed_data.pdf
Size: 352.5 KB
Format: PDF
