[1912.03817] Machine Unlearning

Once users have shared their data online, it is generally difficult for them to revoke access and ask for the data to be deleted. Machine learning (ML) exacerbates this problem because any model trained with said data may have memorized it, putting users at risk of a successful privacy attack exposing their information. Yet, having models unlearn is notoriously difficult. After a data point is removed from a training set, one often resorts to entirely retraining downstream models from scratch. We introduce SISA (Sharded, Isolated, Sliced, and Aggregated) training, a framework that decreases the number of model parameters affected by an unlearning request and caches intermediate outputs of the training algorithm to limit the number of model updates that need to be computed to have these parameters unlearn. This framework reduces the computational overhead associated with unlearning, even in the worst-case setting where unlearning requests are made uniformly across the training set. In some cases, we may have a prior on the distri…
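The core idea can be illustrated with a minimal sketch: split the training set into disjoint shards, train one constituent model per shard, and answer an unlearning request by retraining only the shard that contained the removed point. The class below is a hypothetical toy (the per-shard "model" is just a per-label centroid, shard assignment is round-robin rather than the random assignment SISA uses, and per-slice checkpointing is omitted); it is not the paper's implementation, only an illustration of the sharding and aggregation steps.

```python
from collections import Counter

class ShardedEnsemble:
    """Toy sketch of SISA-style sharded training: disjoint shards, one
    constituent model per shard, unlearning retrains a single shard.
    The 'model' is a stand-in per-label centroid classifier; real SISA
    trains e.g. a neural network per shard and additionally caches
    per-slice checkpoints to cut retraining cost further (omitted here)."""

    def __init__(self, data, n_shards=3):
        # data: {point_id: (feature, label)}. Round-robin assignment here
        # for determinism; SISA assigns points to shards uniformly at random.
        self.shards = [[] for _ in range(n_shards)]
        self.where = {}  # point_id -> shard index, so unlearning is O(1) lookup
        for i, (pid, (x, y)) in enumerate(sorted(data.items())):
            s = i % n_shards
            self.shards[s].append((pid, x, y))
            self.where[pid] = s
        self.models = [self._train(s) for s in range(n_shards)]

    def _train(self, s):
        # 'Training': compute one centroid per label from this shard only.
        sums, counts = {}, {}
        for _, x, y in self.shards[s]:
            sums[y] = sums.get(y, 0.0) + x
            counts[y] = counts.get(y, 0) + 1
        return {y: sums[y] / counts[y] for y in sums}

    def unlearn(self, pid):
        # Only the shard containing pid is retrained; all others are untouched.
        s = self.where.pop(pid)
        self.shards[s] = [t for t in self.shards[s] if t[0] != pid]
        self.models[s] = self._train(s)
        return s  # index of the single retrained shard

    def predict(self, x):
        # Aggregation: majority vote over the per-shard constituent models.
        votes = [min(m, key=lambda y: abs(m[y] - x)) for m in self.models if m]
        return Counter(votes).most_common(1)[0][0]
```

Because each shard sees only a fraction of the data, deleting a point invalidates just one constituent model, which is the source of the overhead reduction the abstract describes.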

1 mention: @SebNieh
Date: 2020/02/10 15:01

Referring Tweets

@SebNieh Reading recommendation: t.co/mPX83ZXxel In particular with regard to the right to be forgotten! #MachineLearning #datagovernance #DataScience #dataprivacy #AI #ArtificialIntelligence
