[2010.05516] Explaining Neural Matrix Factorization with Gradient Rollback

Explaining the predictions of neural black-box models is an important problem, especially when such models are used in applications where user trust is crucial. Estimating the influence of training examples on a learned neural model's behavior allows us to identify the training examples most responsible for a given prediction and, therefore, to faithfully explain the output of a black-box model. The most generally applicable existing method is based on influence functions, which scale poorly for larger sample sizes and models. We propose gradient rollback, a general approach for influence estimation, applicable to neural models where each parameter update step during gradient descent touches a smaller number of parameters, even if the overall number of parameters is large. Neural matrix factorization models trained with gradient descent are part of this model class. These models are popular and have found a wide range of applications in industry. Especially knowledge graph embedding methods …
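The core idea described in the abstract can be illustrated with a toy sketch (this is an illustrative reconstruction under simplifying assumptions, not the authors' implementation): because each SGD step on a factorization model touches only a few parameters (the embedding rows of the entities involved), we can afford to record each training example's accumulated updates, and later approximate that example's influence on a prediction by subtracting its updates and re-scoring.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dot-product factorization: score(i, j) = U[i] . V[j],
# trained with SGD on observed (i, j, label) triples.
n, d, lr = 4, 3, 0.1
U = rng.normal(scale=0.1, size=(n, d))
V = rng.normal(scale=0.1, size=(n, d))

train = [(0, 1, 1.0), (0, 2, 0.0), (1, 2, 1.0), (2, 3, 1.0)]

# Per-example accumulated updates. Each SGD step touches only rows i and j,
# so the bookkeeping stays proportional to (examples x touched parameters),
# which is the sparsity the gradient-rollback idea exploits.
rollback = {x: (np.zeros(d), np.zeros(d)) for x in range(len(train))}

for _ in range(200):
    for x, (i, j, y) in enumerate(train):
        err = U[i] @ V[j] - y            # squared-error gradient
        dU, dV = err * V[j], err * U[i]
        U[i] -= lr * dU
        V[j] -= lr * dV
        rollback[x][0][:] += lr * dU     # record what this example changed
        rollback[x][1][:] += lr * dV

def influence(x, i, j):
    """Approximate influence of training example x on score(i, j):
    how much the score drops when x's accumulated updates are rolled back."""
    xi, xj, _ = train[x]
    Ui = U[i] + rollback[x][0] if xi == i else U[i]
    Vj = V[j] + rollback[x][1] if xj == j else V[j]
    return U[i] @ V[j] - Ui @ Vj

scores = [influence(x, 0, 1) for x in range(len(train))]
```

In this sketch, examples that never touched rows U[0] or V[1] have exactly zero influence on the prediction for pair (0, 1), while the example (0, 1, 1.0) itself has the largest positive influence, which is the intuition behind using rollback to find the training examples most responsible for a prediction.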

4 mentions: @caro__lawrence, @Mniepert
Date: 2020/10/14 06:53

Referring Tweets

@Mniepert An exciting result (to me) of our recent paper (w/ @caro__lawrence) is the theoretical link we establish between the notion of stability of learning algorithms and the bound on the error we make in approximating influence of training samples (1/5) t.co/HeqFerAsaq t.co/89O52dIASz
@caro__lawrence In the future, we will explore applying GR to other types of models. Also, we want to test how these explanations can help humans in applied projects. Read the full details here: t.co/ToOmJX7LDJ Questions? Feel free to reach out!
@caro__lawrence Want to make your NN more explainable? We present Gradient Rollback (GR), which tracks how training examples influence the model and uses this to explain predictions. We apply GR to knowledge base completion. #ExplainableAI #KnowledgeGraph #ML t.co/ToOmJX7LDJ Overview below:
