Interpretable Machine Learning with H2O and SHAP - Sefik Ilkin Serengil

Previously, we explained h2o.ai models with LIME, which lets us question the predictions a trained model makes. Herein, SHAP …
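The excerpt is truncated, but the core idea behind SHAP — Shapley values from cooperative game theory — can be sketched in plain Python. The toy model and feature values below are hypothetical illustrations, not the post's H2O example; the SHAP library (and H2O) approximate this exact enumeration, which is only feasible for a handful of features:

```python
from itertools import permutations

def shapley_values(model, x, baseline):
    """Exact Shapley values by enumerating every feature ordering.

    For each ordering, each feature 'joins the coalition' by switching
    from its baseline value to its actual value; its marginal effect on
    the model output is averaged over all orderings.
    """
    n = len(x)
    phi = [0.0] * n
    orderings = list(permutations(range(n)))
    for order in orderings:
        coalition = list(baseline)          # start from the background point
        for i in order:
            before = model(coalition)
            coalition[i] = x[i]             # feature i joins the coalition
            phi[i] += model(coalition) - before
    return [p / len(orderings) for p in phi]

# Hypothetical toy model with an interaction term (x0 * x2)
model = lambda x: 3 * x[0] + 2 * x[1] + x[0] * x[2]
x, baseline = [1.0, 1.0, 1.0], [0.0, 0.0, 0.0]
phi = shapley_values(model, x, baseline)

# Local accuracy: contributions sum to f(x) - f(baseline)
assert abs(sum(phi) - (model(x) - model(baseline))) < 1e-9
```

Note how the interaction term is split evenly between the two features involved: here phi ends up as [3.5, 2.0, 0.5], and the additivity property (contributions summing to the difference from the baseline prediction) is exactly what SHAP plots rely on.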

1 mentions: @serengil
Keywords: shap
Date: 2019/10/10 17:57

Related Entries

- GitHub - slundberg/shap: A unified approach to explain the output of any machine learning model. (2018/06/27 10:28)
- My Internship at Zillow Group AI Part 1: Attribute Recognition in Real Estate Listings - Zillow Tech... (2019/09/24 18:20)
- Explaining Black Box Models: Ensemble and Deep Learning Using LIME and SHAP (2020/01/21 15:51)
- [1909.09020] Shape and Time Distortion Loss for Training Deep Time Series Forecasting Models (2019/09/20 02:18)
- [1911.02508] How can we fool LIME and SHAP? Adversarial Attacks on Post hoc Explanation Methods (2019/11/08 02:21)