Depends on the definition

It's about machine learning, data science and more

Tag: xai

How explainable AI fails and what to do about it

Nowadays, a lot of people are talking about and advertising methods from “Explainable AI”. But the derived explanations are often unreliable and can be misleading.

How the LIME algorithm fails

You may know the LIME algorithm from some of my earlier blog posts. It can be quite useful for “debugging” data sets and understanding machine learning models better. But LIME is fooled very easily. Here we use the eli5 TextExplainer, which is based on LIME, together with the 20 newsgroups data set to show how LIME can fail.
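
The full post walks through the failure cases in detail; as a rough sketch of the setup it starts from, here is how the eli5 TextExplainer can be pointed at a simple text classification pipeline on the 20 newsgroups data. The category choice and the tf-idf plus logistic regression pipeline are illustrative placeholders, not necessarily the exact ones from the post.

```python
from sklearn.datasets import fetch_20newsgroups
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from eli5.lime import TextExplainer

# Two categories keep the demo fast; any subset of the 20 newsgroups works.
categories = ["sci.med", "sci.space"]
train = fetch_20newsgroups(subset="train", categories=categories,
                           remove=("headers", "footers", "quotes"))

# A simple tf-idf + logistic regression pipeline as the model to be explained.
pipe = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
pipe.fit(train.data, train.target)

# TextExplainer perturbs one document and fits a local white-box model to
# the pipeline's predictions, in the spirit of LIME.
te = TextExplainer(random_state=42)
te.fit(train.data[0], pipe.predict_proba)
print(te.metrics_)  # how faithfully the local model mimics the pipeline
te.explain_prediction(target_names=train.target_names)
```

The metrics_ check is worth keeping an eye on: when the local surrogate does not mimic the original model well, the explanation itself should not be trusted.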

Interpretable named entity recognition with keras and LIME

In the previous posts, we saw how to build strong and versatile named entity recognition systems and how to properly evaluate them. But often you want to understand your model beyond the metrics. So in this tutorial, I will show you how to build an explainable and interpretable NER system with keras and the LIME algorithm.
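
The post builds and trains the NER model step by step; the snippet below is only a sketch of the LIME part. The idea is to turn the tag prediction for one token position into a text classification problem that eli5's TextExplainer can handle. The names model, word2idx, max_len and tags are hypothetical stand-ins for whatever trained keras NER setup you have.

```python
import numpy as np
from eli5.lime import TextExplainer
from eli5.lime.samplers import MaskingTextSampler

# Assumed: a trained keras NER model `model` mapping padded index sequences to
# per-token tag probabilities, plus `word2idx`, `max_len` and `tags`.
def make_token_predict_fn(token_position):
    def predict_proba(texts):
        X = np.array([
            [word2idx.get(w, word2idx["UNK"]) for w in t.split()[:max_len]]
            + [word2idx["PAD"]] * max(0, max_len - len(t.split()))
            for t in texts
        ])
        y = model.predict(X)            # shape: (n_texts, max_len, n_tags)
        return y[:, token_position, :]  # tag distribution for one position
    return predict_proba

sentence = "George Washington went to Washington"
# Replace words with "UNK" instead of dropping them, so token positions stay aligned.
sampler = MaskingTextSampler(replacement="UNK", max_replace=0.7, bow=False)
te = TextExplainer(sampler=sampler, position_dependent=True, random_state=42)
te.fit(sentence, make_token_predict_fn(token_position=1))
te.explain_prediction(target_names=tags)
```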

Explain neural networks with keras and eli5

In this post, I’m going to show you how to use a neural network from keras with the LIME algorithm as implemented in the eli5 TextExplainer class. For this, we will write a scikit-learn-compatible wrapper for a keras bidirectional LSTM model.
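
As a rough sketch of what such a wrapper can look like (the architecture and hyperparameters here are illustrative, not the exact ones from the post): the key point is that fit and predict_proba accept raw texts, which is what the eli5 TextExplainer expects.

```python
import numpy as np
from sklearn.base import BaseEstimator, ClassifierMixin
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Embedding, Bidirectional, LSTM, Dense
from tensorflow.keras.preprocessing.text import Tokenizer
from tensorflow.keras.preprocessing.sequence import pad_sequences


class KerasTextClassifier(BaseEstimator, ClassifierMixin):
    """Hypothetical sklearn-style wrapper: fit and predict_proba work on raw
    texts, so the model can be handed to eli5's TextExplainer like a pipeline."""

    def __init__(self, max_words=20000, max_len=100, n_classes=2, epochs=2):
        self.max_words = max_words
        self.max_len = max_len
        self.n_classes = n_classes
        self.epochs = epochs

    def _to_matrix(self, texts):
        seqs = self.tokenizer_.texts_to_sequences(texts)
        return pad_sequences(seqs, maxlen=self.max_len)

    def fit(self, texts, y):
        self.tokenizer_ = Tokenizer(num_words=self.max_words)
        self.tokenizer_.fit_on_texts(texts)
        self.model_ = Sequential([
            Embedding(self.max_words, 128),
            Bidirectional(LSTM(64)),
            Dense(self.n_classes, activation="softmax"),
        ])
        self.model_.compile(loss="sparse_categorical_crossentropy", optimizer="adam")
        self.model_.fit(self._to_matrix(texts), np.asarray(y), epochs=self.epochs)
        return self

    def predict_proba(self, texts):
        return self.model_.predict(self._to_matrix(texts))

    def predict(self, texts):
        return np.argmax(self.predict_proba(texts), axis=1)
```

With a wrapper of this shape, TextExplainer can be used exactly as with a scikit-learn pipeline: te.fit(some_text, clf.predict_proba).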

Debugging black-box text classifiers with LIME

Often in text classification, we use so-called black-box classifiers. By black-box classifiers I mean classification systems whose internal workings are completely hidden from you. Famous examples are deep neural nets, in text classification often recurrent or convolutional ones. But even linear models with a bag-of-words representation can be considered black-box classifiers, because nobody can fully make sense of thousands of features contributing to a prediction.
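
To make the point about thousands of features concrete, here is a small illustrative sketch (the data set and model choice are just examples): even a plain bag-of-words linear model on the 20 newsgroups data ends up with one weight per vocabulary term and class.

```python
from sklearn.datasets import fetch_20newsgroups
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

train = fetch_20newsgroups(subset="train", remove=("headers", "footers", "quotes"))

# Plain bag-of-words features and a linear classifier.
vec = CountVectorizer()
X = vec.fit_transform(train.data)
clf = LogisticRegression(max_iter=1000).fit(X, train.target)

print(X.shape[1])       # vocabulary size: on the order of 100,000 terms
print(clf.coef_.shape)  # one weight per term and class: (20, n_features)
```

No human can read a weight matrix of that size, which is why local explanation methods like LIME are attractive, and why it matters to know when they fail.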
