In every machine learning project, the training data is the most valuable part of your system. In this article I introduce you to confident learning, a technique for finding potentially mislabeled examples in your training data.
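Confident learning is implemented in the cleanlab library. Here is a minimal sketch of the idea, assuming cleanlab >= 2.0; the toy data and the logistic regression classifier are my own placeholders for illustration, not something prescribed by the article:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict
from cleanlab.filter import find_label_issues

# Placeholder data: in practice these are your features and given labels.
X = np.random.rand(1000, 20)
labels = np.random.randint(0, 3, 1000)

# Confident learning needs out-of-sample predicted probabilities,
# so we use cross-validation instead of fitting on all the data.
pred_probs = cross_val_predict(
    LogisticRegression(max_iter=1000), X, labels,
    cv=5, method="predict_proba",
)

# Indices of examples whose given label conflicts with the model's
# confident predictions, ranked by how suspicious they are.
issue_idx = find_label_issues(
    labels=labels, pred_probs=pred_probs,
    return_indices_ranked_by="self_confidence",
)
print(issue_idx[:10])  # the ten most likely label errors
```

The ranked indices let you review the most suspicious examples by hand instead of re-checking the whole data set.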
Nowadays, many people are talking about and advertising methods for “Explainable AI”. But the derived explanations are often unreliable and can be misleading.
You may know the LIME algorithm from some of my earlier blog posts. It can be quite useful for “debugging” data sets and understanding machine learning models better. But LIME is easily fooled. We use the eli5 TextExplainer, which is based on LIME, together with the 20newsgroups data set to show how LIME can fail.
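To give you an idea of the setup, here is a minimal sketch along the lines of the eli5 documentation; the concrete categories and the TF-IDF plus logistic regression pipeline are assumptions for illustration, not necessarily what the post uses:

```python
from sklearn.datasets import fetch_20newsgroups
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from eli5.lime import TextExplainer

categories = ["alt.atheism", "sci.med"]
train = fetch_20newsgroups(subset="train", categories=categories)

# A black-box text classifier to be explained.
pipe = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
pipe.fit(train.data, train.target)

# TextExplainer fits a local white-box model on perturbed
# versions of a single document, LIME-style.
te = TextExplainer(random_state=42)
te.fit(train.data[0], pipe.predict_proba)
te.show_prediction(target_names=train.target_names)  # renders in a notebook

# How well the local surrogate mimics the black box; if these scores
# look fine but the explanation is nonsense, LIME has been fooled.
print(te.metrics_)
```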
Deep learning models have shown amazing performance in fields such as autonomous driving, manufacturing, and medicine. However, these are fields in which representing model uncertainty is of crucial importance.
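One common way to represent such uncertainty, which may or may not be the approach covered in the full post, is Monte Carlo dropout: keep dropout active at prediction time and average many stochastic forward passes. A minimal keras sketch on toy data:

```python
import numpy as np
import tensorflow as tf

# Toy classifier with dropout; in practice `model` would already be trained.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(20,)),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dropout(0.5),
    tf.keras.layers.Dense(3, activation="softmax"),
])

x = np.random.rand(5, 20).astype("float32")

# training=True keeps dropout active at inference, so each forward
# pass samples a slightly different sub-network (Monte Carlo dropout).
mc_preds = np.stack([model(x, training=True).numpy() for _ in range(100)])

mean_pred = mc_preds.mean(axis=0)   # averaged class probabilities
uncertainty = mc_preds.std(axis=0)  # spread across passes, a proxy for uncertainty
print(mean_pred.shape, uncertainty.shape)  # (5, 3) (5, 3)
```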
In the previous posts, we saw how to build strong and versatile named entity recognition systems and how to properly evaluate them. But often you want to understand your model beyond the metrics. So in this tutorial I will show you how to build an explainable and interpretable NER system with keras and the LIME algorithm.
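The core trick is to turn the sentence-level NER model into a per-token classifier that LIME can query. The following is only a sketch: `model`, `word2idx`, the padding scheme, and the tag set are hypothetical stand-ins for the objects built in the previous tutorials:

```python
import numpy as np
from lime.lime_text import LimeTextExplainer

TAGS = ["O", "B-PER", "I-PER", "B-LOC", "I-LOC"]  # hypothetical tag set

def make_token_predict_fn(model, word2idx, token_pos, maxlen=50):
    """Wrap a sentence-level keras NER model (assumed to output one
    (maxlen, n_tags) array per sentence) as a classifier for a single
    token position, so LIME can perturb the sentence and query it."""
    def predict(texts):
        probs = []
        for text in texts:
            ids = [word2idx.get(w, 0) for w in text.split()][:maxlen]
            ids += [0] * (maxlen - len(ids))                 # pad to maxlen
            pred = model.predict(np.array([ids]), verbose=0)[0]
            probs.append(pred[token_pos])                    # tag probs of one token
        return np.array(probs)
    return predict

explainer = LimeTextExplainer(class_names=TAGS)
# With a trained `model` and `word2idx` from the earlier NER posts:
# exp = explainer.explain_instance(
#     "John lives in Berlin .",
#     make_token_predict_fn(model, word2idx, token_pos=0),
#     num_features=4, labels=[TAGS.index("B-PER")],
# )
# print(exp.as_list(label=TAGS.index("B-PER")))  # word contributions
```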
Since many people have recently asked me how neural networks learn embeddings for categorical variables, for example words, I’m going to write about it today. In this article you will learn what an embedding layer really is and how neural nets can learn representations for categorical variables with it.
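In essence, an embedding layer is a trainable lookup table whose rows are updated by backpropagation like any other weights. A minimal keras sketch with made-up sizes and a dummy task:

```python
import numpy as np
import tensorflow as tf

vocab_size, embedding_dim = 1000, 8  # illustrative sizes

# The embedding layer is a trainable lookup table of shape
# (vocab_size, embedding_dim): each integer id selects one row.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(4,), dtype="int32"),  # 4 token ids per example
    tf.keras.layers.Embedding(vocab_size, embedding_dim, name="embed"),
    tf.keras.layers.GlobalAveragePooling1D(),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy")

# Training on the downstream task updates the embedding rows via
# backpropagation; that is how the representations are learned.
x = np.random.randint(0, vocab_size, size=(32, 4))
y = np.random.randint(0, 2, size=(32, 1))
model.fit(x, y, epochs=1, verbose=0)

vectors = model.get_layer("embed").get_weights()[0]
print(vectors.shape)  # (1000, 8): one learned vector per category
```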