Depends on the definition

It's about machine learning, data science, and more

Data validation for NLP machine learning applications

An important part of machine learning applications is making sure that the data quality is not degrading while a model is… Continue Reading →

Find label issues with confident learning for NLP

In every machine learning project, the training data is the most valuable part of your system. In this article, I introduce confident learning, a technique for finding potentially mislabeled examples in your training data.
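As a minimal sketch of the idea, assuming scikit-learn and the cleanlab library (2.x) plus placeholder texts and integer labels: out-of-sample predicted probabilities from cross-validation are compared against the given labels to flag suspicious examples.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict
from cleanlab.filter import find_label_issues

# texts, labels are placeholders for your own training data;
# labels are assumed to be integer class ids
X = TfidfVectorizer().fit_transform(texts)

# out-of-sample predicted probabilities via cross-validation
pred_probs = cross_val_predict(
    LogisticRegression(max_iter=1000), X, labels,
    cv=5, method="predict_proba",
)

# indices of examples whose given label disagrees with the model's confidence,
# ranked by how confidently the model contradicts the label
issue_indices = find_label_issues(
    labels, pred_probs, return_indices_ranked_by="self_confidence"
)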

How explainable AI fails and what to do about it

Explainable AI methods are widely discussed and advertised these days, but the explanations they produce are often unreliable and can be misleading.

How the LIME algorithm fails

You may know the LIME algorithm from some of my earlier blog posts. It can be quite useful for “debugging” data sets and understanding machine learning models better. But LIME is fooled very easily. We use the eli5 TextExplainer, which is based on LIME, together with the 20 newsgroups data set to show how LIME can fail.
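For reference, a minimal sketch of such a setup (the exact model in the post may differ): a simple text classification pipeline on the 20 newsgroups data, explained locally with eli5's TextExplainer.

from sklearn.datasets import fetch_20newsgroups
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from eli5.lime import TextExplainer

train = fetch_20newsgroups(subset="train", remove=("headers", "footers", "quotes"))
pipe = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
pipe.fit(train.data, train.target)

# Fit a local surrogate model around one document; te.metrics_ shows how well
# the surrogate mimics the pipeline, which is exactly where LIME can silently fail.
te = TextExplainer(random_state=42)
te.fit(train.data[0], pipe.predict_proba)
print(te.metrics_)
explanation = te.explain_prediction(target_names=train.target_names)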

Cluster discovery in German recipes

If you are dealing with a large collection of documents, you will often find yourself looking for structure and trying to understand what the documents contain. Here I’ll show you a convenient method… Continue Reading →
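One common approach, not necessarily the one used in the post, is to vectorize the documents with tf-idf, cluster them with k-means, and inspect the most characteristic terms per cluster:

from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

# documents is a placeholder for your own collection, e.g. recipe texts
vectorizer = TfidfVectorizer(max_features=20000)
X = vectorizer.fit_transform(documents)

km = KMeans(n_clusters=10, random_state=42)
cluster_ids = km.fit_predict(X)

# terms with the largest weight in each cluster centroid give a rough topic label
terms = vectorizer.get_feature_names_out()
for i, center in enumerate(km.cluster_centers_):
    top_terms = [terms[j] for j in center.argsort()[::-1][:10]]
    print(i, top_terms)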

That was PyCon DE & PyData Berlin 2019

Last weekend, PyCon DE and PyData Berlin came together in Berlin for a great conference that I was lucky to attend. The speaker line-up was strong, and it was often hard to choose which talk or tutorial to attend. I… Continue Reading →

Model uncertainty in deep learning with Monte Carlo dropout in keras

Deep learning models have shown amazing performance in fields such as autonomous driving, manufacturing, and medicine. However, these are fields in which representing model uncertainty is of crucial importance. The standard deep learning… Continue Reading →
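As a minimal sketch with tf.keras, assuming a classifier with dropout layers and a hypothetical input batch x_new (in practice the model would be compiled and trained first): Monte Carlo dropout keeps dropout active at prediction time and averages several stochastic forward passes.

import numpy as np
import tensorflow as tf
from tensorflow.keras import layers

# a small classifier with dropout; compile and train it before using it
model = tf.keras.Sequential([
    layers.Dense(64, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(64, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(10, activation="softmax"),
])

# x_new is a placeholder for a batch of inputs;
# training=True keeps dropout active at inference time (Monte Carlo dropout)
T = 50
mc_preds = np.stack([model(x_new, training=True).numpy() for _ in range(T)])
mean_pred = mc_preds.mean(axis=0)  # predictive mean over T stochastic passes
pred_std = mc_preds.std(axis=0)    # spread across passes as an uncertainty proxy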

Interpretable named entity recognition with keras and LIME

In the previous posts, we saw how to build strong and versatile named entity recognition systems and how to properly evaluate them. But often you want to understand your model beyond the metrics. So in this tutorial, I will show you how to build an explainable and interpretable NER system with keras and the LIME algorithm.

Introduction to entity embeddings with neural networks

Since a lot of people have recently asked me how neural networks learn embeddings for categorical variables, for example words, I’m going to write about it today. In this article, you will learn what an embedding layer really is and how neural networks can learn representations for categorical variables with it.
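A minimal sketch with tf.keras, assuming integer-encoded categories (e.g. word ids) and a binary target: the Embedding layer maps each id to a trainable vector that is learned by backpropagation like any other weight.

import tensorflow as tf
from tensorflow.keras import layers

vocab_size = 10000  # number of distinct categories, e.g. words in the vocabulary
embed_dim = 32      # size of the learned representation

model = tf.keras.Sequential([
    layers.Embedding(input_dim=vocab_size, output_dim=embed_dim),
    layers.GlobalAveragePooling1D(),
    layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy")

# after training, the learned vectors can be inspected directly
embedding_matrix = model.layers[0].get_weights()[0]  # shape: (vocab_size, embed_dim)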
