Depends on the definition

It's about machine learning, data science and more

Model uncertainty in deep learning with Monte Carlo dropout in keras

Deep learning models have shown amazing performance in fields such as autonomous driving, manufacturing, and medicine, to name a few. However, these are fields in which representing model uncertainty is of crucial importance, and the standard deep learning tools for regression and classification do not capture it. In this article we will see how to represent the model uncertainty of existing dropout neural networks with keras. This approach, called Monte Carlo dropout, mitigates the problem of representing model uncertainty in deep learning without sacrificing computational complexity or test accuracy, and it can be used for all kinds of models trained with dropout.
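
To get a feeling for the approach, here is a minimal sketch of Monte Carlo dropout with tf.keras; the model, the dropout rate, and the input data are placeholders, not the post's actual setup. The idea: keep dropout active at prediction time and average several stochastic forward passes.

```python
import numpy as np
import tensorflow as tf

# Toy model with dropout; architecture and rate are made up.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="relu", input_shape=(10,)),
    tf.keras.layers.Dropout(0.5),
    tf.keras.layers.Dense(1),
])

x = np.random.rand(32, 10).astype("float32")  # dummy input batch

# T stochastic forward passes with dropout switched ON (training=True).
T = 100
predictions = np.stack([model(x, training=True).numpy() for _ in range(T)])

mean = predictions.mean(axis=0)  # predictive mean
std = predictions.std(axis=0)    # spread across passes ~ model uncertainty
```

The spread of the T predictions gives a cheap approximation of the model's uncertainty for each input, with no retraining and no change to the architecture.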

Interpretable named entity recognition with keras and LIME

In the previous posts, we saw how to build strong and versatile named entity recognition systems and how to properly evaluate them. But often you want to understand your model beyond the metrics. So in this tutorial I will show you how you can build an explainable and interpretable NER system with keras and the LIME algorithm.
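
As a rough illustration of how LIME can be hooked up to a token-level model, here is a sketch with the lime package. The prediction wrapper and the label set are assumptions for the sake of the example, not the post's actual code; in practice the wrapper would run the keras NER model on each perturbed text and return class probabilities for the token being explained.

```python
import numpy as np
from lime.lime_text import LimeTextExplainer

class_names = ["O", "B-PER", "I-PER", "B-LOC", "I-LOC"]

# Hypothetical wrapper: for each perturbed text, return class
# probabilities for one target token. Replaced here by a dummy that
# scores texts containing "Berlin" as location-like.
def predict_token_proba(texts):
    probs = []
    for t in texts:
        if "Berlin" in t:
            probs.append([0.10, 0.05, 0.05, 0.70, 0.10])
        else:
            probs.append([0.80, 0.05, 0.05, 0.05, 0.05])
    return np.array(probs)

explainer = LimeTextExplainer(class_names=class_names)
explanation = explainer.explain_instance(
    "John lives in Berlin.", predict_token_proba,
    labels=(3,), num_features=5,  # explain the B-LOC decision
)
print(explanation.as_list(label=3))  # words and their weights
```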

Introduction to entity embeddings with neural networks

Since a lot of people have recently asked me how neural networks learn embeddings for categorical variables, for example words, I’m going to write about it today. In this article you will learn what an embedding layer really is and how neural nets can learn representations for categorical variables with it.
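
A minimal sketch of the idea in tf.keras (vocabulary size, dimensions, and data are made up): the embedding layer is nothing more than a trainable lookup table whose rows are updated by backpropagation like any other weight.

```python
import numpy as np
import tensorflow as tf

n_categories, embedding_dim = 1000, 16  # illustrative sizes

model = tf.keras.Sequential([
    # Lookup table: maps each category id to a dense 16-dim vector.
    tf.keras.layers.Embedding(input_dim=n_categories, output_dim=embedding_dim),
    tf.keras.layers.GlobalAveragePooling1D(),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy")

x = np.random.randint(0, n_categories, size=(64, 8))  # 64 sequences of 8 ids
y = np.random.randint(0, 2, size=(64, 1))             # dummy labels
model.fit(x, y, epochs=1, verbose=0)

# After training, row i of this matrix is the learned vector for category i.
embeddings = model.layers[0].get_weights()[0]  # shape (1000, 16)
```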

Introduction to n-gram language models

You might have heard that neural language models power a lot of the recent advances in natural language processing, namely large models like Bert and GPT-2. But there is a fairly old approach to language modelling that is quite successful…
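
For the intuition, here is a toy bigram model in plain Python; the corpus is obviously made up. The probabilities are just normalized counts.

```python
from collections import Counter

corpus = "the cat sat on the mat the cat ate".split()

unigrams = Counter(corpus)
bigrams = Counter(zip(corpus, corpus[1:]))

def bigram_prob(prev_word, word):
    # Maximum-likelihood estimate: count(prev, w) / count(prev)
    return bigrams[(prev_word, word)] / unigrams[prev_word]

print(bigram_prob("the", "cat"))  # 2/3: "the" is followed by "cat" twice
```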

Learn to identify ingredients with neural networks

Today, I want to show you how you can build an NLP application without explicitly labeled data. I use the “German Recipes Dataset” I recently published on kaggle to build a neural network model that can identify ingredients in cooking…

How to use magnitude with keras

This time we take a look at the magnitude library, a feature-packed Python package for using vector embeddings in machine learning models in a fast, efficient, and simple manner. We want to load the embeddings magnitude provides and use them in keras.
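
A rough sketch of how that can look, assuming the pymagnitude package and a locally downloaded .magnitude file (both the file path and the vocabulary below are placeholders): query magnitude for each word in the vocabulary and use the result to initialize a frozen keras embedding layer.

```python
import numpy as np
import tensorflow as tf
from pymagnitude import Magnitude

vectors = Magnitude("glove.6B.100d.magnitude")  # assumed local file
vocab = ["the", "cat", "sat"]                   # your task's vocabulary

# Build the weight matrix; row 0 is reserved for padding. magnitude
# returns a vector even for out-of-vocabulary words.
embedding_matrix = np.zeros((len(vocab) + 1, vectors.dim))
for i, word in enumerate(vocab, start=1):
    embedding_matrix[i] = vectors.query(word)

embedding_layer = tf.keras.layers.Embedding(
    input_dim=len(vocab) + 1,
    output_dim=vectors.dim,
    weights=[embedding_matrix],
    trainable=False,  # keep the pre-trained vectors fixed
)
```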

Text analysis with named entities

This is the second post of my series about understanding text data sets. Here we use named entities to get some insight into our data set.
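
One possible sketch of this kind of analysis, using spaCy purely for illustration (the post may use a different NER system, and the example texts are made up): extract the entities from every document and count the most frequent ones.

```python
from collections import Counter
import spacy

nlp = spacy.load("en_core_web_sm")  # small pre-trained English pipeline

texts = [
    "Angela Merkel met Emmanuel Macron in Berlin.",
    "Berlin is the capital of Germany.",
]

# Count (entity text, entity type) pairs across the whole data set.
entity_counts = Counter()
for doc in nlp.pipe(texts):
    entity_counts.update((ent.text, ent.label_) for ent in doc.ents)

print(entity_counts.most_common(5))  # most frequent entities and types
```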

Named Entity Recognition with Bert

One of the latest milestones in pre-training and fine-tuning for natural language processing is the release of BERT. This is a new post in my NER series. I will show you how you can fine-tune the Bert model to do state-of-the-art named entity recognition in pytorch.
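
A condensed sketch of the fine-tuning setup; this uses the Hugging Face transformers API, which is an assumption here, and the tag set, labels, and example sentence are made up.

```python
import torch
from transformers import BertTokenizerFast, BertForTokenClassification

tags = ["O", "B-PER", "I-PER", "B-LOC", "I-LOC"]  # illustrative tag set

tokenizer = BertTokenizerFast.from_pretrained("bert-base-cased")
model = BertForTokenClassification.from_pretrained(
    "bert-base-cased", num_labels=len(tags)
)

encoding = tokenizer("John lives in Berlin.", return_tensors="pt")
labels = torch.zeros_like(encoding["input_ids"])  # dummy all-"O" labels

# One forward/backward pass; an optimizer step would follow in training.
outputs = model(**encoding, labels=labels)
outputs.loss.backward()
```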

Understanding text data with topic models

This is the first post of my series about understanding text data sets. In practice, you often want and need to know what is going on in your data. In this post we will focus on applying a Latent Dirichlet allocation (LDA) topic model to the “Quora Insincere Questions Classification” data set on kaggle.
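
A minimal LDA sketch with scikit-learn on a toy corpus; the post works with the actual Quora data set, and all parameters here are illustrative.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

texts = [
    "the cat sat on the mat",
    "dogs and cats are pets",
    "stock markets fell sharply today",
    "investors sold shares in the market",
]

# LDA works on raw term counts, not tf-idf.
vectorizer = CountVectorizer(stop_words="english")
counts = vectorizer.fit_transform(texts)

lda = LatentDirichletAllocation(n_components=2, random_state=0)
lda.fit(counts)

# Print the top words per topic.
words = vectorizer.get_feature_names_out()
for k, topic in enumerate(lda.components_):
    top = [words[i] for i in topic.argsort()[-4:][::-1]]
    print(f"Topic {k}: {top}")
```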

LSTM with attention for relation classification

Once named entities have been identified in a text, we want to extract the relations that exist between them. As indicated earlier, we will typically be looking for relations between specified types of named entities. I covered named entity…
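
A compact sketch of such an architecture in tf.keras; all sizes are made up, and this is one generic way to implement attention, not necessarily the post's exact formulation. A bidirectional LSTM encodes the sentence, a learned attention layer weights the hidden states, and the weighted sum is classified into relation types.

```python
import tensorflow as tf

n_words, n_relations, maxlen = 10000, 5, 50  # illustrative sizes

inputs = tf.keras.Input(shape=(maxlen,), dtype="int32")
x = tf.keras.layers.Embedding(n_words, 100)(inputs)
h = tf.keras.layers.Bidirectional(
    tf.keras.layers.LSTM(64, return_sequences=True)
)(x)  # (batch, maxlen, 128)

# Attention: score each timestep, softmax over time, weighted sum.
scores = tf.keras.layers.Dense(1)(h)                 # (batch, maxlen, 1)
weights = tf.keras.layers.Softmax(axis=1)(scores)    # attention weights
context = tf.keras.layers.Dot(axes=1)([weights, h])  # (batch, 1, 128)
context = tf.keras.layers.Flatten()(context)

outputs = tf.keras.layers.Dense(n_relations, activation="softmax")(context)
model = tf.keras.Model(inputs, outputs)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
```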
