Depends on the definition

It's about machine learning, data science, and more

Tag: NLP

How to use magnitude with keras

This time we take a look at the magnitude library, a feature-packed Python package for using vector embeddings in machine learning models in a fast, efficient, and simple manner. We want to take the embeddings magnitude provides and feed them into a keras model.
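As a quick taste of the approach, here is a minimal sketch of querying pymagnitude vectors and feeding them into a small keras classifier. The `.magnitude` file path, the toy sentences, and the model architecture are illustrative assumptions, not the post's actual code.

```python
# A minimal sketch, not the post's code: query pre-trained vectors with
# pymagnitude and train a tiny keras classifier on top of them.
import numpy as np
from pymagnitude import Magnitude
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, GlobalAveragePooling1D

# Hypothetical local path; download a .magnitude file from the
# pymagnitude project first.
vectors = Magnitude("glove.6B.100d.magnitude")

def embed_batch(sentences, max_len=20):
    """Turn tokenized sentences into a (batch, max_len, dim) array."""
    batch = np.zeros((len(sentences), max_len, vectors.dim))
    for i, tokens in enumerate(sentences):
        # Magnitude handles out-of-vocabulary tokens gracefully.
        batch[i, : len(tokens[:max_len])] = vectors.query(tokens[:max_len])
    return batch

model = Sequential([
    GlobalAveragePooling1D(input_shape=(20, vectors.dim)),
    Dense(64, activation="relu"),
    Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

X = embed_batch([["this", "is", "great"], ["this", "is", "bad"]])
y = np.array([1, 0])
model.fit(X, y, epochs=2, verbose=0)
```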

Text analysis with named entities

This is the second post of my series about understanding text datasets. Here we use named entities to learn something about our data set.
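The core idea fits in a few lines. This sketch assumes spaCy as the NER backend and two made-up example sentences; the post's actual tooling and data may differ.

```python
# A minimal sketch, assuming spaCy as the NER backend: tag a small
# corpus and count which entity types occur, a first step towards
# understanding a text data set.
from collections import Counter
import spacy

nlp = spacy.load("en_core_web_sm")  # requires the small English model

texts = [
    "Apple is looking at buying a U.K. startup for $1 billion.",
    "Angela Merkel met Emmanuel Macron in Berlin.",
]

entity_counts = Counter()
for doc in nlp.pipe(texts):
    entity_counts.update(ent.label_ for ent in doc.ents)

print(entity_counts.most_common())  # e.g. [('PERSON', 2), ('GPE', 2), ...]
```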

Named Entity Recognition with BERT

One of the latest milestones in pre-training and fine-tuning in natural language processing is the release of BERT. This is a new post in my NER series. I will show you how you can fine-tune the BERT model to do state-of-the-art named entity recognition in pytorch.
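To give a flavor of what fine-tuning looks like, here is a minimal sketch using the Hugging Face transformers library; the post may use a different BERT implementation, and the label set and toy example are made up.

```python
# A minimal sketch, assuming Hugging Face transformers: one training
# step of BERT fine-tuned for token classification (the standard
# framing of NER).
import torch
from transformers import BertTokenizerFast, BertForTokenClassification

tokenizer = BertTokenizerFast.from_pretrained("bert-base-cased")
model = BertForTokenClassification.from_pretrained(
    "bert-base-cased", num_labels=5  # e.g. O, B-PER, I-PER, B-LOC, I-LOC
)

tokens = ["John", "lives", "in", "Berlin"]  # toy example
labels = [1, 0, 0, 3]                       # B-PER, O, O, B-LOC

enc = tokenizer(tokens, is_split_into_words=True, return_tensors="pt")

# Align word-level labels with word pieces; -100 is ignored by the loss.
aligned, prev = [], None
for w in enc.word_ids():
    aligned.append(-100 if w is None or w == prev else labels[w])
    prev = w

optimizer = torch.optim.AdamW(model.parameters(), lr=3e-5)
model.train()
out = model(**enc, labels=torch.tensor([aligned]))
out.loss.backward()
optimizer.step()
```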

Understanding text data with topic models

This is the first post of my series about understanding text data sets. In practice, you often want and need to know what is going on in your data. In this post we will focus on applying a Latent Dirichlet allocation (LDA) topic model to the “Quora Insincere Questions Classification” data set on kaggle.
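For a feel of the LDA workflow, here is a minimal sketch using scikit-learn on a handful of made-up questions; the post itself works on the full kaggle data set and may use a different LDA implementation.

```python
# A minimal sketch, assuming scikit-learn's LDA: fit a topic model on a
# toy corpus and print the top words per topic.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

docs = [
    "Why do cats sleep so much during the day?",
    "How do I train a neural network for text classification?",
    "What food is safe for cats and dogs?",
    "Which optimizer works best for deep learning models?",
]

vectorizer = CountVectorizer(stop_words="english")
X = vectorizer.fit_transform(docs)

lda = LatentDirichletAllocation(n_components=2, random_state=0)
lda.fit(X)

words = vectorizer.get_feature_names_out()
for i, topic in enumerate(lda.components_):
    top = [words[j] for j in topic.argsort()[-5:][::-1]]
    print(f"Topic {i}: {', '.join(top)}")
```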

LSTM with attention for relation classification

Once named entities have been identified in a text, we then want to extract the relations that exist between them. As indicated earlier, we will typically be looking for relations between specified types of named entities. I covered named entity…
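A sketch of the core architecture, a bidirectional LSTM encoder with attention pooling for sentence-level relation classification, assuming illustrative layer sizes rather than the post's exact setup:

```python
# A minimal sketch, not the post's code: a bidirectional LSTM with a
# simple attention pooling on top, classifying a sentence into one of
# several relation types.
import tensorflow as tf
from tensorflow.keras import layers, Model

vocab_size, embed_dim, max_len, n_relations = 10000, 100, 50, 10

inp = layers.Input(shape=(max_len,))
x = layers.Embedding(vocab_size, embed_dim)(inp)
h = layers.Bidirectional(layers.LSTM(64, return_sequences=True))(x)

# Attention: score each time step, normalize, take the weighted sum.
scores = layers.Dense(1)(h)               # (batch, time, 1)
weights = layers.Softmax(axis=1)(scores)  # attention distribution
context = layers.Lambda(
    lambda t: tf.reduce_sum(t[0] * t[1], axis=1)
)([weights, h])                           # (batch, 128)

out = layers.Dense(n_relations, activation="softmax")(context)
model = Model(inp, out)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
model.summary()
```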

Evaluate sequence models in python

An important part of every machine learning project is the proper evaluation of the performance of the system. In this post I will show you how to evaluate sequence models with token-based labels. This way you can get a proper understanding of your sequence model's performance.
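For illustration, here is a minimal sketch using the seqeval package, a common choice for entity-level evaluation of token-labeled sequences (the post's exact tooling may differ):

```python
# A minimal sketch, assuming the seqeval package: entity-level scores
# computed from token-based BIO labels.
from seqeval.metrics import classification_report, f1_score

y_true = [["B-PER", "I-PER", "O", "B-LOC"], ["O", "B-ORG", "O"]]
y_pred = [["B-PER", "I-PER", "O", "O"],     ["O", "B-ORG", "O"]]

# seqeval scores complete entities, not individual tokens, so a
# partially recognized entity counts as an error.
print(f1_score(y_true, y_pred))
print(classification_report(y_true, y_pred))
```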

State-of-the-art named entity recognition with residual LSTM and ELMo

This is the sixth post in my series about named entity recognition. This time I'm going to show you some cutting-edge stuff. We will use a residual LSTM network together with ELMo embeddings, developed at Allen NLP. You will learn how to wrap a tensorflow hub pre-trained model to work with keras. The resulting model will give you state-of-the-art performance on the named entity recognition task.
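The wrapping trick can be sketched as follows. This is the TF1-era pattern (it needs tensorflow 1.x, tensorflow_hub, and a fixed batch size); the layer sizes and tag count are illustrative assumptions, not the post's exact values.

```python
# A minimal sketch: wrap the TF-Hub ELMo module in a keras Lambda layer
# and stack two bidirectional LSTMs with a residual connection.
import tensorflow as tf
import tensorflow_hub as hub
from keras.layers import Input, Lambda, Bidirectional, LSTM, Dense, add
from keras.models import Model

max_len, n_tags, batch_size = 50, 17, 32
elmo = hub.Module("https://tfhub.dev/google/elmo/2", trainable=True)

def elmo_embedding(x):
    # x holds padded token strings; the "tokens" signature returns a
    # (batch, max_len, 1024) tensor of contextual embeddings.
    return elmo(inputs={"tokens": x,
                        "sequence_len": tf.constant(batch_size * [max_len])},
                signature="tokens", as_dict=True)["elmo"]

inp = Input(shape=(max_len,), dtype=tf.string)
emb = Lambda(elmo_embedding, output_shape=(max_len, 1024))(inp)
x = Bidirectional(LSTM(512, return_sequences=True))(emb)
x2 = Bidirectional(LSTM(512, return_sequences=True))(x)
x = add([x, x2])  # residual connection between the two LSTM blocks
out = Dense(n_tags, activation="softmax")(x)

model = Model(inp, out)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
```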

Explain neural networks with keras and eli5

In this post I'm going to show you how you can use a neural network from keras with the LIME algorithm implemented in the eli5 TextExplainer class. For this we will write a scikit-learn compatible wrapper for a keras…
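The wrapper idea looks roughly like this. The class name and the tokenizer handling are my own illustration of one way to expose a predict_proba interface; the post's actual wrapper may differ.

```python
# A minimal sketch: adapt a binary keras text classifier to the
# predict_proba interface that eli5's TextExplainer probes.
import numpy as np
from eli5.lime import TextExplainer
from tensorflow.keras.preprocessing.sequence import pad_sequences

class KerasTextClassifierWrapper:
    """Hypothetical wrapper around a keras model plus its tokenizer."""

    def __init__(self, model, tokenizer, max_len):
        self.model, self.tokenizer, self.max_len = model, tokenizer, max_len

    def predict_proba(self, texts):
        seqs = self.tokenizer.texts_to_sequences(texts)
        X = pad_sequences(seqs, maxlen=self.max_len)
        p = self.model.predict(X)     # (n, 1) sigmoid outputs
        return np.hstack([1 - p, p])  # two-column probabilities

# Assuming model, tokenizer, and max_len from your own training code:
# wrapper = KerasTextClassifierWrapper(model, tokenizer, max_len)
# te = TextExplainer(random_state=42)
# te.fit("some document to explain", wrapper.predict_proba)
# te.show_prediction()
```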

Debugging black-box text classifiers with LIME

Often in text classification, we use so-called black-box classifiers. By black-box classifiers I mean a classification system where the internal workings are completely hidden from you. Famous examples are deep neural nets, in text classification often recurrent or…
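To make the idea concrete, here is a minimal sketch using the lime package with a scikit-learn pipeline standing in for the black box; the post's classifier and data are different.

```python
# A minimal sketch, assuming the lime package: explain one prediction
# of a black-box text classifier with a local surrogate model.
from lime.lime_text import LimeTextExplainer
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_texts = ["great movie", "terrible film", "loved it", "awful plot"]
train_labels = [1, 0, 1, 0]

# Any model exposing predict_proba works as the "black box" here.
black_box = make_pipeline(TfidfVectorizer(), LogisticRegression())
black_box.fit(train_texts, train_labels)

explainer = LimeTextExplainer(class_names=["negative", "positive"])
exp = explainer.explain_instance(
    "a great but awful movie", black_box.predict_proba, num_features=4
)
print(exp.as_list())  # (word, weight) pairs of the local explanation
```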

Enhancing LSTMs with character embeddings for named entity recognition

This is the fifth post in my series about named entity recognition with python. The last time we used a CRF-LSTM to model the sequence structure of our sentences. While this approach is straightforward and often yields strong results, it has some potential shortcomings. If we haven't seen a word at prediction time, we have to encode it as unknown and infer its meaning from its surrounding words. To capture character-level information, we will use character embeddings and an LSTM to encode every word to a vector. We can use basically anything that produces a single vector for the sequence of characters that represents a word.
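The architecture can be sketched like this. Layer sizes and vocabulary sizes are illustrative assumptions; the post's exact hyperparameters may differ.

```python
# A minimal sketch: a character-level LSTM produces one vector per
# word, which is concatenated with a word embedding before the
# sequence-level BiLSTM tagger.
from tensorflow.keras import layers, Model

max_len, max_len_char = 50, 15
n_words, n_chars, n_tags = 10000, 100, 17

word_in = layers.Input(shape=(max_len,))
word_emb = layers.Embedding(n_words, 50)(word_in)

char_in = layers.Input(shape=(max_len, max_len_char))
char_emb = layers.TimeDistributed(layers.Embedding(n_chars, 25))(char_in)
# One LSTM pass per word over its characters; the final state is the
# word's character-level representation.
char_vec = layers.TimeDistributed(layers.LSTM(25))(char_emb)

x = layers.concatenate([word_emb, char_vec])
x = layers.Bidirectional(layers.LSTM(100, return_sequences=True))(x)
out = layers.TimeDistributed(layers.Dense(n_tags, activation="softmax"))(x)

model = Model([word_in, char_in], out)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
```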

© 2019 Depends on the definition
