2019-06-08 | Tobias Sterbak


Interpretable named entity recognition with keras and LIME

In the previous posts, we saw how to build strong and versatile named entity recognition systems and how to properly evaluate them. But often you want to understand your model beyond the metrics. So in this tutorial I will show you how you can build an explainable and interpretable NER system with keras and the LIME algorithm.

What does explainable mean?

Deep neural networks are quite successful in many use cases, but these models can be hard to debug, and it is often unclear what is going on inside them. Our aim is to understand how much certain words influence the prediction of our named entity tagger. We want a human-understandable, qualitative explanation that enables an interpretation of the underlying algorithm.

Load the data

We use the dataset you already know from my previous posts about named entity recognition.

import pandas as pd
import numpy as np
from tqdm import tqdm, trange

data = pd.read_csv("ner_dataset.csv", encoding="latin1").fillna(method="ffill")
data.tail(10)

         Sentence #        Word       POS  Tag
1048565  Sentence: 47958   impact     NN   O
1048566  Sentence: 47958   .          .    O
1048567  Sentence: 47959   Indian     JJ   B-gpe
1048568  Sentence: 47959   forces     NNS  O
1048569  Sentence: 47959   said       VBD  O
1048570  Sentence: 47959   they       PRP  O
1048571  Sentence: 47959   responded  VBD  O
1048572  Sentence: 47959   to         TO   O
1048573  Sentence: 47959   the        DT   O
1048574  Sentence: 47959   attack     NN   O
words = list(set(data["Word"].values))
n_words = len(words); n_words
35178
tags = list(set(data["Tag"].values))
n_tags = len(tags); n_tags
17
class SentenceGetter(object):
    
    def __init__(self, data):
        self.n_sent = 1
        self.data = data
        self.empty = False
        agg_func = lambda s: [(w, p, t) for w, p, t in zip(s["Word"].values.tolist(),
                                                           s["POS"].values.tolist(),
                                                           s["Tag"].values.tolist())]
        self.grouped = self.data.groupby("Sentence #").apply(agg_func)
        self.sentences = [s for s in self.grouped]
    
    def get_next(self):
        try:
            s = self.grouped["Sentence: {}".format(self.n_sent)]
            self.n_sent += 1
            return s
        except KeyError:
            return None
getter = SentenceGetter(data)
sentences = getter.sentences

This is what the sentences in the dataset look like.

labels = [[s[2] for s in sent] for sent in sentences]
sentences = [" ".join([s[0] for s in sent]) for sent in sentences]
sentences[0]
'Thousands of demonstrators have marched through London to protest the war in Iraq and demand the withdrawal of British troops from that country .'

The sentences are annotated with the BIO scheme: B- marks the first token of an entity, I- marks a continuation, and O marks tokens outside of any entity. The labels look like this.

print(labels[0])
['O', 'O', 'O', 'O', 'O', 'O', 'B-geo', 'O', 'O', 'O', 'O', 'O', 'B-geo', 'O', 'O', 'O', 'O', 'O', 'B-gpe', 'O', 'O', 'O', 'O', 'O']

Preprocess the data

We first build a vocabulary of the 5000 most common words and map the rest to the “UNK” token.

from collections import Counter
from keras.preprocessing.sequence import pad_sequences

word_cnt = Counter(data["Word"].values)
vocabulary = set(w[0] for w in word_cnt.most_common(5000))
Using TensorFlow backend.

Now we create the word index and pad the sequences to a common length.

max_len = 50
word2idx = {"PAD": 0, "UNK": 1}
word2idx.update({w: i for i, w in enumerate(words) if w in vocabulary})
tag2idx = {t: i for i, t in enumerate(tags)}
X = [[word2idx.get(w, word2idx["UNK"]) for w in s.split()] for s in sentences]
X = pad_sequences(maxlen=max_len, sequences=X, padding="post", value=word2idx["PAD"])
y = [[tag2idx[l_i] for l_i in l] for l in labels]
y = pad_sequences(maxlen=max_len, sequences=y, padding="post", value=tag2idx["O"])
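
As a quick sanity check (the idx2word helper below is only for inspection, not part of the original pipeline), we can map one of the padded sequences back to words to see the “UNK” replacement and the “PAD” tokens in action.

# invert the word index just for inspection
idx2word = {i: w for w, i in word2idx.items()}
# rare words show up as "UNK", the padded tail of the sequence as "PAD"
print(" ".join(idx2word[i] for i in X[0]))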

Lastly, we split the data into a train and a test set.

from sklearn.model_selection import train_test_split

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.1, shuffle=False)

Now we are ready to build our model.

Set up the NER model

We use the simple LSTM model from this earlier post. But the procedure shown here applies to all kinds of sequence models.

from keras.models import Model, Input
from keras.layers import LSTM, Embedding, Dense, TimeDistributed, SpatialDropout1D, Bidirectional
word_input = Input(shape=(max_len,))
model = Embedding(input_dim=n_words + 2, output_dim=50, input_length=max_len)(word_input)  # + 2 for the reserved PAD/UNK indices
model = SpatialDropout1D(0.1)(model)
model = Bidirectional(LSTM(units=100, return_sequences=True, recurrent_dropout=0.1))(model)
out = TimeDistributed(Dense(n_tags, activation="softmax"))(model)
model = Model(word_input, out)
model.compile(optimizer="rmsprop",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
history = model.fit(X_tr, y_tr.reshape(*y_tr.shape, 1),
                    batch_size=32, epochs=5,
                    validation_split=0.1, verbose=1)
Train on 38846 samples, validate on 4317 samples
Epoch 1/5
38846/38846 [==============================] - 176s 5ms/step - loss: 0.1452 - acc: 0.9632 - val_loss: 0.0720 - val_acc: 0.9790
Epoch 2/5
38846/38846 [==============================] - 124s 3ms/step - loss: 0.0650 - acc: 0.9809 - val_loss: 0.0613 - val_acc: 0.9822
Epoch 3/5
38846/38846 [==============================] - 191s 5ms/step - loss: 0.0586 - acc: 0.9826 - val_loss: 0.0576 - val_acc: 0.9829
Epoch 4/5
38846/38846 [==============================] - 242s 6ms/step - loss: 0.0556 - acc: 0.9833 - val_loss: 0.0570 - val_acc: 0.9832
Epoch 5/5
38846/38846 [==============================] - 222s 6ms/step - loss: 0.0533 - acc: 0.9839 - val_loss: 0.0547 - val_acc: 0.9836
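
Before we move on to the explanations, it can't hurt to also check the tagger on the held-out test set. This is just a quick sketch; the exact numbers will of course vary between runs.

# token-level accuracy on the test split; for a proper entity-level evaluation
# see the earlier post on evaluating named entity recognition
test_loss, test_acc = model.evaluate(X_te, y_te.reshape(*y_te.shape, 1),
                                     batch_size=32, verbose=0)
print(f"test loss: {test_loss:.4f}, test accuracy: {test_acc:.4f}")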

Explain the predictions

To explain the predictions, we use the LIME algorithm implemented in the eli5 library. We assume you already know what the algorithm is doing; you can read more about it in this post.

from eli5.lime import TextExplainer
from eli5.lime.samplers import MaskingTextSampler

Now we create a small Python class that holds the preprocessing and prediction logic for our model. To apply LIME, we just need a function that makes predictions on texts. We use the closure pattern in get_predict_function, which returns a function that takes a list of texts, preprocesses them and returns the predictions of our previously trained model.

The trick

To make the LIME algorithm work for us, we need to rephrase our problem as a simple multiclass classification problem. We do this by selecting beforehand for which word we want to explain the prediction. This is done by passing the word_index to the get_predict_function method.

class NERExplainerGenerator(object):
    
    def __init__(self, model, word2idx, tag2idx, max_len):
        self.model = model
        self.word2idx = word2idx
        self.tag2idx = tag2idx
        self.idx2tag = {v: k for k,v in tag2idx.items()}
        self.max_len = max_len
        
    def _preprocess(self, texts):
        X = [[self.word2idx.get(w, self.word2idx["UNK"]) for w in t.split()]
             for t in texts]
        X = pad_sequences(maxlen=self.max_len, sequences=X,
                          padding="post", value=self.word2idx["PAD"])
        return X
    
    def get_predict_function(self, word_index):
        def predict_func(texts):
            X = self._preprocess(texts)
            p = self.model.predict(X)
            # return only the tag probabilities for the selected word position
            return p[:, word_index, :]
        return predict_func

Let’s have a look at some interesting samples, for example the 46781st text in our dataset.

index = 46781
label = labels[index]
text = sentences[index]
print(text)
print()
print(" ".join([f"{t} ({l})" for t, l in zip(text.split(), label)]))
Nigeria 's President Olusegun Obasanjo expressed his condolences , noting the late pontiff promoted religious tolerance and democracy in the West African nation .

Nigeria (B-geo) 's (O) President (B-per) Olusegun (I-per) Obasanjo (I-per) expressed (O) his (O) condolences (O) , (O) noting (O) the (O) late (O) pontiff (O) promoted (O) religious (O) tolerance (O) and (O) democracy (O) in (O) the (O) West (O) African (B-gpe) nation (O) . (O)
for i, w in enumerate(text.split()):
    print(f"{i}: {w}")
0: Nigeria
1: 's
2: President
3: Olusegun
4: Obasanjo
5: expressed
6: his
7: condolences
8: ,
9: noting
10: the
11: late
12: pontiff
13: promoted
14: religious
15: tolerance
16: and
17: democracy
18: in
19: the
20: West
21: African
22: nation
23: .

Now we can start explaining the predictions. We first initialize our generator object.

explainer_generator = NERExplainerGenerator(model, word2idx, tag2idx, max_len)

We want to explain the NER prediction for the word “Obasanjo”, so we pick word_index=4 and generate the respective prediction function.

word_index = 4
predict_func = explainer_generator.get_predict_function(word_index=word_index)
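
As a quick check (not strictly needed for LIME), we can call this function on the raw sentence and take the argmax of the returned distribution, which should give us the tag the model predicts for “Obasanjo”.

# predict_func returns one probability distribution over all tags per input text
probs = predict_func([text])
print(explainer_generator.idx2tag[np.argmax(probs[0])])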

Here we have to specify a sampler for the LIME algorithm. It controls how the algorithm generates perturbed versions of the text we want to explain. Read more about this in this article or the eli5 documentation.

sampler = MaskingTextSampler(
    replacement="UNK",
    max_replace=0.7,
    token_pattern=None,
    bow=False
)
samples, similarity = sampler.sample_near(text, n_samples=4)
print(samples)
("Nigeria 's President Olusegun Obasanjo expressed his UNK , noting the UNK pontiff promoted religious tolerance UNK democracy in the West UNK nation .", "Nigeria 'UNK UNK UNK UNK UNK UNK UNK , UNK the UNK pontiff promoted UNK UNK and UNK in UNK UNK UNK UNK .", "UNK 'UNK President Olusegun Obasanjo expressed UNK condolences , UNK the UNK pontiff UNK UNK tolerance UNK democracy in the UNK UNK nation .", "Nigeria 'UNK President UNK UNK UNK UNK condolences , noting the UNK pontiff promoted UNK UNK and democracy in UNK West African nation .")

Finally, we set up the TextExplainer and explain the prediction.

te = TextExplainer(
    sampler=sampler,
    position_dependent=True,
    random_state=42
)

te.fit(text, predict_func)

te.explain_prediction(
    target_names=list(explainer_generator.idx2tag.values()),
    top_targets=3
)

y=I-per (probability 0.963, score 3.679) top features

Contribution  Feature
+4.076        Highlighted in text (sum)
-0.398        <BIAS>

Nigeria 's President Olusegun Obasanjo expressed his condolences , noting the late pontiff promoted religious tolerance and democracy in the West African nation .

y=B-per (probability 0.014, score -4.229) top features

Contribution  Feature
-1.924        Highlighted in text (sum)
-2.306        <BIAS>

Nigeria 's President Olusegun Obasanjo expressed his condolences , noting the late pontiff promoted religious tolerance and democracy in the West African nation .

y=O (probability 0.007, score -4.970) top features

Contribution  Feature
-0.762        <BIAS>
-4.208        Highlighted in text (sum)

Nigeria 's President Olusegun Obasanjo expressed his condolences , noting the late pontiff promoted religious tolerance and democracy in the West African nation .

Very nice! As expected, the model predicted I-per for the later part of a person name. The word President is a strong indicator that the following word is part of a name. This suggests that, in the dataset, President is often part of the annotation of a person.
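
One more thing worth checking is how faithful the explanation is. The TextExplainer fits a simple white-box model on the perturbed samples, and eli5 reports how well this surrogate mimics our network; if the fit is poor, the explanation should be taken with a grain of salt.

# mean_KL_divergence should be close to 0 and score close to 1,
# otherwise the local surrogate did not approximate the tagger well
print(te.metrics_)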

In this article you learned a handy method to dig deeper into what your named entity recognition system does, how it interacts with your dataset, and which signals it picked up. I hope you found it useful and enjoyed it. See you next time.

