from itertools import chain
import nltk
from sklearn.metrics import classification_report, confusion_matrix
from sklearn.preprocessing import LabelBinarizer
import sklearn
import pycrfsuite
print(sklearn.__version__)
0.15-git
The CoNLL 2002 corpus is available in NLTK. We use the Spanish data.
nltk.corpus.conll2002.fileids()
['esp.testa', 'esp.testb', 'esp.train', 'ned.testa', 'ned.testb', 'ned.train']
%%time
train_sents = list(nltk.corpus.conll2002.iob_sents('esp.train'))
test_sents = list(nltk.corpus.conll2002.iob_sents('esp.testb'))
CPU times: user 3.13 s, sys: 108 ms, total: 3.24 s Wall time: 3.24 s
Data format:
train_sents[0]
[('Melbourne', 'NP', 'B-LOC'), ('(', 'Fpa', 'O'), ('Australia', 'NP', 'B-LOC'), (')', 'Fpt', 'O'), (',', 'Fc', 'O'), ('25', 'Z', 'O'), ('may', 'NC', 'O'), ('(', 'Fpa', 'O'), ('EFE', 'NC', 'B-ORG'), (')', 'Fpt', 'O'), ('.', 'Fp', 'O')]
Next, define some features. In this example we use word identity, word suffix, word shape and word POS tag; we also use some information from nearby words.
This makes a simple baseline, but you can certainly add and remove features to get (much?) better results - experiment with it.
def word2features(sent, i):
    word = sent[i][0]
    postag = sent[i][1]
    features = [
        'bias',
        'word.lower=' + word.lower(),
        'word[-3:]=' + word[-3:],
        'word[-2:]=' + word[-2:],
        'word.isupper=%s' % word.isupper(),
        'word.istitle=%s' % word.istitle(),
        'word.isdigit=%s' % word.isdigit(),
        'postag=' + postag,
        'postag[:2]=' + postag[:2],
    ]
    if i > 0:
        word1 = sent[i-1][0]
        postag1 = sent[i-1][1]
        features.extend([
            '-1:word.lower=' + word1.lower(),
            '-1:word.istitle=%s' % word1.istitle(),
            '-1:word.isupper=%s' % word1.isupper(),
            '-1:postag=' + postag1,
            '-1:postag[:2]=' + postag1[:2],
        ])
    else:
        features.append('BOS')

    if i < len(sent)-1:
        word1 = sent[i+1][0]
        postag1 = sent[i+1][1]
        features.extend([
            '+1:word.lower=' + word1.lower(),
            '+1:word.istitle=%s' % word1.istitle(),
            '+1:word.isupper=%s' % word1.isupper(),
            '+1:postag=' + postag1,
            '+1:postag[:2]=' + postag1[:2],
        ])
    else:
        features.append('EOS')

    return features

def sent2features(sent):
    return [word2features(sent, i) for i in range(len(sent))]

def sent2labels(sent):
    return [label for token, postag, label in sent]

def sent2tokens(sent):
    return [token for token, postag, label in sent]
This is what word2features extracts:
sent2features(train_sents[0])[0]
['bias', 'word.lower=melbourne', 'word[-3:]=rne', 'word[-2:]=ne', 'word.isupper=False', 'word.istitle=True', 'word.isdigit=False', 'postag=NP', 'postag[:2]=NP', 'BOS', '+1:word.lower=(', '+1:word.istitle=False', '+1:word.isupper=False', '+1:postag=Fpa', '+1:postag[:2]=Fp']
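One easy way to extend this baseline (a sketch, not part of the tutorial's feature set) is a coarse word-shape feature; the shape helper below is a hypothetical addition:
import re

def shape(word):
    # Map characters to a coarse shape, e.g. "Melbourne" -> "Xxxxxxxxx",
    # "EFE" -> "XXX", "25" -> "dd". A hypothetical helper, not in the baseline.
    s = re.sub(r'[A-ZÁÉÍÓÚÜÑ]', 'X', word)
    s = re.sub(r'[a-záéíóúüñ]', 'x', s)
    return re.sub(r'[0-9]', 'd', s)

# Inside word2features you could then add:
#     features.append('word.shape=' + shape(word))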
Extract the features from the data:
%%time
X_train = [sent2features(s) for s in train_sents]
y_train = [sent2labels(s) for s in train_sents]
X_test = [sent2features(s) for s in test_sents]
y_test = [sent2labels(s) for s in test_sents]
CPU times: user 2.41 s, sys: 230 ms, total: 2.65 s Wall time: 2.65 s
To train the model, we create a pycrfsuite.Trainer, load the training data, and call the 'train' method.
Create pycrfsuite.Trainer and load the training data to CRFsuite:
%%time
trainer = pycrfsuite.Trainer()
for xseq, yseq in zip(X_train, y_train):
    trainer.append(xseq, yseq)
CPU times: user 3.28 s, sys: 45 ms, total: 3.33 s Wall time: 3.33 s
Set the training parameters. We will use the L-BFGS training algorithm (the default) with Elastic Net (L1 + L2) regularization.
trainer.set('c1', 1.0) # coefficient for L1 penalty
trainer.set('c2', 1.0) # coefficient for L2 penalty
trainer.set('max_iterations', 100)
# include transitions that are possible, but not observed
trainer.set('feature.possible_transitions', True)
Possible parameters for the default training algorithm:
trainer.params()
['feature.minfreq', 'feature.possible_states', 'feature.possible_transitions', 'c1', 'c2', 'max_iterations', 'num_memories', 'epsilon', 'period', 'delta', 'linesearch', 'max_linesearch']
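Each parameter also has a help string, and other training algorithms expose their own parameter sets. A quick sketch, assuming the help method and the algorithm keyword of python-crfsuite's Trainer:
print(trainer.help('c1'))  # description of the L1 coefficient

sgd_trainer = pycrfsuite.Trainer(algorithm='l2sgd')  # SGD with L2 regularization
print(sgd_trainer.params())  # a different list of parameters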
Train the model:
%%time
trainer.train('conll2002-esp.crfsuite')
CPU times: user 27.6 s, sys: 49.2 ms, total: 27.6 s Wall time: 27.6 s
trainer.train saves the model to a file:
!ls -lh ./conll2002-esp.crfsuite
-rw-r--r-- 1 kmike staff 516K May 14 15:14 ./conll2002-esp.crfsuite
To use the trained model, create a pycrfsuite.Tagger, open the model and use the "tag" method:
tagger = pycrfsuite.Tagger()
tagger.open('conll2002-esp.crfsuite')
<contextlib.closing at 0x12facb9d0>
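The contextlib.closing return value suggests tagger.open can also be used as a context manager, so the model is closed automatically. A sketch, assuming the wrapper yields the tagger itself:
with pycrfsuite.Tagger().open('conll2002-esp.crfsuite') as t:
    print(t.tag(sent2features(test_sents[0])))
# the tagger (and the model file) is closed when the block exits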
Let's tag a sentence to see how it works:
example_sent = test_sents[0]
print(' '.join(sent2tokens(example_sent)), end='\n\n')
print("Predicted:", ' '.join(tagger.tag(sent2features(example_sent))))
print("Correct: ", ' '.join(sent2labels(example_sent)))
La Coruña , 23 may ( EFECOM ) .

Predicted: B-LOC I-LOC O O O O B-ORG O O
Correct:   B-LOC I-LOC O O O O B-ORG O O
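Beyond the single best sequence, python-crfsuite's Tagger also exposes probability and marginal methods; the usage below is a sketch based on that API:
xseq = sent2features(example_sent)
yseq = tagger.tag(xseq)  # tag() also primes the tagger for the calls below
print("P(predicted sequence) = %0.4f" % tagger.probability(yseq))
for i, (token, label) in enumerate(zip(sent2tokens(example_sent), yseq)):
    # marginal probability of the predicted label at each position
    print("%-10s %-6s %0.3f" % (token, label, tagger.marginal(label, i)))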
def bio_classification_report(y_true, y_pred):
    """
    Classification report for a list of BIO-encoded sequences.
    It computes token-level metrics and discards "O" labels.

    Note that it requires scikit-learn 0.15+ (or a version from github master)
    to calculate averages properly!
    """
    lb = LabelBinarizer()
    y_true_combined = lb.fit_transform(list(chain.from_iterable(y_true)))
    y_pred_combined = lb.transform(list(chain.from_iterable(y_pred)))

    tagset = set(lb.classes_) - {'O'}
    tagset = sorted(tagset, key=lambda tag: tag.split('-', 1)[::-1])
    class_indices = {cls: idx for idx, cls in enumerate(lb.classes_)}

    return classification_report(
        y_true_combined,
        y_pred_combined,
        labels=[class_indices[cls] for cls in tagset],
        target_names=tagset,
    )
Predict entity labels for all sentences in our testing set ('testb' Spanish data):
%%time
y_pred = [tagger.tag(xseq) for xseq in X_test]
CPU times: user 513 ms, sys: 1.96 ms, total: 515 ms Wall time: 514 ms
...and check the result. Note that this report is not comparable to the results in CoNLL 2002 papers, because here we check per-token results (not per-entity). Per-entity numbers will be worse.
print(bio_classification_report(y_test, y_pred))
             precision    recall  f1-score   support

      B-LOC       0.76      0.73      0.74      1084
      I-LOC       0.86      0.94      0.90       634
     B-MISC       0.67      0.42      0.51       339
     I-MISC       0.86      0.94      0.90       634
      B-ORG       0.80      0.87      0.83       735
      I-ORG       0.86      0.94      0.90       634
      B-PER       0.60      0.52      0.56       557
      I-PER       0.86      0.94      0.90       634

avg / total       0.79      0.81      0.80      5251
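To get a feel for per-entity numbers, here is a rough sketch (the bio_spans helper is hypothetical, not from the tutorial): convert each BIO sequence to a set of (type, start, end) spans and count exact matches:
def bio_spans(labels):
    # Convert a BIO-tagged sequence into (entity_type, start, end) spans.
    spans, etype, start = set(), None, None
    for i, label in enumerate(labels):
        if label == 'O' or label.startswith('B-') or (etype is not None and label[2:] != etype):
            if etype is not None:  # close the current entity
                spans.add((etype, start, i))
                etype = None
        if label.startswith('B-'):
            etype, start = label[2:], i  # open a new entity
        elif label.startswith('I-') and etype is None:
            etype, start = label[2:], i  # treat a stray I- as an entity start
    if etype is not None:
        spans.add((etype, start, len(labels)))
    return spans

true_spans = [bio_spans(seq) for seq in y_test]
pred_spans = [bio_spans(seq) for seq in y_pred]
tp = sum(len(t & p) for t, p in zip(true_spans, pred_spans))
print("entity-level precision: %0.2f" % (tp / float(sum(len(p) for p in pred_spans))))
print("entity-level recall:    %0.2f" % (tp / float(sum(len(t) for t in true_spans))))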
from collections import Counter
info = tagger.info()
def print_transitions(trans_features):
    for (label_from, label_to), weight in trans_features:
        print("%-6s -> %-7s %0.6f" % (label_from, label_to, weight))
print("Top likely transitions:")
print_transitions(Counter(info.transitions).most_common(15))
print("\nTop unlikely transitions:")
print_transitions(Counter(info.transitions).most_common()[-15:])
Top likely transitions:
B-ORG  -> I-ORG   6.841320
I-ORG  -> I-ORG   6.491844
B-PER  -> I-PER   6.359002
I-MISC -> I-MISC  5.250787
B-LOC  -> I-LOC   5.215655
B-MISC -> I-MISC  4.544261
I-PER  -> I-PER   4.285555
I-LOC  -> I-LOC   4.060816
O      -> O       1.718951
B-LOC  -> B-LOC   1.231519
O      -> B-ORG   1.018327
O      -> B-MISC  0.966135
O      -> B-LOC   0.745272
B-ORG  -> B-LOC   0.732438
I-PER  -> B-LOC   0.662843

Top unlikely transitions:
B-MISC -> O       -1.117662
B-MISC -> I-LOC   -1.143083
I-PER  -> I-MISC  -1.206864
B-MISC -> B-MISC  -1.228871
I-LOC  -> B-PER   -1.228928
B-PER  -> B-PER   -1.262047
I-PER  -> B-PER   -1.283181
I-ORG  -> I-LOC   -1.317342
I-MISC -> I-LOC   -1.396608
I-PER  -> I-LOC   -1.457825
I-PER  -> B-ORG   -1.515254
O      -> I-ORG   -4.431892
O      -> I-PER   -4.734070
O      -> I-MISC  -5.151976
O      -> I-LOC   -5.495769
We can see that, for example, it is very likely that the beginning of an organization name (B-ORG) will be followed by a token inside an organization name (I-ORG), but transitions to I-ORG from tokens with other labels are penalized. Also note the I-PER -> B-LOC transition: its positive weight means the model thinks a person name is often followed by a location.
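Since info.transitions is a plain dict keyed by (label_from, label_to) pairs, individual weights can also be looked up directly:
print(info.transitions[('B-ORG', 'I-ORG')])  # strongly encouraged
print(info.transitions[('O', 'I-ORG')])      # strongly penalized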
Check the state features:
def print_state_features(state_features):
    for (attr, label), weight in state_features:
        print("%0.6f %-6s %s" % (weight, label, attr))
print("Top positive:")
print_state_features(Counter(info.state_features).most_common(20))
print("\nTop negative:")
print_state_features(Counter(info.state_features).most_common()[-20:])
Top positive:
4.202838 B-ORG  word.lower=psoe-progresistas
3.892675 O      word.istitle=False
3.276510 B-ORG  word.lower=efe-cantabria
3.035500 B-PER  -1:word.lower=según
2.962076 O      BOS
2.785710 B-ORG  word.lower=telefónica
2.693288 B-LOC  -1:word.lower=cantabria
2.622198 B-MISC word.isupper=True
2.590672 B-ORG  word.lower=efe
2.582553 B-ORG  word[-2:]=iU
2.582553 B-ORG  word[-3:]=CiU
2.566220 B-ORG  word.lower=ciu
2.500637 O      word.lower=a
2.435766 B-ORG  word[-2:]=-e
2.413508 O      postag[:2]=Fp
2.277078 B-MISC word.lower=justicia
2.242137 B-LOC  -1:word.lower=nuboso
2.177250 B-MISC word.lower=internet
2.123603 O      word[-3:]=Día
2.104597 B-LOC  word.lower=líbano

Top negative:
-1.270007 I-ORG  -1:word.isupper=True
-1.272688 O      -1:word.isupper=False
-1.282985 B-ORG  word[-2:]=ro
-1.286593 B-ORG  postag[:2]=SP
-1.286593 B-ORG  postag=SP
-1.315087 I-LOC  BOS
-1.370016 B-LOC  word[-3:]=ión
-1.371311 B-PER  word[-2:]=ón
-1.397264 B-PER  word.istitle=False
-1.546525 O      word[-2:]=nd
-1.595949 B-PER  word[-2:]=os
-1.689601 B-MISC -1:word.isupper=True
-1.749930 B-MISC word.istitle=False
-1.946208 I-PER  -1:word.lower=san
-2.149653 O      postag[:2]=NP
-2.149653 O      postag=NP
-2.598432 O      word[-2:]=om
-2.653626 B-PER  -1:word.lower=del
-3.025112 O      word.istitle=True
-4.539458 O      word.isupper=True
As we can see, the model memorized the names of some entities (it may be overfitting). It also learned that UPPERCASED or TitleCased words are likely entities of some kind, and that proper nouns (the NP tag) are often entities.
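If the memorized lexical features bother you, one thing to try (a sketch, with hypothetical parameter values) is stronger L1 regularization plus a feature frequency cutoff; both parameters appear in trainer.params() above:
sparse_trainer = pycrfsuite.Trainer()
for xseq, yseq in zip(X_train, y_train):
    sparse_trainer.append(xseq, yseq)
sparse_trainer.set('c1', 10.0)            # stronger L1 penalty prunes more features
sparse_trainer.set('feature.minfreq', 2)  # drop features seen only once
sparse_trainer.train('conll2002-esp-sparse.crfsuite')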
The model in this notebook is just a starting point; you certainly can do better!