I'm doing some work with named-entity recognition and chunkers in NLTK. I retrained a classifier for this using nltk/chunk/named_entity.py, and got the following measurements:

ChunkParse score:
    IOB Accuracy:  96.5%
    Precision:     78.0%
    Recall:        91.9%
    F-Measure:     84.4%

But I don't understand what the exact difference between IOB Accuracy and Precision is in this context. In fact, I found the following specific example in the documentation (here):

The IOB tag accuracy indicates that more than a third of the words are tagged with O, i.e. not in an NP chunk. However, since our tagger did not find any chunks, its precision, recall, and f-measure are all zero.

So, if IOB accuracy is just a matter of counting O tags, why, in that example where we have no chunks at all, isn't the IOB accuracy 100%?

Thanks in advance

1 Answer

There is a very detailed explanation of the difference between precision and accuracy on Wikipedia (see https://en.wikipedia.org/wiki/Accuracy_and_precision); in short:

accuracy = (tp + tn) / (tp + tn + fp + fn)
precision = tp / (tp + fp)
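
To see how those two formulas diverge, here is a minimal sketch (the confusion counts are made up for illustration) showing that accuracy is dominated by true negatives while precision ignores them entirely:

```python
# Hypothetical confusion counts, for illustration only.
tp, tn, fp, fn = 3, 90, 5, 2

accuracy = (tp + tn) / (tp + tn + fp + fn)   # dominated by the large tn
precision = tp / (tp + fp)                   # tn plays no role at all

print(accuracy)   # 0.93
print(precision)  # 0.375
```

A system can therefore look excellent on accuracy and poor on precision at the same time, which is exactly the pattern in the scores above.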

Coming back to NLTK, there is a module called ChunkScore that computes the accuracy, precision, and recall of your system. Here is the interesting part about how NLTK computes tp, fp, tn, fn for accuracy and for precision: it does so at different granularities.

For accuracy, NLTK counts the total number of tokens (NOT CHUNKS!!) that are guessed correctly with their POS tags and IOB tags, then divides by the total number of tokens in the gold sentence:

accuracy = num_tokens_correct / total_num_tokens_from_gold
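
This token-level view also explains the example from the question: a tagger that marks every word O still gets credit for every gold token that really is O. A minimal sketch (the tag sequences are made up for illustration):

```python
# Gold IOB tags for a hypothetical 9-token sentence.
gold    = ['B-NP', 'I-NP', 'O', 'O', 'B-NP', 'I-NP', 'O', 'O', 'O']
# A degenerate chunker that never finds any chunk.
guessed = ['O'] * len(gold)

# Token-level IOB accuracy: every gold O token counts as correct.
correct = sum(g == p for g, p in zip(gold, guessed))
print(correct, '/', len(gold))  # 5 / 9, i.e. about 56% IOB accuracy,
                                # yet tp = 0 at the chunk level, so
                                # precision, recall and F-measure are all 0.
```

So IOB accuracy is not "just the number of O tags": it is the fraction of tokens whose tags match the gold tags, and it is below 100% exactly because the chunked tokens (B-NP, I-NP) were all mis-tagged as O.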

For precision and recall, NLTK computes:

  • True Positives by counting the number of chunks (NOT TOKENS!!!) that were guessed correctly.
  • False Positives by counting the number of chunks (NOT TOKENS!!!) that were guessed but are wrong.
  • False Negatives by counting the number of chunks (NOT TOKENS!!!) that the system failed to guess.

Then precision and recall are computed as:

precision = tp / (fp + tp)
recall = tp / (fn + tp)

To demonstrate the points above, try this script:

from nltk.chunk import *
from nltk.chunk.util import *
from nltk.chunk.regexp import *
from nltk import Tree
from nltk.tag import pos_tag

# Let's say we give it a rule that says anything with a [DT NN] is an NP.
chunk_rule = ChunkRule("<DT>?<NN.*>", "DT+NN* or NN* chunk")
# Note: older NLTK versions spelled this keyword `chunk_node`.
chunk_parser = RegexpChunkParser([chunk_rule], chunk_label='NP')

# Let's say our test sentence is:
# "The cat sat on the mat the big dog chewed."
gold = tagstr2tree("[ The/DT cat/NN ] sat/VBD on/IN [ the/DT mat/NN ] [ the/DT big/JJ dog/NN ] chewed/VBD ./.")

# We POS tag the sentence and then chunk it with our rule-based chunker.
test = pos_tag('The cat sat on the mat the big dog chewed .'.split())
chunked = chunk_parser.parse(test)

# Then we calculate the score.
chunkscore = ChunkScore()
chunkscore.score(gold, chunked)
chunkscore._updateMeasures()

# Our rule-based chunker says these are chunks.
print(chunkscore.guessed())

# Total number of tokens from the test sentence, i.e.
# The/DT , cat/NN , sat/VBD , on/IN , the/DT , mat/NN ,
# the/DT , big/JJ , dog/NN , chewed/VBD , ./.
total = chunkscore._tags_total
# Number of tokens that are guessed correctly, i.e.
# The/DT , cat/NN , on/IN , the/DT , mat/NN , chewed/VBD , ./.
correct = chunkscore._tags_correct
print("Is correct/total == accuracy ?", chunkscore.accuracy() == (correct / total))
print(correct, '/', total, '=', chunkscore.accuracy())
print("##############")

print("Correct chunk(s):")  # i.e. True Positives.
correct_chunks = set(chunkscore.correct()).intersection(set(chunkscore.guessed()))
##print(correct_chunks)
print("Number of correct chunks = tp =", len(correct_chunks))
assert len(correct_chunks) == chunkscore._tp_num
print()

print("Missed chunk(s):")  # i.e. False Negatives.
##print(chunkscore.missed())
print("Number of missed chunks = fn =", len(chunkscore.missed()))
assert len(chunkscore.missed()) == chunkscore._fn_num
print()

print("Wrongly guessed chunk(s):")  # i.e. False Positives.
wrong_chunks = set(chunkscore.guessed()).difference(set(chunkscore.correct()))
##print(wrong_chunks)
print("Number of wrong chunks = fp =", len(wrong_chunks))
assert len(wrong_chunks) == chunkscore._fp_num
print()

print("Recall =", "tp/(fn+tp) =", len(correct_chunks), '/', len(correct_chunks) + len(chunkscore.missed()), '=', chunkscore.recall())

print("Precision =", "tp/(fp+tp) =", len(correct_chunks), '/', len(correct_chunks) + len(wrong_chunks), '=', chunkscore.precision())
Answered 2013-12-02T16:05:13.640