For Beginners | NLTK, a Must-Know Toolkit


This article introduces how to use NLTK, a toolkit that has been called "a wonderful tool for teaching, and working in, computational linguistics using Python".



Introduction


NLTK has been called "a wonderful tool for teaching, and working in, computational linguistics using Python". It provides easy-to-use interfaces to more than 50 corpora and lexical resources such as WordNet, along with a suite of text-processing libraries for classification, tokenization, stemming, tagging, parsing, and semantic reasoning. Let's learn it hands-on!


Official site: http://www.nltk.org/

GitHub: https://github.com/nltk/nltk
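As a quick taste of those lexical resources, here is a minimal WordNet lookup (a sketch; it assumes the data has been fetched with nltk.download('wordnet')):

from nltk.corpus import wordnet as wn
# Look up the synsets (sense groupings) for a word
print(wn.synsets('dog')[:3])
# e.g. [Synset('dog.n.01'), Synset('frog.n.01'), Synset('cad.n.01')]
# Print the dictionary gloss of the first noun sense
print(wn.synset('dog.n.01').definition())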



Hands-On


1. Tokenization


# Install: pip install nltk
import nltk
sentence = 'I love natural language processing!'
tokens = nltk.word_tokenize(sentence)
print(tokens)

['I', 'love', 'natural', 'language', 'processing', '!']
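word_tokenize relies on the Punkt tokenizer models, so if it raises a LookupError, run nltk.download('punkt') first. Sentence splitting works the same way; a minimal sketch:

from nltk.tokenize import sent_tokenize
text = 'I love NLP. It is fun!'
# Split the text into sentences
print(sent_tokenize(text))

['I love NLP.', 'It is fun!']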


2. Part-of-Speech Tagging


tagged = nltk.pos_tag(tokens)
print(tagged)

[('I', 'PRP'), ('love', 'VBP'), ('natural', 'JJ'), ('language', 'NN'), ('processing', 'NN'), ('!', '.')]
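The tags follow the Penn Treebank tagset, and NLTK can describe any of them (a sketch; it assumes the tag documentation has been fetched with nltk.download('tagsets')):

# Print the description and example words for a tag;
# expected to print a line like: VBP: verb, present tense, not 3rd person singular
nltk.help.upenn_tagset('VBP')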


3. Named Entity Recognition


# Download the required models:
nltk.download('maxent_ne_chunker')
[nltk_data] Downloading package maxent_ne_chunker to
[nltk_data]     C:\Users\yuquanle\AppData\Roaming\nltk_data...
[nltk_data]   Unzipping chunkers\maxent_ne_chunker.zip.
True

nltk.download('words')
[nltk_data] Downloading package words to
[nltk_data]     C:\Users\yuquanle\AppData\Roaming\nltk_data...
[nltk_data]   Unzipping corpora\words.zip.
True

entities = nltk.chunk.ne_chunk(tagged)
print(entities)

(S I/PRP love/VBP natural/JJ language/NN processing/NN !/.)
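The sample sentence contains no named entities, which is why the tree above is flat. A sketch with a sentence that does contain some (a hypothetical example; the exact labels depend on the chunker model):

sentence2 = 'Mark works at Google in California.'
entities2 = nltk.chunk.ne_chunk(nltk.pos_tag(nltk.word_tokenize(sentence2)))
print(entities2)
# Expect labeled subtrees inside the S tree, e.g. (PERSON Mark/NNP),
# (ORGANIZATION Google/NNP), and (GPE California/NNP)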


4. Downloading Corpora


# Example: download the Brown corpus
# More corpora: http://www.nltk.org/howto/corpus.html
nltk.download('brown')
[nltk_data] Downloading package brown to
[nltk_data]     C:\Users\yuquanle\AppData\Roaming\nltk_data...
[nltk_data]   Package brown is already up-to-date!
True

from nltk.corpus import brown
brown.words()

['The', 'Fulton', 'County', 'Grand', 'Jury', 'said', ...]
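The Brown corpus is organized into categories, and the corpus reader can filter on them:

# List the first few categories, then read words from just one of them
print(brown.categories()[:5])
print(brown.words(categories='news')[:5])

['adventure', 'belles_lettres', 'editorial', 'fiction', 'government']
['The', 'Fulton', 'County', 'Grand', 'Jury']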


5. Metrics


# Compute precision, recall, and F-measure over sets
from nltk.metrics import precision, recall, f_measure
reference = 'DET NN VB DET JJ NN NN IN DET NN'.split()
test = 'DET VB VB DET NN NN NN IN DET NN'.split()
reference_set = set(reference)
test_set = set(test)
print("precision:" + str(precision(reference_set, test_set)))
print("recall:" + str(recall(reference_set, test_set)))
print("f_measure:" + str(f_measure(reference_set,
test_set)))

precision:1.0
recall:0.8
f_measure:0.8888888888888888
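Because the inputs are converted to sets, positions and duplicate tags are ignored: precision is |reference ∩ test| / |test| = 4/4 = 1.0, and recall is |reference ∩ test| / |reference| = 4/5 = 0.8. For position-by-position agreement over the original lists, NLTK also provides accuracy; a sketch:

from nltk.metrics import accuracy
# Fraction of aligned positions where the two tag sequences agree (8 of 10 here)
print(accuracy(reference, test))

0.8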


6. Stemming (Stemmers)


# Porter stemmer
from nltk.stem.porter import *
# Create a stemmer
stemmer = PorterStemmer()
plurals = ['caresses', 'flies', 'dies', 'mules', 'denied']
singles = [stemmer.stem(plural) for plural in plurals]
print(' '.join(singles))

caress fli die mule deni

# Snowball stemmer
from nltk.stem.snowball import SnowballStemmer
print(" ".join(SnowballStemmer.languages))
arabic danish dutch english finnish french german hungarian italian norwegian porter portuguese romanian russian spanish swedish
# 指定语言
stemmer = SnowballStemmer("english")
print(stemmer.stem("running"))

run
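Stemming just strips suffixes, which is why it can yield non-words such as "fli" and "deni". When dictionary forms are needed, WordNet lemmatization is an alternative (a sketch; assumes nltk.download('wordnet')):

from nltk.stem import WordNetLemmatizer
lemmatizer = WordNetLemmatizer()
# Lemmatize as verbs to recover the dictionary form
print(lemmatizer.lemmatize('flies', pos='v'))
print(lemmatizer.lemmatize('denied', pos='v'))

fly
deny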


7. SentiWordNet Interface


# Download the SentiWordNet lexicon
import nltk
nltk.download('sentiwordnet')
[nltk_data] Downloading package sentiwordnet to
[nltk_data]     C:\Users\yuquanle\AppData\Roaming\nltk_data...
[nltk_data]   Unzipping corpora\sentiwordnet.zip.
True

# SentiSynsets: sentiment scores for synsets
from nltk.corpus import sentiwordnet as swn
breakdown = swn.senti_synset('breakdown.n.03')
print(breakdown)
print(breakdown.pos_score())
print(breakdown.neg_score())
print(breakdown.obj_score())

<breakdown.n.03: PosScore=0.0 NegScore=0.25>
0.0
0.25
0.75

# Lookup
print(list(swn.senti_synsets('slow')))

[SentiSynset('decelerate.v.01'), SentiSynset('slow.v.02'), SentiSynset('slow.v.03'), SentiSynset('slow.a.01'), SentiSynset('slow.a.02'), SentiSynset('dense.s.04'), SentiSynset('slow.a.04'), SentiSynset('boring.s.01'), SentiSynset('dull.s.08'), SentiSynset('slowly.r.01'), SentiSynset('behind.r.03')]

happy = swn.senti_synsets('happy', 'a')
print(list(happy))

[SentiSynset('happy.a.01'), SentiSynset('felicitous.s.02'), SentiSynset('glad.s.02'), SentiSynset('happy.s.04')]
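These per-sense scores can be combined into a crude word-level polarity, for example by averaging the positive minus negative score over all senses (a naive sketch, not an NLTK built-in; it ignores context and sense frequency):

def avg_polarity(word):
    # Average (positive - negative) score across every sense of the word
    senses = list(swn.senti_synsets(word))
    if not senses:
        return 0.0
    return sum(s.pos_score() - s.neg_score() for s in senses) / len(senses)

# A positive value is expected, since most senses of 'happy' are positive
print(avg_polarity('happy'))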


More usage examples: http://www.nltk.org/howto/index.html


The code has been uploaded to:

https://github.com/yuquanle/StudyForNLP/blob/master/NLPtools/NLTKDemo.ipynb


