Beginners | Don't Say You Can't Use Stanfordcorenlp


This article describes how to use the stanfordcorenlp toolkit. Stanford CoreNLP provides a suite of human language technology tools supporting many core natural language processing tasks, and stanfordcorenlp is a Python wrapper for it.



Introduction




Its main features include word tokenization, part-of-speech tagging, named entity recognition, constituency parsing, dependency parsing, and more.


GitHub repository: https://github.com/stanfordnlp/CoreNLP



Hands-On


1. Installation


# Install: pip install stanfordcorenlp
# Download the models first from: https://nlp.stanford.edu/software/corenlp-backup-download.html
# Multiple languages are supported; below are the Chinese and English setups
from stanfordcorenlp import StanfordCoreNLP

zh_model = StanfordCoreNLP(r'stanford-corenlp-full-2018-02-27', lang='zh')
en_model = StanfordCoreNLP(r'stanford-corenlp-full-2018-02-27', lang='en')

zh_sentence = '我爱自然语言处理技术!'
en_sentence = 'I love natural language processing technology!'


2. Tokenization


print('Tokenize:', zh_model.word_tokenize(zh_sentence))
print('Tokenize:', en_model.word_tokenize(en_sentence))

Tokenize: ['我爱', '自然', '语言', '处理', '技术', '!']
Tokenize: ['I', 'love', 'natural', 'language', 'processing', 'technology', '!']


3. Part-of-Speech Tagging


print('Part of Speech:', zh_model.pos_tag(zh_sentence))
print('Part of Speech:', en_model.pos_tag(en_sentence))

Part of Speech: [('我爱', 'NN'), ('自然', 'AD'), ('语言', 'NN'), ('处理', 'VV'), ('技术', 'NN'), ('!', 'PU')]
Part of Speech: [('I', 'PRP'), ('love', 'VBP'), ('natural', 'JJ'), ('language', 'NN'), ('processing', 'NN'), ('technology', 'NN'), ('!', '.')]
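The English tags above follow the Penn Treebank tag set and the Chinese ones the Penn Chinese Treebank. For downstream use it is often convenient to collapse fine-grained tags into coarse classes; a minimal sketch, with an illustrative (deliberately incomplete) mapping:

```python
# Map fine-grained POS tags to coarse classes (illustrative subset only;
# a real mapping would cover the full tag sets).
COARSE = {
    'PRP': 'PRON', 'VBP': 'VERB', 'VV': 'VERB',
    'JJ': 'ADJ', 'NN': 'NOUN', 'AD': 'ADV',
    '.': 'PUNCT', 'PU': 'PUNCT',
}

def coarsen(tagged):
    """Replace each fine-grained tag with its coarse class ('X' if unknown)."""
    return [(word, COARSE.get(tag, 'X')) for word, tag in tagged]

# The English pos_tag output from above
tagged_en = [('I', 'PRP'), ('love', 'VBP'), ('natural', 'JJ'),
             ('language', 'NN'), ('processing', 'NN'),
             ('technology', 'NN'), ('!', '.')]
print(coarsen(tagged_en))
# → [('I', 'PRON'), ('love', 'VERB'), ('natural', 'ADJ'), ('language', 'NOUN'),
#    ('processing', 'NOUN'), ('technology', 'NOUN'), ('!', 'PUNCT')]
```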


4. Named Entity Recognition


print('Named Entities:', zh_model.ner(zh_sentence))
print('Named Entities:', en_model.ner(en_sentence))

Named Entities: [('我爱', 'O'), ('自然', 'O'), ('语言', 'O'), ('处理', 'O'), ('技术', 'O'), ('!', 'O')]
Named Entities: [('I', 'O'), ('love', 'O'), ('natural', 'O'), ('language', 'O'), ('processing', 'O'), ('technology', 'O'), ('!', 'O')]
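Our sample sentences contain no named entities, so every token is labeled 'O'. For sentences that do, `ner()` returns one `(token, label)` pair per token, and consecutive tokens sharing a non-'O' label can be merged into entity spans. A minimal sketch over hypothetical model output (PERSON and ORGANIZATION are standard CoreNLP labels, but this particular result is illustrative, not produced by the sentences above):

```python
from itertools import groupby

def entity_spans(tagged):
    """Merge runs of consecutive tokens sharing a non-'O' label into spans."""
    spans = []
    for label, group in groupby(tagged, key=lambda pair: pair[1]):
        if label != 'O':
            spans.append((' '.join(tok for tok, _ in group), label))
    return spans

# Hypothetical ner() output for an entity-bearing sentence
sample = [('Barack', 'PERSON'), ('Obama', 'PERSON'), ('visited', 'O'),
          ('Stanford', 'ORGANIZATION'), ('.', 'O')]
print(entity_spans(sample))
# → [('Barack Obama', 'PERSON'), ('Stanford', 'ORGANIZATION')]
```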


5. Constituency Parsing


print('Constituency Parsing:', zh_model.parse(zh_sentence) + "\n")
print('Constituency Parsing:', en_model.parse(en_sentence))

Constituency Parsing: (ROOT
 (IP
   (IP
     (NP (NN 我爱))
     (ADVP (AD 自然))
     (NP (NN 语言))
     (VP (VV 处理)
       (NP (NN 技术))))
   (PU !)))

Constituency Parsing: (ROOT
 (S
   (NP (PRP I))
   (VP (VBP love)
     (NP (JJ natural) (NN language) (NN processing) (NN technology)))
   (. !)))
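`parse()` returns the tree as a bracketed string. To traverse it programmatically you can read it into nested lists with a small s-expression parser; a minimal sketch:

```python
def read_tree(s):
    """Parse a bracketed parse string into nested lists, e.g.
    '(ROOT (NP (PRP I)))' -> ['ROOT', ['NP', ['PRP', 'I']]]."""
    tokens = s.replace('(', ' ( ').replace(')', ' ) ').split()

    def helper(it):
        node = []
        for tok in it:
            if tok == '(':
                node.append(helper(it))  # descend into a subtree
            elif tok == ')':
                return node              # close the current subtree
            else:
                node.append(tok)         # label or leaf token
        return node

    return helper(iter(tokens))[0]

tree = read_tree('(ROOT (S (NP (PRP I)) (VP (VBP love))))')
print(tree)
# → ['ROOT', ['S', ['NP', ['PRP', 'I']], ['VP', ['VBP', 'love']]]]
```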


6. Dependency Parsing


print('Dependency:', zh_model.dependency_parse(zh_sentence))
print('Dependency:', en_model.dependency_parse(en_sentence))

Dependency: [('ROOT', 0, 4), ('nsubj', 4, 1), ('advmod', 4, 2), ('nsubj', 4, 3), ('dobj', 4, 5), ('punct', 4, 6)]
Dependency: [('ROOT', 0, 2), ('nsubj', 2, 1), ('amod', 6, 3), ('compound', 6, 4), ('compound', 6, 5), ('dobj', 2, 6), ('punct', 2, 7)]
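Each triple is (relation, head_index, dependent_index), with 1-based token indices and 0 standing for the artificial ROOT node. Combining the triples with the token list from `word_tokenize` makes them human-readable; a minimal sketch using the English output above:

```python
def label_dependencies(tokens, triples):
    """Turn (relation, head_idx, dep_idx) triples (1-based, 0 = ROOT)
    into readable (relation, head_word, dependent_word) tuples."""
    words = ['ROOT'] + tokens  # index 0 is the artificial root
    return [(rel, words[head], words[dep]) for rel, head, dep in triples]

# Tokens and dependency triples from the English example above
tokens = ['I', 'love', 'natural', 'language', 'processing', 'technology', '!']
triples = [('ROOT', 0, 2), ('nsubj', 2, 1), ('amod', 6, 3),
           ('compound', 6, 4), ('compound', 6, 5), ('dobj', 2, 6),
           ('punct', 2, 7)]
print(label_dependencies(tokens, triples))
# → [('ROOT', 'ROOT', 'love'), ('nsubj', 'love', 'I'),
#    ('amod', 'technology', 'natural'), ('compound', 'technology', 'language'),
#    ('compound', 'technology', 'processing'), ('dobj', 'love', 'technology'),
#    ('punct', 'love', '!')]
```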


The full notebook is available at:

https://github.com/yuquanle/StudyForNLP/blob/master/NLPtools/StanfordcorenlpDemo.ipynb

The End


▼Previous highlights▼
Beginners | Master HanLP in One Article
Beginners | Learn How to Use Jieba in One Article
Beginners | Master SnowNLP Today
