
Using Python for Chinese word segmentation

高洛峰
Release: 2016-10-18 09:18:34

At present, the word segmentation tools I use most often are jieba (结巴, literally "stutterer") and NLPIR, among others.

I have been using jieba recently and can recommend it; it is quite easy to use.

1. Introduction to jieba word segmentation

jieba performs Chinese word segmentation based on three basic implementation principles:

Efficient word-graph scanning based on a Trie (prefix tree) structure, generating a directed acyclic graph (DAG) of all possible word formations of the Chinese characters in the sentence

Dynamic programming to find the maximum-probability path, i.e. the segmentation combination with the greatest word-frequency-based probability (a toy sketch of this idea follows the list below)

For unregistered (out-of-vocabulary) words, an HMM model based on the word-forming capability of Chinese characters, decoded with the Viterbi algorithm
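To make the first two principles concrete, here is a toy sketch (not jieba's actual code; the dictionary and probabilities below are invented): build a DAG of candidate dictionary words over the sentence, then use dynamic programming from right to left to pick the maximum-probability segmentation.

# -*- coding: utf-8 -*-
# Toy illustration of principles 1 and 2 (NOT jieba's real code).
from math import log

sentence = u"我来到北京清华大学"
# hypothetical dictionary: word -> probability (made-up numbers)
dic = {u"我": 0.05, u"来": 0.02, u"来到": 0.01, u"到": 0.03,
       u"北京": 0.02, u"清华": 0.005, u"大学": 0.01,
       u"清华大学": 0.003}

n = len(sentence)
# DAG: start index -> list of end indices of dictionary words
dag = {}
for i in range(n):
    ends = [j for j in range(i + 1, n + 1) if sentence[i:j] in dic]
    dag[i] = ends or [i + 1]   # unknown single char: cut it alone

# route[i] = (best log-probability of sentence[i:], end of first word)
route = {n: (0.0, n)}
for i in range(n - 1, -1, -1):
    route[i] = max((log(dic.get(sentence[i:j], 1e-6)) + route[j][0], j)
                   for j in dag[i])

# walk the best path from the left to read off the segmentation
i, words = 0, []
while i < n:
    j = route[i][1]
    words.append(sentence[i:j])
    i = j
print '/ '.join(words)   # 我/ 来到/ 北京/ 清华大学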

2. Installation and use (Linux)

1. Download the package, unzip it, enter the directory, and run: python setup.py install

Hints: a. It is a good habit to read the readme of downloaded software first and only then start operating. (If you skip the readme and go straight to trial-and-error plus Baidu, you will take many detours.)

  b. When running the installation command, you may hit an error: no permission! (Some people encounter this because of insufficient privileges. Execute: sudo !!, where "!!" expands to the previous command, here the installation command above.) After using sudo it runs normally.
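After installing, a quick sanity check (a minimal sketch; the sample sentence is arbitrary) is to import jieba and segment a short string:

# -*- coding: utf-8 -*-
import jieba

# If the installation succeeded, jieba loads its dictionary on first
# use and this prints a segmented sentence.
print '/ '.join(jieba.cut(u"今天天气不错"))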


2. When using jieba for word segmentation, the function you will certainly use is jieba.cut(arg1, arg2). You only need to understand the following three points to use it.

 a. The cut method accepts two input parameters: the first (arg1) is the string to be segmented, and the second (arg2) controls the segmentation mode.

  Segmentation modes fall into two categories: the default mode, which attempts to cut the sentence into the most accurate segmentation and is suitable for text analysis; and the full mode, which scans out all the words in the sentence that can form words and is suitable for search engines.

b. The string to be segmented can be a gbk string, a utf-8 string, or a unicode object.

People who use Python should pay attention to encoding issues. Python 2 processes characters based on ASCII by default; when non-ASCII characters appear (such as Chinese characters in the source code), an error is raised: "ASCII codec can't encode character". The solution is to add a declaration at the top of the file: # -*- coding: utf-8 -*-, which tells the Python interpreter: "my file is encoded in utf-8; when you decode it, please use utf-8". (Remember: this declaration must be at the top of the file; if it is not at the top, the encoding problem remains and is not solved.) Regarding encoding conversion, you can refer to other blog posts. (ps: note that "import sys; reload(sys); sys.setdefaultencoding('utf-8')" is not quite the same thing as "# -*- coding: utf-8 -*-": the declaration tells the interpreter how to decode the source file itself, while setdefaultencoding changes the runtime default used for implicit str/unicode conversions.)
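A minimal Python 2 sketch of these encoding conversions (the literal below is illustrative):

# -*- coding: utf-8 -*-
# With the declaration above, this literal is a utf-8 byte string.
s_utf8 = "我来到北京"
s_unicode = s_utf8.decode('utf-8')   # bytes -> unicode
s_gbk = s_unicode.encode('gbk')      # unicode -> gbk bytes
print type(s_utf8), type(s_unicode), type(s_gbk)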

c. The structure returned by jieba.cut is an iterable generator. You can use a for loop to obtain each word (a unicode object) produced by the segmentation, or use list(jieba.cut(...)) to convert it into a list.
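A small sketch of point c (note that a generator is exhausted once iterated, hence the second call to cut):

# -*- coding: utf-8 -*-
import jieba

for word in jieba.cut(u"我来到北京清华大学"):
    print word                       # each item is a unicode word

words = list(jieba.cut(u"我来到北京清华大学"))
print len(words)                     # materialized as a list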

3. The following example illustrates how jieba is used:

# -*- coding: utf-8 -*-
import jieba

# Full mode: scan out all possible words
seg_list = jieba.cut("我来到北京清华大学", cut_all=True)
print "Full Mode:", '/ '.join(seg_list)

# Default (accurate) mode
seg_list = jieba.cut("我来到北京清华大学")
print "Default Mode:", '/ '.join(seg_list)

The output result is:

Full Mode: 我/ 来/ 来到/ 到/ 北/ 北京/ 京/ 清/ 清华/ 清华大学/ 华/ 华大/ 大/ 大学/ 学  
Default Mode: 我/ 来到/ 北京/ 清华大学

3. Other features of jieba Chinese word segmentation

1. Add or manage a custom dictionary

All of jieba's dictionary content is stored in dict.txt; you can continuously improve the contents of dict.txt.
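Beyond editing dict.txt directly, jieba can load a user dictionary at runtime. A minimal sketch (userdict.txt is a hypothetical file; each line holds a word, an optional frequency, and an optional POS tag):

# -*- coding: utf-8 -*-
import jieba

# userdict.txt (hypothetical), one entry per line, e.g.:
#   云计算 5
#   创新办 3 i
jieba.load_userdict('userdict.txt')  # merge the user dictionary
jieba.add_word(u'区块链')            # or add individual words in code
print '/ '.join(jieba.cut(u"小明在创新办研究区块链"))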

2. Keyword extraction

Keywords are extracted by computing the TF-IDF weights of the words after segmentation and taking the highest-ranked ones.
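A minimal sketch using jieba's analyse module (the sample text is arbitrary):

# -*- coding: utf-8 -*-
import jieba.analyse

text = u"我来到北京清华大学，清华大学是著名的高等学府"
# Top keywords ranked by TF-IDF weight.
for kw in jieba.analyse.extract_tags(text, topK=5):
    print kw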

