Natural language processing (NLP) is the automatic or semi-automatic processing of human language. NLP is closely related to linguistics and has links to research in cognitive science, psychology, physiology, and mathematics. In the computer science domain in particular, NLP is related to compiler techniques, formal language theory, human-computer interaction, machine learning, and theorem proving.
In this tutorial I'm going to walk you through an interesting Python platform for NLP called the Natural Language Toolkit (NLTK). Before we see how to work with this platform, let me first tell you what NLTK is.
The Natural Language Toolkit (NLTK) is a platform used for building programs for text analysis. The platform was originally released by Steven Bird and Edward Loper in conjunction with a computational linguistics course at the University of Pennsylvania in 2001. There is an accompanying book for the platform called Natural Language Processing with Python.
Let's now install NLTK to start experimenting with natural language processing. It will be fun!
Installing NLTK is very simple. I'm using Windows 10, so in my Command Prompt I type the following command: pip install nltk. With NLTK installed, let's start working with text. The first task is tokenization, beginning with splitting text into sentences using the sent_tokenize() method.
Consider the following text.
"Python is a very high-level programming language. Python is interpreted."<br>
Now let's tokenize the same text into individual words by passing it through the word_tokenize() method.
from nltk.tokenize import word_tokenize
text = "Python is a very high-level programming language. Python is interpreted."<br>print(word_tokenize(text))
Here is the output:
['Python', 'is', 'a', 'very', 'high-level', 'programming', 'language', '.', 'Python', 'is', 'interpreted', '.']
As you can see from the output, punctuation marks are also considered to be words.
Sometimes we need to filter out useless data to make the data easier for a computer to work with. In natural language processing (NLP), such useless words are called stop words. Since these words carry little meaning of their own, we would like to remove them.
NLTK provides us with some stop words to start with. To see those words, use the following script:
from nltk.corpus import stopwords
print(set(stopwords.words('english')))
In which case you will get the full set of English stop words as output: words such as 'i', 'me', 'a', 'the', 'is', and 'and'.
What we did here is print out a set (an unordered collection of items) of stop words for the English language. If you are using another language, German for example, you define it as follows:
from nltk.corpus import stopwords
print(set(stopwords.words('german')))
How can we remove the stop words from our own text? The example below shows how we can perform this task:
from nltk.corpus import stopwords
from nltk.tokenize import word_tokenize

text = "In this tutorial, I'm learning NLTK. It is an interesting platform."
stop_words = set(stopwords.words('english'))
words = word_tokenize(text)

new_sentence = []

for word in words:
    if word not in stop_words:
        new_sentence.append(word)

print(new_sentence)
The output of the above script is:

['In', 'tutorial', ',', 'I', "'m", 'learning', 'NLTK', '.', 'It', 'interesting', 'platform', '.']

Notice that capitalized words such as 'In' and 'It' survive the filter, because the stop word list is all lowercase and the membership test is case-sensitive.
So what the word_tokenize() function does is tokenize a string to split off punctuation other than periods.
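To see what "other than periods" means in practice, here is a quick check with a sentence of my own: the abbreviation keeps its period, while the sentence-final period is split off as its own token.

from nltk.tokenize import word_tokenize

# 'Dr.' keeps its period; the final '.' becomes a separate token
print(word_tokenize("Dr. Smith uses Python."))

This should print ['Dr.', 'Smith', 'uses', 'Python', '.'].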
Let's say we have a text file saved on disk, and we would like to search it for occurrences of the word language. We can do this quite simply using the NLTK platform.
Notice that concordance() prints every occurrence of the word language, in addition to some surrounding context. Before calling it, as shown in the script above, we tokenize the read file and then convert it into an nltk.Text object.
I just want to note that the first time I ran the program, I got an error that seems to be related to the encoding the console uses.
What I did to solve this issue was to run this command in my console before running the program: chcp 65001, which switches the Windows console code page to UTF-8.
As mentioned in Wikipedia:
Project Gutenberg (PG) is a volunteer effort to digitize and archive cultural works, to "encourage the creation and distribution of eBooks". It was founded in 1971 by Michael S. Hart and is the oldest digital library. Most of the items in its collection are the full texts of public domain books. The project tries to make these as free as possible, in long-lasting, open formats that can be used on almost any computer. As of 3 October 2015, Project Gutenberg reached 50,000 items in its collection.
NLTK contains a small selection of texts from Project Gutenberg. To see the included files from Project Gutenberg, we do the following:
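A minimal sketch using the gutenberg corpus reader (you may need to fetch the corpus first with nltk.download('gutenberg')):

from nltk.corpus import gutenberg

# List the Project Gutenberg texts that ship with NLTK
print(gutenberg.fileids())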
The output of the above script will be as follows:

['austen-emma.txt', 'austen-persuasion.txt', 'austen-sense.txt', 'bible-kjv.txt', 'blake-poems.txt', 'bryant-stories.txt', 'burgess-busterbrown.txt', 'carroll-alice.txt', 'chesterton-ball.txt', 'chesterton-brown.txt', 'chesterton-thursday.txt', 'edgeworth-parents.txt', 'melville-moby_dick.txt', 'milton-paradise.txt', 'shakespeare-caesar.txt', 'shakespeare-hamlet.txt', 'shakespeare-macbeth.txt', 'whitman-leaves.txt']
If we want to find the number of words in the text file bryant-stories.txt, for instance, we can do the following:
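A minimal sketch using the corpus reader's words() method (the variable name is my own):

from nltk.corpus import gutenberg

# words() returns every token in the file, punctuation included
number_of_words = len(gutenberg.words('bryant-stories.txt'))
print(number_of_words)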
The above script should return the following number of words: 55563.
As we have seen in this tutorial, the NLTK platform provides us with a powerful tool for working with natural language processing (NLP). I have only scratched the surface in this tutorial. If you would like to go deeper into using NLTK for different NLP tasks, you can refer to NLTK's accompanying book: Natural Language Processing with Python.
This post has been updated with contributions from Esther Vaati. Esther is a software developer and writer for Envato Tuts+.