Detailed example of word vector embedding
Word vector embedding requires efficient processing of a large text corpus; the classic approach is word2vec. The simplest representation is one-hot encoding: each word becomes a vector whose length equals the vocabulary size, with a 1 at the position corresponding to that word and 0 everywhere else. These vectors are extremely high-dimensional and say nothing about the semantic relationships between different words. Co-occurrence counts address this: traverse a large corpus, count the words that appear within a certain distance of each word, and represent each word by the normalized counts of its neighbors. Words used in similar contexts then receive similar vectors and thus similar semantics. The co-occurrence vectors can be compressed with PCA or a similar method to obtain denser representations. This works well, but it requires maintaining the full co-occurrence matrix, whose width and height both equal the vocabulary size. In 2013, Tomas Mikolov and colleagues proposed computing word representations from context in "Efficient Estimation of Word Representations in Vector Space" (arXiv preprint arXiv:1301.3781, 2013). Their skip-gram model starts from random representations and uses a simple classifier to predict a context word from the current word. The prediction error is propagated through both the classifier weights and the word representations, and both are adjusted to reduce it. Trained over a large corpus, the representation vectors approximate compressed co-occurrence vectors.
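To make the difference concrete, here is a small illustrative sketch (not from the original article) that builds one-hot vectors and window-based co-occurrence counts for a toy corpus; the corpus and window size are invented for the example:

import numpy as np

corpus = 'the cat sat on the mat the dog sat on the rug'.split()
vocabulary = sorted(set(corpus))
index = {word: i for i, word in enumerate(vocabulary)}

def one_hot(word):
    # A vector as long as the vocabulary with a single 1.
    vector = np.zeros(len(vocabulary))
    vector[index[word]] = 1.0
    return vector

# Count neighbours within a fixed window and normalize each row.
window = 2
counts = np.zeros((len(vocabulary), len(vocabulary)))
for i, word in enumerate(corpus):
    for j in range(max(0, i - window), min(len(corpus), i + window + 1)):
        if i != j:
            counts[index[word], index[corpus[j]]] += 1
cooccurrence = counts / counts.sum(axis=1, keepdims=True)

print(one_hot('cat'))               # sparse, carries no notion of similarity
print(cooccurrence[index['cat']])   # rows are similar for words in similar contexts

The rows of this co-occurrence matrix are exactly what PCA would compress into denser vectors.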
The dataset is an English Wikipedia dump. A full dump contains the complete revision history of all pages; the current page versions alone amount to about 100 GB.
Download the dump file, extract the words of each page, count word occurrences to build a vocabulary of common words, and encode the extracted pages using that vocabulary. The file is read line by line and the results are written to disk immediately; checkpoints are saved between the steps so that a program crash does not force starting over.
__iter__ iterates over the pages as lists of word indices. encode returns the vocabulary index of a string word, and decode returns the string word for a vocabulary index. _read_pages extracts words from the Wikipedia dump file (compressed XML) and saves them to a pages file, one line of space-delimited words per page; files are read with the bz2 module's open function, and the intermediate results are also compressed. A regular expression captures any sequence of consecutive letters or individual special characters. _build_vocabulary counts the words in the pages file and writes the most frequent ones to a vocabulary file; one-hot encoding requires this vocabulary, and pages are encoded by vocabulary index. Spelling errors and extremely uncommon words are removed, so the vocabulary contains only the vocabulary_size - 1 most common words. All words that are not in the vocabulary are marked with a placeholder token at index 0 (written as <unk> in the code below).
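As a rough usage sketch (assuming the Wikipedia class listed below and a local cache directory of your choice), the encoded corpus can be inspected like this:

corpus = Wikipedia(
    'https://dumps.wikimedia.org/enwiki/20160501/'
    'enwiki-20160501-pages-meta-current1.xml-p000000010p000030303.bz2',
    './wikipedia', vocabulary_size=10000)

for page in corpus:                  # each page is a list of word indices
    print([corpus.decode(i) for i in page[:10]])
    break

print(corpus.encode('the'))          # index of a common word
print(corpus.encode('qwxzyq'))       # 0, the placeholder for unknown words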
Training samples are formed dynamically, so that large amounts of data can be organized without the classifier needing to hold everything in memory. The skip-gram model predicts the context words of the current word: traverse the text, take the current word as the data and the surrounding words as the targets, and create a training sample for each pair. With a context size of R, each word yields 2R samples, one for each of the R words to its left and right. Since nearby context matters most semantically, fewer training samples should be created for distant context words; this is done by randomly choosing the context size for each word from the range [1, D = 10]. Training pairs are thus formed according to the skip-gram model, and NumPy arrays turn the resulting number stream into batched data. A possible sketch of the skipgrams and batched helpers used in the training script is given below.
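The skipgrams and batched helpers imported in the training script are not listed in the article itself; a minimal sketch consistent with the description above (a random context size per word, the current word as data, each context word as target, fixed-size NumPy batches) might look like this:

import random

import numpy as np

def skipgrams(pages, max_context):
    """Yield (current word, context word) pairs from an iterable of pages."""
    for words in pages:
        for index, current in enumerate(words):
            context = random.randint(1, max_context)
            for target in words[max(0, index - context):index]:
                yield current, target
            for target in words[index + 1:index + context + 1]:
                yield current, target

def batched(iterator, batch_size):
    """Group the stream of (data, target) pairs into fixed-size NumPy batches."""
    iterator = iter(iterator)
    while True:
        data = np.zeros(batch_size, dtype=np.int32)
        target = np.zeros(batch_size, dtype=np.int32)
        try:
            for index in range(batch_size):
                data[index], target[index] = next(iterator)
        except StopIteration:
            return
        yield data, target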
Initially, words are represented by random vectors. The classifier predicts the context word from this intermediate representation of the current word. The errors are then propagated back to fine-tune both the classifier weights and the input word representations. The model is optimized with MomentumOptimizer, which is not the most sophisticated choice but is efficient enough here.
The classifier is the core of the model. Noise-contrastive estimation (NCE) loss works well here: instead of modeling the full softmax over the vocabulary, tf.nn.nce_loss draws new random vectors as negative (contrastive) samples and thereby approximates the softmax classifier.
When training ends, the final word vectors are written to a file. Training on a subset of the Wikipedia corpus for about 5 hours on an ordinary CPU yields the embeddings as a NumPy array; training on the complete corpus takes correspondingly longer. The AttrDict class is equivalent to a Python dict whose keys are also accessible as attributes; a minimal sketch of it and of the lazy_property decorator used below follows.
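The helpers module is not reproduced in the article; the following is one plausible sketch of AttrDict and lazy_property, consistent with how they are used in the code below (the download helper, which fetches a URL into the cache directory, is omitted):

import functools

class AttrDict(dict):
    """A dict whose keys can also be read and written as attributes."""
    def __getattr__(self, key):
        try:
            return self[key]
        except KeyError:
            raise AttributeError(key)
    def __setattr__(self, key, value):
        self[key] = value

def lazy_property(function):
    """Evaluate the wrapped property once and cache the result on the instance."""
    attribute = '_lazy_' + function.__name__
    @property
    @functools.wraps(function)
    def wrapper(self):
        if not hasattr(self, attribute):
            setattr(self, attribute, function(self))
        return getattr(self, attribute)
    return wrapper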
import bz2
import collections
import os
import re
from lxml import etree
from helpers import download
class Wikipedia:

    TOKEN_REGEX = re.compile(r'[A-Za-z]+|[!?.:,()]')

    def __init__(self, url, cache_dir, vocabulary_size=10000):
        self._cache_dir = os.path.expanduser(cache_dir)
        self._pages_path = os.path.join(self._cache_dir, 'pages.bz2')
        self._vocabulary_path = os.path.join(self._cache_dir, 'vocabulary.bz2')
        if not os.path.isfile(self._pages_path):
            print('Read pages')
            self._read_pages(url)
        if not os.path.isfile(self._vocabulary_path):
            print('Build vocabulary')
            self._build_vocabulary(vocabulary_size)
        with bz2.open(self._vocabulary_path, 'rt') as vocabulary:
            print('Read vocabulary')
            self._vocabulary = [x.strip() for x in vocabulary]
        self._indices = {x: i for i, x in enumerate(self._vocabulary)}

    def __iter__(self):
        """Iterate over pages, each a list of word indices."""
        with bz2.open(self._pages_path, 'rt') as pages:
            for page in pages:
                words = page.strip().split()
                words = [self.encode(x) for x in words]
                yield words

    @property
    def vocabulary_size(self):
        return len(self._vocabulary)

    def encode(self, word):
        # Unknown words map to index 0, the placeholder token.
        return self._indices.get(word, 0)

    def decode(self, index):
        return self._vocabulary[index]

    def _read_pages(self, url):
        """Extract words from the Wikipedia dump (compressed XML) and write
        them to a pages file, one line of space-separated words per page."""
        wikipedia_path = download(url, self._cache_dir)
        with bz2.open(wikipedia_path) as wikipedia, \
                bz2.open(self._pages_path, 'wt') as pages:
            for _, element in etree.iterparse(wikipedia, tag='{*}page'):
                if element.find('./{*}redirect') is not None:
                    continue
                page = element.findtext('./{*}revision/{*}text')
                words = self._tokenize(page)
                pages.write(' '.join(words) + '\n')
                element.clear()

    def _build_vocabulary(self, vocabulary_size):
        """Count the words in the pages file and keep the most common ones;
        index 0 is reserved for the unknown-word placeholder."""
        counter = collections.Counter()
        with bz2.open(self._pages_path, 'rt') as pages:
            for page in pages:
                words = page.strip().split()
                counter.update(words)
        common = counter.most_common(vocabulary_size - 1)
        common = ['<unk>'] + [x[0] for x in common]
        with bz2.open(self._vocabulary_path, 'wt') as vocabulary:
            for word in common:
                vocabulary.write(word + '\n')

    @classmethod
    def _tokenize(cls, page):
        words = cls.TOKEN_REGEX.findall(page)
        words = [x.lower() for x in words]
        return words
import tensorflow as tf
import numpy as np
from helpers import lazy_property
class EmbeddingModel:

    def __init__(self, data, target, params):
        self.data = data
        self.target = target
        self.params = params
        # Touch the lazy properties so the graph is built on construction.
        self.embeddings
        self.cost
        self.optimize

    @lazy_property
    def embeddings(self):
        initial = tf.random_uniform(
            [self.params.vocabulary_size, self.params.embedding_size],
            -1.0, 1.0)
        return tf.Variable(initial)

    @lazy_property
    def optimize(self):
        optimizer = tf.train.MomentumOptimizer(
            self.params.learning_rate, self.params.momentum)
        return optimizer.minimize(self.cost)

    @lazy_property
    def cost(self):
        embedded = tf.nn.embedding_lookup(self.embeddings, self.data)
        weight = tf.Variable(tf.truncated_normal(
            [self.params.vocabulary_size, self.params.embedding_size],
            stddev=1.0 / self.params.embedding_size ** 0.5))
        bias = tf.Variable(tf.zeros([self.params.vocabulary_size]))
        target = tf.expand_dims(self.target, 1)
        # NCE approximates the full softmax with randomly sampled negatives.
        # Note: the argument order (inputs before labels) follows the pre-1.0
        # TensorFlow API this example targets; TensorFlow 1.x expects labels first.
        return tf.reduce_mean(tf.nn.nce_loss(
            weight, bias, embedded, target,
            self.params.contrastive_examples,
            self.params.vocabulary_size))
import collections
import tensorflow as tf
import numpy as np
from batched import batched
from EmbeddingModel import EmbeddingModel
from skipgrams import skipgrams
from Wikipedia import Wikipedia
from helpers import AttrDict
WIKI_DOWNLOAD_DIR = './wikipedia'
params = AttrDict(
    vocabulary_size=10000,
    max_context=10,
    embedding_size=200,
    contrastive_examples=100,
    learning_rate=0.5,
    momentum=0.5,
    batch_size=1000,
)
data = tf.placeholder(tf.int32, [None])
target = tf.placeholder(tf.int32, [None])
model = EmbeddingModel(data, target, params)
corpus = Wikipedia(
    'https://dumps.wikimedia.org/enwiki/20160501/'
    'enwiki-20160501-pages-meta-current1.xml-p000000010p000030303.bz2',
    WIKI_DOWNLOAD_DIR,
    params.vocabulary_size)
examples = skipgrams(corpus, params.max_context)
batches = batched(examples, params.batch_size)
sess = tf.Session()
sess.run(tf.initialize_all_variables())
average = collections.deque(maxlen=100)
for index, batch in enumerate(batches):
    feed_dict = {data: batch[0], target: batch[1]}
    cost, _ = sess.run([model.cost, model.optimize], feed_dict)
    average.append(cost)
    print('{}: {:5.1f}'.format(index + 1, sum(average) / len(average)))
    if index > 100000:
        break
embeddings = sess.run(model.embeddings)
np.save(WIKI_DOWNLOAD_DIR + '/embeddings.npy', embeddings)
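Once embeddings.npy has been saved, the learned vectors can be inspected, for example by looking up nearest neighbours with cosine similarity. This small snippet is illustrative and not part of the original script; it reuses the corpus object from above and assumes the queried word is in the vocabulary:

embeddings = np.load(WIKI_DOWNLOAD_DIR + '/embeddings.npy')
normed = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)

word = 'king'                                  # any word from the vocabulary
similarity = normed @ normed[corpus.encode(word)]
closest = np.argsort(similarity)[-6:-1][::-1]  # skip the word itself
print(word, '->', [corpus.decode(i) for i in closest])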