A memory-efficient sparse matrix storage scheme in Python

大家讲道理
Release: 2016-11-07 17:15:16

Recommendation systems often need to process (user_id, item_id, rating) triples, which are mathematically a sparse matrix. SciPy provides the sparse module for this, but scipy.sparse has shortcomings that make it a poor fit here: 1. it cannot support fast slicing of data[i, ...], data[..., j], and data[i, j] at the same time; 2. since all the data lives in memory, it does not cope well with massive datasets.
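Problem 1 is easy to demonstrate: a CSR matrix slices rows cheaply and a CSC matrix slices columns cheaply, but no single scipy.sparse format gives you both at once. A quick sketch:

from scipy import sparse

m = sparse.rand(10000, 10000, density=0.001, format='csr')
row = m[42, :]           # fast: CSR stores each row contiguously
col = m[:, 42]           # slow: the row index structure must be scanned
col = m.tocsc()[:, 42]   # fast, but now row slicing is the slow one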

To support fast slicing of both data[i, ...] and data[..., j], the data belonging to each i and each j must be stored contiguously; and to hold massive datasets, part of the data has to live on disk, with memory used as a buffer. The scheme here is fairly simple: store the data in a dict-like container. For a given i (say 9527), all of its data is stored under dict['i9527']; likewise, for a given j (say 3306), all of its data is stored under dict['j3306']. To fetch data[9527, ...], just read dict['i9527']. Conceptually dict['i9527'] is itself a dict mapping each j to its value, but to save memory we store it as a packed binary string. Straight to the code:
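The core trick is worth seeing in isolation before the full class: each (j, value) pair is packed into 8 bytes with struct, the pairs for one row are concatenated into a single string, and numpy decodes the whole blob back in one call. A minimal round-trip sketch (the ids are made up):

import struct
import numpy as np

# Two (j, value) pairs for row i=9527, 8 bytes each (int32 + float32),
# concatenated into one binary string instead of boxed Python objects.
blob = struct.pack('if', 3306, 4.5) + struct.pack('if', 42, 2.0)

# Decode the whole blob back into (j, value) pairs in a single call.
row = dict(np.fromstring(blob, dtype = 'i4,f4'))
print(row[3306])    # 4.5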

'''
Sparse Matrix
'''
import struct
import numpy as np
import bsddb                    # Python 2 only; use bsddb3 on newer systems
from cStringIO import StringIO  # Python 2's fast in-memory string buffer
  
class DictMatrix():
    def __init__(self, container = None, dft = 0.0):
        # A mutable default ({}) would be shared across instances, so use None.
        self._data  = container if container is not None else {}
        self._dft   = dft
        self._nums  = 0
  
    def __setitem__(self, index, value):
        try:
            i, j = index
        except (TypeError, ValueError):
            raise IndexError('invalid index')
  
        ik = ('i%d' % i)
        # To save memory, pack (j, value) into an 8-byte binary string.
        ib = struct.pack('if', j, value)
        jk = ('j%d' % j)
        jb = struct.pack('if', i, value)
  
        try:
            self._data[ik] += ib
        except KeyError:
            self._data[ik] = ib
        try:
            self._data[jk] += jb
        except KeyError:
            self._data[jk] = jb
        self._nums += 1
  
    def __getitem__(self, index):
        try:
            i, j = index
        except (TypeError, ValueError):
            raise IndexError('invalid index')
  
        if isinstance(i, int):
            ik = ('i%d' % i)
            if not self._data.has_key(ik): return self._dft
            # Decode the packed blob back into a {j: value} dict.
            ret = dict(np.fromstring(self._data[ik], dtype = 'i4,f4'))
            if isinstance(j, int): return ret.get(j, self._dft)
            return ret
  
        if isinstance(j, int):
            jk = ('j%d' % j)
            if not self._data.has_key(jk): return self._dft
            return dict(np.fromstring(self._data[jk], dtype = 'i4,f4'))
  
        raise IndexError('invalid index')
  
    def __len__(self):
        return self._nums
  
    def __iter__(self):
        # Not implemented; fail loudly instead of silently returning None.
        raise NotImplementedError('iteration is not supported')
  
    '''
    Build the matrix from a file.
    dbm reads/writes are slower than memory, so we buffer writes in a dict
    and flush to the backing store every 10,000,000 records.
    Plain string concatenation is slow, so we use StringIO for buffering.
    '''
    def from_file(self, fp, sep = '\t'):
        cnt = 0
        cache = {}
        for l in fp:
            if cnt == 10000000:
                self._flush(cache)
                cnt = 0
                cache = {}
            fields = l.split(sep)
            i, j, v = int(fields[0]), int(fields[1]), float(fields[2])
  
            ik = ('i%d' % i)
            ib = struct.pack('if', j, v)
            jk = ('j%d' % j)
            jb = struct.pack('if', i, v)
  
            try:
                cache[ik].write(ib)
            except KeyError:
                cache[ik] = StringIO()
                cache[ik].write(ib)
  
            try:
                cache[jk].write(jb)
            except KeyError:
                cache[jk] = StringIO()
                cache[jk].write(jb)
  
            cnt += 1
            self._nums += 1
  
        self._flush(cache)
        return self._nums
  
    def _flush(self, cache):
        for k, v in cache.items():
            s = v.getvalue()     # everything accumulated in the StringIO buffer
            try:
                self._data[k] += s
            except KeyError:
                self._data[k] = s
  
if __name__ == '__main__':
    # In-memory B-tree with a 256MB cache; pass a filename instead of None
    # to persist the matrix on disk.
    db = bsddb.btopen(None, cachesize = 268435456)
    data = DictMatrix(db)
    data.from_file(open('/path/to/log.txt', 'r'), ',')
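Once built, the matrix answers all three slice forms. A hypothetical session (the ids 9527 and 3306 are made up):

foo = data[9527, ...]      # every (j, value) pair in row 9527, as a dict
bar = data[..., 3306]      # every (i, value) pair in column 3306, as a dict
v   = data[9527, 3306]     # a single cell, or the default 0.0 if absent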

Test: 45 million rating records (integer, integer, float format) in a 922MB text file were imported. With an in-memory dict as the container, the build finishes in 12 minutes and consumes 1.2GB of memory. With the bsddb storage from the sample code, the build finishes in 20 minutes and occupies roughly 300~400MB, not much more than the cachesize. Read test:

import timeit
timeit.Timer('foo = __main__.data[9527, ...]', 'import __main__').timeit(number = 1000)

That takes 1.4788 seconds for 1000 reads, i.e. about 1.5ms per read.

Another benefit of storing the data in a dict-like container is that you can swap in an in-memory dict or any other DBM implementation, even the legendary Tokyo Cabinet…
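For instance, the same class runs unchanged on a plain in-memory dict or on a disk-backed B-tree (the path below is hypothetical); anything that supports d[key] reads and writes of string values will do:

data = DictMatrix({})                    # in-memory: fastest, bounded by RAM

db = bsddb.btopen('/path/to/matrix.db',  # persisted on disk, 256MB cache
                  cachesize = 268435456)
data = DictMatrix(db)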
