Python: a simple MySQL-based queue and a cross-process lock, explained with an example
When developing multi-process applications, it is almost inevitable that several processes will end up accessing the same resource (a critical resource). In that case a global lock is needed to synchronize access, i.e. to guarantee that only one process touches the resource at any given time.
Here is an example.
Suppose we use MySQL to implement a task queue. The implementation goes as follows:
1. Create a jobs table in MySQL to store the queued tasks:
create table jobs(
    id int not null auto_increment primary key,
    message text not null,
    job_status tinyint not null default 0
);
message stores the task payload; job_status marks the task state. Assume there are only two states: 0 means the task is still in the queue, 1 means it has been dequeued.
2. A producer process inserts new rows into the jobs table, i.e. enqueues tasks:
insert into jobs(message) values('msg1');
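For reference, a producer written with MySQLdb might look like the following sketch. The connection parameters and the database name test are placeholders for this example, not part of the original article:

#!/usr/bin/env python2.7
# -*- coding:utf-8 -*-
# Producer sketch: enqueue one message into the jobs table.
# host/user/passwd/db below are placeholders -- adjust to your environment.
import MySQLdb

db = MySQLdb.connect(host='localhost', user='root', passwd='', db='test')
cursor = db.cursor()
# Parameterized insert; job_status defaults to 0 (queued).
cursor.execute("INSERT INTO jobs(message) VALUES (%s)", ('msg1',))
db.commit()
cursor.close()
db.close()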
3. Suppose several consumer processes take queued messages from the jobs table. Each of them needs to do the following:
select * from jobs where job_status=0 order by id asc limit 1;
update jobs set job_status=1 where id = ?; -- id is the id of the record fetched by the select
4. Without a cross-process lock, two consumer processes may fetch the same message at the same time, so a single message could be consumed more than once. That is not what we want, so we need to implement a cross-process lock.
========================= divider =======================================
There are several common ways to implement a cross-process lock:
(1) semaphores
(2) file locks (fcntl)
(3) sockets (binding a port)
(4) signals
Each of these has its pros and cons; broadly speaking, the first two are probably the most widely used. I will not go into the details here, you can look them up yourself; a minimal sketch of the file-lock option is shown below.
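Here is a minimal sketch of option (2), an advisory file lock built on fcntl.flock. The lock file path /tmp/queue.lock is just an assumption for this example:

#!/usr/bin/env python2.7
# -*- coding:utf-8 -*-
# Minimal cross-process lock sketch based on fcntl.flock.
# The lock file path '/tmp/queue.lock' is only an example.
import fcntl, time

lock_file = open('/tmp/queue.lock', 'w')

# Block until no other process holds an exclusive lock on this file.
fcntl.flock(lock_file, fcntl.LOCK_EX)
try:
    # Critical section: only one process at a time gets here.
    time.sleep(1)
finally:
    # Release the lock so other waiting processes can proceed.
    fcntl.flock(lock_file, fcntl.LOCK_UN)
    lock_file.close()

Note that flock provides advisory locking: it only synchronizes processes that agree to lock the same file, which is usually enough for cooperating workers on a single machine, but it does not work across machines.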
While researching this I found that MySQL itself offers a lock facility (GET_LOCK / RELEASE_LOCK). It is suitable for scenarios where performance requirements are not very high; under heavy concurrent, distributed access it may become a bottleneck.
I wrote a small Python demo based on it, shown below.
Filename: glock.py
#!/usr/bin/env python2.7
# -*- coding:utf-8 -*-
#
# Desc :
#
import logging, time
import MySQLdb

class Glock:
    def __init__(self, db):
        self.db = db

    def _execute(self, sql):
        cursor = self.db.cursor()
        try:
            ret = None
            cursor.execute(sql)
            if cursor.rowcount != 1:
                logging.error("Multiple rows returned in mysql lock function.")
                ret = None
            else:
                ret = cursor.fetchone()
            cursor.close()
            return ret
        except Exception, ex:
            logging.error("Execute sql \"%s\" failed! Exception: %s", sql, str(ex))
            cursor.close()
            return None

    def lock(self, lockstr, timeout):
        sql = "SELECT GET_LOCK('%s', %s)" % (lockstr, timeout)
        ret = self._execute(sql)
        if ret[0] == 0:
            logging.debug("Another client has previously locked '%s'.", lockstr)
            return False
        elif ret[0] == 1:
            logging.debug("The lock '%s' was obtained successfully.", lockstr)
            return True
        else:
            logging.error("Error occurred!")
            return None

    def unlock(self, lockstr):
        sql = "SELECT RELEASE_LOCK('%s')" % (lockstr)
        ret = self._execute(sql)
        if ret[0] == 0:
            logging.debug("The lock '%s' the lock is not released(the lock was not established by this thread).", lockstr)
            return False
        elif ret[0] == 1:
            logging.debug("The lock '%s' the lock was released.", lockstr)
            return True
        else:
            logging.error("The lock '%s' did not exist.", lockstr)
            return None

#Init logging
def init_logging():
    sh = logging.StreamHandler()
    logger = logging.getLogger()
    logger.setLevel(logging.DEBUG)
    formatter = logging.Formatter('%(asctime)s -%(module)s:%(filename)s-L%(lineno)d-%(levelname)s: %(message)s')
    sh.setFormatter(formatter)
    logger.addHandler(sh)
    logging.info("Current log level is : %s", logging.getLevelName(logger.getEffectiveLevel()))

def main():
    init_logging()
    db = MySQLdb.connect(host='localhost', user='root', passwd='')
    lock_name = 'queue'
    l = Glock(db)
    ret = l.lock(lock_name, 10)
    if ret != True:
        logging.error("Can't get lock! exit!")
        quit()
    time.sleep(10)
    logging.info("You can do some synchronization work across processes!")
    ##TODO
    ## you can do something in here
    ##
    l.unlock(lock_name)

if __name__ == "__main__":
    main()
In the main function:
In l.lock(lock_name, 10), the 10 is the timeout in seconds: if the lock still cannot be acquired after 10 seconds, the call returns and the code after it runs.
In this demo, the logic with which a consumer takes a message from the jobs table can be placed at the spot marked TODO, i.e. step 3 from above the divider (a sketch of this consumer logic is given after the SQL):
3. Suppose several consumer processes take queued messages from the jobs table. Each of them needs to do the following:
select * from jobs where job_status=0 order by id asc limit 1;
update jobs set job_status=1 where id = ?; -- id is the id of the record fetched by the select
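As an illustration, here is a hedged sketch of what that TODO section might contain. It assumes the jobs table from step 1 and the Glock class from glock.py above; the function name consume_one and the parameterized-query style are additions for this sketch, not part of the original demo:

#!/usr/bin/env python2.7
# -*- coding:utf-8 -*-
# Consumer sketch: dequeue one job while holding the cross-process MySQL lock.
# 'db' is a MySQLdb connection, 'glock' is a Glock instance from glock.py.
import logging

def consume_one(db, glock, lock_name='queue'):
    # Take the cross-process lock first; give up after a 10-second timeout.
    if glock.lock(lock_name, 10) != True:
        logging.error("Can't get lock! exit!")
        return None
    try:
        cursor = db.cursor()
        # Step 3 from above the divider: pick the oldest queued job ...
        cursor.execute("SELECT id, message FROM jobs "
                       "WHERE job_status=0 ORDER BY id ASC LIMIT 1")
        row = cursor.fetchone()
        if row is not None:
            # ... and mark it as dequeued.
            cursor.execute("UPDATE jobs SET job_status=1 WHERE id=%s", (row[0],))
            db.commit()
        cursor.close()
        return row
    finally:
        # Always release the lock, even if the SQL above failed.
        glock.unlock(lock_name)

Marking the row as dequeued happens while the lock is held; the actual processing of the returned message can then be done after unlock, which keeps the critical section short.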
This way, multiple processes accessing the critical resource are serialized, and data consistency is preserved.
For testing, start two instances of glock.py. The results are as follows:
[@tj-10-47 test]# ./glock.py
2014-03-14 17:08:40,277 -glock:glock.py-L70-INFO: Current log level is : DEBUG
2014-03-14 17:08:40,299 -glock:glock.py-L43-DEBUG: The lock 'queue' was obtained successfully.
2014-03-14 17:08:50,299 -glock:glock.py-L81-INFO: You can do some synchronization work across processes!
2014-03-14 17:08:50,299 -glock:glock.py-L56-DEBUG: The lock 'queue' the lock was released.
You can see that the first glock.py released the lock at 17:08:50 and the second glock.py (below) obtained the lock at 17:08:50, which confirms that this approach works.
[@tj-10-47 test]# ./glock.py
2014-03-14 17:08:46,873 -glock:glock.py-L70-INFO: Current log level is : DEBUG
2014-03-14 17:08:50,299 -glock:glock.py-L43-DEBUG: The lock 'queue' was obtained successfully.
2014-03-14 17:09:00,299 -glock:glock.py-L81-INFO: You can do some synchronization work across processes!
2014-03-14 17:09:00,300 -glock:glock.py-L56-DEBUG: The lock 'queue' the lock was released.
[@tj-10-47 test]#
