multiprocessing is Python's multi-process module, and Process is its workhorse for spawning child processes. But the problem we discuss today is worth a closer look.
Go directly to the code:
    from multiprocessing import Process, Lock

    err_file = 'error1.log'
    err_fd = open(err_file, 'w')

    def put(fd):
        print "PUT"
        fd.write("hello, func put write\n")
        print "END"

    if __name__ == '__main__':
        p_list = []
        for i in range(1):
            p_list.append(Process(target=put, args=(err_fd,)))
        for p in p_list:
            p.start()
        for p in p_list:
            p.join()
The intent of the code above is clear: use multiprocessing.Process to spawn a child process that executes the put function. The put function's job is equally clear: it prints PUT and END, and writes "hello, func put write" to the file error1.log.
In theory, the output should be just that: PUT and END, and then error1.log should contain the line "hello, func put write". However, things don't always go as expected; the actual result is:
    [root@iZ23pynfq19Z ~]# py27 2.py ; cat error1.log
    PUT
    END
    [root@iZ23pynfq19Z ~]#
What!? Why is there nothing in error1.log!?
Let’s adjust the code a little and witness the magic again:
    from multiprocessing import Process, Lock

    err_file = 'error1.log'
    err_fd = open(err_file, 'w')

    def put(fd):
        print "PUT"
        fd.write("hello, func put write\n")
        fd.write("o" * 4075)  # the magic line
        print "END"

    if __name__ == '__main__':
        p_list = []
        for i in range(1):
            p_list.append(Process(target=put, args=(err_fd,)))
        for p in p_list:
            p.start()
        for p in p_list:
            p.join()
Output result:
    [root@iZ23pynfq19Z ~]# py27 2.py ; cat error1.log
    PUT
    END
    hello, func put write
    oo...o (4075 of them)
    [root@iZ23pynfq19Z ~]#
Feeling a bit baffled yet!?
Now, two questions arise in my mind:
Why can’t the first program write that sentence, but the second one can?
What the hell is that 4075?
Before explaining these problems, we need to be clear about the buffering modes of the standard I/O library: full buffering, line buffering, and no buffering.
For details, please refer to the previous blog post: https://my.oschina.net/u/2291...
Because we are writing to a regular file here, stdio adopts full buffering; that is, the buffer must fill up before its contents are flushed into the system's write queue.
So the questions above answer themselves at once: it is precisely those mysterious 'o's that fill the entire buffer, forcing the system to flush our content into the write queue. Where does 4075 come from? It is 4096 - len("hello, func put write\n") + 1. Why the +1? Because a merely full buffer is not enough; it has to overflow to trigger the write.
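As a quick sanity check, the arithmetic behind the magic number can be reproduced in a couple of lines. The 4096-byte stdio buffer size is an assumption here: it is typical on Linux/glibc, but platform-dependent.

```python
# Reproduce the magic number: the payload must *exceed* the buffer, hence the +1.
BUF_SIZE = 4096                      # assumed stdio buffer size (platform-dependent)
msg = "hello, func put write\n"

padding = BUF_SIZE - len(msg) + 1
print(padding)  # -> 4075
```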
So now we can give the answer. If we want to write files in a similar way from a multiprocessing.Process, there are three ways to make it work:
Fill the buffer
Manually call flush()
Set the file object to unbuffered mode
The first and second have already been explained above, so let's briefly cover the third:
From the Python official documentation:

    open(name[, mode[, buffering]])
    ...
    The optional buffering argument specifies the file's desired buffer size: 0 means unbuffered, 1 means line buffered, any other positive value means use a buffer of (approximately) that size (in bytes). A negative buffering means to use the system default, which is usually line buffered for tty devices and fully buffered for other files. If omitted, the system default is used. [2]
The quote above says that we are allowed to set buffering to 0 when calling open; that is unbuffered mode, in which every write goes directly into the write queue instead of the buffer. (This is the lowest-performance option.)
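Here is a minimal sketch of fixes 2 and 3, written for Python 3 (note that Python 3 only allows buffering=0 in binary mode, whereas Python 2's open(name, 'w', 0) accepted it for text files too); the temp-file path is just for illustration:

```python
import os
import tempfile

path = os.path.join(tempfile.gettempdir(), "error1.log")

# Fix 2: flush manually, so the data leaves the stdio buffer even if the
# process later dies via os._exit().
fd = open(path, "w")
fd.write("hello, func put write\n")
fd.flush()
fd.close()

# Fix 3: open the file unbuffered; every write() goes straight to the kernel.
# (Python 3 requires binary mode for buffering=0.)
fd = open(path, "ab", 0)
fd.write(b"hello, func put write\n")
fd.close()

print(open(path).read())
```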
------------------------------------------------------------------------
Having covered the phenomenon and the remedies, we should dig a bit deeper.
You have probably noticed before that, even without explicitly closing the file object or explicitly calling flush, a file can still be written normally. So what is going on there?
In fact, when a program shuts down normally, the process does some tidying up on exit: closing open file descriptors, cleaning up temporary files, freeing memory, and so on. It is precisely because of this "good habit" of the system that our data gets flushed into the write queue when the file descriptor is closed, and no file content is lost.
With that understanding, let's revisit the problem: when the child process runs put, it never explicitly closes or flushes the file object before exiting, so the data stranded in the buffer is lost.
Let us follow the clues and look at the implementation of Process:
multiprocessing/process.py:

    def start(self):
        '''
        Start child process
        '''
        assert self._popen is None, 'cannot start a process twice'
        assert self._parent_pid == os.getpid(), \
               'can only start a process object created by current process'
        assert not _current_process._daemonic, \
               'daemonic processes are not allowed to have children'
        _cleanup()
        if self._Popen is not None:
            Popen = self._Popen
        else:
            from .forking import Popen
        self._popen = Popen(self)
        _current_process._children.add(self)
Next, let's take a look at what Popen does:
multiprocessing/forking.py:

    class Popen(object):
        def __init__(self, process_obj):
            sys.stdout.flush()
            sys.stderr.flush()
            self.returncode = None

            self.pid = os.fork()
            if self.pid == 0:
                if 'random' in sys.modules:
                    import random
                    random.seed()
                code = process_obj._bootstrap()
                sys.stdout.flush()
                sys.stderr.flush()
                os._exit(code)
The key point is the final os._exit(code). Why is it the most critical line? Because how this exit happens determines what "housekeeping" the process will do on the way out.
What is os._exit? It is essentially the C library's _exit(), so this is a good chance to review it briefly:
https://my.oschina.net/u/2291...
From that link we can see clearly that _exit() and exit() are two different things: _exit() is simple and brutal, discarding user-space state (including stdio buffers) and dropping straight into the kernel, while exit() patiently cleans up for us first.
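That difference can be demonstrated in a few lines. This is a POSIX-only sketch (it assumes os.fork() is available): the child leaves data in the stdio buffer and exits with os._exit(), and the data never reaches the file:

```python
import os
import tempfile

path = os.path.join(tempfile.gettempdir(), "exit_demo.log")
fd = open(path, "w")

pid = os.fork()
if pid == 0:                         # child
    fd.write("child wrote this\n")   # sits in the child's stdio buffer
    os._exit(0)                      # brutal exit: user-space buffers discarded
os.waitpid(pid, 0)                   # parent: wait for the child

fd.close()
print(repr(open(path).read()))       # -> '' : the child's write was lost
```

Had the child exited by simply returning (letting the interpreter shut down normally), the buffer would have been flushed instead.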
So we can ask: what would happen if Popen did not exit via os._exit()?
Fortunately, sys.exit() gives us exactly the exit() behavior we want. Without further ado, let's try it!
multiprocessing/forking.py:

    class Popen(object):
        def __init__(self, process_obj):
            sys.stdout.flush()
            sys.stderr.flush()
            self.returncode = None

            self.pid = os.fork()
            if self.pid == 0:
                if 'random' in sys.modules:
                    import random
                    random.seed()
                code = process_obj._bootstrap()
                sys.stdout.flush()
                sys.stderr.flush()
                # os._exit(code)
                sys.exit(code)
Test with the original version of the code, the one without the 'o' padding:
    [root@iZ23pynfq19Z ~]# python 2.py ; cat error1.log
    PUT
    END
    hello, func put write
We can see that the line is indeed written, which shows that the explanation above holds up.
But it is best not to change the library source casually; after all, it is the result of years of optimization by its maintainers, and os._exit() may well have been chosen deliberately to avoid other problems. It is better to discipline our own code and avoid such non-standard patterns in the first place.
Everyone's guidance and feedback are welcome. Please credit the source when reprinting: https://segmentfault.com/a/11...