1: readline()
file = open("sample.txt")
while True:
    line = file.readline()
    if not line:
        break
    pass  # do something
file.close()
Reading the file one line at a time is slower, but it keeps memory usage low. Tested on a 10 MB sample.txt, this method reads about 32,000 lines per second.
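The throughput figure above can be reproduced with a rough timing sketch like the following (the lines_per_second helper and the bench.txt file name are illustrative, not from the original; the test file is created on the spot so the sketch runs as-is):

```python
import os
import time

def lines_per_second(path):
    """Count lines with readline() and report throughput."""
    count = 0
    start = time.perf_counter()
    f = open(path)
    while True:
        line = f.readline()
        if not line:
            break
        count += 1
    f.close()
    elapsed = time.perf_counter() - start
    return count, count / elapsed if elapsed > 0 else float("inf")

# Build a small test file so the sketch is self-contained.
with open("bench.txt", "w") as f:
    for i in range(10_000):
        f.write(f"row {i}\n")

count, rate = lines_per_second("bench.txt")
os.remove("bench.txt")
```

The numbers you get will of course depend on line length, disk speed, and Python version.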
2: fileinput
import fileinput

for line in fileinput.input("sample.txt"):
    pass  # do something
This version is simpler to write, but in testing it read only about 13,000 lines per second, less than half the speed of the previous method.
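What fileinput adds over a plain loop is the ability to chain several files (or stdin) into one stream. A minimal sketch, with hypothetical file names a.txt and b.txt created on the spot so it runs as-is:

```python
import fileinput
import os

# Create two small input files so the example is self-contained.
with open("a.txt", "w") as f:
    f.write("first\nsecond\n")
with open("b.txt", "w") as f:
    f.write("third\n")

# fileinput.input() iterates over all named files as one stream;
# filename() and filelineno() report where each line came from.
seen = []
for line in fileinput.input(files=("a.txt", "b.txt")):
    seen.append((fileinput.filename(), fileinput.filelineno(), line.rstrip()))

os.remove("a.txt")
os.remove("b.txt")
```

This bookkeeping across multiple inputs is the main reason to reach for fileinput despite its slower raw throughput.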
3: readlines()
file = open("sample.txt")
while True:
    lines = file.readlines(100000)
    if not lines:
        break
    for line in lines:
        pass  # do something
file.close()
Tested on the same data, this reads about 96,900 lines per second: roughly 3 times as fast as the first method and 7 times as fast as the second.
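The 100000 passed to readlines() is a size hint: each call returns a batch of complete lines whose total size is roughly that many bytes, which is why memory stays bounded even on huge files. A self-contained sketch (demo.txt and the 1000-byte hint are illustrative):

```python
import os

# Create a demo file of 1000 short lines.
with open("demo.txt", "w") as f:
    for i in range(1000):
        f.write(f"line {i}\n")

# readlines(hint) returns complete lines until their combined size
# exceeds the hint, so each batch here is only around 1000 bytes.
batch_sizes = []
f = open("demo.txt")
while True:
    lines = f.readlines(1000)
    if not lines:
        break
    batch_sizes.append(len(lines))
f.close()
os.remove("demo.txt")
```

All 1000 lines arrive, but split across many small batches rather than one big list.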
4: File iterator
A file object is itself an iterator that yields one line at a time, so reading a large file can be as simple as:
file = open("sample.txt")
for line in file:
    pass  # do something
file.close()
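In modern Python the same loop is usually wrapped in a with-statement, which closes the file automatically even if the loop raises an exception. A minimal equivalent sketch (the sample file is created here only so the example runs on its own):

```python
import os

# Create a small sample file so the example is self-contained.
with open("sample.txt", "w") as f:
    f.write("alpha\nbeta\ngamma\n")

# The with-block guarantees the file is closed, even on error,
# so no explicit file.close() call is needed.
count = 0
with open("sample.txt") as f:
    for line in f:
        count += 1  # do something with each line

os.remove("sample.txt")
```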
These are the simple ways to read a file line by line in Python; I hope you find them useful, and thank you for supporting Script Home.