My files had piled up with no real organization, and one day I noticed the hard disk was running out of space. So I wrote a Python script that searches for all files larger than 10 MB and checks whether any of these large files look like duplicate copies. If so, it lists them all so they can be deleted manually.
Usage: pass the target directory as a command-line argument.
For example, python redundant_remover.py /tmp
The script mainly uses the stat module, together with the os and sys system modules.
import os, sys
from stat import S_ISDIR   # helper for interpreting st_mode

BIG_FILE_THRESHOLD = 10000000

dict1 = {}  # filesize as key, filename as value (first file seen at each size)
dict2 = {}  # filename as key, filesize as value (files that share a size)

def treewalk(path):
    try:
        for i in os.listdir(path):
            fullname = os.path.join(path, i)
            mode = os.stat(fullname).st_mode
            if not S_ISDIR(mode):
                filesize = os.stat(fullname).st_size
                if filesize > BIG_FILE_THRESHOLD:
                    if filesize in dict1:
                        # another file already has this size: record both as candidates
                        dict2[fullname] = filesize
                        dict2[dict1[filesize]] = filesize
                    else:
                        dict1[filesize] = fullname
            else:
                treewalk(fullname)
    except OSError:
        # skip files and directories we cannot read
        pass

def printdict(finaldict):
    # print each candidate size once, followed by all files of that size
    for i_size in set(finaldict.values()):
        print(i_size)
        for j_name in finaldict:
            if finaldict[j_name] == i_size:
                print(j_name)
        print()

if __name__ == "__main__":
    treewalk(sys.argv[1])
    printdict(dict2)
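Note that two files with the same size are not necessarily identical, so the script only lists candidates for manual review. If you want to confirm a match before deleting anything, a minimal sketch along the following lines (the helper names file_md5 and find_true_duplicates are my own, not part of the script above) hashes each candidate with hashlib and groups files whose contents really are the same:

import hashlib

def file_md5(filename, blocksize=1 << 20):
    # hash the file in 1 MB chunks so large files do not fill memory
    h = hashlib.md5()
    with open(filename, "rb") as f:
        for block in iter(lambda: f.read(blocksize), b""):
            h.update(block)
    return h.hexdigest()

def find_true_duplicates(candidates):
    # candidates: dict mapping filename -> filesize, e.g. dict2 above
    groups = {}
    for name in candidates:
        groups.setdefault(file_md5(name), []).append(name)
    # keep only hashes shared by two or more files
    return [names for names in groups.values() if len(names) > 1]

Calling find_true_duplicates(dict2) after treewalk() would then return groups of files whose contents match byte for byte, rather than files that merely happen to have the same size.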