This article summarizes and introduces some tips for improving the performance of Python code, including using map for function mapping, using set to find intersections, and more. I hope it helps everyone.
Accurately measuring the execution time of a Python program sounds simple but is actually quite complicated, because execution time is affected by many factors, such as the operating system, the Python version, and the hardware (CPU performance, memory read/write speed), and so on. When running the same version of Python on the same computer, these factors are fixed, yet the running time of a program still varies from run to run, and other programs running on the computer also interfere with the experiment. Strictly speaking, then, these experiments are not exactly repeatable.
The two most representative timing libraries I know of are time and timeit.
The time library provides three functions that can be used for timing in seconds: time(), perf_counter(), and process_time(). Adding the suffix _ns to each gives timing in nanoseconds (available since Python 3.7). There was also a clock() function, but it was deprecated in Python 3.3 and removed in Python 3.8. The differences between the three are as follows: time() returns the wall-clock (system) time and can jump if the system clock is adjusted; perf_counter() is a monotonic clock with the highest available resolution and includes time spent sleeping; process_time() measures the CPU time of the current process and excludes sleep.
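As a quick illustration of the difference (a minimal sketch, not from the original article):

import time

start_wall = time.time()           # wall-clock time, affected by system clock changes
start_perf = time.perf_counter()   # high-resolution monotonic clock, includes sleep
start_cpu = time.process_time()    # CPU time of the current process, excludes sleep

time.sleep(1)

print(time.time() - start_wall)           # ~1.0
print(time.perf_counter() - start_perf)   # ~1.0
print(time.process_time() - start_cpu)    # ~0.0, sleeping consumes no CPU time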
Compared with the time library, timeit has two advantages: it runs the statement a specified number of times and reports the total, which averages out random fluctuations, and it temporarily turns off garbage collection during timing to reduce interference.
timeit.timeit(stmt='pass', setup='pass', timer=<default timer>, number=1000000, globals=None). Parameter description: stmt is the statement to be timed, setup is setup code that is run once beforehand, timer is the timer function (time.perf_counter by default), number is how many times stmt is executed, and globals specifies the namespace in which the code runs.
All timings in this article use timeit, with its default of one million executions.
Why execute it a million times? Because the test programs are very short; without that many repetitions, we would not be able to see any difference at all.
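For reference, the timings below could be reproduced along these lines (a minimal sketch, not taken from the original; the statements are illustrative):

import timeit

setup = "oldlist = ['life', 'is', 'short', 'i', 'choose', 'python']"

loop_time = timeit.timeit(
    "newlist = []\nfor word in oldlist:\n    newlist.append(word.upper())",
    setup=setup,
    number=1000000,
)
map_time = timeit.timeit("list(map(str.upper, oldlist))", setup=setup, number=1000000)

print(loop_time, map_time)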
Exp1: Convert the lowercase letters in the string array to uppercase letters.
The test array is oldlist = ['life', 'is', 'short', 'i', 'choose', 'python'].
newlist = []
for word in oldlist:
    newlist.append(word.upper())
list(map(str.upper, oldlist))
Method one takes 0.5267724000000005s, method two takes 0.41462569999999843s, a performance improvement of 21.29%.
Exp2: Find the intersection of two lists.
Test arrays: a = [1,2,3,4,5], b = [2,4,6,8,10].
overlaps = []
for x in a:
    for y in b:
        if x == y:
            overlaps.append(x)
list(set(a) & set(b))
Method one takes 0.9507264000000006s, method two takes 0.6148200999999993s, a performance improvement of 35.33%.
Regarding set syntax: |, &, and - represent union, intersection, and difference respectively.
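For example, a small illustration of these operators:

a = {1, 2, 3, 4, 5}
b = {2, 4, 6, 8, 10}

print(a | b)   # union:        {1, 2, 3, 4, 5, 6, 8, 10}
print(a & b)   # intersection: {2, 4}
print(a - b)   # difference:   {1, 3, 5}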
There are many ways to sort a sequence, but writing the sorting algorithm yourself is rarely worth the effort, because the built-in sort() and sorted() are good enough, and the key parameter makes them very flexible. The difference between the two is that sort() is defined only on lists, while sorted() is a built-in function that works on any iterable sequence.
Exp3: Use quick sort and sort() methods to sort the same list.
Test array: lists = [2,1,4,3,0].
def quick_sort(lists, i, j):
    if i >= j:
        return lists
    pivot = lists[i]
    low = i
    high = j
    while i < j:
        while i < j and lists[j] >= pivot:
            j -= 1
        lists[i] = lists[j]
        while i < j and lists[i] <= pivot:
            i += 1
        lists[j] = lists[i]
    lists[j] = pivot
    quick_sort(lists, low, i - 1)
    quick_sort(lists, i + 1, high)
    return lists
lists.sort()
Method one takes 2.4796975000000003s, method two takes 0.05551999999999424s, a performance improvement of 97.76%.
By the way, the sorted() method takes 0.1339823999987857s.
It can be seen that sort(), as the list-specific sorting method, is very powerful. Although sorted() is a little slower, it has the advantage of being "not picky": it works efficiently on all iterable sequences.
Extension: How to define the key of sort() or sorted() method
1. Using a lambda
# Students: (name, grade, age)
students = [('john', 'A', 15), ('jane', 'B', 12), ('dave', 'B', 10)]
students.sort(key=lambda student: student[0])       # sort by name
sorted(students, key=lambda student: student[0])
2. Using operator
import operator

students = [('john', 'A', 15), ('jane', 'B', 12), ('dave', 'B', 10)]
students.sort(key=operator.itemgetter(0))
sorted(students, key=operator.itemgetter(1, 0))     # sort by grade first, then by name
operator's itemgetter() is suitable for sorting sequences of tuples or lists, while attrgetter() is suitable for sorting sequences of objects.
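As an illustration of attrgetter() (the Student class here is hypothetical, used only for this example):

import operator

class Student:
    def __init__(self, name, grade, age):
        self.name = name
        self.grade = grade
        self.age = age

students = [Student('john', 'A', 15), Student('jane', 'B', 12), Student('dave', 'B', 10)]
# Sort by the `age` attribute; attrgetter('grade', 'age') would sort by grade, then age.
students.sort(key=operator.attrgetter('age'))
print([s.name for s in students])   # ['dave', 'jane', 'john']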
3. Using cmp_to_key(), the most flexible
import functools

def cmp(a, b):
    if a[1] != b[1]:
        return -1 if a[1] < b[1] else 1    # sort by grade in ascending order
    elif a[0] != b[0]:
        return -1 if a[0] < b[0] else 1    # same grade, sort by name in ascending order
    else:
        return -1 if a[2] > b[2] else 1    # same grade and name, sort by age in descending order

students = [('john', 'A', 15), ('john', 'A', 14), ('jane', 'B', 12), ('dave', 'B', 10)]
sorted(students, key=functools.cmp_to_key(cmp))
Exp4: Count the number of times each character appears in a string.
Test string: sentence = 'life is short, i choose python'.
counts = {}
for char in sentence:
    counts[char] = counts.get(char, 0) + 1
from collections import Counter
Counter(sentence)
Method one takes 2.8105250000000055s, method two takes 1.6317423000000062s, a performance improvement of 41.94%.
List comprehensions are short and sweet. In a small code snippet the difference may not be large, but in large-scale development they can save some time.
Exp5: Square the odd numbers in a list, skipping the even numbers.
Test array: oldlist = range(10).
newlist = []
for x in oldlist:
    if x % 2 == 1:
        newlist.append(x**2)
[x**2 for x in oldlist if x%2 == 1]
Method one takes 1.5342976000000021s, method two takes 1.4181957999999923s, a performance improvement of 7.57%.
Most people are used to concatenating strings with +. In fact, this is quite inefficient, because the + operation creates a new string and copies the old ones at every step. A better approach is to join strings with join(). For other string operations, also try to use the built-in methods, such as isalpha(), isdigit(), startswith(), endswith(), and so on.
Exp6: Concatenate the elements of a list of strings.
Test array: oldlist = ['life', 'is', 'short', 'i', 'choose', 'python'].
sentence = ""for word in oldlist: sentence += word
"".join(oldlist)
Method one takes 0.27489080000000854s, method two takes 0.08166570000000206s, a performance improvement of 70.29%.
Another very convenient feature of join is that it lets you specify the separator. For example:
oldlist = ['life', 'is', 'short', 'i', 'choose', 'python']
sentence = "//".join(oldlist)
print(sentence)
life//is//short//i//choose//python
Exp7: Swap the values of x and y.
Test data: x, y = 100, 200.
temp = x
x = y
y = temp
x, y = y, x
Method one takes 0.027853900000010867s, method two takes 0.02398730000000171s, a performance improvement of 13.88%.
When the exact number of iterations is unknown, the usual approach is an infinite loop with while True, checking the termination condition inside the loop body. There is nothing wrong with that. In Python 2, while 1 really did execute faster than while True, because True was a global name that had to be looked up on every iteration, whereas 1 is a constant. In Python 3, True is a keyword and both loops compile to the same bytecode, so the difference is negligible.
Exp8: Loop 100 times with while True and with while 1 respectively.
i = 0
while True:
    i += 1
    if i > 100:
        break
i = 0
while 1:
    i += 1
    if i > 100:
        break
Method one takes 3.679268300000004s, method two takes 3.607847499999991s, a performance improvement of 1.94%.
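To see why the gap is so small on Python 3, you can compare the compiled bytecode with the dis module (a minimal sketch; the output varies across versions):

import dis

# In Python 3 both loops compile to identical bytecode, so any measured
# difference is noise; in Python 2, `True` was a global name lookup and
# `while 1` really was faster.
src_true = "i = 0\nwhile True:\n    i += 1\n    if i > 100:\n        break\n"
src_one = "i = 0\nwhile 1:\n    i += 1\n    if i > 100:\n        break\n"

dis.dis(compile(src_true, "<while_true>", "exec"))
dis.dis(compile(src_one, "<while_one>", "exec"))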
Caching results in memory helps a function return quickly on repeated calls. Python supports decorator-based caching, which maintains a cache of a particular type in memory for the best software-driven speed. Here we use the lru_cache decorator to add caching to a Fibonacci function. The recursive fibonacci function performs a huge amount of repeated computation; for example, fibonacci(1) and fibonacci(2) are evaluated many times. With lru_cache, each repeated computation is performed only once, which greatly improves the efficiency of the program.
Exp9: Compute the Fibonacci sequence.
Test data: fibonacci(7).
def fibonacci(n):
    if n == 0:
        return 0
    elif n == 1:
        return 1
    return fibonacci(n - 1) + fibonacci(n - 2)
import functools

@functools.lru_cache(maxsize=128)
def fibonacci(n):
    if n == 0:
        return 0
    elif n == 1:
        return 1
    return fibonacci(n - 1) + fibonacci(n - 2)
Method one takes 3.955014900000009s, method two takes 0.05077979999998661s, a performance improvement of 98.72%.
Notes:
import functools

@functools.lru_cache(maxsize=100)
def demo(a, b):
    print('demo was executed')
    return a + b

if __name__ == '__main__':
    demo(1, 2)
    demo(1, 2)
demo was executed (demo(1, 2) is called twice, but the message is printed only once)
from functools import lru_cache

@lru_cache(maxsize=100)
def list_sum(nums: list):
    return sum(nums)

if __name__ == '__main__':
    list_sum([1, 2, 3, 4, 5])
TypeError: unhashable type: 'list' (the arguments of a cached function must be hashable, so lists cannot be passed)
The two optional parameters of functools.lru_cache(maxsize=128, typed=False):
maxsize is the maximum number of results to cache; once it is exceeded, the oldest results are evicted and new results are cached in their place. For best performance it should be set to a power of 2.
If typed is True, results for arguments of different types are cached separately.
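A small illustration of typed (the double() function is hypothetical, used only for this example):

from functools import lru_cache

@lru_cache(maxsize=128, typed=True)
def double(x):
    print('computing', x)
    return x * 2

double(3)     # prints: computing 3
double(3.0)   # typed=True caches int 3 and float 3.0 separately, so this prints again
double(3)     # already cached, nothing is printed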
The dot operator (.) is used to access an object's attributes or methods, which triggers dictionary lookups via __getattribute__() and __getattr__() and therefore incurs unnecessary overhead. Inside loops in particular, try to reduce the use of the dot operator and move the lookup outside the loop.
This suggests that we should prefer importing names with from ... import ... rather than fetching a method through the dot operator every time we need it. And it is not just the dot operator: any other unnecessary computation should also be moved out of the loop whenever possible.
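For example (a minimal sketch, not from the original), importing a name directly avoids repeating the attribute lookup inside the loop:

import math
from math import sqrt

def slow(n):
    total = 0.0
    for i in range(n):
        total += math.sqrt(i)   # attribute lookup on `math` in every iteration
    return total

def fast(n):
    total = 0.0
    for i in range(n):
        total += sqrt(i)        # `sqrt` is looked up directly, no attribute access
    return total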
Exp10: Convert the lowercase letters in the string array to uppercase letters.
The test array is oldlist = ['life', 'is', 'short', 'i', 'choose', 'python'].
newlist = []
for word in oldlist:
    newlist.append(str.upper(word))
newlist = []
upper = str.upper
for word in oldlist:
    newlist.append(upper(word))
Method one takes 0.7235491999999795s, method two takes 0.5475435999999831s, a performance improvement of 24.33%.
When we know exactly how many times to loop, a for loop is better than a while loop.
Exp11: Loop 100 times with while and with for respectively.
i = 0
while i < 100:
    i += 1
for _ in range(100): pass
Method one takes 3.894683299999997s, method two takes 1.0198077999999953s, a performance improvement of 73.82%.
Numba can compile Python functions to machine code, greatly increasing execution speed, sometimes approaching the speed of C or FORTRAN. It works well with NumPy and can significantly improve efficiency for code with for loops or heavy numerical computation.
Exp12: Compute the sum from 1 to 100.
def my_sum(n):
    x = 0
    for i in range(1, n + 1):
        x += i
    return x
from numba import jit

@jit(nopython=True)
def numba_sum(n):
    x = 0
    for i in range(1, n + 1):
        x += i
    return x
Method one takes 3.7199997000000167s, method two takes 0.23769430000001535s, a performance improvement of 93.61%.
Vectorization is a powerful feature of NumPy that expresses operations over whole arrays rather than over individual elements. This practice of replacing explicit loops with array expressions is commonly called vectorization.
Looping over an array or any other data structure in Python involves a lot of overhead. Vectorized operations in NumPy delegate the inner loops to highly optimized C and Fortran functions, making Python code much faster.
Exp13: Multiply two sequences of the same length element by element.
Test arrays: a = [1,2,3,4,5], b = [2,4,6,8,10].
[a[i]*b[i] for i in range(len(a))]
import numpy as np

a = np.array([1, 2, 3, 4, 5])
b = np.array([2, 4, 6, 8, 10])
a * b
Method one takes 0.6706845000000214s, method two takes 0.3070132000000001s, a performance improvement of 54.22%.
To check whether a list contains a given member, it is usually faster to use the in keyword.
Exp14: Check whether a list contains a given member.
Test array: lists = ['life', 'is', 'short', 'i', 'choose', 'python'].
def check_member(target, lists):
    for member in lists:
        if member == target:
            return True
    return False
if target in lists: pass
Method one takes 0.16038449999999216s, method two takes 0.04139250000000061s, a performance improvement of 74.19%.
itertools is a module for working with iterators; its functions fall into three main categories: infinite iterators, finite iterators, and combinatorial iterators.
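A few representatives of each category (a brief illustration, not from the original):

import itertools

# Infinite iterators
counter = itertools.count(start=10, step=2)      # 10, 12, 14, ...
print([next(counter) for _ in range(3)])         # [10, 12, 14]

# Finite iterators (terminate on the shortest input sequence)
print(list(itertools.accumulate([1, 2, 3, 4])))  # running sums: [1, 3, 6, 10]
print(list(itertools.chain('ab', 'cd')))         # ['a', 'b', 'c', 'd']

# Combinatorial iterators
print(list(itertools.combinations('ABC', 2)))    # [('A', 'B'), ('A', 'C'), ('B', 'C')]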
Exp15: Return all permutations of a list.
Test array: ["Alice", "Bob", "Carol"].
def permutations(lst):
    if len(lst) == 1 or len(lst) == 0:
        return [lst]
    result = []
    for i in lst:
        temp_lst = lst[:]
        temp_lst.remove(i)
        temp = permutations(temp_lst)
        for j in temp:
            j.insert(0, i)
            result.append(j)
    return result
import itertools

itertools.permutations(["Alice", "Bob", "Carol"])   # returns a lazy iterator; wrap in list() to materialize the results
Method one takes 3.867292899999484s, method two takes 0.3875405000007959s, a performance improvement of 89.98%.
Based on the test data above, I plotted a chart of the experimental results, which shows the performance differences between the methods more intuitively.
As the chart shows, most of the techniques bring a considerable performance gain, but a few bring only a small improvement (for example, items 5, 7, and 8; for item 8, the two methods are almost indistinguishable).
To sum up, I think it boils down to the following two principles:
Built-in library functions are written by professional developers and have been tested extensively, and many of them are implemented in C under the hood. These functions are therefore generally very efficient (sort(), join(), and so on); hand-written alternatives can rarely beat them, so save yourself the effort and don't reinvent the wheel, especially since the wheel you build will probably be worse. If a function already exists in the library, use it directly.
There are also many excellent third-party libraries whose internals may be implemented in C and Fortran; using such libraries is never a loss, for example NumPy and Numba mentioned above, which bring astonishing speedups. There are many more libraries like this, such as Cython and PyPy; I have only scratched the surface here.
There are many other ways to speed up Python code, such as avoiding global variables, using the latest Python version, choosing appropriate data structures, exploiting the short-circuit evaluation of if conditions, and so on; I will not list them all here. These techniques need to be practiced first-hand to be deeply felt and understood, but the most fundamental thing is to keep our passion for programming and our pursuit of best practices; that is the inexhaustible source of motivation that lets us keep surpassing ourselves and scaling new heights!