How to Profile Memory Usage in Python
When exploring algorithms by first implementing a naive version and then optimizing it, analyzing memory usage can be just as important as measuring runtime. Python 3.4 introduced the tracemalloc module, which reports in detail which lines of code allocate the most memory.
Using tracemalloc
import tracemalloc

tracemalloc.start()

# Code to profile...

snapshot = tracemalloc.take_snapshot()

# Display the top memory-consuming lines
top_stats = snapshot.statistics('lineno')
for index, stat in enumerate(top_stats[:3], 1):
    frame = stat.traceback[0]
    print(f"#{index}: {frame.filename}:{frame.lineno}: {stat.size / 1024:.1f} KiB")
Example
Profiling memory usage while counting the three-letter prefixes of the words in the American English dictionary file:
import linecache
import os
import tracemalloc
from collections import Counter

tracemalloc.start()

# Count three-letter prefixes of every word in the system dictionary
words = list(open('/usr/share/dict/american-english'))
counts = Counter()
for word in words:
    prefix = word[:3]
    counts[prefix] += 1

snapshot = tracemalloc.take_snapshot()
display_top(snapshot)
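The snippet calls display_top(), a helper that is not shown above (which is also why linecache and os are imported). A minimal sketch of such a helper, closely following the "pretty top" example in the tracemalloc documentation and limited to three lines to match the output below:

def display_top(snapshot, key_type='lineno', limit=3):
    # Ignore allocations made by the import machinery itself
    snapshot = snapshot.filter_traces((
        tracemalloc.Filter(False, "<frozen importlib._bootstrap>"),
        tracemalloc.Filter(False, "<unknown>"),
    ))
    top_stats = snapshot.statistics(key_type)

    print(f"Top {limit} lines")
    for index, stat in enumerate(top_stats[:limit], 1):
        frame = stat.traceback[0]
        # Shorten the file path to its last two components
        filename = os.sep.join(frame.filename.split(os.sep)[-2:])
        print(f"#{index}: {filename}:{frame.lineno}: {stat.size / 1024:.1f} KiB")
        line = linecache.getline(frame.filename, frame.lineno).strip()
        if line:
            print(f"    {line}")

    other = top_stats[limit:]
    if other:
        size = sum(stat.size for stat in other)
        print(f"{len(other)} other: {size / 1024:.1f} KiB")
    total = sum(stat.size for stat in top_stats)
    print(f"Total allocated size: {total / 1024:.1f} KiB")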
Output
Top 3 lines
#1: scratches/memory_test.py:37: 6527.1 KiB
    words = list(words)
#2: scratches/memory_test.py:39: 247.7 KiB
    prefix = word[:3]
#3: scratches/memory_test.py:40: 193.0 KiB
    counts[prefix] += 1
4 other: 4.3 KiB
Total allocated size: 6972.1 KiB
Handling Code That Releases Memory
If a function allocates a large amount of memory and then releases it before returning, it is not technically a leak, but the peak usage can still be excessive. A snapshot taken after the function returns will miss it, so you need to take snapshots while the function is still running, or monitor memory usage from a separate thread.
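One simple way to observe that transient peak is tracemalloc.get_traced_memory(), which returns the current and peak traced sizes since tracing started. A minimal sketch; the function allocate_and_release below is purely illustrative:

import tracemalloc

def allocate_and_release():
    # Hypothetical function: builds a large temporary list, then drops it
    data = [b"x" * 1024 for _ in range(10_000)]
    return len(data)

tracemalloc.start()
allocate_and_release()

# peak reflects the temporary allocation even though it has been freed
current, peak = tracemalloc.get_traced_memory()
print(f"Current: {current / 1024:.1f} KiB, peak: {peak / 1024:.1f} KiB")
tracemalloc.stop()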