Often when I am writing in Python, I find that I want to see how long a particular function call or set of statements is taking to execute. Let's say I have the following code that gets executed frequently:
for i in range(10000000):
    x = 934.12 ** 32.61 * i / 453.12 ** 0.23
and I want to know how long it takes to execute, to see if it is slowing down my app and should be optimized. Previously I would wrap it like this:
import time

start = time.time()
for i in range(10000000):
    x = 934.12 ** 32.61 * i / 453.12 ** 0.23
print(time.time() - start)
This will print out the duration in seconds of that code segment, but is more work and typing than I want, and more cleaning up later. I realized that the new “with” statement in Python could probably help me out. Let’s create a little timer class that cooperates with it:
class Timer:
    def __enter__(self):
        self.start = time.time()
        return self
    def __exit__(self, *args):
        print(time.time() - self.start)
Now all we have to do is:
with Timer():
    for i in range(1000000):
        x = 934.12 ** 32.61 * i / 453.12 ** 0.23
You can also try:
with Timer():
    time.sleep(1.5)
For these, 0.28738 and 1.50169 are what I get, respectively. While something like this couldn’t really replace application-wide profiling via a module like cProfile, it can be an extremely useful and quick way to see if your prototype is scalable or not. I usually end up having a debug.py or helpers.py file in my larger projects with little tools like this, and I’ll probably end up adding this one as well.
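One variation worth dropping into that helpers file (my own addition, not from the post above) is a timer that stores the interval on the instance instead of only printing it, so the number can be logged or asserted on later. The name IntervalTimer is hypothetical:

```python
import time

class IntervalTimer:
    """Context manager that records elapsed wall-clock time on the instance."""
    def __enter__(self):
        self.start = time.time()
        return self  # so "with IntervalTimer() as t:" binds the instance

    def __exit__(self, *args):
        self.interval = time.time() - self.start

with IntervalTimer() as t:
    time.sleep(0.1)
print(t.interval)  # roughly 0.1
```

Returning self from __enter__ is the key difference: it is what makes the "as t" form work at all.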
Let me know if you are doing something similar, or if I've reinvented something that already exists. I'd also love to hear from people profiling their Python code and what techniques they are using, as I am just starting to learn about it.
Check out profilehooks on PyPI for something like this at function level, implemented using decorators. I wonder how difficult it would be to have the same object (or function) act both as a decorator and as a context manager.
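Since this comment was written, the standard library gained contextlib.ContextDecorator (Python 3.2+), which makes the dual-use object nearly free. A sketch, with a class name (timed) of my own choosing:

```python
import time
from contextlib import ContextDecorator

class timed(ContextDecorator):
    """Works both as a decorator and as a context manager."""
    def __enter__(self):
        self.start = time.time()
        return self

    def __exit__(self, *exc):
        self.interval = time.time() - self.start
        print("took %.5f seconds" % self.interval)
        return False  # never suppress exceptions

@timed()
def slow_call():
    time.sleep(0.2)

slow_call()       # decorator form: times each call

with timed():     # context-manager form: times the block
    time.sleep(0.2)
```

ContextDecorator simply wraps the decorated function in a fresh `with` block on each call, so the two forms share the same __enter__/__exit__ logic.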
Timeit is a great module, but it serves a different purpose: very accurate microbenchmarks. Sometimes you just need to know whether a for loop takes 1 or 20 seconds; repeating that for loop 1000 times to get its accurate speed up to the nearest microsecond is overkill. Plus, timeit's API is cumbersome and inconvenient, or at least I can never remember offhand how to use it, which is the same thing in my book.
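For the record, the timeit incantation in question looks roughly like this; the statement and repeat count here are purely illustrative:

```python
import timeit

# timeit executes the statement `number` times and returns the total
# elapsed seconds -- precise, but overkill for a quick sanity check.
total = timeit.timeit(
    "x = 934.12 ** 32.61 * 1000 / 453.12 ** 0.23",
    number=100000,
)
print(total)
```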