Currently testing a CPU-intensive algorithm I need for my research project. Hoping it will save me some time. When I am done I will post the results!
[Edit:]
Pretty good so far!
~/stuff/Programming/faultire/src/
hendersont@glycineportable src $ time pypy sleepytree/test_metricspace.py
.......
----------------------------------------------------------------------
Ran 7 tests in 39.621s
OK
real 0m39.675s
user 0m39.150s
sys 0m0.170s
~/stuff/Programming/faultire/src/
hendersont@glycineportable src $ time python sleepytree/test_metricspace.py
.......
----------------------------------------------------------------------
Ran 7 tests in 69.442s
OK
real 1m9.483s
user 1m8.970s
sys 0m0.110s
~/stuff/Programming/sleepytree/
hendersont@glycineportable sleepytree $ time python test_metricspace.py -v
test_distance (__main__.TestCompare) ... ok
test_nondegenercy (__main__.TestCompare) ... ok
test_symmetry (__main__.TestCompare) ... ok
test_triangle_inequality (__main__.TestCompare) ... ok
test_contains (__main__.TestTestNode) ... ok
test_get (__main__.TestTestNode) ... ok
test_iter (__main__.TestTestNode) ... ok
----------------------------------------------------------------------
Ran 7 tests in 570.385s
OK
real 9m30.451s
user 9m23.920s
sys 0m1.730s
~/stuff/Programming/sleepytree/
hendersont@glycineportable sleepytree $ time pypy test_metricspace.py -v
test_distance (__main__.TestCompare) ... ok
test_nondegenercy (__main__.TestCompare) ... ok
test_symmetry (__main__.TestCompare) ... ok
test_triangle_inequality (__main__.TestCompare) ... ok
test_contains (__main__.TestTestNode) ... ok
test_get (__main__.TestTestNode) ... ok
test_iter (__main__.TestTestNode) ... ok
----------------------------------------------------------------------
Ran 7 tests in 255.339s
OK
real 4m15.396s
user 4m12.990s
sys 0m0.310s
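For context, the suite above checks metric-space properties (symmetry, non-degeneracy, the triangle inequality) plus a node container. The actual code isn't shown in the thread, but a minimal sketch of that kind of property test looks something like this; the `distance` function and sample points here are hypothetical stand-ins, and the real suite presumably runs over a far larger data set, which is where the runtime goes:
import math
import unittest

def distance(p, q):
    # Placeholder metric: Euclidean distance in the plane.
    return math.hypot(p[0] - q[0], p[1] - q[1])

class TestCompare(unittest.TestCase):
    points = [(0.0, 0.0), (1.0, 2.0), (-3.5, 4.0)]

    def test_symmetry(self):
        # d(p, q) == d(q, p) for all pairs
        for p in self.points:
            for q in self.points:
                self.assertAlmostEqual(distance(p, q), distance(q, p))

    def test_nondegeneracy(self):
        # d(p, p) == 0 for every point
        for p in self.points:
            self.assertEqual(distance(p, p), 0.0)

    def test_triangle_inequality(self):
        # d(p, r) <= d(p, q) + d(q, r), with a small tolerance for floats
        for p in self.points:
            for q in self.points:
                for r in self.points:
                    self.assertLessEqual(
                        distance(p, r),
                        distance(p, q) + distance(q, r) + 1e-9)

if __name__ == '__main__':
    unittest.main()
The same file runs unmodified under both interpreters (python test_metricspace.py / pypy test_metricspace.py), which is what makes the timing comparison above apples-to-apples.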
PyPy:
>>>> t1 = time.time(); a=[x*x for x in xrange(1000000)]; time.time()-t1
0.38609600067138672
>>>> t1 = time.time(); a=[x*x+math.sin(x/1000000.) for x in xrange(1000000)]; time.time()-t1
0.42182803153991699
Python 2.7:
>>> t1 = time.time(); a=[x*x for x in xrange(1000000)]; time.time()-t1
0.25005197525024414
>>> t1 = time.time(); a=[x*x+math.sin(x/1000000.) for x in xrange(1000000)]; time.time()-t1
0.6075689792633057
Both are 64-bit builds running on a 2.53 GHz Core 2 Duo. It looks like PyPy's JIT has some fixed overhead, but it can heavily optimize operations once it gets going.
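One way to see that warm-up cost directly is to time the same loop several times within one process. This is just a rough sketch (not from the thread): under PyPy the first pass or two carry tracing/compilation overhead and later passes speed up, while under CPython every pass should take roughly the same time.
import math
import time

def work(n=1000000):
    # Same kind of loop body as the snippets above.
    return [x * x + math.sin(x / 1e6) for x in xrange(n)]

for i in range(5):
    t0 = time.time()
    work()
    print "pass %d: %.3f s" % (i, time.time() - t0)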
$ pypy -mtimeit -s'import math; sin=math.sin' \
'[x*x+sin(x/1e6) for x in xrange(1000000)]'
10 loops, best of 3: 156 msec per loop
With no division it is slower (?):
$ pypy -mtimeit -s'import math; sin=math.sin' \
'[x*x+sin(x) for x in xrange(1000000)]'
10 loops, best of 3: 188 msec per loop
CPython shows expected behavior:
$ python2.7 -mtimeit -s'import math; sin=math.sin' \
'[x*x+sin(x) for x in xrange(1000000)]'
10 loops, best of 3: 231 msec per loop
$ python2.7 -mtimeit -s'import math; sin=math.sin' \
'[x*x+sin(x/1e6) for x in xrange(1000000)]'
10 loops, best of 3: 253 msec per loop
CPython is faster for tiny cases:
$ pypy -mtimeit '[x*x for x in xrange(1000000)]'
10 loops, best of 3: 126 msec per loop
$ python2.7 -mtimeit '[x*x for x in xrange(1000000)]'
10 loops, best of 3: 67.3 msec per loop
$ pypy -mtimeit '[x*x*x for x in xrange(1000000)]'
10 loops, best of 3: 123 msec per loop
$ python2.7 -mtimeit '[x*x*x for x in xrange(1000000)]'
10 loops, best of 3: 118 msec per loop
$ pypy -mtimeit -s'import math; sin=math.sin; a=0.786' '[sin(a) for x in xrange(1000000)]'
10 loops, best of 3: 395 msec per loop
$ pypy -mtimeit -s'import math; sin=math.sin; a=0.785' '[sin(a) for x in xrange(1000000)]'
10 loops, best of 3: 375 msec per loop
$ python -mtimeit -s'import math; sin=math.sin; a=0.786' '[sin(a) for x in xrange(1000000)]'
10 loops, best of 3: 177 msec per loop
$ python -mtimeit -s'import math; sin=math.sin; a=0.785' '[sin(a) for x in xrange(1000000)]'
10 loops, best of 3: 155 msec per loop
32-bit pypy 1.4 can compile and run some C extensions; they just have to be really well-written (not relying on CPython-specific behavior). I can't find documentation for this, so freenode/#pypy is a good place to get details.
Sorry to bother people with this question, but I've spent ages searching my internet history for an answer, to no avail:
A few weeks ago someone posted a Python related link on HN. It was some sort of guide or in-depth analysis, with code snippets. The code snippets did not have any syntax colouring. The background of the site was a nice dark/deep green texture (slightly bluish maybe). The top of the page had a sort of golden bookmark icon in the corner.
If anyone remembers that site please let me know the url or the title. I need to find it again.