
would love to see actual benchmarks



Don't have anything significant, but giving this a quick test with some of my Advent of Code solutions, I found it to be quite a bit slower:

   time python day_2.py             

   ________________________________________________________
   Executed in   57.25 millis    fish           external
      usr time   25.02 millis   52.00 micros   24.97 millis
      sys time   25.01 millis  601.00 micros   24.41 millis


   time codon run -release day_2.py 

   ________________________________________________________
   Executed in  955.58 millis    fish           external
      usr time  923.39 millis   62.00 micros  923.33 millis
      sys time   31.76 millis  685.00 micros   31.07 millis


   time codon run -release day_8.py 

   ________________________________________________________
   Executed in  854.23 millis    fish           external
      usr time  819.11 millis   78.00 micros  819.03 millis
      sys time   34.67 millis  712.00 micros   33.96 millis

   time python day_8.py             

   ________________________________________________________
   Executed in   55.30 millis    fish           external
      usr time   22.59 millis   54.00 micros   22.54 millis
      sys time   25.86 millis  642.00 micros   25.22 millis
It wasn't a ton of work to get running, but I had to comment out some stuff that isn't available. Some notable pain points: I couldn't import code from another file in the same directory, and I couldn't do zip(*my_list) because the asterisk isn't supported for unpacking like that (a workaround is sketched below). I'd consider revisiting it if I needed a single-file program to run on someone else's machine, assuming compilation works as easily as in the examples.
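For anyone curious, here's roughly the shape of what failed and the kind of workaround I mean. my_list is just a stand-in for a list of equal-length rows, and I haven't verified that Codon accepts this exact form:

   # Star-unpacking into zip, which Codon rejected for me:
   # cols = list(zip(*my_list))

   # Workaround: transpose with explicit indexing instead of unpacking
   # (gives lists rather than tuples, which was fine for my purposes).
   my_list = [[1, 2, 3], [4, 5, 6]]   # stand-in data
   cols = [[row[i] for row in my_list] for i in range(len(my_list[0]))]
   print(cols)                        # [[1, 4], [2, 5], [3, 6]]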


I would guess the bulk of the time is being spent in compilation. You might try "codon build -release day_2.py" then "time ./day_2" to measure just runtime.


Good catch! Here are the updated runs:

   time python day_2.py

   ________________________________________________________
   Executed in   51.26 millis    fish           external
      usr time   23.38 millis   48.00 micros   23.33 millis
      sys time   21.88 millis  617.00 micros   21.26 millis

   time day_2

   ________________________________________________________
   Executed in  227.06 millis    fish           external
      usr time    8.17 millis   70.00 micros    8.10 millis
      sys time    6.69 millis  708.00 micros    5.98 millis

   time python day_8.py

   ________________________________________________________
   Executed in   53.63 millis    fish           external
      usr time   22.11 millis   51.00 micros   22.06 millis
      sys time   24.63 millis  714.00 micros   23.91 millis

   time day_8

   ________________________________________________________
   Executed in  115.89 millis    fish           external
      usr time    5.83 millis   92.00 micros    5.74 millis
      sys time    4.59 millis  856.00 micros    3.73 millis
Now Codon is much faster than Python.


It looks like you are compiling and running. Try compiling to an executable first and then benchmark running that.


We do have a benchmark suite at https://github.com/exaloop/codon/tree/develop/bench and results on a couple different architectures at https://exaloop.io/benchmarks


Why do the C++ implementations perform so poorly?


My guess for word_count and faq is that the C++ implementation uses std::unordered_map, which famously has quite poor performance. [0]

[0] https://martin.ankerl.com/2019/04/01/hashmap-benchmarks-01-o...



