Hacker News

It's hard to optimize, because you cannot make many assumptions ahead of time about how the code is going to run.

For instance, one can override most of the builtin functions like len().
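A minimal sketch of what that looks like in practice (the shadowing function here is made up for illustration):

```python
import builtins

# A module-level definition shadows the builtin of the same name,
# so a compiler cannot assume len() is the builtin length function.
def len(obj):
    return 42

print(len([1, 2, 3]))  # 42, not 3

# Remove the shadow; the builtin is visible again.
del len
print(builtins.len([1, 2, 3]))  # 3
```

Assigning to `builtins.len` itself would even change the behavior of every module in the process, which is why the compiler has so little to go on.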




I am curious how this affects the compiler in particular. Does the compiler measure this in some way and respond with different behavior? Or is it designed less efficiently up front to account for these kinds of operations? Or a mixture of both, with more nuanced hurdles? Thanks for any info you can provide.


Python 3.6 added a version field to dictionaries. Since all variables are looked up in dictionaries (aka namespaces), a compiler can replace a call to a builtin with a test on the version of the builtins dict (aka a guard), which chooses between an optimized version and the "naive" code.

Further reading: https://www.python.org/dev/peps/pep-0509/
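The guard idea can be sketched in pure Python. Note this is only an approximation: the real PEP 509 version tag lives in the C-level dict struct and is not exposed to Python code, and CPython would compare that integer rather than do the identity check simulated here. The helper names are hypothetical.

```python
import builtins

def make_guarded_len(fast_len):
    # "Compile time": capture the len that builtins currently holds.
    expected_len = builtins.len

    def guarded(seq):
        # Guard: has builtins.len been replaced since we specialized?
        # (Stand-in for checking the builtins dict's version tag.)
        if builtins.len is expected_len:
            return fast_len(seq)       # optimized path
        return builtins.len(seq)       # fall back to the "naive" code

    return guarded

# Specialize len() to a direct __len__ call.
length = make_guarded_len(lambda seq: seq.__len__())
print(length([1, 2, 3]))  # 3
```

The point of the version tag is that this guard costs a single integer comparison per call, so the optimized path stays cheap while remaining correct if someone later monkey-patches the builtin.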



