Thanks for confirming, and that is a good point! However, I think that with monomorphizing compilers, programmers often write code in such a way as to avoid triggering a larger re-compilation. With monomorphized data structures this might be easier to achieve, since you only need to control how your data structures are instantiated; but with functions you would also need to control how the functions are passed around. For example, a use of a Haskell-style `lens` package would probably trigger a re-compilation of `lens -> kan-extensions -> profunctors -> comonad -> base`. Programmers can work around that, but it may place restrictions on what they can easily do with the language.
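To make that concrete, here is a rough sketch of the kind of use I mean. The compiler behaviour is hypothetical: GHC normally compiles this via dictionary passing, but a fully monomorphizing compiler would have to re-specialize the combinators `name` and `view` are built from, cascading through the `lens -> ... -> base` chain above.

```haskell
{-# LANGUAGE RankNTypes #-}
-- Hypothetical scenario: under a monomorphizing compiler, instantiating
-- `lens`/`view` at `Person` forces specialization of the generic code they
-- are built from in lens and its dependencies.
import Control.Lens (Lens', lens, view)

data Person = Person { _name :: String, _age :: Int }

-- A hand-written lens for the _name field.
name :: Lens' Person String
name = lens _name (\p n -> p { _name = n })

greet :: Person -> String
greet p = "hello, " ++ view name p
```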



Yes - that’s a great observation. The scheme here means that you can end up with compilation dependencies that aren’t reflected in your explicit dependency graph, for example via exactly the path you describe.

The same is true of monomorphization in the presence of typeclasses (or traits, concepts, etc.). I have some ideas about how to do such incremental compilation optimally, but I haven’t written them down yet.
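A sketch of the typeclass case (again hypothetical - this is not GHC's default behaviour, and the "module" split is only indicated in comments): the class-polymorphic code lives upstream, the new instance downstream, yet monomorphization makes the downstream compilation depend on the upstream definitions rather than just their interfaces.

```haskell
-- "Upstream" code: a class and a function polymorphic over it.
class Pretty a where
  pretty :: a -> String

render :: Pretty a => [a] -> String
render = unlines . map pretty

-- "Downstream" code: a new instance. A fully monomorphizing compiler has to
-- generate code for `render` at `Bool` here, so compiling this code depends
-- on the body of `render`, not just its type signature.
instance Pretty Bool where
  pretty b = if b then "yes" else "no"

main :: IO ()
main = putStr (render [True, False])
```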

Anyway, I totally agree with you. I don’t mean to suggest this is the best way to do things - if anything, Rust/C++/etc. have taught us that excessive monomorphization is probably not the way to go, for developer-experience reasons. You may be able to imagine some interesting derivative of the scheme presented here built on something like Swift’s “witness table”-based compilation of protocols, which might be much faster at compile time and would support separate compilation. But I don’t even have a sketch of that. This is only one technique, and the design space is very wide.
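For what the witness-table direction looks like, here is an illustrative sketch only: `PrettyDict` and `renderWith` are made-up names, and this is just explicit dictionary passing (roughly what Swift witness tables and GHC dictionaries do), not anything from the article.

```haskell
-- The polymorphic code compiles once against an explicit record of
-- operations (a "witness table"), so adding a new dictionary downstream
-- does not force re-specializing anything upstream.
data PrettyDict a = PrettyDict { prettyWith :: a -> String }

renderWith :: PrettyDict a -> [a] -> String
renderWith d = unlines . map (prettyWith d)

boolDict :: PrettyDict Bool
boolDict = PrettyDict (\b -> if b then "yes" else "no")

main :: IO ()
main = putStr (renderWith boolDict [True, False])
```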



