I think affine types force usage at least once, while linear types force it exactly once. IIRC, this is because linear logic rejects both contraction and weakening while affine logic rejects only contraction.

I'm not sure exactly how this ties into avoiding a GC, but my guess is that rejecting contraction allows a kind of "reference counting" at the type level.
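
For reference, the two structural rules in sequent notation (writing them out from memory, so the usual caveats apply):

    % Weakening: lets a hypothesis go unused (zero uses are allowed).
    \frac{\Gamma \vdash B}{\Gamma, A \vdash B}\;(\text{weakening})

    % Contraction: lets a hypothesis be used more than once.
    \frac{\Gamma, A, A \vdash B}{\Gamma, A \vdash B}\;(\text{contraction})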

I was trying to avoid being too detailed, but the distinction is actually that linear types require function arguments to be used exactly once, while affine types require that they be used at most once. So affine types allow constant functions, but linear types don’t. A good place to start for anyone interested in learning more is the Wikipedia article on substructural type systems [0].
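
As a concrete illustration, here's a hedged Rust sketch (Rust's ownership model is roughly affine for non-Copy types; the names here are made up):

    // Rust's ownership model is roughly affine for non-Copy types:
    // a value can be moved (consumed) at most once, and leaving it
    // unused is allowed.
    struct Token; // hypothetical non-Copy resource

    // A constant function: it discards its second argument without
    // using it. Affine rules (and Rust) permit this; a linear type
    // system would reject it, because `_b` is consumed zero times.
    fn constant<A, B>(a: A, _b: B) -> A {
        a
    }

    fn main() {
        let t = Token;
        let kept = constant(1, t); // `t` is moved in and dropped unused
        // let again = constant(2, t); // error[E0382]: use of moved value: `t`
        println!("{kept}");
    }

The commented-out line is the "at most once" rule kicking in: a second move of `t` is a compile error, while dropping it unused is fine.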

And, yes, you’re right that these type systems effectively implement reference counting at the type level. In practice, linear and affine types are used alongside more general types to permit controlled copying and destruction, which tends to look like manual (but type-enforced) memory allocation and deallocation around the edges.
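
To sketch what that tends to look like (a hypothetical `Handle` type in Rust, not any real API):

    // Hypothetical resource type: copying and destruction are explicit,
    // type-checked operations, so the compiler tracks every duplicate
    // and every deallocation point.
    struct Handle {
        id: u32,
    }

    impl Handle {
        fn open(id: u32) -> Handle {
            Handle { id }
        }

        // Controlled copying: duplication is a visible operation
        // (contraction made explicit).
        fn duplicate(&self) -> Handle {
            Handle { id: self.id }
        }

        // Taking `self` by value means the handle can't be used after
        // this call: the deallocation point is enforced by the types.
        fn close(self) {
            println!("closing handle {}", self.id);
        }
    }

    fn main() {
        let h = Handle::open(1);
        let h2 = h.duplicate(); // explicit copy
        h.close();
        h2.close();
        // h.close(); // error[E0382]: use of moved value: `h`
    }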

0. https://en.wikipedia.org/wiki/Substructural_type_system
