To say something that’s maybe obvious: generics don’t seem so necessary for languages with dynamic types. (Caveat: I definitely remember wanting to express parameterised types in Common Lisp but I was probably doing something wrong).
I’d argue that memory safety feels like more of a requirement, but there are plenty of new languages which don’t have it.
The performance argument (for generics in statically typed languages) also feels weak to me.
Firstly, you can get much of the value of generics using parametric polymorphism in a language like ML. E.g. sort might have a type like:
type ordering = LT | EQ | GT
val sort : ('a -> 'a -> ordering) -> 'a array -> 'a array
Which reads as ‘sort takes a function to compare things of some type 'a, and an array of such things, and gives you a (sorted) array of those things.’
You get type safety at call sites (i.e. no sorting an array of strings with an int comparison) and type safety in the implementation of sort. However, this can be implemented with machine code generated for sort only once, rather than a separate copy for each type it’s used with; indeed, this is what you’ll get from the classic OCaml compiler, thanks to its uniform value representation. The thing you would want for performance is either sufficient inlining or generics which generate new code for each instantiation.
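A minimal sketch of the idea in OCaml — one polymorphic sort compiled once, usable at any element type. (The implementation here is a wrapper over the stdlib’s Array.sort purely for brevity; the point is the type, not the algorithm.)

```ocaml
type ordering = LT | EQ | GT

(* One polymorphic sort: a single compiled function serves every 'a. *)
let sort (cmp : 'a -> 'a -> ordering) (a : 'a array) : 'a array =
  let b = Array.copy a in
  Array.sort (fun x y -> match cmp x y with LT -> -1 | EQ -> 0 | GT -> 1) b;
  b

(* Annotated with int so it is monomorphic, as in the example above. *)
let int_cmp (x : int) y = if x < y then LT else if x > y then GT else EQ

let () =
  assert (sort int_cmp [| 3; 1; 2 |] = [| 1; 2; 3 |]);
  (* The same compiled code sorts strings with a string comparator... *)
  let str_cmp (x : string) y = if x < y then LT else if x > y then GT else EQ in
  assert (sort str_cmp [| "b"; "a" |] = [| "a"; "b" |])
  (* ...while [sort int_cmp [| "b"; "a" |]] is rejected by the type checker. *)
```

The uniform representation means there is genuinely one copy of the machine code here, not one per element type.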
So that sort of per-type code generation is a performance optimisation, not a requirement of generics. (You will also get worse type checking with something like C++ templates, because they are type-checked where they’re expanded rather than where they’re defined.)
Secondly, this style of generics may lead to code-size bloat, which may itself hurt performance. Sure, it’s nice if you can speed up the functions on your critical path by 6%, but if you have a big program doing lots of things, the extra instruction-cache misses from having more code may slow you down. It’s maybe not great if, e.g., you have a different copy of the function that peeks inside a future/extracts its value for each type you put in one — though that function would likely be inlined either way, so it isn’t a great example.
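To make the last point concrete, here is a hypothetical peek-style accessor in OCaml (the 'box' type is a stand-in for a future, not a real library API): it compiles to one tiny function for all element types, and is exactly the kind of thing a compiler would inline regardless.

```ocaml
(* Stand-in for a completed future holding a value of type 'a. *)
type 'a box = { value : 'a }

(* A single polymorphic accessor: one copy of code for every 'a,
   and small enough to be inlined at call sites anyway. *)
let peek (b : 'a box) : 'a = b.value

let () =
  assert (peek { value = 42 } = 42);
  assert (peek { value = "hi" } = "hi")
```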