To my understanding, experiments with Unums showed shortcomings that Gustafson didn't anticipate and led to Posits, which drop the fixed-length constraint.
Doing that makes improving precision a lot easier but at the cost of computation time.
Overall, I am not convinced that the current implementation is optimal, but it is a very good trade-off between speed and precision.
> Doing that makes improving precision a lot easier but at the cost of computation time.
Not quite. The difference in computation time is due to the current lack of hardware support, not something inherent to the underlying encoding. So in practice you are right, but in, for example, embedded contexts without floating-point hardware, the performance advantage of IEEE floats should disappear (especially if a 16- or 8-bit posit suffices).
Posits are simpler to implement than IEEE floats (fewer edge cases) and spend more of their bit patterns on actual numbers: a posit reserves a single NaR ("Not a Real") pattern, whereas IEEE 754 burns millions of encodings on NaNs (about 16 million in binary32). The use of tapered precision is also nice.
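To make the tapered-precision point concrete, here is a minimal decoding sketch in Python, assuming the standard posit layout (sign bit, regime run, es exponent bits, fraction); `decode_posit` is an illustrative name, not a library function:

```python
def decode_posit(bits: int, n: int = 16, es: int = 1) -> float:
    """Decode an n-bit posit with es exponent bits into a Python float."""
    mask = (1 << n) - 1
    bits &= mask
    if bits == 0:
        return 0.0
    if bits == 1 << (n - 1):            # 100...0 is NaR, the single non-real pattern
        return float("nan")

    sign = -1.0 if bits >> (n - 1) else 1.0
    if sign < 0.0:                      # negative posits are the two's complement
        bits = -bits & mask

    body = format(bits, f"0{n}b")[1:]   # everything after the sign bit
    first = body[0]
    run = len(body) - len(body.lstrip(first))    # regime: run of identical bits
    regime = run - 1 if first == "1" else -run

    rest = body[run + 1:]               # skip the regime run and its terminator
    exp_bits = rest[:es]                # up to es exponent bits (may be cut off)
    exponent = int(exp_bits, 2) << (es - len(exp_bits)) if exp_bits else 0
    frac_bits = rest[es:]               # whatever is left is the fraction
    fraction = int(frac_bits, 2) / (1 << len(frac_bits)) if frac_bits else 0.0

    # Tapered precision: a longer regime run means a larger scale but
    # fewer fraction bits, so precision is highest near +/-1.0.
    scale = (1 << es) * regime + exponent
    return sign * (1.0 + fraction) * 2.0 ** scale
```

For posit16 (es=1), `decode_posit(0x4000)` gives 1.0 with 12 fraction bits available nearby, while `decode_posit(0x7FFF)` gives 2^28 with no fraction bits left at all: that's the taper.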
Even if hardware support existed, it seems like a variable-length encoding has some inherent overhead relative to a fixed-length one. If you have a "base length" of, say, 32 bits and occasionally expand to 64, there is an inherent cost in both computation and memory, presumably in exchange for greater precision. Perhaps that overhead could be minimized with hardware support, but it seems it must have some.
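As a rough illustration of that overhead (the tag format and names here are made up for the example, not any unum spec): packing values that are sometimes 32 and sometimes 64 bits into one buffer forces you to pay for either a per-element offset table (memory) or a sequential scan (computation) just to get random access back:

```python
import struct

def pack_varwidth(values):
    """Pack each float as 4 bytes when float32 represents it exactly, else 8.
    Random access into the packed blob needs the offset table -- that table
    (or a scan to rebuild it) is the fixed cost of variable-length storage."""
    blob = bytearray()
    offsets = []                        # extra memory: one entry per element
    for v in values:
        offsets.append(len(blob))
        fits32 = struct.unpack("<f", struct.pack("<f", v))[0] == v
        if fits32:
            blob += b"\x00" + struct.pack("<f", v)  # tag byte + 32-bit payload
        else:
            blob += b"\x01" + struct.pack("<d", v)  # tag byte + 64-bit payload
    return bytes(blob), offsets

def get(blob, offsets, i):
    """O(1) lookup, but only because we paid for the offset table."""
    off = offsets[i]
    fmt = "<f" if blob[off] == 0 else "<d"
    return struct.unpack_from(fmt, blob, off + 1)[0]
```

A fixed-length format needs none of this bookkeeping: element i lives at byte i * width, full stop.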
Those are type 1 unums, not posits. What you are saying about variable-length encoding may be true, but it does not apply to the comparison at hand: posits are fixed-length. Type 2 unums are also fixed-length, but have other issues.
In my defense, the comment he replied to got downvoted and I thought it was nestorD, so I was "primed" to misinterpret his comment as criticizing unums in general.
Maybe, but that doesn't mean the particular implementation of floats being used is the best one. See also: Unums and Posits
https://posithub.org/