This was addressed in the previous comment

>On the quant side, *unlike on the data science side*,

A vision scientist is on the data science side. You're not dealing with monetary values where floating-point error compounds on itself to the point that your models become garbage. Quant work is its own unique field with its own unique prerequisites.




Nothing precludes you from doing integer arithmetic in a dynamic language.
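For example (a throwaway sketch in Python, not anyone's production code): keep prices as integer cents and the arithmetic stays exact, because Python ints are arbitrary precision.

    # Prices held as integer cents: no rounding error, regardless of magnitude.
    prices_cents = [10, 20, 999_999_999_999]
    total_cents = sum(prices_cents)          # exact integer arithmetic
    assert total_cents == 1_000_000_000_029
    print(f"${total_cents / 100:,.2f}")      # convert to float only for display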

I’m not a quant and this isn’t my area of expertise, but, for example, I’m pretty sure various differential-equation solvers depend on variables taking continuous values, so floating point basically must be used. Understanding the impact of that is definitely important. Analogously, I frequently run into numerical-precision issues in image processing. Understanding how numbers are represented on a computer isn’t unique to being a quant. Understanding how the choice of representation can impact production is also not unique to being a quant. The dynamic nature of the language isn’t particularly relevant, either.
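For instance (a toy sketch, not from real image-processing code), two classic ways binary floats bite:

    import numpy as np

    # Accumulated rounding error: 0.1 has no exact binary representation,
    # so repeated addition drifts away from the "true" answer.
    total = sum([0.1] * 1000)
    print(total, total == 100.0)     # prints something like 99.9999999999986 False

    # Precision loss at large magnitudes: a float16 accumulator stalls at 2048
    # because 2048 + 1 rounds back to 2048. float32 does the same at 2**24,
    # which matters when summing millions of pixel values in single precision.
    acc = np.float16(0.0)
    for _ in range(3000):
        acc += np.float16(1.0)
    print(acc)                       # 2048.0, not 3000.0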


>Nothing precludes you from doing integer arithmetic in a dynamic language.

You would be surprised. The second you use pandas with a custom data type (let alone any other library you'd want to use), it can silently auto-convert it to a float. Furthermore, identifying when it has converted the type on you is a pain.
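For example, a simplified sketch of this kind of silent conversion (plain int64 here rather than a custom dtype): the column quietly becomes float64 the moment a missing value appears.

    import pandas as pd

    s = pd.Series([100, 250, 399])
    print(s.dtype)                   # int64

    s2 = s.reindex([0, 1, 2, 3])     # the new index label has no value -> NaN
    print(s2.dtype)                  # float64: the integers were silently converted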

>so floating point basically must be used.

Quants tend to use fixed-precision types. It's like a float in every way, except base 10 instead of base 2, so there is no floating-point error.


> The second you use pandas with a custom data type

That's a pandas (and maybe numpy) issue, not a dynamic language issue. (If you want to generalize from the specific libraries more accurately than “dynamic language”, it's a “low-level library whose type system doesn't match the host language's type system” issue.)

> Quants tend to use fixed-precision types. It's like a float in every way, except base 10 instead of base 2, so there is no floating-point error.

No, a type that is like binary floating point in every way except base 10 instead of base 2 would be decimal floating point, not fixed point. Decimal fixed point differs from binary floating point in more ways than just the base.
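To make the distinction concrete, a rough Python sketch (decimal.Decimal is a decimal floating-point type; an "integer number of ticks" scheme is fixed point):

    from decimal import Decimal

    # Decimal floating point: base-10 digits plus a movable exponent.
    print(Decimal("0.1") + Decimal("0.2"))    # Decimal('0.3'), exact
    print(Decimal("1E+2") * Decimal("1E-5"))  # Decimal('0.001'), exponent shifted

    # Fixed point: the scale never moves. Here, prices as integer thousandths.
    SCALE = 1000                    # illustrative: 1000 ticks per dollar
    price_a = 101_250               # $101.250
    price_b = 99_125                # $99.125
    spread = price_a - price_b      # exact integer arithmetic: 2125 ticks
    print(spread / SCALE)           # 2.125 (float used only for display)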


Quants don't care about floating-point precision in research. It's just applied stats.


I do, because the results from my research vary when I'm validating the model.



