
Because it would be the equivalent of a natural, built-in evil bit, which would be an extremely bizarre thing; it is implausible for the same reason that the evil bit is unlikely to be used.

Any system that claims to work on that sort of input is almost certainly picking up socio-economic status of different races, or something similar, with no causal predictive power.




Are you saying that there is no genetic component to personality? That's dumb. Are you saying there is no genetic component to facial features? Also dumb. Are you saying that there is no crossover whatsoever between the genetics that govern facial features and personality? Also dumb. There will be some crossover above zero; it would be extraordinary if there were none. So there is likely some small correlation between facial features or skull shape and personality. How big the effect is, I don't know. The problem here is that you are judging the value of a person based upon their personality, not that their personality might be bound up in some way with their genetic makeup.


You've missed an angle - the causal link between genetics and personality is completely overwhelmed by the non-causal correlation between genetics and social status.

These models aren't going to pick up the correlation between facial structure and personality; they are going to pick up which families are high status and which are low, then provide the same pseudoscientific justifications for discrimination that people have been deploying since the dawn of pseudoscience.

Basically, these models are going to mislead people into thinking that a non-causal correlation is causal.
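
As a toy illustration (variable names and numbers are hypothetical, not from any real system), a classifier trained on a feature that merely reflects a confounder can score well above chance even though the feature has no causal effect on the label:

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 10_000

    # "status" is the confounder: it drives both the observable feature
    # and the label; the feature itself has no causal effect on the label.
    status = rng.normal(size=n)
    feature = status + rng.normal(scale=0.5, size=n)  # feature merely reflects status
    label = (status + rng.normal(scale=0.5, size=n) > 0).astype(int)

    model = LogisticRegression().fit(feature.reshape(-1, 1), label)
    print(model.score(feature.reshape(-1, 1), label))  # well above 0.5

The model looks predictive, but intervening on the feature would change nothing: all of the signal flows through the confounder, which is exactly the mistake described above.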



Facial (a)symmetry is an important factor in neurological/psychiatric diagnosis.


> Any system that claims to work on that sort of input is almost certainly picking up socio-economic status of different races, or something similar, with no causal predictive power.

I wonder which will have more predictive power: the version where you let the AI do its thing, or the version where you intervene to correct for things that are almost certainly wrong according to you.


An AI doesn't do "its thing"; it learns with the bias the researcher encoded in the model and, most importantly in this case, with the massive bias of the datasets.

Correcting just steers the bias from one direction to another.


Bias is relative to a null hypothesis; you are just begging the question. Predictive power is the final arbiter.


> Predictive power is the final arbiter

But how do you measure that predictive power? Humans do have to build an evaluation set, and that evaluation set will be biased one way or another; you cannot just pretend bias does not exist and hope for the best.
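
A minimal sketch of that worry (all names and coefficients are hypothetical): if the evaluation labels come from the same biased labeling process as the training labels, a model that exploits the bias scores higher, so accuracy alone cannot settle the question:

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(1)

    def biased_split(n):
        group = rng.integers(0, 2, size=n)  # hypothetical protected attribute
        skill = rng.normal(size=n)          # the thing we'd like to predict
        # Labels from biased human raters: partly skill, partly group.
        label = (skill + 0.8 * group + rng.normal(scale=0.5, size=n) > 0.4).astype(int)
        X = np.column_stack([skill + rng.normal(scale=0.3, size=n), group])
        return X, label

    X_train, y_train = biased_split(5_000)
    X_eval, y_eval = biased_split(5_000)

    model = LogisticRegression().fit(X_train, y_train)
    # The eval set inherits the raters' bias, so a model that leans on
    # `group` looks more accurate, not less.
    print(model.score(X_eval, y_eval))
    print(dict(zip(["skill", "group"], model.coef_[0].round(2))))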



