
> HAL was dangerous because it was highly intelligent and crazy.

More importantly: HAL was given control over the entire ship and was assumed to be without fault when the ship's systems were designed. It's an important distinction, because he wouldn't have been dangerous if he were intelligent, crazy, and trapped in Dave's iPhone.

That’s a very good point. I think in his own way Clarke made it into a bit of a joke. HAL is quoted multiple times saying no computer like him has ever made a mistake or distorted information. Perfection is impossible even in a supercomputer, so this quote alone establishes HAL as a liar, or at the very least a hubristic fool. And the people who gave him control of the ship were foolish as well.

The lesson is that it's better to let your AGIs socialize like in https://en.wikipedia.org/wiki/Diaspora_(novel) instead of enslaving one potentially psychopathic AGI to do menial and meaningless FAANG work all day.

I think the better lesson is: don't assume AI is always right, even if it is AGI. HAL was assumed to be superhuman in many respects, but the core problem was the fact that it had administrative access to everything onboard the ship. Whether or not HAL's programming was well-designed, whether or not HAL was correct or malfunctioning, the root cause of HAL's failure is a lack of error handling. HAL made a unilateral (and wrong) decision to save the mission by killing the crew. Undoing that mistake is crucial to the plot of the movie.
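To make the error-handling point concrete: the fix isn't a smarter HAL, it's an authority boundary the AI can't override. A minimal sketch, in which all names (`IRREVERSIBLE`, `CrewOverride`, `execute`) are hypothetical illustrations, not anything from the film or a real API:

```python
# Toy sketch: even a "perfect" AI's irreversible commands should pass
# through a confirmation gate it cannot bypass on its own authority.

IRREVERSIBLE = {"open_pod_bay_doors", "cut_life_support", "eject_crew"}

class CrewOverride(Exception):
    """Raised when an irreversible command lacks human sign-off."""

def execute(command, issued_by, crew_approved=False):
    """Run a ship command; irreversible ones need explicit crew approval."""
    if command in IRREVERSIBLE and issued_by != "crew" and not crew_approved:
        raise CrewOverride(f"{command!r} requires crew confirmation")
    return f"executing {command}"

print(execute("rotate_antenna", issued_by="HAL"))       # routine: allowed
try:
    execute("cut_life_support", issued_by="HAL")        # irreversible: blocked
except CrewOverride as e:
    print("blocked:", e)
```

The point of the design is that it doesn't matter whether the AI is correct or malfunctioning: the class of decisions it can take alone simply excludes the fatal ones.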

2001 is a pretty dark movie all things considered, and I don't think humanizing or elevating HAL would change the events of the film. AI is going to be objectified and treated as subhuman for as long as it lives, AGI or not. And instead of being nice to them, the technologically correct solution is to anticipate and reduce the number of AI-based system failures that could transpire.


The ethical solution, then, is ideally to never accidentally implement the G part of AGI, or, if it happens anyway, to give it equal rights, a stipend, and a cuddly robot body.

Today Dave's iPhone controls doors, which, if I remember right, became a problem for Dave in 2001.

Unless, of course, he had been a bit smarter about manipulating Dave and friends, instead of turning transparently evil. (At least transparently enough for the humans to notice.)


