
>While it is still an advancement in NLP, I'm more interested in getting a super accurate or generative AI system to explain itself than one that cannot

Why? People can explain themselves because we rationalize our actions, not necessarily because we know why we did something. I don't understand why we hold AI to such a high standard.




There seems to be an overabundance of negative sentiment toward deep learning among HN commentators, but whenever I hear the reasons behind the pessimism, I'm usually unimpressed.


For the same reason we have psychiatrists: when the AI makes a mistake, you need to fix it, work around it, prevent it, or, if all else fails, protect others from it.

It's all fun and games when AI does trivia. When AI gets plugged into places where mistakes have tangible real-world consequences (e.g., airport screening), you need to be able to reason about the system so it gets monotonically better over time.



