
Good comparison. An AI companion will never talk back or tell you that you're wrong. Kind of similar in my mind to how fast food restaurants won't serve you anything that's too "hard to swallow".



> An AI companion will never talk back or tell you that you're wrong.

AI can already do that if you're not using a heavily sanitized model. I've even seen an AI rickroll someone after a line with similar wording came up in the conversation.

Abilities like that are less of a problem than getting the AI to correctly recognize which topics and parts of a text are important, and to keep that context around for a while.


An AI companion most definitely could be configured to talk back or adopt any possible personality trait.


And there would definitely be a market for it, just like there's a market for spicy food or BDSM. Then again, those may not be apt comparisons -- an AI that's not a sycophant might be more comparable to food with a little salt?


Making it always talk back would not be an issue, just like making it a complete sycophant would be easy. Any form of nuance would be hard. E.g. if I'm complaining about my job, it should talk back if I'm being unreasonable, but also take my current state of mind into account, etc. Maybe with chain-of-thought prompting you could get something like this to work, but from my experience, I doubt it would be very good.
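
To make that concrete, here's a minimal sketch of the nuance check, assuming an OpenAI-style chat completions API; the model name, prompts, and the companion_reply helper are all illustrative, not anyone's actual companion implementation:

    # Sketch: ask the model to privately reason about whether pushback is
    # warranted before it answers, instead of agreeing by default.
    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    SYSTEM_PROMPT = (
        "You are a companion, not a sycophant. Before replying, think step "
        "by step: (1) Is the user's complaint reasonable? (2) What does "
        "their message suggest about their current state of mind? Then "
        "answer: push back gently if they are being unreasonable, "
        "empathize if they are not. Output only the final reply."
    )

    def companion_reply(user_message: str) -> str:
        # Hypothetical helper; the model choice is an assumption.
        response = client.chat.completions.create(
            model="gpt-4o",
            messages=[
                {"role": "system", "content": SYSTEM_PROMPT},
                {"role": "user", "content": user_message},
            ],
        )
        return response.choices[0].message.content

    print(companion_reply("My boss expects me to show up on time. Unreasonable, right?"))

Even then, all of the actual nuance lives in the prompt, which is exactly why I doubt it would hold up.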


Hell, Sydney did it by accident. (Or "accident")


As shown in the movie Her, they'll just leave you for much more capable AI friends.



