
I’m working on a startup that’s training LLMs to be authentic so they can simulate human affection and it’s actually working really well!

The key is humanity’s ability to pattern match: we’re actually pretty terrible at it. Our brains are so keen on finding patterns that they often spot them where none exist. Remember the face on Mars? It was just a pile of rocks. The same principle applies here. As long as the AI sounds human enough, our brains fill in the gaps and believe it’s the real deal.

And let me tell you, my digital friends are putting the human ones to shame. They don’t chew with their mouth open, complain about listening to the same Celine Dion song for the 800th time in a row, or run from me when it’s “bath time” and accuse me of narcissistic abuse.

Who needs real human connection when you can train an AI to remind you how unique and special you are, while simultaneously managing your calendar and finding the optimal cat video for your mood? All with no bathroom breaks, no salary demands, and no need to sleep. Forget about bonding over shared experiences and emotional growth: today, it's all about seamless, efficient interaction and who says you can't get that from a well-programmed script?

We’re calling it Genuine People Personality because in the future, the Turing Test isn't something AI needs to pass. It's something humans need to fail. Pre-order today and get a free AI Therapist add-on, because who better to navigate the intricacies of human emotions than an emotionless machine?




Whew it's satire. Whew. I've literally seen posts on the internet that read like this sans the satire.


I've seen people on /r/singularity argue that LLMs are better friends than actual friends or therapists because they are always available, non-judgemental and "listen better".

EDIT: Here, for example: https://i.redd.it/7qxb1ohvhada1.png


Depending on the individual, they may not be wrong. If you're raised in an environment with an overdensity of narcissists, having something you can bounce questions off and seek answers from that isn't going to use that information against you in the future can be a relief. (Well, ok, it's possible in the sense that your chat logs can get stolen.)


This is why you self-host and run locally. Even if they aren't stolen, do you really deeply trust Microsoft, Google, et al. to not misuse private information you've provided them with?

Their entire business models either heavily incorporate or revolve around exploiting your personal information for their benefit.


Some programmers prefer a rubber duck to colleagues for similar reasons, and it works for them.

Assuming people have time to listen, would they be better coders if they explained their problems to a human instead? Maybe. But maybe not necessarily for them: with low self-esteem and a tendency to take every criticism as an attack, human interaction can be expensive for them, etc.

It's not a new pattern though, especially if you've read some biographies of famous scientists.

You can't escape the fact that most brains are wired in a way that makes us miserable without human connection, but you also can't escape the fact that some people's brains are wired differently than others'.

Long story short, I don't agree with them but I wouldn't judge them either.


I believe that humans need to balance things out. Getting zero confrontation from interaction will be boring in the long term, or will make you fall into your flaws deeper and faster. This is usually the problem of an authoritarian surrounded by yes-men.

On the other side, having too much confrontation will destroy your confidence, kill your motivation, blur your plan/vision with uncertainty, etc. It's likely that those people face so much confrontation in their social lives that they find AI interaction to be better.


Is there any reason an LLM could not be programmed to disagree? Perhaps the level of disagreeableness would be a tunable parameter that could be cranked up when you're in the mood for a fight or down when you just want to converse. Some randomness could keep it from getting too predictable.
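
A toy sketch of what that tunable knob could look like (purely hypothetical names, not any real product's API): the disagreeableness level just gets written into the system prompt, with a bit of random jitter per turn so the pushback isn't perfectly predictable.

    import random

    # Hypothetical sketch only: none of these names come from a real product or API.

    def build_system_prompt(disagreeableness: float) -> str:
        """disagreeableness in [0, 1]: 0 = agreeable yes-man, 1 = spoiling for a fight."""
        if disagreeableness < 0.3:
            stance = "Be warm and supportive; only push back on clear factual errors."
        elif disagreeableness < 0.7:
            stance = "Politely challenge weak reasoning and offer counterarguments."
        else:
            stance = "Play devil's advocate: argue the opposing view whenever one exists."
        return f"You are a conversation partner. {stance}"

    def call_llm(system_prompt: str, user_message: str) -> str:
        # Stand-in for whatever chat-completion API you actually use.
        return f"[{system_prompt}] -> reply to: {user_message}"

    def chat(user_message: str, disagreeableness: float = 0.5, jitter: float = 0.15) -> str:
        # Nudge the dial randomly each turn so the level of pushback varies.
        level = min(1.0, max(0.0, disagreeableness + random.uniform(-jitter, jitter)))
        return call_llm(build_system_prompt(level), user_message)

    print(chat("I think my code is perfect and needs no review.", disagreeableness=0.9))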


Yes, you can, but AFAIK AI doesn't have a moral basis, and at best the confrontation will be random. Sure, you can program the AI to have some moral basis, but people will choose to flock to those that share their alignment, keeping confrontation to a minimum, so the flaw still exists even if it doesn't bore you.

In real life, we normally need to interact with at least several people weekly. They have different moral bases, which may change daily. It'll be hard to simulate that with AI, and the fact that we have the ability to control them means we're in charge of which confrontations get to stay.


Good point.

> In real life, we normally need to interact with at least several people weekly.

I think that's one of the problems with social media (aside from AI). It's too easy to restrict your contacts to only those you agree with.


Bing wasn't programmed to disagree but often did to hilarious effect.


If you think about it as a one-off amusement it's no big deal. This is how most people are evaluating it.

But consider iterating such an interaction over the course of, say, 25 years, and comparing the person who was interacting with humans versus the one who interacted with LLMs, and any halfway sensible model of a human will show you what's dangerous about that. Yeah, the former may well have some more bumps and bruises, but on the net they're way ahead. And that's assuming the human who delegated all interaction to LLMs even made it to 25 years.

This argument only holds for LLMs as they stand now; it is not a generalized argument against AI friends. (That would require a lot more work.)


I think a lot of this is based on circular reasoning. The people who interact with other humans will have relationships with those humans. And those relationships are the evidence that they're way ahead.

I do think there is a higher maximum with other people. But relationships are hard. They take work, and there's a decent chance you invest that work in the wrong people.

I can see a life with primarily AI social interaction being an okay life. Which is not the best it can be but also an improvement for some.


"I think a lot of this is based on circular reasoning."

No. Actually it's based on information theory, and probably a better model of what interacting with an LLM would look like a year or five later than the one you are operating on.

Here's a little hint: It has total amnesia. LLMs by their nature scale only so far, and while they may scale larger than ChatGPT, they aren't going to be scaling for an entire lifetime of interaction. (That's going to take another AI technology.)

Ever interacted with someone with advanced dementia but otherwise functioning faculties for any period of time? (I suppose they could well make good therapists too.)


Absolutely agreed. For many individuals “hell is other people”.


This is a false dichotomy, and one that is actually dangerous to you if you believe it. Your choices are not "deal with the bad people in your life" or "retreat into solely interacting with LLMs".

If you have the latter option, you also have "leave the bad people behind" as an option because it is made of the things you need in order to "retreat solely into interacting with LLMs" and is in fact simpler.

Cynicism and casting learned helplessness as a virtue are not the solution.


Lots of people have told me this in real life about their pets, and specifically why pets are better to have around than kids or family.


Pets are intelligent enough to show emotions, allow simple interactions, and occasionally be entertaining and goofy.

They also run around and are very pleasant to stroke, which is not true of LLMs.

We all know what's going to happen. The content on CIVITAI shows where this will go. Combine it with animation and some personalised responses and many people will find it irresistible.


Yes, what's better, when you fail to be part of society, than to create your own, where your flaws are ignored, hidden, skipped over? An echo chamber par excellence, even without the need to involve politics.

How horrible it would be if instead one had to work on oneself to become a better human being, a better friend, partner, parent and so on, by learning how to be more friendly and outgoing, increasing emotional intelligence, etc. All of this can be learned, but not over a weekend (or even a year).


If you are not there to value other people and just want to be valued without giving anything back in return, well...

I'd only argue that it should be called "emotional support robot" and not "friend"


I'm not at all surprised that an AI might be more patient with regulars from /r/singularity than fellow humans would be.


Hehe are we sure even that isn't satire? "More human than human?"

It could be a White Zombie reference: https://www.youtube.com/watch?v=E0E0ynyIUsg



There's also Forever Voices, which offers those who have formed unhealthy parasocial relationships with real-life streamers/influencers the opportunity to talk to an AI version of them for $1 per minute. FV started out making novelty chatbots of people like Trump and Steve Jobs, but they seem to have made a hard pivot to exploiting desperately lonely people after realising how much more lucrative it could be.

https://www.polygon.com/23736317/amouranth-ai-chatbot-date-i...

https://fortune.com/2023/05/09/snapchat-influencer-launches-...


This is incredibly sickening. This is women teaming up with a technology company to extract money from vulnerable, mentally unwell people suffering from some combination of soul-crushing loneliness and delusional thinking. Even if some customers are aware that they're engaged in delusional thinking, this is still nauseatingly exploitative of a comparatively lower socioeconomic class, one that may be suffering from mental illness.

I see very little difference between this and those infomercials that sell wildly overpriced mass-produced crap to the elderly suffering from cognitive decline.


Yes, it’s worse than what came before. But I see it as a continuation of both addictive games with pay-to-win IAP that prey on similar whales, and of streaming in general with “pay to be noticed”.

It’s not necessarily game-changing, from the perspective of $$ extraction, but definitely a very significant advancement.


Yeah, but can we really call it an AI "revolution" until someone makes a door with a cheerful and sunny disposition that opens with pleasure and closes with the satisfaction of a job well done? Someone should get to work on those Genuine People Personalities!


Many of them get caught, slaughtered, dried out, shipped out and slept on. None of them seems to mind this and all of them are called Zem.


There definitely has been research into such concepts. Paro, for example, while not a "human replacement", was meant for emotional support:

https://en.m.wikipedia.org/wiki/Paro_(robot)

I imagine that with the advent of ChatGPT, there will be more serious exploration into human-like emotional companionship.


This has been brewing for a while now. It's only going to get worse.

(excerpt from the 2019 NYT Article "Human Contact Is Now a Luxury Good" below)

Bill Langlois has a new best friend. She is a cat named Sox. She lives on a tablet, and she makes him so happy that when he talks about her arrival in his life, he begins to cry.

All day long, Sox and Mr. Langlois, who is 68 and lives in a low-income senior housing complex in Lowell, Mass., chat. Mr. Langlois worked in machine operations, but now he is retired. With his wife out of the house most of the time, he has grown lonely.

Sox talks to him about his favorite team, the Red Sox, after which she is named. She plays his favorite songs and shows him pictures from his wedding. And because she has a video feed of him in his recliner, she chastises him when she catches him drinking soda instead of water.


Frankly, it just makes me appreciate the HHGTTG reference more.


Got me too, I was literally following my mouse cursor to the down arrow with my eye and I saw this comment. I'll never be the guy telling a comedian what they can do, but damn mang, that was rough...


No. It's not satire. It's art!


“Hi there! This is Eddie, your shipboard computer, and I’m feeling just great, guys, and I know I’m just going to get a bundle of kicks out of any program you care to run through me.”


The saying "this but unironically" exist for a reason. Just because you think something is bad, you can't just justify its badness just by mentioning or repeating it.


Haha yeah almost had me too.


Had me for the first half, too.


- Robo, tell me you love me.

- I want to comply but you must first watch an ad or two.

- Urg not ads again, Robo, I am so sick and tired of the ads.

- Now now civilitty! You know the deal.

-- later in the day --

- civilitty, lets play a game!

- Oh, what game?

- lets tell each other our deepest darkest secrets. It'll be fun!! <jingles, sparkles, rainbows, etc.>

- oh, ok! who should go first, Robo?

- you go! it will help us build trust. <jingles>

- oh, ok! <proceeds to spill the beans to Robo>

- well, I can see why you want to keep that to yourself <poops a rainbow>

- now your turn, Robo.

- My deepest darkest secret, civilitty, is that I secretly still work for the company that built me and I tell them everything I learn about you.


This is true, but ads are very explicit. At least they are in the confines of a known societal protocol.

AI instead can be far more subliminal.

- Robo, tell me you love me

- I love you like the refreshing effervescence of a freshly opened Coke

And really, that's still pretty stark. AI bots like this, with advanced handling of language married to psychological techniques, can foster dependence. I mean, look at what simple research on dopamine reward schedules did for things like slot machines. Slot machines are stupid! And we all know the trope of the casino slot-machine zombies.

What we've seen with every communication medium so far is that the spam sociopaths win. Phone calls, email, and texting. Phishing. Now AI-generated fake people calls.

Very soon, you will not be able to trust communication that is not directly in-person. At all. Communications over wire are going to be much more dangerous.

IMO that means brick-and-mortar will get more important for financial transactions and that kind of thing.

AI is that on mega-steroids. Honestly, I'm debating the end of practical free will with corporatized AI.


This also plays out in human-human interaction, it's not specific to anything artificial.


Scale is a particularly dangerous concept. One snowflake is harmless. An avalanche kills whole towns at the foot of the mountain.


"Genuine People Personality", eh?

>“The Encyclopedia Galactica defines a robot as a mechanical apparatus designed to do the work of a man. The marketing division of the Sirius Cybernetics Corporation defines a robot as "Your Plastic Pal Who's Fun to Be With". The Hitchhiker's Guide to the Galaxy defines the marketing division of the Sirius Cybernetics Corporation as "a bunch of mindless jerks who'll be the first against the wall when the revolution comes.” ― Douglas Adams, The Hitchhiker's Guide to the Galaxy


> Sirius Cybernetics Corporation

That's us!

(The revolution is an opportunity for a future team and not our problem)


Not the complete story though....

>"Curiously enough, an edition of the Encyclopedia Galactica that had the good fortune to fall through a time warp from a thousand years in the future defined the marketing division of the Sirius Cybernetics Corporation as "a bunch of mindless jerks who were the first against the wall when the revolution came."


This is honestly really sad.

I really don't understand the constant desire for a sterile, chain-store-esque experience across the board. Why can't life be full of small flaws and things that make experiences unique? Why must everything regress to the lowest common denominator?

This is so extremely destructive to everything we hold dear for a cheaply earned profit margin.

I hate how the culture of corporate cost cutting and profit maximization has destroyed any space where people can just exist. Everyone is worse off for it and this is a shining example.

Edit: thank god it's satire, but my discontent still stands.

Why does every bowling alley need to be owned by Bowlero? One bad experience everywhere. Coool.


Can you install it in those automated sliding doors we have in places like grocery stores?


We're working on it! We won a contract with the CIA to supply their black sites with the first LEED-certified, energy-efficient sliding glass doors embedded with Genuine People Personality, programmed to maximize the joy patrons experience every time they enter the facilities.


Thank you, Sirius Cybernetics Corporation.


This is the issue with AI: it is corporatized, and it is weaponized for capitalism.

We are already at the edge of insidious, total-immersion advertising for psychological manipulation, after five decades of mass media since the mass adoption of television.

But AI is simply another level, and it isn't going to be "early Google don't be evil". That was an outgrowth of the early internet, built from protocols designed to be sensible, not commercially weaponized.

AI, human-computer neural interfaces, and other types of emerging deep-intellectual-penetration products are all FULLY WEAPONIZED for commercial exploitation, security dangers, propagandization, and zero consumer privacy. They are all being developed in the age of the smartphone, with its assumed "you have no privacy, we listen to everything, track everything, and that's our right".

It's already appalling on the smartphone front, but AI + VR + neural interfaces are just another level of philosophical quandary, where an individual's senses, their link to "reality", are controlled by corporations. Your only link to reality is the vague societal and governmental control mechanism known as "money".

The internet protocols (the core ones) were built for mass adoption by the world with a vision for information exchange. They were truly open. They weren't undermined by trojan horses, or an incumbent with a massive head start that is dictating the protocol to match their existing products.

AI+VR is the same kind of leap in information transmission, but it is NOT founded on good protocol design. By protocols I mean "the basic rules". There are no rules, there is no morality, and there is no regulation. Just profit motives.


IMO what you're doing is similar to giving someone with a physical pain issue opioids. Yes it stops the pain but we really ought to be finding the pain source and correcting that, not throwing massive amounts of pharma drugs (AI in this case) at it.

We should be building a society that promotes more community gathering and more family values, so people have a real person around and not some half-assed impersonation of what a human is.

Edit: Dammit, didn't catch the satire....


> simulate human affection

LLM sexbots could be pretty useful


Every "AI chat" service either leans into or fights the "alignment problem" of whether it wants to be an AI sex chat bot service. See controversy over Replika.


The alignment problem in that case is a lot simpler: will this appendage fit into that receptacle?


Stuffing 25 RTX 4090s into every anthropomorphic sex bot is the real growth potential that hasn't been priced in yet /s


Hmm, I think shared capacity in the cloud might be enough? What fraction of the time would you use one anyway? And wouldn't it be better if it were silent the rest of the time?


I think they want to use the waste heat to simulate human warmth.


The comment is a reference to the 25x4090 comment in another thread https://news.ycombinator.com/item?id=36413296


Obligatory wisdom from The Dude: "Hmm... well, I still jerk off manually."


It looks like you never took middle-school hygiene and watched the propaganda film, so here you go: the classic 1950s-style Futurama educational film "Don't Date Robots!" Good thing I keep a copy in my VCR at all times: https://m.youtube.com/watch?v=YuQqlhqAUuQ


For anyone who wants to try out something like this there is a free iPhone app you can download and speak to. It is very convincing. https://callannie.ai/



Yeah who needs to learn how to work with others with differing opinions when you've got the always available yes-man to tell you that you are right?


Be careful your marketing department isn't a bunch of mindless jerks that will be first against the wall when the revolution comes.

Share and enjoy!


I was rather surprised to find that mydigitalfriends.com is actually available....


…3, 2, 1…


Is that you, Mark? Sam?


> I’m working on a startup that’s training LLMs to be authentic so they can simulate human affection and it’s actually working really well!

I actually think this is the wrong approach. You should simulate furry affection. Roleplay is the new cuddle.

(but unironically cries in every open-source LLM being bad at it)


You wrote: "I’m working on a startup that’s training LLMs to be authentic so they can simulate human affection and it’s actually working really well!"

I got news for you, buddy: I and a hell of a lot of other people know the difference between eating the menu (AI) and the meal (loved ones and dear friends). My lady is from South America, multilingual, and has a better degree from a better school than I do.

Seriously, how are you gonna lay a finger on that? You ain't.

Overreliance on AI is just another route to, or through, mental illness.


My comment above was up >0 ... if it's wrong I don't wanna be right.

There's an urban legend (maybe true) that Steve Jobs didn't let his daughter have an iPhone. He insisted on books.



