
I've read a bit about this, and thought a great deal about it. I studied philosophy alongside Computer Science in college and the functioning of the brain has always fascinated me. But I am most definitely an amateur so take what I have to say with a grain of salt.

I think they might be missing a component in their research: they are looking at people with established friendships, and assuming that people gravitated to one another because of similarities, rather than the friendship causing the similarity. I think that assumption may largely be incorrect. The brain has 'mirror neurons', which lead us to 'mirror' the sensations we believe others are having in our own minds. If we see someone get stabbed in the hand, the neurons responsible for sensing pain in our own hands show some activation, things like that. Given Hebb's rule, 'neurons that fire together, wire together', it would make sense that people who spend a great deal of time with each other, and in similar environments, would have their brains adapt to become similar.
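The Hebbian mechanism that comment leans on can be sketched in a few lines. This is a toy illustration, not real neuroscience; the function name, weight matrix, and learning rate are all made up for the example:

```python
def hebbian_update(weights, activity, learning_rate=0.1):
    """Strengthen weights[i][j] in proportion to the co-activation of
    neurons i and j: Hebb's rule, 'fire together, wire together'."""
    n = len(activity)
    for i in range(n):
        for j in range(n):
            if i != j:
                weights[i][j] += learning_rate * activity[i] * activity[j]
    return weights

# Two neurons that are repeatedly active at the same time end up
# strongly connected; a third, silent neuron stays unconnected.
w = [[0.0] * 3 for _ in range(3)]
for _ in range(10):
    w = hebbian_update(w, [1.0, 1.0, 0.0])

print(round(w[0][1], 2))  # co-active pair: weight grew to 1.0
print(w[0][2])            # pair involving the silent neuron: still 0.0
```

The point of the sketch is the argument above: repeated shared activation (e.g. mirroring a friend's experiences) is exactly the kind of input that drives the connected weights up over time.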

Now here's where we can go off the deep end. The brain activity mirrored into our own minds when dealing with our friends is only approximate, but this research suggests it probably gets more accurate over time. Eventually we are duplicating the patterns from our friends' minds, and they are duplicating ours. To what degree might it be said, then, that our selves are 'shared'? If a friend passes away and we consider how they might have thought about something, then given a strong enough friendship, our ability to model that probably borders on their own ability had they been alive.

And obviously we have many friends; around 150 of them, if Dunbar's Number is a legitimate feature of our brains. So our brains are filled with these 150 overlapping, intertwined models of our friends... and is that it? Is it meaningful to say that, if we were able to 'scrub' our minds of those influences, there would be anything left?




One of the concerns about strong AI is that, if an AI tried to understand a human, it could end up simulating that human and literally making a virtual copy of them which is complete enough to be considered a person. Build a smart enough AI and from that point on you would never know if you were really you or a simulation being run by the AI.

In the same way, I've always wondered to what degree my own internal models of my friends and family are actually mini-versions of them. Ever wondered how you know exactly what (e.g.) your mother would say in a given situation?


>to what degree my own internal models of my friends and family are actually mini-versions of them

I'd say there's still a lot of inference in your models about what goes on inside them. They're models extrapolated from outward behavior; there are things about them you don't know, because they don't show them.


Most people, including friends and family, interact at a very superficial level most of the time.

Sometimes even spouses discover incredible facts about their significant other decades after the wedding.

So a good simulation could carry on the make-believe for quite some time undetected, I think.


In 'Revelation Space' by Alastair Reynolds, they have the idea of a "beta simulation" of a person, which is an AI agent based on external observations of the person and their interactions. These could seem very similar to the simulated person, but weren't able to actually create any new responses, only reply with appropriately chosen canned ones. They also had "alpha simulations", which were whole-brain uploads and were basically the actual person (at least until they all went mad...)


It's funny that in most SF, they treat the brain as the only thing you need to copy to copy a person.

But given that we have neurons in the heart and gut, that the chemicals in our body and our digestive activity affect our behavior, and that we discover every day that a lot of "us" is actually made up of alien micro-organisms playing a role in almost every part of our daily lives, including hormone management (a crucial piece of our reaction puzzle), I think it's far from realistic.

I can't wait to see the first people uploading their brain in a machine, only to discover that:

- A, it's just a copy, not them;

- B, the copy is nowhere near as close to the original as they expected;

- C, they feel that they are missing something, but they can't express it, because the missing thing is what helped them define it.

I swear, we geeks love to solve perfect problems with perfect solutions.

Like the joke about the physicist who can cure a sick chicken, but only if the chicken is a perfect sphere in a frictionless vacuum.


Yes! Pretty much all SF and AI woo ignores the fact that humans are ecosystems, not brains driving a machine.


They do seem to be considering that the friendship might cause the similarity. Quoting from the end of the article:

> They plan next to try the experiment in reverse: to scan incoming students who don’t yet know one another and see whether those with the most congruent neural patterns end up becoming good friends.


Socially, we would likely change. However, I don't think that would drastically change behavior.



