Hacker News

Right, but if there is such a thing as the vividly named "Jennifer Aniston neuron" [1], and, furthermore, group equivariant deep learning [2], maybe there is a way to isolate a certain concept/"type", such as Person, Car, and so forth; perhaps not even isolate it, but rehydrate the context in which the concept occurs, as a brain does in various word plays, as in Who's on First [3], etc.
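The "isolate a concept" idea can be sketched in miniature. This is a toy example with made-up 8-dimensional embeddings (real ones would come from a trained model): average the embeddings of several Person instances to get a concept direction, then check that a new person scores higher against it than a car does. All vectors and names here are hypothetical.

```python
import math
import random

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Hypothetical embeddings: a shared "type" direction plus per-instance noise.
DIM = 8
def fake_embedding(seed, base):
    rng = random.Random(seed)
    return [b + rng.gauss(0, 0.1) for b in base]

person_base = [1, 0, 0, 0, 1, 0, 0, 0]
car_base    = [0, 1, 0, 1, 0, 0, 1, 0]

people = [fake_embedding(i, person_base) for i in range(3)]       # "Jennifer", etc.
cars   = [fake_embedding(i + 10, car_base) for i in range(3)]

# "Isolate" the Person concept as the mean of the person embeddings.
person_dir = [sum(v[i] for v in people) / len(people) for i in range(DIM)]

new_person = fake_embedding(99, person_base)
new_car    = fake_embedding(98, car_base)

# The averaged direction separates the two types.
print(cosine(person_dir, new_person) > cosine(person_dir, new_car))  # True
```

Averaging instances into a prototype direction is, of course, a crude stand-in for whatever structure an equivariant network would actually learn, but it shows the basic move of pulling a "type" out of a shared space.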

Come to think of it, when someone teaches me a new concept, the principle of mass conservation, for instance, in some sense they are transferring their embedding into my brain; from then on I will relate to mass conservation through what that person taught me. The transfer is a very lossy process, sure, but a transfer with reintegration nonetheless. Perhaps "mortal computation" [4] is a requirement.
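The "lossy transfer with reintegration" intuition can be illustrated with a toy sketch: push a (hypothetical) concept embedding through a coarse quantizer standing in for the lossy channel of teaching, and note that the detail is lost while the overall direction, the gist, survives.

```python
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b)))

def lossy_transfer(embedding, levels=4):
    """Coarsely quantize each coordinate: a stand-in for the lossy channel of teaching."""
    lo, hi = min(embedding), max(embedding)
    step = (hi - lo) / (levels - 1)
    return [lo + round((x - lo) / step) * step for x in embedding]

teacher = [0.92, -0.31, 0.44, 0.08, -0.77, 0.15, 0.60, -0.05]  # hypothetical concept embedding
student = lossy_transfer(teacher)

print(student != teacher)        # detail is lost...
print(cosine(teacher, student))  # ...but the direction survives (similarity stays close to 1)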

[1] https://en.wikipedia.org/wiki/Grandmother_cell

[2] https://www.youtube.com/playlist?list=PL8FnQMH2k7jzPrxqdYufo...

[3] https://www.youtube.com/watch?v=kTcRRaXV-fg

[4] Geoffrey Hinton, The Forward-Forward Algorithm: Some Preliminary Investigations, chapter 8, https://www.cs.toronto.edu/~hinton/FFA13.pdf




> Right, but if there is such a thing as the vividly named "Jennifer Aniston neuron"

Firstly, even if there is such a cell that only fires for one face, or perhaps also for the person’s name, it doesn’t mean there aren’t other cells that fire for that person, or for people in general including that person. Without those as well, that neuron’s responses might not mean anything to the rest of the brain. It’s a thought experiment, but never really demonstrated.

Also, even if this is true in the very strongest sense, say there is one neuron that uniquely and discretely fires in response to thinking about that one person, what defines a neuron isn’t just its internal behaviour. It’s also the pattern of inputs that influence it, and the pattern of outputs it sends out. It’s the connections and dependencies on the weightings, signals, and responses from all the cells it’s connected to, including the specific, unique ways all those neurons are connected, or not connected, to all the other cells in the brain. It’s all the specifics of that connectedness that make the behaviour of that neuron meaningful.

If you took that neuron and implanted it into another brain, you’d need to hook it up to the neurons in that brain such that it gets exactly the same stimuli, in the same order, with the same strength, every time it needs to fire. The same applies to its output: all the neurons it’s connected to would have to interpret its firing behaviour in exactly the way the neurons in the original brain did. But there’s no guarantee any of those connected mechanisms work, or are physically connected in the same way, or even in a vaguely similar or compatible way, in the new brain.
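The wiring-dependence point above can be made concrete with a toy sketch. Take a single "neuron" with fixed, made-up weights, and feed it the same stimulus through two different wirings: the one it was defined for, and a "transplant" wiring that routes the same upstream cells to different synapses. Everything here (weights, stimulus, wirings) is hypothetical.

```python
import math

weights = [0.9, -0.2, 0.05, 0.8]  # hypothetical synaptic weights of the transplanted neuron

def fire(inputs, wiring):
    """wiring[i] says which upstream cell feeds the neuron's i-th synapse."""
    total = sum(w * inputs[src] for w, src in zip(weights, wiring))
    return 1 / (1 + math.exp(-total))  # sigmoid activation

stimulus = [1.0, 0.0, 0.0, 1.0]  # the pattern this neuron responded to in its original brain

original_wiring   = [0, 1, 2, 3]
transplant_wiring = [2, 3, 0, 1]  # same upstream cells, routed to different synapses

print(fire(stimulus, original_wiring))    # strong response in its native wiring (~0.85)
print(fire(stimulus, transplant_wiring))  # same neuron, same stimulus, weak response (~0.46)
```

The neuron's internals are identical in both cases; only the connectivity changed, and with it the meaning of its firing, which is the crux of the argument above.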


Well, given the more organic nature of machine learning and what it's trying to achieve, I wouldn't be surprised if that same neuron also triggered to some degree for "Jennifer and Stefan", ahaha.



