Hacker News
MIT discovers the location of memories: Individual neurons (extremetech.com)
341 points by mrsebastian on March 23, 2012 | 120 comments



OMG, it's the ultimate mechanical_fish pet peeve collection! TL;DR: I rant.

ONE: I can't find the citation of the scientific paper on ExtremeTech. (Let alone a link. Who would dare to dream of a link?) They do refer obliquely to "the paper" once. (What paper?)

And they link to MIT's press office, whose brand is really solid, so everything they write is almost like science! And there you can skim the article twice and finally spot the citation:

Susumu Tonegawa, the Picower Professor of Biology and Neuroscience at MIT and lead author of the study reported online today in the journal Nature.

Okay, gotta go find an online Nature subscription to find out what's going on. There's an hour of my day spoken for. At least they're trying to ensure I get some exercise.

Why is the actual journal article important? Just look at this thread here on HN. We have people doubting all sorts of things, but these may well be things that are addressed in the actual work. The content of the popular articles means very little: They leave out most of the details. The details matter. The whole point of this study is to try and tease out more details.

TWO: But, wait, there's more. I first tried to read ExtremeTech on an iPad (original edition). The article popped up in one of those insufferable iPad-only JS-powered "mobile editions" with Swiping Action. Unfortunately, there was only the first page of the article. It cut off in midsentence. I tried pressing the giant button marked "Next" on the right side of the screen. I got a big white screen. I flailed around with my fingers. A completely different article eventually rendered itself. I flailed around with my fingers some more. Eventually the original article reappeared, still incomplete. Fortunately, one more roundtrip to the next article and back and I finally got the whole thing to render.

Then I pressed the back button and everything seemed to hang. I closed the browser window and thanked the gods for my escape.

Why on earth do publications use these broken things, when simple web pages render so nicely on the iPad? The site does have something like a dozen tracking cookies on it; does this imply that they have data showing that swipability is so important that it doesn't even have to work in order to attract more ad impressions/clicks/Tweets/whatever? Or does it merely suggest that they are so busy struggling with glass-cockpit syndrome that they can't perceive that their site is broken on the iPad?


I will add a link to the paper itself; sorry about making you jump through hoops.


Hey, props for responding to criticism! As a (now ex-)neurobiologist, it's maddening when journalists don't cite the paper they talk about. I helped start ResearchBlogging.org way back in the day to help fight this very problem.

If I may make a suggestion: wrap your citations in OpenURL COinS format. ResearchBlogging.org provides this functionality automatically, but you can do it yourself too. It provides a lot of useful metadata, citation information, and the ability for plugins to interact intelligently with the web page.

[1] http://ocoins.info/
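For illustration, here's a minimal sketch of what generating such a span could look like in Python. The field names follow the OpenURL key/value (KEV) journal format described at ocoins.info; the citation values in the example are made up:

```python
from html import escape
from urllib.parse import urlencode

def coins_span(atitle, jtitle, author, year, doi):
    """Build an OpenURL COinS <span> for a journal article.
    All metadata lives in the title attribute as an OpenURL
    key/value (KEV) context object; the span itself is empty."""
    fields = [
        ("ctx_ver", "Z39.88-2004"),
        ("rft_val_fmt", "info:ofi/fmt:kev:mtx:journal"),
        ("rft.genre", "article"),
        ("rft.atitle", atitle),
        ("rft.jtitle", jtitle),
        ("rft.au", author),
        ("rft.date", str(year)),
        ("rft_id", "info:doi/" + doi),
    ]
    # URL-encode the KEV pairs, then HTML-escape the result so the
    # '&' separators are valid inside an attribute value.
    return '<span class="Z3988" title="%s"></span>' % escape(urlencode(fields))

# Hypothetical citation values, for illustration only:
print(coins_span("An example paper", "Nature", "Doe, J.", 2012, "10.1000/xyz123"))
```

Tools that understand COinS (Zotero, library link-resolver extensions, etc.) scan the page for `class="Z3988"` spans and parse the title attribute.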


Is this similar to the DOI thing? I have been thinking about doing that, too.


It's related. DOI is basically a universal identifier that acts as a "domain name" for academic literature. This is to prevent broken links over time.

The big providers (Elsevier, etc.) are obligated to maintain the DOI no matter what happens on their end. E.g., they may completely change their site architecture, or change domain name, but as long as they keep the DOI updated with the current location of the paper nothing goes wrong. DOIs all around the internet will still continue to point at the paper.

It's a lot more robust than directly linking to the paper itself.
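In practice that indirection is just a stable URL prefix: you publish a resolver link rather than the publisher's current URL, and the resolver redirects. A trivial sketch, using the DOI Handbook's example identifier rather than any real paper's:

```python
def doi_url(doi):
    """Stable resolver link for a DOI. The doi.org resolver (formerly
    dx.doi.org) redirects to wherever the publisher currently hosts
    the item, so the link survives site redesigns and domain moves."""
    return "https://doi.org/" + doi

# "10.1000/xyz123" is the example identifier from the DOI Handbook.
print(doi_url("10.1000/xyz123"))  # -> https://doi.org/10.1000/xyz123
```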

COinS is a way to embed bibliographic metadata into a webpage. There are a number of plugins/extensions that will grab this metadata and do useful things. One example is an extension which automatically redirects links through your library's link resolver so you get the paper and not the paywall page.


Thanks so much. It's awesome to see this practice start to change. At this rate maybe I'll soon be able to control my pet peeve with medicine.


"Just look at this thread here on HN. We have people doubting all sorts of things, but these may well be things that are addressed in the actual work."

Presumably if MIT had actually discovered the nature of consciousness we wouldn't be reading about it in some tabloid. The research may be less bad than the article suggests, but there's no way it's as good as the article suggests either.


The fact that consciousness runs on neurons is hardly breaking news.


Logically speaking, consciousness being "produced" from neurons doesn't make any sense whatsoever. It sounds magical. What is it that is being produced? Why is it not produced in other dynamic systems, e.g. a car engine? Is it complexity which produces consciousness? Or is it some substrate? Is it a function being computed which is necessary for this production?

Can a sophisticated car engine be conscious? Can a Ferris wheel be conscious? I can definitely map various bits of computation to a car engine and a Ferris wheel. If you need organic matter to have consciousness, can an extremely complex car engine made of organic matter be conscious?

One can brush aside all these rigorous questions, like almost all neuroscientists, and adopt a fideistic attitude that basically stipulates a materialistic view.

Please read more about the hard problem of consciousness [1]. There will be lots of people who will disagree, but the materialistic view of the mind has lots of cracks in it [2]. One needn't even look at [2], as so far no one has given a non-gibberish answer to [1] which doesn't appeal to one or other common fallacy.

[1] http://en.wikipedia.org/wiki/Hard_problem_of_consciousness

[2] http://en.wikipedia.org/wiki/Near-death_experience


I once did some reading about this so I'm curious for more information (since you seem to know what you're talking about).

This is probably a stupid question (my favorite kind): Are there good arguments for why a car engine isn't conscious? Isn't it just the "other minds" problem in a different form?


The main argument against it would be the theory that consciousness is something we tap into. Completely unproven, but it has a certain elegance to it. There is actually some precedent for this as an argument. Back in ancient Greece the third of the three arguments in favor of the earth being spherical was that it would be the perfect shape for it to be.

As it stands, it's completely unknown why subatomic particles have precisely the mass and charges they need in order for matter to exist, even though the chances of that are supposedly trillions and trillions against one. Perhaps because matter is designed to tap into consciousness, and the more evolved the matter the more complex the slice of consciousness it can capture. That would be my working theory at least.


You are right. It is sort of similar to the "other minds" problem. There are no ironclad arguments that a car engine isn't conscious, though it seems obvious. You can have more fun of this sort reading this: http://en.wikipedia.org/wiki/Panpsychism

The SEP article below is probably the best I can find right now which talks about almost all viewpoints.

http://plato.stanford.edu/entries/consciousness/

There is an extremely good and entertaining argument in Penrose's book Shadows of the Mind against consciousness being computational (but the book has quite a good number of flaws in its other arguments).

From a reductionist and materialistic physical standpoint we only have fundamental particles and the forces in the universe. None of these seem to be related to consciousness. It seems magical to say that these particles then interact in complex ways to produce something fundamentally new.

David Chalmers [1], the guy who came up with the hard problem, has written a lot on this.

[1] http://consc.net/chalmers/


Thanks! I'll ask one more question while I have your attention.

The first thing I searched for on that SEP page was "Popper", because my "how do we know a car engine is not conscious" stems from trying to apply falsifiability to my intuitive notions. What I take from "other minds" is that other people's consciousness is not falsifiable; taken to its natural conclusion, it seems to me that the non-consciousness of an engine is also non-falsifiable. Which is actually pretty cool, to a philosophical simpleton such as myself :-)

So my question is: is there something I can read that specifically links concepts of consciousness to Popper-derived falsifiability ideas?


Yes. You are spot on. I am not able to find readable survey-type articles other than the Wikipedia article on solipsism, but the idea you described is kind of folk-knowledge among really good philosophers of mind.

http://en.wikipedia.org/wiki/Solipsism#Falsifiability_and_te...

I see your email in your profile. If I come across something more solid I will pass it on to you/edit this comment.


Not sure about Popper, but you might also be interested in this book:

http://en.wikibooks.org/wiki/Consciousness_Studies

It tackles a lot of the philosophical problems around consciousness. I haven't read the whole thing because it's very technical and not always the best written, but what I've read is kind of interesting.


A philosopher is someone smart enough to ask the hard questions and dumb enough to try to answer them.


I agree; I almost feel an engine might be conscious.

In my mind, a thermostat is a brain with just two states of being: too cold and warm enough.

I wonder: what does an engine feel?


Windows is not produced by your computer; Windows runs on your computer. They are separate concepts. The mind is a very similar thing: it's an emergent property. All the little pieces are individually understandable, but it's not there until those pieces work together.

Your mind works the same way. Apply a little voltage to this part of someone's brain and they are now having an out-of-body experience. It's just like a hardware interrupt in that it has meanings on several levels, including the physical (QM) in all its glory as well as the subjective. But if you cut out that little area of someone's head and apply the same voltage, it's not meaningful to suggest that those cells are having an out-of-body experience. In that context the experience is both the cascade that happens and the cells that are part of that cascade.


Sounds like you might be interested in reading GEB: http://en.wikipedia.org/wiki/G%C3%B6del,_Escher,_Bach


Yes, I have skimmed through it. It is a nicely written book, but I think the main premise is wrong. A more nuanced book is Penrose's Shadows of the Mind. It is more rigorous and has a good overview of physics and computer science, but some of its arguments have flaws.


When I was a kid, I used to have very vivid dreams or hallucinations whenever I suffered from high fever. Nearly always the same dream, very frightening - very real to me when I was having them. What was amazing is that the dream was weird: I remember me being small. Too small for my mind - it was as if I (as in the thinker/narrator) was where the mind was and the rest of the body was so small that I couldn't fit it. My mind was floating around my head, like an aura or a chakra. The feeling, some part of it I have lost by now, was truly disturbing. The surroundings were white - marble I think. And there were huge white pillars, 100 times my size - and they were falling. And they wouldn't break when they did - they would simply fall without noise, without rubble. I was always scared to death, always running. The dream was short and sometimes I would have my parents with me in my dream. Although they were my size, I didn't feel any emotion or have a thought about their size, just my own.

A couple of times, I would feel that my limbs were getting bigger and bigger, while my head (possibly my mind) remained the same. I was observing it from just outside my head. Not too far away, possibly from my own eyes, but outside my head, as my mind, though still at the same position, had disconnected itself from the head. I knew I was in bed; I was arguing with myself about the feeling being unrealistic. But I remember I couldn't reason it out; I felt as if I would be crushed by my own body - slowly and painfully. There was no physical pain that I could feel, just an intense fear that it would come.

I don't think that these were near-death experiences - possibly just hallucinations. Unfortunately, they were so real to me that to this day I am, in a way, still scared of them. On the other hand, I want them back because they were so real and so scary - because at no living moment in the rest of my life have I felt a similar emotion, a real fear of that magnitude. I am sharing this because I cannot believe that a psychological/mental experience can be more real than what I had felt; that NDEs are more real - that they require an afterlife.

Religious theory is amazingly complete, with scope to explain everything - maybe with inconsistencies, but still everything. Existence of God would explain everything - because he is omnipotent. The power of religion, on the other hand, is shrinking, because after so many years of scientific experiments we are able to experimentally verify alternative theories that explain much of what was unexplained before. We don't need God for minuscule things, not anymore. As for NDEs today, sure, physiological and psychological theories cannot explain everything about NDEs, but I believe that they will in the future.


I think the parent should not have mentioned NDEs; they are an interesting phenomenon, but they don't prove anything. His main argument is something else.


If you're interested in the 'cracks' with regards to NDEs, definitely read The Spirit Molecule by Rick Strassman. That book, along with a few Terence McKenna lectures, will destroy any last vestiges of a materialistic worldview.

(To learn more about Terence McKenna, start with these:

http://www.matrixmasters.net/salon/?p=254 (also part 2 and 3)

http://www.matrixmasters.net/blogs/?p=297)


It's always healthy to be versed with the opposing viewpoint, so I'd suggest reading How the Mind Works by Steven Pinker for a well articulated and evidenced materialistic thesis.


Does it actually attempt to explain qualia though?


It takes steps towards doing so mechanistically, but it's silent where there aren't any answers in that area yet.


Thanks! Look forward to reading it. Your other comment on matter capturing consciousness is interesting and insightful.


Actually, it would be breaking news for people who subscribe to mind-matter dualism. There are quite a lot of them, for instance followers of many Christian religions.


The overwhelming majority of people accept that drugs can temporarily alter someone's personality and that brain trauma can interfere with memory and/or function. The fact that many of these same people also believe in a soul sets up an odd sort of cognitive dissonance.

Anyway, for most neuroscientists, results like this are far closer to confirmation using an interesting technique than truly groundbreaking research.


I am not religious, but I feel something is not adding up in this purely scientific view of the world. I never tried to explain this thought to anyone else, because it's damn hard to explain. I will give it a try:

As far as science goes (or as I understand it), the brain is just a neural network, which gets activated based on inputs and produces some outputs, like a computer does. Neurons can also self-activate, but that is not really the point. The point is that consciousness would not be required for that. When I look around me I see things. If my brain just did input/output, there would be no need for me to see anything. I would just act without seeing/hearing/feeling consciously. I don't claim I have any idea about anything, but I feel like something is not adding up here on a fundamental level. I don't even know if other people experience the same thing, or if they do indeed just act on inputs and have no idea what I am talking about.

I wonder if what I said made any sense to anyone and if any philosophers were considering the same thing.


You might like Douglas Hofstadter's book I Am a Strange Loop. He examines consciousness as a recursive feedback loop. The book has some hokey tangents, but I think it's a good presentation. Btw, this book is much more accessible than Hofstadter's GEB. :)

https://en.wikipedia.org/wiki/I_Am_a_Strange_Loop


Thanks. From the WP article it sounds like his ideas are somehow related to mine, but that he is viewing it from a completely different angle. Very interesting.



Thanks. This is what I am talking about (I think). I have a hard time understanding the counter-arguments though, without any background in philosophy beyond Plato. I will try again when I am less tired.


Are you saying you see no evolutionary purpose for consciousness? This is a (very short) interesting book on the topic: http://www.amazon.com/Seeing-Red-Consciousness-Behavior-Init...


Are you talking about the self-awareness and introspection you perceive yourself as having?

Does it help if you view your consciousness as both being produced by the brain, and used as an input - i.e. a key part of a feedback system?


"Are you talking about the self-awareness and introspection you perceive yourself as having?"

Maybe, depends how you fill in those words. I guess you could write a computer program that has self-awareness, in the sense that it is able to talk about itself. That is not what I meant.

"Does it help if you view your consciousness as both being produced by the brain, and used as an input - i.e. a key part of a feedback system?"

I'd even say that this is most likely what is going on, but it opens a completely different can of worms, with questions like: Does the part responsible for consciousness (let's call it soul for lack of a better word) survive after you die? This realization tipped the scale for me from "No, that is just wishful thinking." to "I have no clue, but it is slightly more likely that it does than that it does not."

It's very annoying that I can barely talk about this without using words which have been claimed by people who hold beliefs I consider completely irrational.


You're thinking of a basic classifier neural net from an AI class. But that's a poor analogy for anything more complex than reflexes. We have both short- and long-term memory, and we can do planning. A really simple way to think about short-term memory is to have a single neuron in a feedback loop. Basically, its output leads to its input, and it can be switched on and off. Of course, real neurons have far more than just 3 connections and tend to work in fairly large networks, but it gives you an idea of how you can have a short-term memory: you keep cycling through the lyrics of some song and it's 'stuck in your head'.
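As a toy illustration of that loop (a hypothetical threshold unit, nothing biologically realistic): the unit's own output is fed back as part of its input, so a brief excitatory pulse switches it on and it keeps firing until an inhibitory pulse switches it off.

```python
def step(state, external, weight=1.0, threshold=0.5):
    """One tick of a single threshold unit whose output feeds back
    into its own input; it fires (1) when total input exceeds the threshold."""
    return 1 if weight * state + external > threshold else 0

state, trace = 0, []
for external in [1, 0, 0, 0, -1, 0, 0]:  # excitatory pulse ... inhibitory pulse
    state = step(state, external)
    trace.append(state)
print(trace)  # -> [1, 1, 1, 1, 0, 0, 0]
```

The four 1s in the middle are the "song stuck in your head": no external input, yet the unit keeps cycling because of its self-connection.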

Long-term memory is a physical change in the layout of a neural network. Spend long enough walking around a new city and there is an abstract but physical map that's actually stored in the layout of neurons in your head. At the same time, pieces of that layout represent locations on the map and how they connect to other locations. Think of an entrance to a parking lot and you might have a fairly static picture of the location linked to the choice of where that will take you if you go there. (So short-term memory is now a neuron that cycles, a network that picks which target network to activate, and a network that represents something, plus feedback to turn the targeting on and off.)

So what's consciousness? It's the ability to think about things as abstractions. When you say Apple to yourself, you're actually activating neurons that keep cycling Apple over and over. Picture yourself tossing a ball and you're thinking about starting the cascade of muscle memory that causes you to throw something, while picking the parameters of where you want the ball to end up, how hard you want it to hit the target, and possibly the position you need to be in to actually be able to throw the ball. You can play around with the outcomes of "if I do this, that will happen" by checking what the neural nets predict the outcome will be. Chess is a great analogy for this: players don't think about moves as picking a piece up and moving it somewhere else, but about what it means when the piece is in a new location. Through experience, education, or just thinking about things, you can even train these neural networks to get better at those predictions.

Of course, the actual implementation of these things is horribly complex, and many of the specifics are not completely understood / studied. Also, chemicals play a major role; there are actual chemicals that represent things like pleasure in the brain. EX: Cocaine mimics the mostly hard-coded chemical reward response for things like having sex by blocking the dopamine reuptake transporters. http://serendip.brynmawr.edu/exchange/node/1704

PS: Still I hope this simplified model helps you understand what's going on.


You seem to be missing my point.

"So what's consciousness? It's the ability to think about things as abstractions."

No, it's not! (see http://en.wikipedia.org/wiki/Consciousness). Artificial NNs can also model memory (see Hopfield networks). However, I just used them as an analogy. Maybe Zombies work better for you and the Philosophical Zombie Wikipedia page does a better job of explaining the idea (thanks again xyzzyz).


Perhaps if I reword that a little you might understand what I mean: http://en.wikipedia.org/wiki/Consciousness is the subjective experience of the ability to think about things and processes as abstractions. Apple, Airplane, Alphabet, Algebra, Balance, Self, or Obama - they all exist as arrangements and connections of neurons in your brain. When a song is stuck in your head, that's a physical thing that's happening to some neurons in your head. But so is everything else you're thinking about.

These networks also connect to and build off of other abstractions so ((((peanut) + butter) + Jelly) + Sandwich) is built from more than one of these networks. Try and think of a pile of peanut butter next to some grape jelly. No problem it's brown next to purple.

Now try and do that for peanut butter next to a peanut butter and jelly sandwich. Most people feel something odd happen on the second one because they are reusing the peanut butter network for both of those. For me, I can't focus on their colors at the same time.

For a great subjective description of this watch: http://www.youtube.com/watch?v=Cj4y0EUlU-Y

Getting back to consciousness, it's turning on these abstractions by choice. I want to think about Apple, so I activate the Apple network and suddenly experience Apple. And it's the same network as if I had just read the word or seen a physical apple. Language also gets mapped to this, which is part of why philosophers think about things like platonic ideals: yes, there is an ideal Chair; it exists, and it's the Chair classifier in your head.


We have different ideas of what consciousness is. Like I said, I don't even know for sure if you possess consciousness.

http://en.wikipedia.org/wiki/Problem_of_other_minds

I don't think it makes any sense to talk about this any further; we might as well be speaking different languages. Otherwise you could also comment here: http://news.ycombinator.com/item?id=3747462. Naveensundar did a better job explaining it.


"The fact that many of these same people also believe in a soul setups an odd sort of cognitive dissonance."

There's no cognitive dissonance if you believe that the brain influences consciousness but aren't committed to the idea that the brain is the source of consciousness. E.g. you could believe that the brain is a reducing valve, a la Huxley, and there is no cognitive dissonance there at all.


When one's core beliefs have no basis in logic or objective reality, it's simple to maintain consistency. All that's needed is the invention of some new concept to explain the discrepancy.

The world created in days -> but a "day" could be a million years. Evolutionary theory -> but <creator> is driving the evolution etc...


"When one's core beliefs have no basis in logic or objective reality, it's simple to maintain consistency. All that's needed is the invention of some new concept to explain the discrepancy."

Like empiricism.


While this is indeed a problem for cartesian dualism, it isn't a problem for something like hylemorphic dualism, which is the approach of St. Thomas Aquinas, among other mainstream Christian theologians.


I doubt that many of them will accept any scientific proof which disproves mind-matter dualism, and thus this one wouldn't be breaking news for them either.


I don't happen to believe in mind-matter dualism, but at the same time, I also don't believe it's possible to scientifically prove it one way or the other.

Can science truly explain where consciousness comes from? I would argue it cannot, because consciousness is a subjective experience, and is not objectively measurable.


Actually, can science truly prove anything about the world with certainty? I think claiming that would be ignorant of the way science works: you make assumptions, based on those assumptions you build a model, and then (optionally) you show that your model correctly predicts some effect. If the assumptions you made do not hold, your proof does not hold either.

Related to that question: Is consciousness really subjective? Is it not the only thing I as a person can experience objectively?

EDIT: I don't understand why I am being downvoted. I am just trying to explain my point of view. I don't think religion can prove anything with certainty, either. Nobody can. You can just make more or less strong assumptions.


> the way science works: You make assumptions, based on those assumptions you build a model and then (optionally) you show that your model correctly predicts some effect

Theories with predictive power are the essence of science and models are optional, not the other way round. Science is not materialistic, it would be valid even if the world we perceive was "fake", a virtual simulation for example. As long as it's consistent enough for us to make predictions, it's all good.


"Theories with predictive power are the essence of science and models are optional, not the other way round."

I was thinking about Math or some subfields of Computer Science where you don't need any predictions (I'd think other disciplines have these purely theoretical subfields as well, but I dunno). I am not sure what the formal definition of a model is, but I think you'd agree you need some kind of non-trivial logic argument, otherwise it becomes, well... trivial ("X happens because we assume X happens").

Maybe I should have left out that optional-remark or explained it further; it might be the reason I was downvoted in the beginning. At first I also had a (semi-)joke in there about gaining citations being the ultimate goal of science; people might have thought I was one of those fundamentalist Christians mocking science. It's very easy to get misunderstood if all people know about you is what they read in a comment of a few lines.


I agree with the difficulty of providing proof.

But I'm in perpetual discomfort with the way contemporary science is shunning anything "subjective" like it's the plague. Yes, we get it, you ("you, Science") despise uncertainty, and the subjective domain is deeply fuzzy and non-rigorous. But it leads to this robotic, industrial approach to things that should be more human - see the way hospitals work, for example.

I don't have any solutions, either. I'm just saying there's something rotten in this particular Denmark.


They may not accept scientific proof, but they are likely to accept an empirical proof -- people may not believe in atoms or Newton's/Einstein's laws of motion, but they sure as hell believe that electricity is real, fire burns and planes fly.


Empirical proof of consciousness is difficult; as NickM suggested, the problem is in the subjective nature of our perception of self.

The question behind the perception of consciousness is a key one behind the issue of mind/matter dualism: imagine an empirical proof - somebody builds a powerful computer which simulates a human brain to perfection - yet mind/matter dualists won't feel that this "being" which talks, expresses emotions etc. is anything more than a simulacrum, a [soul|mind|...]less machine.

I think that usually when two persons disagree, either at least one of them is wrong, or they are not talking about the same thing.

Yet it's difficult for two persons to align on the definition of what they are talking about when one of them won't accept that definition as capturing what's being discussed: imagine that I argue with a matter/mind dualist and say "hey, my experiment proves that the neurological processes behave in exactly the same way"; he will answer "so what, there is more to it than mere neurological processes".

Where is the "subjective" part in all this? The fact that you cannot even prove that I have a [soul|mind|...]; the only "proof" you have is that I look like you, I'm (as far as you can tell) made of the same flesh, was born in a similar way as you... so you just assume I behave internally as you do, so you can transpose your subjective experience to others.

Both points of view generally assume that our consciousness is general, not only mine or yours; the difference is whether consciousness is a pure emergent phenomenon of matter alone or "something else".

So the subjectiveness of perception is only used as a tool against provability, against the acceptance of any empirical proof; which is understandable, as the empirical proof in itself, when done in terms of matter, works in the framework of mind materialism, and as such becomes invalid as soon as you undermine the very material nature of mind.

In simple words: mind/matter dualists won't accept any proof based on matter.

That's why they are usually deemed anti-scientific: because scientific thought strives to search for explanations of natural phenomena which can be defined in such a way that they can be falsified by observation.

Any explanation that by definition is not suitable for any kind of proof, because it is disconnected from the rest of the physical phenomena, is by definition non-scientific.

So IMHO, this is the heart of the discussion, not the "subjectivity" of the mind, that's only an excuse.

PS:

Interesting read: http://en.wikipedia.org/wiki/The_Minds_I


Studies published in Nature typically don't have bad science...


Studies published in Nature typically have flashy science. Coupled with the fact that their relatively short format makes it difficult to determine what exactly was done, much less replicate it, I'm actually less inclined to believe something in Nature as compared to, for example, Cell or PNAS.

It's also worth considering that there's a positive correlation between impact factor and retraction rates. http://retractionwatch.wordpress.com/2011/08/11/is-it-time-f...

All in all, while Nature doesn't publish obviously bad science, it's less clear that it publishes better science than average.


It crashed my browser too. Twice. Big tower-computer-thingy with a couple of meg-joules of RAM and Pewter web browser.

Thankfully, I was able to spot the problem after visiting the page with my lawnmower: http://i.imgur.com/4jnGv.png


Cynical-me thinks "screwed-up page rendering and swiping just achieved 4 times more page views to claim to advertisers - mission accomplished, goals met!"


The data doesn't warrant the title of this post. Even if a single neuron is responsible for triggering a memory (which is hard to say based on mice in the first place), it doesn't follow that the information is stored in the neuron. As an analogy, if we erase a specific bit in memory, whole parts can become unreadable. For example, imagine changing a bit in a pointer. That doesn't mean that all of the information was stored in that bit.
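The pointer analogy is easy to make concrete. A toy Python sketch (nothing from the paper itself; the addresses and the `read` helper are purely illustrative):

```python
# A toy "memory": the data lives at one address, but is only reachable
# through a pointer stored at another address.
memory = {
    0x10: 0x20,          # address 0x10 holds a pointer to 0x20
    0x20: "the memory",  # address 0x20 holds the actual data
}

def read(addr):
    """Follow the pointer at `addr` and return what it points to."""
    return memory.get(memory.get(addr))

assert read(0x10) == "the memory"

# Flip a single bit in the *pointer*. The data itself is untouched...
memory[0x10] ^= 0x1  # 0x20 -> 0x21

# ...but the whole thing has become unreadable through that path.
assert read(0x10) is None
assert memory[0x20] == "the memory"  # still stored, just unreachable
```

Corrupting one bit destroys access, not the stored content — which is exactly why "removing this triggers forgetting" doesn't imply "this is where the memory lives."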


I came storming into this thread ready to post exactly your statement. The title is way overblowing the situation. There are so many caveats here it's hard to do them justice.

First, they found single neurons which trigger memory, a far cry from containing the entire memory. Think of them as gateways to a house. A gate does not hold the house, but it does let you access the house.

Second, they didn't remove any neurons (from my brief skimming of the press release at MIT[1], I don't have time to read the paper now). There is almost a 100% chance that there is a large subset of neurons which will trigger the same memory. Which means this house has a lot of gates. Nothing is becoming "unreadable" here.

Third, this trick of using optogenetics to trigger a memory has been shown several times over already. Here it was reported by Boyden and Hausser in 2009 [2]. Presumably this piece is being reported because it was performed by Tonegawa's lab. Not discounting the research, there is likely a lot of advancement on the underlying science, the optogenetics (which are undoubtedly being borrowed from Boyden's lab across the street) and the understanding of mechanism...but it isn't new.

[1] http://www.mit.edu/newsoffice/2012/conjuring-memories-artifi...

[2] http://www.technologyreview.com/biomedicine/23767/


Exactly. The function of a neuron is to trigger other neurons. If you removed a neuron, but manually triggered exactly the neurons that the removed neuron would have triggered, it is to be expected that the same memory would be recalled.

Obviously, this argument breaks down at some point, since by induction we could remove 100% of the neurons and manually trigger the axons to the muscles that would have been triggered by triggering the single neuron in the intact brain. This would give the same observable behavior, but it is not clear whether the same experience would be had. But this is getting into the area of consciousness, which is something that nobody understands (yet).


If there are cycles in the neuron graph (and I'm pretty sure there are), then you cannot be sure that removing one neuron and triggering all the neurons it was connected to in the correct way is the same - you'd also need to simulate the removed neuron's reaction to any feedback it receives after the first stimulation, perhaps cycling through such a loop many times.


Theoretically, I'd agree. But "memory" is an extremely fuzzy definition. Perhaps the memory is not 100% identical, but missing a tiny detail about the color of the triangle on the guy's shirt in the background of the TV.

There is also a lot of redundancy in neural networks. Large swaths of neurons fire together in concerted patterns, and while removing one may tweak the pattern slightly, it is equally likely that the other neurons will just adjust their weights to keep the pattern constant.


They haven't just "found single neurons which trigger memory". They found the only neurons that were active while experiencing and memorizing a specific situation, and they managed to show that activating only those neurons is enough to trigger a reminiscence of the very same experience. You cannot say some other neurons would trigger the same memory, because nothing changed about those other neurons while this memory was being "generated" (under the assumptions science currently holds about the brain), so they cannot store it.


Boyden and Hausser gave a talk claiming they can do it in 2009. However, I cannot find a citation claiming they ever published this work. As it goes in the field, first to publish is more important than first to claim.


I was watching Jeff Hawkins's thing yesterday and he kept talking about sparse distributed something. I didn't get most of it, but it didn't seem to square with the headline's emphasis on individual neurons.

http://www.youtube.com/watch?v=48r-IeYOvG4


I agree with this. Because of the brain's ability to access things which are not exactly constrained to its physicality, I think the brain acts as a type of radio and tunes into memories. Not exactly sure how they are stored.


> Because of the brain's ability to access things which are not exactly constrained to it's physicality,

Philosophy and spirituality aren't science. Do you have any citations for the brain's metaphysical abilities? Unless proven to be true, personal anecdotes and old wives' tales aren't valid data points. Apart from that, I haven't read anything about the brain transcending the physical.

There is a lot that's unknown - e.g. we don't know what constitutes consciousness - but that doesn't mean it can be attributed to the metaphysical. Physical or metaphysical, you need to know for sure; till then, it's "no man's land".


A more down-to-earth explanation of where the information is stored, if not in the neuron, would be the other neurons that the neuron is connected to (and the neurons they are connected to, etc.). In fact it seems fairly likely that information in the brain is just stored in the strength of synaptic connections. Just like in the computer memory case, really. Even though a complete data structure might not be stored in a single bit, all the data inside it is eventually stored in bits and not in some magical realm "outside of the physicality of the computer".


There's a very strong urge to believe that the brain "tunes" into some kind of non-meatspace realm (spiritual, quantum, etc.)

Fairly sure that it'll turn out that it just depends on the massive complexity of neuron/synapse connections, though.

But then you think of concepts like 'group psyche', and twins that are separated from birth but share the same thoughts/feelings over a distance...

Hopefully science will get to the bottom of it soon :)


The twins thing is easily explainable by a programming reference. Seed 2 instances of the same random generator class with the same seed and have them run a million numbers on different computers... the millionth number will always be the same. Humans are just big wet sloppy computers with a little more noise than our PCs (environment is different for everyone, even identical twins are not in the exact same space) so it's likely that twins will more often than anyone else be thinking in parallel with each other from time to time.
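The seeded-generator claim is straightforward to check. An illustrative sketch using Python's standard-library `random.Random` (the seed value is arbitrary; reduce the draw count if a million feels slow):

```python
import random

# Two "twins": independent generator instances with the same seed.
a = random.Random(1234)
b = random.Random(1234)

# Run each one forward a million numbers, as if on different computers.
for _ in range(1_000_000):
    x = a.random()
    y = b.random()

# The millionth number is always the same.
assert x == y
```

Same initial state plus same deterministic update rule gives the same trajectory forever; the commenter's point is that real twins differ because their "inputs" (environments) diverge, unlike these two isolated generators.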


"the brain's ability to access things which are not exactly constrained to it's physicality"

What does that mean? Not being snarky, just interested.


Can you cite something? Did you come up with this on your own or did you read it somewhere?


I can't help but think that what they've discovered is some meatspace equivalent of a hash key or an index key to a memory. By "turning it on or off" you can lose or find a whole table row or hash value, but surely a "single neuron" can't "store" a generalised "memory".


A neuron "could" store a generalized memory (the amount of information it stores is gigantic, without pedantically considering the DNA). However, it's more complex than that (there are many types of memories and many means of retrieval). Basically, memories are sets of synaptic connections, and a neuron has many synapses (on average ~7,000 synaptic connections per neuron, to multiply by our 10^11 neurons per brain). As you said, it may be a "key", but it should be seen more like the memory _is_ a complete list of hashes. Not {"key": memory} but {"key1": {"key2": {"key3": {...} } }, {...}, {...}, ...} and the set of {key1..keyN} is the memory. So if you removed whichever keyI in the middle, you would lose the information. (That's not really true, because there is high redundancy, but some keys/synapses/nodes are less redundant than others. The fact is, they don't fire on only "one" neuron; they fire at a very precise region, but the light still goes through a population of neurons.)
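The chain-of-keys picture can be sketched as a toy Python structure (the names `engram` and `recall` are purely illustrative, not anything from the paper):

```python
# The "memory" as a chain of keys: every link is needed to reach the end.
engram = {"key1": {"key2": {"key3": "payload"}}}

def recall(store, keys):
    """Walk the key chain; return None if any link is missing."""
    node = store
    for k in keys:
        if not isinstance(node, dict) or k not in node:
            return None
        node = node[k]
    return node

assert recall(engram, ["key1", "key2", "key3"]) == "payload"

# Knock out a key in the middle of the chain...
del engram["key1"]["key2"]

# ...and everything downstream of it is lost, even though "key1"
# and the retrieval machinery are still intact.
assert recall(engram, ["key1", "key2", "key3"]) is None
```

This toy chain has no redundancy, which is where the analogy to a real brain breaks down, as the parent comment notes: real engrams have many partially overlapping paths to the same content.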


This complexity make my head hurt. Who created this mess of complexity anyways without leaving so much as a code comment or design document to help us sort this all out? Intelligent design my ass.


Fascinating, thanks! Have you got some links or keyphrases I should feed Google or Wikipedia to learn more (at a "curious but not neuro-science trained dilettante" level of understanding)?


I'm in this lab http://www.lppa.college-de-france.fr and mostly learned through colleagues (I'm doing CS/AI/ML, not neuroscience). I've read some parts of "Neuroscience 3rd edition, Purves et al., Sinauer Eds.", which has nice pictures, but I think there are better books, and better advice than mine, on this. Perhaps start with Wikipedia: https://en.wikipedia.org/wiki/Hippocampus // https://en.wikipedia.org/wiki/Memory_consolidation


Thanks again! stuck in the reading queue for the weekend. (Phear my curiosity, people waiting for a table in my café!)


The laser can't be aimed at an individual neuron; it is aimed at a cluster of neurons. Besides, activating a single neuron will strongly activate the neurons it has strong synaptic connections to. The key here is that an individual cluster of neurons is directly correlated with an individual memory.


I don't know if it was used in this paper, but you can obtain single-neuron targeting using two-photon microscopy.


I'd see it as the keycode / directions / a path connecting all of the pieces. At what level all of the pieces are stored (biological/physical in the brain, in subtle energies, other?), who knows?


They didn't 'discover the location of memories', they simply discovered that they can trigger a reaction that looks similar to that of the original incident by focusing energy on parts of a mouse brain that were actively stimulated during the incident.

This is Pavlov's dog with optics, not revolutionary science.


I don't understand your reference to Pavlov's dog, it's more comparable to Penfield's work with stimulation of specific points of the brain. Also the ExtremeTech headline is a bit misleading. The related MIT article is a little more clear that this involves memories in a small cluster of neurons, not one memory per neuron as is implied by the heading above. http://www.mit.edu/newsoffice/2012/conjuring-memories-artifi...


Yea -- sorry, I mention later that it's a cluster, but I will update the first paragraph to mention that it's a cluster, not single neurons.


Not revolutionary? It's a relatively small step from here to mapping out the neural pathways which trigger specific memories. If we can figure out those underlying structures, we can hopefully learn how to read and modify them.

I need my Matrix-style brain downloads!


More importantly imagine how many people this could help!

There's a bit of ethics involved, but what if we could simply remove traumatic memories to treat psychological disorders?

Plus, yeah, plugging in to learn karate would be pretty awesome.


You can actually remove traumatic memories using drugs but it has to be done very soon following the traumatic event.

It is in fact an arena of medical ethical conflict between those who advocate it and those who (properly, in my view) don't.


In this day and age, introducing such technology to the world would be an immensely destructive crime against humanity.


I was always under the impression that there are no memories in the sense of photographs; rather, the state of the brain is one whole memory system, which reacts to external signals according to previously "memorized" signals.

As I understand it, they just found a way to excite a particular small part of the brain, which triggered a reaction without external signaling, i.e. bypassing all the intermediate parts (external sensors, nerves, other neurons, etc.). It is more like directly feeding an engine with fuel, and using electricity to create a spark to make it turn over, completely skipping the engine control unit, ignition key, clutch pedal, and fuel pump.

And when they refer to memory loss due to dying neurons, this is more like removing part of a circuit that was supposed to give a certain reaction to certain signals, not like "rm /home/user/file.txt" with the rest left in place. So I do not believe something similar to MiB (Men in Black) is possible with this knowledge.


'The mice “quickly entered a defensive, immobile crouch,” strongly suggesting the fear memory was being recalled.'

Yes, possibly. Or maybe it had something to do with the hole that had been drilled in their skulls and the frickin' laser beams being fired at their brains. Pretty cool stuff though.


They obviously controlled for that.


hahaha

I came here to post that... too late.


The journal article can be found here: http://www.nature.com/nature/journal/vnfv/ncurrent/full/natu...

It doesn't seem to claim that an individual neuron is the location of a memory, but rather that triggering a small number of specific neurons is necessary and sufficient to cause the behavior that would be consistent with the recall of a particular fearful memory in mice, when optogenetically triggered.

In other words, they labeled some neurons with an optogenetic receptor during fear conditioning. Then, thanks to the optogenetic labeling they previously did, they were able to activate this receptor (using light) in a totally different context (one that didn't normally elicit the freezing response associated with mouse fear). When they did so, the mouse exhibited the freezing response. When they ablate these neurons, there is no fear response. The conclusion is that these neurons are necessary and sufficient to encode the fear memory.

Caveat lector: my summary is based on the abstract so I am just parroting what they have concluded; I haven't read the paper's methods and results for myself. Also, this is the "near-final" version published in advance online today. It may change for final publication.


Impressive as this may sound, this is not the first study of its kind. Studies since 2009 (mentioned in the abstract) in the amygdala have been able to direct and inactivate fear memories in a reversible manner, again via optogenetics (see http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2844777/). In these studies, killing as much as 15% of the cells did not erase the memory, indicating that it's ensembles, not single neurons, that encode fear memories.

To complicate the picture even more, each neuron is part of more than one memory engram, and memories are stored in synaptic connections which are formed in the vast dendritic trees of neurons. So you have a very sparse and redundant encoding of this associative memory.

Finally, note that both the amygdala and the hippocampus are very old structures so we don't know if the same processes take place in the neocortex (although it's likely so).

I don't mean to belittle the article, but it's mostly proof of concept if you have followed the relevant literature. Tonegawa's lab has had some even more fascinating papers published recently that probe the process of memory encoding down to the level of single dendrites.


Here is a simple explanation of the experiment:

1. The set of neurons that was active only during learning was determined.

2. The genes activated in those neurons were determined.

3. Genetic engineering was done to make the activation of those genes always happen in conjunction with activation of the gene being responsible for the neuron becoming sensitive to light.

4. The mouse was put through a learning experience, during which the small group of neurons affected became sensitive to light.

5. Via stimulating those neurons with light, the experience was reproduced in the mouse in a completely different environment (so the comparisons to Pavlov are not justified).

In this way the abstract concept of a memory, and of the process of remembering something, was related very closely to a specific physical phenomenon. Even the methods used to carry out the experiment are interesting by themselves (at least for a lay person), and I think you cannot easily dismiss the importance of this discovery as some people here do. Please read the article here to get a better picture:

http://www.mit.edu/newsoffice/2012/conjuring-memories-artifi...


I wish I could block all ExtremeTech stories from my HN feed. They are universally overhyped, never the original source, often draw unjustified conclusions, and are a pain to read (at least on an iPad - they use a non-standard interface and popovers).


Maybe some "flag this" consensus would work.


MIT scientist Sebastian Seung talked about this possibility in his 2010 TED talk, "I am my connectome" (http://www.ted.com/talks/sebastian_seung.html).


I think this has been overhyped just a bit. I realize that a discovery of this nature is a big deal, but it's not what journalists are making it out to be.


>MIT researchers have shown, for the first time ever, that memories are stored in specific brain cells. By triggering a single neuron, the researchers were able to force the subject to recall a specific memory. By removing this neuron, the subject would lose that memory.

That's incredibly poor reasoning. Using the same logic, I can "show, for the first time ever" that C structures are stored in individual memory pointers.


I read the MIT news piece and didn't find any reference to the removal of a neuron that would eliminate the memory. Was that in a longer research paper?


My brother, who is studying in medical school right now pointed this out to me: "the whole time i was reading this though, i was bothered by what usually bothers me about studies regarding the mechanics of the brain, which is that we dont know our measures of "activation" are sufficient for determining a causal relationship between activation of certain neurons and the recall of certain memories."


Off topic: I live across the river from MIT. I work in asset management and worked for an angel investment fund. I'm working on a startup at night. If you live in Boston and want to meet for coffee email me.

I'll meet anyone from HN, even if you just want some career advice for working in asset management or just want to talk about random stuff.

My email address is my hn username at gmail.


Maybe this means we can finally build a machine like the one in the matrix that lets us bootstrap our brains with kungfu.

But more importantly, I think the most significant result of this research, should even a bit of it prove to be true, is the possibility that it could help us understand, and maybe even prevent dementia. That's exciting!


Sadly, and I mean very sadly, this doesn't mean that. =)

The reason it doesn't mean that is that they are triggering naturally-programmed neurons via light. They are not programming them via light.


> The question now, though, is how memories are actually encoded — can we programmatically create new memories and thus learn entire subjects by inserting a laser into our brain?

That's a pretty scary area of technology. I hope that as a civilization we will become responsible enough to use that before we discover how.


I wonder if by deleting not so nice memories (i'm guessing that would be one of the first applications) the person will continue to make the same decisions over and over, because they might not remember them and keep re-experiencing (and learning) repeatedly?


If you are interested in this sort of thing (memory storage in the brain, recall etc.) you might be interested in Wilder Penfield's work. (e.g. http://primal-page.com/penfield.htm)


"The mice 'quickly entered a defensive, immobile crouch,' strongly suggesting the fear memory was being recalled."

Perhaps the defensive, immobile crouch had something to do with having a hole drilled through their skull and a laser shined through said hole?


> you drill a hole through the subject’s skull and point the laser at a small cluster of neurons, the mice quickly entered a defensive state

This is an expected response and perhaps has nothing to do with remembering.


This article sucks; the quality of online journalism has hit a new low... there are no citations... just rambling that eventually one finds to be a total misunderstanding of the results, and of science in general... ahhhh...


There was a Ted talk some time ago about the work on optogenetics (in fruit flies I think) http://www.ted.com/talks/gero_miesenboeck.html


That title is incorrect. These researchers did not stimulate one neuron to activate one memory. They stimulated a group of neurons in one region using a fiber optic.

I'll go through the experiment for those who don't really get the write up. Quick background:

0. Neurons are specialized cells in your brain, their firing is the basis of cognition. Neurons that fire strongly together tend to get linked. Not all your neurons are firing all at once. Not all neurons participate in a memory, but a large collection (0.1-4% depending on brain region) are used for each memory. (at least in naive mice) Memories are highly distributed across brain regions and within brain regions.

1. There is a gene, 'cFos', that turns on in neurons that undergo activity. It is very short-lasting, about an hour, and then it is back down. It is very cell-specific: only cells that have been activated will show this gene.

2. There is a genetically inducible protein you can put in cells so that when you shine light on them they will start firing.

3. Anything genetically inducible can have a tag added that will make it impossible to induce when an animal is on a specific drug. (We'll call it Dox.)

The researchers made it so that when cFos is activated, it will transcribe the inducible light-activated channel. But it will only do this when the animal isn't on Dox. So they took the animal off Dox, exposed it to fear conditioning (they shocked it in a unique box), then put it back on Dox. So only the cells that were active during the fear conditioning will turn on when they shine light.

They then put the animal in a new box and shined light, and the animal froze (a sign it was afraid). They concluded that activating cells that had been active during the storing of the memory can 'reactivate' the memory, even out of its context.

Of course there are a few major problems with this. The region of the brain they worked in is a relatively minor area of the brain. They admit in the course of the paper that each cell is used to store many memories. (They claim the assembly of cells activate the memory? But they never bother testing this.) Their firing pattern is far from physiological. The mice who underwent this 'memory reactivation' did not freeze as much as mice in normal fear conditioning and did not seem to learn the 'reactivated memory' at all. (Though again they didn't really bother testing this.) Normal fear-memory is learned quite well. This is a very preliminary but very important study in the field. It will be interesting to see how they follow this up.

Also, the Npas4 stuff is completely irrelevant to this article. There are countless markers that when 'knocked out' make it impossible to store memory.

Also, here is a (paywalled) link to the paper: http://www.nature.com/nature/journal/vnfv/ncurrent/full/natu...

Here is a link to another (paywalled) paper which came out today, which did almost the same thing and saw almost opposite results (the details explain this oddity): http://www.sciencemag.org/content/335/6075/1513.full?rss=1

Here is a (paywalled) more layperson write up to the second article: http://www.sciencemag.org/content/335/6075/1455.full



Besides the fact that the scientists used trauma to retrieve their data, what a beautiful notion: collecting a memory at the tip of every neuron.


I don't believe a word of this. It isn't proof of any such a process. This to me appears as inappropriate reductionism.

The brain is highly volatile both chemically and physically and is changing minute by minute. The notion that a memory belongs to a single group or cluster of neurons is absurd. They are constantly being reordered and reprioritised and changed throughout the lifetime of the animal. And being lost due to chemical damage and the effects of ageing and free radicals.

The nature of memory and consciousness isn't just the physical existence of neurons. Who you are is more than the sum of the parts.

The mind is also as much a process of the body -- your consciousness and mind grow within it from birth, and a large part of who you are is as much a product of that, its hormones and its metabolic chemistry, as of anything that resides solely between the ears.


The Nature abstract is about a "population of neurons" and triggering a memory related behavior while the extremetech article talks about "specific brain cells" and memory storage.

Nothing to see here, move along.


I always take these with a grain of salt. One day a study comes out saying that memories are not stored in a specific location, that scientists can cut out large portions of mice brain and the mice will retain memories, and the next day a study comes out that specific memories can be created or destroyed with specific neurons.


There's an obligatory Eternal Sunshine of the Spotless Mind reference to be made here...


There was an Eternal Sunshine of the Spotless Mind reference in the article. You should have read it.



