Could a Neuroscientist Understand a Microprocessor? (2017) (plos.org)
249 points by eggspurt on April 12, 2018 | 92 comments



Testing the analytical methods of a field against engineered artefacts is a good idea, but there is a fatal flaw here: devices that do a fetch-decode-execute-retire loop against a register file and a memory bus have perversely little in common with what neurobiology is concerned with. A more appropriate artefact would be a CPU and its memory (where NOP'ing out code or flipping flags corresponds to "lesioning"), or even better, an FPGA design (where different functions work in parallel in different locations on the silicon, much like brains).

That the tools of neuroscience choke on a 6502 is as much of an indictment of the former as my inability to fly helicopters is an indictment of my fixed-wing airmanship; not coping well with notoriously perverse edge cases outside your domain of expertise isn't inherently a sign of failure (it's not a licence to stop improving, of course). Brains and 6502s are quite literally entirely different kinds of computing, much like designing for FPGA is weird and different from writing x86 assembly or C.

A far more interesting question is "could a neuroscientist understand an FPGA?".


>devices that do a fetch-decode-execute-retire loop against a register file and a memory bus have perversely little in common with what neurobiology is concerned with.

A key point of the article is that we can't really be sure this is the case, since the analytical tools used by neuroscience arguably wouldn't reveal this kind of structure even if it did exist.


The nice part of evolution is that most of the time you can see some leftovers of the intermediate steps. For example, for the eye you can find animals with eyes of varying complexity, from a simple flat photosensitive area to the full eye of vertebrates (or cephalopods, which have a different but similar eye design).

So in most cases you can get some people to specialize in and understand the simple models, and create concepts and tools to understand the more complex models.

In electronic circles you still have floating around a lot of mini integrated circuits with 20-50 transistors that are easy to understand. And you can learn to group individual transistors in small groups that do something useful (for example limit the output current, simulate a resistor, amplification, ...)

Then you can learn to decode the intermediate models with 100-1000 transistors, and then the models with a few thousand transistors, and then ...

So, it's very suspicious that there are no animals with a minibrain that is a finite automaton of 3 states.

There are also some cases where all the intermediate steps disappeared, for example the transition from prokaryote (bacteria) to eukaryote (animals, plants, protozoa, ...), and IIRC nobody understands the intermediate steps. But there are some clues: many structures are shared between prokaryotes and eukaryotes, and mitochondria are probably trapped bacteria (they have their own DNA and too many external membranes, ...)


>So, it's very suspicious that there are no animals with a minibrain that is a finite automaton of 3 states.

What is your basis for saying that no such animals exist? Exactly how many states there are depends a great deal on the level of analysis. How many states does a 6502 have? At the physical level, an enormously large (possibly even infinite) number. At the level of analysis appropriate for programming one, considerably fewer.


We do know this already. Our brain is not a pure black box.

It is also a counterargument for the paper itself: if you know so little about two different systems, then you can't take any tool from system 1 and use it on system 2 and expect any resemblance or transfer of results.


We don't know that there aren't parts of the brain that use a fetch decode execute loop. The paper points out that although the 6502 does in fact work that way, this is far from obvious when examining it at the transistor level.


Not knowing something doesn't mean we don't know anything.

Yes, we are not 100% sure that there is a fetch-decode-execute loop somewhere, but right now, with the knowledge we have, it is unrealistic.


What makes it unrealistic?


Because we do know how neural networks work, and our brain is a huge one.

Why would you want something like this to be in our brains anyway? It's already a problem to have data and processing divided.


We don't know how the neural networks in our brains work at anything like the level of detail required to draw the conclusion you're suggesting. What I want is obviously irrelevant.


But this inquiry didn't seem to care about whether the specifics of computer hardware map onto biology or vice versa to any interesting degree. They care about whether the computer is a dynamic system of such complexity that it's resistant to current causal analysis.


They took the system, excluded the hardware where all the behavioural features they could directly observe resided (RAM, ROM and its contents) from their analysis, and then concluded that their neurobiologically inspired technique was useless for understanding complex systems because it didn't tell them anything useful. If they hadn't done that, they'd have been in serious danger of gaining actual understanding and wouldn't have been able to use the exercise to mock the neuroscience practices they dislike.


Or, much worse but likely more accurate, an FPGA shaped by an evolutionary algorithm https://www.damninteresting.com/on-the-origin-of-circuits/

Because that's another issue: evolution is a pretty greedy algorithm, and nature doesn't care if you don't understand her architecture decisions (metaphorically speaking ofc)


Er, that's a way in which the microchip problem is easier, not harder


That's my favorite horror story to tell the "autonomous driving is ready for deployment" fanatics (not that it helps).


One thing that came to mind is that computers are notoriously brittle compared to a brain. If you damage a single transistor in the processor you can bring down the whole thing. Not so sensitive with components in memory. We know the brain can take some damage and still function quite well, but are there single neurons or tiny groups of them that will basically bring down the system if damaged?

I also think the brain has important features more detailed in function than a 6502's, so the methods aren't necessarily inapplicable. For example, we don't know why neurons can transfer mitochondria between them, we don't know what is encoded in DNA, and we don't even know how instinctive behaviors are encoded (or perhaps I just don't know).

As someone who once reverse engineered a processor from the gates up, I honestly don't know how one could do it top-down. The details at the bottom are critical, but also not critical. Decoding the microcode was critical, but determining precise timing and some other things was not needed to write a fairly functional emulator.


> We know the brain can take some damage and still function quite well, but are there single neurons or tiny groups of them that will basically bring down the system if damaged?

I like this question. I'm not a neuroscientist, but I think most neurons come in fibers that contain several of them. So even if a certain connection were critical, a single neuron dying would still leave a functional fiber. Because neurons are plastic (to a limited but significant extent), any effect from that single neuron can be partially or completely compensated for.


Well, multicore CPUs are generally manufactured with a higher core count than they are sold as. Automated testing detects defects and bins individual dies based on the maximum clock frequency at which they run without errors. So in some sense, CPUs are robust to some flaws in the lithography process by virtue of disabling broken cores/cache banks. Of course, this isn't done dynamically like a brain.


An FPGA compared to an x86 processor or C program is many orders of magnitude more closely related than either of those to a brain, especially when you start looking at the brain as the result of the decoding of a string of DNA in what once was a single celled organism.

The orders of complexity involved alone dwarf any other comparison you might want to make and that's before we get into the effects of nurture, environment, interaction, education and so on.


Why do you think it is a good idea to use analytical methods of one field for another field when you mention the fatal flaw in your next sentence?

I really have issues even accepting that anyone would try something like this and get it published. Also, the conclusion of this paper is broken.


A 6502 isn't just combinatorial logic. Nearly half the chip is memory, either decode/sequencing ROM or latches.


Great work.

"In other words, we asked if removed each transistor, if the processor would then still boot the game. Indeed, we found a subset of transistors that makes one of the behaviors (games) impossible. We can thus conclude they are uniquely necessary for the game—perhaps there is a Donkey Kong transistor or a Space Invaders transistor. "

A fantastic comment to show that describing a system is not the same as understanding the system!
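
For concreteness, the lesioning experiment described in the quote boils down to something like the sketch below (Python, purely illustrative; simulate_with_lesion and the game list are hypothetical stand-ins, not the paper's actual simulator code):

    # Minimal sketch of the single-transistor lesioning experiment.
    # simulate_with_lesion(t, game) is a hypothetical callable that returns
    # True if the processor still boots `game` with transistor `t` disabled.
    GAMES = ["donkey_kong", "space_invaders", "pitfall"]
    NUM_TRANSISTORS = 3510  # approximate 6502 transistor count

    def lesion_study(simulate_with_lesion):
        unique = {game: [] for game in GAMES}
        for t in range(NUM_TRANSISTORS):
            broken = [g for g in GAMES if not simulate_with_lesion(t, g)]
            if len(broken) == 1:
                # Breaks exactly one behavior: a "Donkey Kong transistor"?
                unique[broken[0]].append(t)
        return unique

The result is a description (which transistors break which game), not an understanding of why.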


It might start that way, but then they begin to ask: how do Space Invaders and Donkey Kong differ? Ah yes, in Space Invaders you cannot move up and down, but in Donkey Kong you can. OK, let's create a Space Invaders task where you can also move up and down. Does adding that function make the "Donkey Kong" transistor break the modified Space Invaders behavior? Yes?! So that transistor might not be about Donkey Kong, but for moving up and down. and so on...

An example: in early research that lesioned the hippocampus, lesions brought about impairments on memory tasks...but not all memory tasks. Memory for individual items, or a feeling of familiarity without recollection, seemed to be relatively preserved. Particularly affected, however, were memories involving relations between pairs of items...but not always...those item pairs could be remembered by constructing a story about them, or making one item a feature of another item...so it seemed that item relations that were arbitrary were particularly affected by lesions, and showed more "activity" in neuroimaging studies. And so on. The hippocampus seems to be fulfilling a role of binding high-level percepts into memory traces for which there is not some lawful/generalizable relation. This was a broad overview...but it goes beyond characterizing a brain region as a "donkey kong".


> Does adding that function make the "Donkey Kong" transistor break the modified Space Invaders behavior? Yes?! So that transistor might not be about Donkey Kong, but for moving up and down. and so on...

Only there's no transistor uniquely responsible for moving stuff up and down. There are transistors responsible for particular bits of output of particular machine code commands, like ADD or MOV. But those are commonly used by almost all the code, so the most probable difference between code that triggers the error and code that works correctly would be how big the values we're working on are; if the values are in the "correct" range, that transistor will make a difference. It very well might be that x coordinates are big and y coordinates are low, so it works for y but not for x, in that particular level of that particular game, assuming the memory was in a particular state before you started that game.

The problem lies in trying to assign too high-level a meaning to stuff that works at a much lower level of abstraction. It's very similar to alchemy or astrology: searching for correlations between unrelated events and basing elaborate theories on them.


Instead of assigning too high-level meaning to stuff that works on a much lower level, it might be the opposite.

The reason why you can say that one transistor doesn't handle moving up and down is because we can engineer and economically produce systems with enough spare resources and performance to implement general computing devices capable of running arbitrary programs.

If instead you were playing say... a very early arcade game, you might very well actually have the 'X coord' register implemented in a fixed discrete component.

One of the assumptions baked into biology is that by the time you're talking about cells, you're talking about implementations of behavior subject to reasonably strict constraints (plus hilarious amounts of path dependent evolution), with the assumption that things are more like old school analog/digital electronics where every component counts, versus modern integrated processors.

That's not to say that they have the perfect analytical techniques for solving problems in their domain space, or any domain space.


If that transistor were not about moving up and down, you'd have to explain why lesioning it seems to break games with up-and-down motion. Since what you say is correct, scientists would soon find other tasks it breaks that do not appear to involve up-and-down motion. You might move to questions about the complexity of objects, coding of directions, etc. Theories would emerge and begin to be related to an emerging understanding of architectures, transistor function, controllers. It evolves.

Aside from that, I object to the viewpoint more generally. Can one not understand something about how cultures work without knowing how brains work? Can one not know something about how an ape behaves without understanding its gut biome? I just disagree with the view that the only worthwhile understanding of phenomena is from the bottom...that there is some true level of description at which something must be understood.


> Can one not understand something about how cultures work without knowing how brains work?

Yes, you can, but that's psychology or sociology, not neuroscience. The goal was to understand the brain, and be able to replicate it or modify it. Understanding just the behaviour is like understanding that pressing "fire" will shoot in Space Invaders. It's something, but you don't need to dissect a CPU to know that, and it doesn't move you closer to creating your own CPU or modifying this one.

> the only worthwhile understanding of phenomena is from the bottom

That's a misrepresentation of my point. I wrote that in the CPU example the neuroscience tools failed because they miss the abstraction levels between transistors and behaviours.

> there is some true level of description at which something must be understood

There is an optimal level of description, yes. And notation makes a huge difference.


Neuroscience works at a number of levels of description or size: neuroscience that focuses on synapses, neuroscience that focuses on dendritic trees or whole neurons, neuroscience that asks questions of small groups of neurons, and neuroscience that focuses on large-scale networks. Most try to relate function at that level to behavior in some way and to bridge the gap between levels of description.

When we try to understand deep learning, do we try to understand it at the level of individual transistors, at the level of a 'neuron', at the level of layers, or at the level of the overall design? Well, it depends, right?


I'm not a scientist, but it seems the various groupings of neurons you enumerated are still the hardware level, not the software level.

Maybe there really is no software level between hardware level and behaviours, but it seems unlikely to me, because I can think about abstract ideas using that brain.


But you could have the "same" program break under different conditions, because you can have two different but equivalent programs.

Consider the following pseudo code:

    i <- 0;
    total <- 0;
    while(i < 10) { 
        total <- total + i
        i <- i + 1 }
    output total
This program could be done using a range of different instructions.

At its most naive, it would actually use registers and jump-and-compare instructions to do the loop.

A different approach would do "loop unrolling" to avoid any jump instructions, instead applying sequentially the addition and comparisons.

The most aggressive approach would simply replace this whole program with

    output 45
Without understanding the architecture and program preparation, what hope is there to map functionality to underlying behaviour at the transistor level?
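
To make the equivalence concrete, here are the three variants as runnable Python (just a sketch; a real compiler would be choosing among machine instructions, not Python functions):

    # 1. Naive: an explicit loop, compiling down to compare/jump instructions.
    def naive():
        total, i = 0, 0
        while i < 10:
            total += i
            i += 1
        return total

    # 2. Unrolled: no jumps, just a straight line of additions.
    def unrolled():
        total = 0
        total += 0; total += 1; total += 2; total += 3; total += 4
        total += 5; total += 6; total += 7; total += 8; total += 9
        return total

    # 3. Constant-folded: the whole computation replaced by its result.
    def folded():
        return 45

    assert naive() == unrolled() == folded() == 45

Lesioning a "loop transistor" would break the first variant but not the other two, even though all three compute the same thing.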


> Can one not understand something about how cultures work without knowing how brains work? Can one not know something about how an ape behaves without understanding their gut-biome?

Absolutely this. An explanation of evolution in terms of atomic interactions would lack some of the explanatory and predictive power we gain at the higher levels between bio-chemistry and phenotype. I think part of good science (and FWIW software design) is choosing the correct abstraction level, the one with the best predictive, descriptive and explanatory power. Sometimes these will be different abstractions or models that have different affordances.

In the case of the OP I think it's easy to underestimate how hard it is to generate a bottom up explanation of a processor given that we already have such an explanation from its status as engineered artefact. The answer is simple if you know the answer.


Agreed. The important thing to understand about neuroscience is that it is being attacked at multiple levels of description (synapses, dendritic trees, whole neurons, small groups of neurons, cortical columns, networks of cortical columns). Then we try to bridge understanding across levels of description, and all try to relate it to behavior in one way or another.


Quite right. The movement toward a reconciliation between various levels of abstraction and observation is one (perhaps the greatest one) of the ways we know we scientists are "getting it right".

Peter Watson's book "Convergence" is a delicious description of this broad phenomenon in scientific investigation(s).


I'm reminded of the girl in my college who thought there was a special chip in her computer that compiled C++ programs, and asked me if her C++ IDE would work on her laptop.

She was a computer science major and an A student, but her model of how computers worked was way off.


There's an interesting idea here. The paper is arguing that current brain analysis methods don't work well in an alternative environment with a lot of data, so maybe the methods are the problem rather than our lack of data in neuroscience.

However, I think this misses part of the point. We use these methods because we have very little data available. There are tons of interesting new ways to analyze brain data that I think computational neuroscientists are dying to explore, but don't have enough data to do so. If we had a lot more data, we might not be using these approaches.


This explains the diversity of methods and their emphasis on statistical inference. But it isn't necessarily true that the current best choice of methods implies that the outcomes from using these methods are good. In this case, if we expect that these methods are underpowered in the ecological context, then we should expect them to perform better in a more controlled one, or else they would be useless. But given that the author is correct... then, well, there's obviously room for more thought.


I think a big part of the point was that by demonstrating that the analysis software works on toy systems, you can get a big jump on your validation of those analysis methods.

Why wait until you have the best data if there seems to be a lot of low hanging fruit you can get now? It could be wasted effort, but if it does work wouldn't you be more confident on real data?

Even if it doesn't prove to be relevant, this type of statistics seems very interesting and widely applicable. Obviously reverse engineering, probably identifying alien life, characterizing group behavior on the internet... anywhere where you're still discovering structure and don't know what to expect.


Older article in similar vein: "Can a biologist fix a radio?" http://math.arizona.edu/~jwatkins/canabiologistfixaradio.pdf


Possibly worth noting that this article explicitly credits "Can a biologist fix a radio?" as inspiration.



Really happy to see this article. This viewpoint is not new, but it is still far from being mainstream.

A big issue touched upon in this article is that the space of possible dynamical systems represented in the brain is large, and trying to collect data is not a practical way of trimming this search space. It's more useful to look at types of dynamical systems that have certain stability properties that are desirable for computation.

But the issue then becomes that these dynamical systems become mathematically intractable past a few simplified neurons. So it's really hard to make progress either by looking at data, or by studying simplified dynamical systems mathematically.

There is a third option. Evolve smart dynamical systems by large scale brute force computation. Start with guesses about neuron-like subsystems with desirable information processing properties (at the single neuron level, such properties are mathematically tractable). Play with the configurations, the rules of evolution, the reward functions, the environment, everything. This may sound a lot like witchcraft but look at how far witchcraft has taken machine learning in recent years (deep learning is just principled witchcraft). This is IMO the only way we will learn how biological intelligence works.
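
A toy sketch of what that brute-force evolutionary loop might look like; make_random_system, fitness and mutate are hypothetical placeholders for whatever neuron model, reward function and mutation rules you decide to try:

    import random

    def evolve(make_random_system, fitness, mutate,
               population_size=100, generations=1000):
        # Start from random guesses about neuron-like subsystems.
        population = [make_random_system() for _ in range(population_size)]
        for _ in range(generations):
            # Score every candidate dynamical system against the reward function.
            scored = sorted(population, key=fitness, reverse=True)
            survivors = scored[:population_size // 10]  # keep the top 10%
            # Refill the population with mutated copies of the survivors.
            population = survivors + [
                mutate(random.choice(survivors))
                for _ in range(population_size - len(survivors))
            ]
        return max(population, key=fitness)

The "witchcraft" is in the choices hidden behind those placeholders, not in the loop itself.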


I think the question is really close to how we design a modern processor. Certainly it is important to look at the data (i.e. how a real machine behaves under actual workloads). That data might suggest design improvements (through some form of experience, art and intuition). You use simulation to test those "potentially" good ideas. Most of them are discarded...

Perhaps neuroscience should move in the same direction: how do I think the cortex works? Test it under (simple) working conditions. See if it makes sense. Move to more complex working conditions. Rinse and repeat.

In actual processors math is rather useless too, beyond some niches. Even in problems susceptible to it, such as formal verification of coherence protocol designs, mathematical tools are rather limited (due to state explosion).


This paper offers some good points but exhibits a number of flaws that limit what it can say about the utility of current neuroscience methods. For a generally thoughtful conversation, see http://www.brainyblog.net/2016/08/30/could-a-neuroscientist-... from 2016. One comment that I'd like to highlight from this conversation is pasted below:

" But no attempt is made to analyze the similarities and differences in those behaviors. All three game behaviors rely on similar functions. Depending on the level of similarity between the behaviors, you might think of it as trying to find a lesion that only knocks out your ability to read words that start with “k” versus words that start with “s.” That’s an experiment that’s unlikely to succeed. But if the behaviors are more like “speaking” vs “understanding spoken words” vs “understanding written words” then it’s a more reasonable experiment.

The authors argue that neuroscientists make the same mistake all the time; that we are operating at the wrong level of granularity for our behavioral measures and don’t know it. That argument denies the degree to which we characterize behaviors in neuroscience, and how stringent we are about controls.

The authors point to the fact that transistors that eliminate only one behavior are not meaningfully clustered on the chip. But what they ignore are the transistors that eliminate all three behaviors. Those structures are key to the functioning of the device in general. To me, those 1560 transistors that eliminated all three behaviors are more worthy of study than the lesions that affect only one behavior, because they allow us to determine what is essential to the behavior of the system. You can think of those transistors as leading to the death of the organism, just as damage to certain parts of the brain cause death in animals."


People do reverse engineer chips by photographing them. https://youtu.be/aHx-XUA6f9g (Reading Silicon: How to Reverse Engineer Integrated Circuits). But as far as I know, the same cannot be done with brains, even if we can photograph them. I guess the 3D structure of the brain, compounded with the high interconnection between neurons, does not make it easy.


There is much, much more to how neural circuits work than just connectivity. A good example to keep in mind is C. elegans (https://en.wikipedia.org/wiki/Caenorhabditis_elegans). Despite knowing the connectivity of its 302 neurons, and despite the whole organism consisting of just a few times that many cells, AFAIK we still don't fully understand how its nervous system works, e.g., how it makes decisions etc.

(There is a lot of work on this organism, and I haven't kept up with the literature. So maybe the comment is a bit out of date... someone more knowledgeable should chime in.)


Well, I wouldn't say we understand C. elegans neurobiology and behavior "fully", but great advances have been made at the systems level:

https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2904967/#__ffn_... [overview of behavioral assays at systems level]

http://conferences.union.wisc.edu/ceneuro/program-info/organ... [next major conference]

And, of course, my favorite, WormWeb:

http://wormweb.org/neuralnet#c=ADA&m=1


The big difference is that we understand chips: we know what kinds of elements an integrated circuit needs (gates, registers) and we understand how each of the low- and high-level elements works. Reverse engineering in this case is puzzle reconstruction with very strong priors. On the other hand, we are truly clueless about brains: we do not know how "algorithms" are run in brains, whether there are any simple computing units, how memory is stored ... All we have are some guesses and statistics.


Or perhaps the brain works nothing like a CPU.

(The very idea that it does is actually laughable, akin to medieval fantasists imagining that flying machines must have huge white wings with fleshy feathers.)


Yes, a brain does not work like a CPU. As noted by another user in this thread, understanding the brain is a considerably harder problem. "CPUs are deliberately engineered so that their different functions are nicely decoupled and easy to reason about but brains are the result of a very messy evolutionary process" -SilasX


I think it's partly the regular, simple, easy to understand structure that makes the approach outlined in this paper fail so badly. Because CPUs are made up of a number of simple, regular pieces connected in a way that's generic and easy to reason about, each of which is general enough to be used across many kinds of programs, knocking out individual transistors and observing the effects on the output of complex programs doesn't tell you much. Which parts of the CPU are used for a particular "behaviour" depends on the whims of the programmer and the compiler's optimisation passes as much as anything else.


Engineers are also the result of a very messy evolutionary process ;)


Being the result of a messy evolutionary process doesn't give you any special insight into other messy evolutionary processes.


I don't think that "maybe we could understand the brain if we had a very detailed picture" is the same as "the brain works like a CPU".


Nobody is claiming it does.


So maybe we should synthesize a (probably huge) photograph of a chip, whose network would correspond to one brain ... and then hand it to hackers, as a reverse engineering challenge :-)


The problem then transfers to synthesizing such a network on a chip :)


We could start with something very small, like a worm.


So? We know how chips work, we make them! Going from a picture is much easier: you know exactly what you are looking at. Not just the individual structures, but we also already know what kind of higher-level structure to expect. There is nothing unknown about what we see on a chip, by definition (otherwise we could not have made it in the first place).


Semiconductor manufacturing is very planar - it's made up of many layers on top of each other, with some interconnecting features or vias between layers. It's a best-case scenario for reverse engineering with photography.


I had the odd, but unique, experience of taking "Computer Engineering" and "Formal Logic" (a neurology/history-of-thought course) during the same semester. One observation from that experience is that there is a great deal of cognitive overlap in our representation and communication of those fields of study. Typically, I would see that overlap as being indicative of broad similarity.

Reading this and the comments makes me question the similarity of the fields somewhat. Perhaps it is just our tools for comprehension that are shared between the two rather than any deeply tactical, functional commonality.

To that end, I think that experts in these fields could communicate very effectively with each other once some vocabulary had been sorted out. How effective one expert would be in the other's field is less clear to me.


What I don't like about this sort of thing is that the only guaranteed way to succeed in the (apparent, revealed) objectives of the paper is to not try very hard.

The obvious problem here is the clear mismatch between the behaviors and their research objectives and methods.

If they wanted to understand transistors, they'd do what cellular neuroscientists do, and isolate and manipulate individual transistors' inputs and measure the outputs.

If they wanted to understand clusters of transistors whose activities are tightly coupled (as you'd expect them to be in a logic gate), then you'd isolate those, manipulate the inputs, and measure the outputs.

If you wanted to understand higher levels of organization, using a lesion approach, you need to decide how much to lesion. In the brain, function is localized in clusters of related activity, and there is usually a lot of redundancy. Single neuron lesions are not usually enough to have noticeable effects. But even then, a lesion approach is more interesting when you couple it with real experiments. Consider this paper by Sheth et al. https://www.nature.com/articles/nature11239, which had subjects perform a cognitive control task before a surgical lesion to the dorsal anterior cingulate, coupled with single unit recordings, and then had them perform the same task after the lesion. The experiment yielded pre-lesion behavioral and neural evidence of a signal related to predicted demand for control, and post-lesion, the behavioral signal was abolished.

Of course, the Sheth paper would not have been possible without the iterative improvements in understanding made by prior work, including Botvinick's neural models of conflict monitoring and control. That is, it's iterative; and this CPU paper was never intended to be iterative.


> An optimized C++ simulator was constructed to enable simulation at the rate of 1000 processor clock cycles per wallclock second.

Following links in through "code and data":

http://ericmjonas.github.io/neuroproc/pages/data.html

I found:

https://github.com/ericmjonas/neuroprocdata

But I couldn't find any link to the c++ code. Surely the emulator is also needed in order to be able to reproduce the research?

A bit of a shame they used closed source games - I'm not sure how one would go about obtaining copies (legally). But it would be interesting to try replication via other places/demos - as they only model booting anyway.


Author here, happy to answer any questions! Always a pleasant surprise to find stuff you do making it to HN.


"This example nicely highlights the importance of isolating individual behaviors to understand the contribution of parts to the overall function. If we had been able to isolate a single function, maybe by having the processor produce the same math operation every single step, then the lesioning experiments could have produced more meaningful results. "

I submit that this direction is an important one to pursue.


That also highlights why understanding the brain is a much harder problem: CPUs are deliberately engineered so that their different functions are nicely decoupled and easy to reason about. Brains are the result of a very messy evolutionary process that never came close to optimizing for "easy reasoning for refactoring".


Yet if a particular CPU is produced in enough quantity and is cheap enough, people will engineer uses out of it that the designers never thought of and had no intention of making possible.

It reminds me of talking to another player in an online Risk game where they didn't understand what an AI player was trying to do. The code was open source and something like only three functions, but in practice they did something completely different.


Truth. Compartmentalizing complexity is key


It would be interesting to repeat this for a GPU.

A CPU has so much hardware common to most instructions that any failure will take it down completely. That's less true of a GPU, where a failure of one of the massively parallel units is likely to manifest as some alteration of the output image.


It would be interesting to repeat it with a microprocessor connected to two motor control boards, two ADC boards connected to sensors, and an LCD. Present them as black boxes to the neuroscientist. Run code on the microprocessor that exercises the peripherals, instead of games.

Creating a "lesion" in the motor control boards would effect behavior of the attached devices, as well as, perhaps the output on the LCD. Similarly for the sensor boards.

Once the "lesions" have let the neuroscientist determine the function of the peripherals, they could look at the effect of lesions in the microprocessor on the functions of the peripherals and system as a whole when running various programs.

Maybe a program that exercises the "left side" motors, a program that exercises both sides, etc.

Maybe a microprocessor alone is too small of a unit of functionality, akin to studying an amygdala in a petri dish.


I didn't read it but the abstract's first sentences sound as if it's rather about "Do the issues neuroscientists face when examining the human brain persist when they examine a microprocessor instead?"


Of course a neuroscientist could understand a microprocessor by other methods. The point of the article is that the usual methods of neuroscience would have limited results. Though I think that, in general, in science people use whatever methods they can think of to figure out what's going on, and the methods of neuroscience are probably the best people can come up with for figuring out brains. There are also interesting results from the AI researchers mucking about with artificial neural networks.


From some conversations with neuroscientists, it seems that one issue that limits investment in new tools to measure the brain is that it's easier to get a publication by analyzing an existing data set in a new way, or even generating and analyzing a bigger/different data set with fMRI or EEG or clinical data, than it is to develop a novel tool to measure the brain (like optogenetics). But there are a lot of advances being made in new tools to get better data on how the brain works.


I would guess doing research with optogenetics in the brain is probably an order of magnitude more expensive than non-surgical methods. And developing new tools that involve surgery is probably an order of magnitude more expensive still. At least.


It probably is, but it is also at least an order of magnitude more beneficial to the field than some of the computational work. There are some groups working on non-invasive optogenetics methods, which are still early stage but really cool


It's definitely easier to understand a microprocessor than chemical-oriented protein systems that have mostly evolved into an operable state by chance.

A CPU is founded on a limited set of basic components that possess reasonable qualities, behave consistently, and only scale to large quantities with identical repetition.

Just leave out the deeper materials science and solid state quantum physics behind the "why" of how transistors operate.


This feels like a silly strawman being made out of the methods used in neuroscience. In a microprocessor, a single bit being flipped the wrong way potentially stops the whole show; your Donkey Kong game from 1981 doesn't run at all. By contrast, you will not anywhere near fully incapacitate a brain by lesioning a single neuron, if at all.


> This suggests current analytic approaches in neuroscience may fall short of producing meaningful understanding of neural systems, regardless of the amount of data.

Using the same analytical techniques against a CPU whose design is unknown at the time of analysis. Nice meta-analysis, clickbait title.

Edit: reformatted, thanks.


Prefixing lines with spaces breaks formatting on mobile. I've reformatted your excerpt here:

> This suggests current analytic approaches in neuroscience may fall short of producing meaningful understanding of neural systems, regardless of the amount of data.


I wonder, if an alien species stumbled upon a DVD, would they be able to decode the video contained within it?


This isn’t even wrong. It follows Drexler’s method. Just keep writing and writing and writing.


Is this something like this XKCD? :)

https://xkcd.com/1588/


It is garbage :(.

Srsly, a CPU has nothing to do with a brain at all. It doesn't make sense to use techniques from one for the other.

I have no idea how anyone comes up with such an idea and even publishes it.

A brain itself is everything: RAM and CPU.

A CPU is just a CPU; there is no state in physical form.

A CPU is a Turing machine; a brain isn't.


That question is weird. "Sure, why not?" would be my reply to it. I thought one should avoid yes/no questions for a paper?

I'm not a neuroscientist (I cannot afford a medical education), nor am I a microprocessor engineer (yet). But I understand how systems work, so I might have a chance to understand how neural networks work (as models and their real counterparts), and I might already have an understanding of how microprocessors are designed in principle. So, yes, a neuroscientist who decides to attend some lectures on digital logic circuits and microprocessor design might have a chance to understand it! I'm really confused about this question.


No offense, but it seems like you didn't click through.

From the abstract:

> here we take a classical microprocessor as a model organism, and use our ability to perform arbitrary experiments on it to see if popular data analysis methods from neuroscience can elucidate the way it processes information. Microprocessors are among those artificial information processing systems that are both complex and that we understand at all levels, from the overall logical flow, via logical gates, to the dynamics of transistors. We show that the approaches reveal interesting structure in the data but do not meaningfully describe the hierarchy of information processing in the microprocessor.

The idea is to apply the modern neuroscience approach to a microprocessor to see what level of understanding of the microprocessor is extracted. tl;dr: The high-level "meaning" of the processor's design is not extracted.

The purpose seems to be to examine the inherent limitations of modern neuroscience by applying it to a design that we do understand quite well apart from neuroscience, something we ourselves designed.


“Could a computer scientist understand a brain?”


Computer scientists don't even fully understand microprocessors...


Some computer scientists are trying. Hassabis for one.


Could a doctor understand a car?


Conrad Barski is an MD and author of Land of Lisp, so yes. He also understands cdr.


Can a car understand a road?


Autopilot apparently can't


Give me a neuroscientist who's willing to learn, and one week.



