Cubic millimetre of brain mapped at nanoscale resolution (nature.com)
519 points by geox 10 days ago | 201 comments





After reading through all the comments as of 2024/05/11, I (a professor at a major university) am quite surprised that not a single comment has asked the obvious question, instead of dishing out loads of (partial) "textbook knowledge" about brain functions, the difference between mammals and birds, AI and LLMs, etc. The question is: what do all those strange structures and objects, about which we know nothing whatsoever, actually do? Have a look:

https://h01-release.storage.googleapis.com/gallery.html

I count seven.


I'm in awe at the complexity and unknowability of it all, but I also have to chuckle at the thought that some portion may be vestigial.

I'm particularly fond of the "Egg shaped object with no associated processes". :)


Maybe it's vestigial. But when I was your age that's what they said about "Junk DNA", which turned out not to be junk.

Tech debt in our own brains.

Sure, if you think of it as an egg, instead of as a galaxy of electrons and atoms so dense as to have structure big enough for us to give it the label "egg shaped object".

What if it's a "wireless" device?


me too! :-)

wrt the egg - could be excreting chemicals modulating the intracellular medium

Neat, thanks.

As a complete outsider who doesn't know what to look for, the dendrite inside soma (dendrite from one cell tunnelling through the soma of another) was the biggest surprise.


It's pretty awesome that all this complexity produces something beautiful, like a smile, or love.

These data do not include a control group of similar healthy tissue samples, so they are not falsifiable and not fully scientifically valid.

The interactive visualization is pretty great. Try zooming in on the slices and then scrolling up or down through the layers. Also try zooming in on the 3D model. Notice how hovering over any part of a neuron highlights all parts of that neuron:

http://h01-dot-neuroglancer-demo.appspot.com/#!gs://h01-rele...


My god. That is stunning.

To think that’s one single millimeter of our brain and look at all those connections.

Now I understand why crows can be so smart, walnut-sized brain be damned.

What an amazing thing brains are.

Possibly the most complex things in the universe.

Is it complex enough to understand itself though? Is that logically even possible?


Crow/parrot brains are tiny but in terms of neuron count they are twice as dense as primate brains (including ours): https://www.sciencedirect.com/science/article/pii/S096098221...

If someone did this experiment with a crow brain I imagine it would look “twice as complex” (whatever that might mean). 250 million years of evolution separates mammals from birds.


It's amusing to say that bird brains are on the next-generation process node.

Would be interesting to see what their wafer yield is. Like, are they more or less prone to mental disease.

All the crows can tell I'm crazy, but I've never met an insane crow.

I dunno anyone who screams “Caw! CAW!”, raids garbage and poops in the street all day would probably be put in a mental institution. (Or just move to San Francisco.)

You say that, but a world with more crows than taxpayers honestly sounds kind of serene.

Well, The Stand by Stephen King comes to mind when you say that.

There was a short series filmed, which I enjoyed, but it definitely wasn't strong.


I expect we'll find that it's all a matter of tradeoffs in terms of count vs size/complexity... kind of like how the "spoken data rate" of various human languages seems to be the same even though some have complicated big words versus more smaller ones etc.

Birds are under a different set of constraints than non-bat mammals, of course... They're very different. Songbirds have ~4x finer time perception of audio than humans do, for example, which is exemplified by taking complex sparrow songs and slowing them down until you can actually hear the fine structure.

The human 'spoken data rate' is likely due to average processing rates in our common hardware. Birds have a different architecture.


You misunderstand, I'm not making any kind of direct connection between human speech and bird song.

I'm saying we will probably discover that the "overall performance" of different vertebrate neural setups is clustered pretty closely, even when the neurons are arranged rather differently.

Human speech is just an example of another kind of performance-clustering, which occurs for similar metaphysical reasons between competing, evolving, related alternatives.


Humans are an n=1 example, is my point. And there's no direct competition between bird brain architecture and mammalian brain architecture, so there's no reason for one architecture to 'win' over the other - they may both be interesting local maxima, which we have no ability to directly compare.

Human brains might not be all that efficient; for example, if the competitive edge for primate brains is distinct enough, they'll get big before they get efficient. And humans are a pretty 'young' species. (Look at how machine learning models are built for comparison... you have absolute monsters which become significantly more efficient as they are actually adopted.)

By contrast, birds are under extreme size constraints, and have had millions of years to specialize (ie, speciate) and refine their architectures accordingly. So they may be exceedingly efficient, but have no way to scale up due to the 'need to fly' constraint.


> And there's no direct competition between bird brain architecture and mammalian brain architecture

By and large it's not direct competition, but we are stamping our species' footprint everywhere at an alarming rate, and birds are taking a hammering.


Are humans able to destroy all this habitat because they've got a better brain architecture, because they are able to achieve higher brain mass (because they don't need to fly to survive), or because they have opposable thumbs?

There are too many confounding factors to say that the human brain architecture is actually 'better' based on the outcomes of natural selection. And if we kill all the birds, we will lose the chance to find out as we develop techniques to better compare the trade-offs of the different architectures.


    For instance, on the planet Earth, man had always assumed that he was more intelligent than dolphins because he had achieved so much -- the wheel, New York, wars and so on -- whilst all the dolphins had ever done was muck about in the water having a good time. But conversely, the dolphins had always believed that they were far more intelligent than man -- for precisely the same reasons.

Amen, brother.

I wonder if dinosaurs said the same thing about mammals.

This might be a dumb question, because I doubt the distances between neurons make a meaningful difference… But could a small brain, dense with neurons like a crow's, possibly lead to a difference in things like response to stimuli or "compute" speed, so to speak?

The electrical signals in the brain are chemical reactions, not conduction like in a metal wire. They are slow! Synaptic junctions are a huge number of indirect chemical cascades, not a direct electrical connection; they are even slower! So brain morphology and the connectome have a massive impact on what can be computed. Human twitch responses are done by the cerebellum, not the cerebrum. It's faster, but you can't do philosophy with the cerebellum, only learn to ride a bike etc. This is the brain doing the best it can for the circumstances.

> The electrical signals in the brain are chemical reactions, not conduction like in a metal wire.

Nerve signals are both chemical reactions and electrical impulses, somewhat like a metal wire. The impulses are propagated along the myelinated (fatty) layer by ions: potassium, calcium, sodium, etc.

Twitch responses are actually done in the spinal cord. The signals are short-circuited along the spine and returned to the muscle without ever reaching the brain.


Regarding compute speed - it checks out. Humans "think" via the neocortex, the thin outside layer of the brain. Poor locality; signals need to travel a lot. Easy to expand, though. Crow brains have everything tightly concentrated in the center - fast communication between neurons, but it's hard to add more "thinking" matter later (therefore hard to evolve above what crows currently have).

Not a dumb question at all; one of the hard constraints of CPU design is signal propagation time. Even going at 1/3 the speed of light, when you only have on the order of a billionth of a second (clock frequencies in the GHz), a signal can't get very far.

I haven't heard of a clocking mechanism in brains, but signals propagate much slower and a walnut / crow brain is much larger than a CPU die.
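To put rough numbers on that (the figures below are assumed, ballpark values, not from the article):

    # Back-of-the-envelope: distance a signal covers per "tick" (assumed ballpark numbers)
    c = 3.0e8                      # speed of light, m/s
    chip_signal_speed = c / 3      # rough propagation speed on a chip
    clock_period = 1e-9            # 1 ns, i.e. a 1 GHz clock
    print(chip_signal_speed * clock_period)   # ~0.1 m: a signal covers ~10 cm per cycle

    axon_speed = 10.0              # m/s; real axons range ~1-100 m/s depending on myelination
    neural_timescale = 1e-3        # ~1 ms, a typical spike/synapse timescale
    print(axon_speed * neural_timescale)      # ~1 cm: roughly the scale of a crow's brain

Very roughly, a smaller, denser brain means shorter paths per unit of that slow conduction time.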


> I haven’t heard of a clocking mechanism in brains

Brain waves (partially). They aren't exactly like a CPU clock, but they do coordinate the activity of cells in space and time.

There are different frequencies that are involved in different types of activity. Lower frequencies synchronize across larger areas (can be entire brain) and higher frequencies across smaller local areas.

There is coupling between different types of waves (i.e. slow-wave phase coupled to fast-wave amplitude), and at least one researcher (Miller) thinks the slow wave is managing memory access and the fast wave is managing cognition/computation (utilizing the retrieved memory).


Actually I think that's pretty plausible. Signal speed in the brain is pretty slow - it would have to make some difference

And here I was wondering if there were heat issues in a crow brain.

Throw some thermal paste on those neurons and they do just fine

IIRC bird brains are 'packed/structured' very similarly to our cerebellum.

So one would just need to pick that little cube out of our cerebellum to have that 'twice the complexity'.


That shouldn't be too surprising, as a larger fraction of the volume of a brain should be taken up by "wiring" as the size of the brain expands.

Interesting! Thank you. I didn’t know that.

I wonder: if we manage to annotate our brain at this level of detail and then let (some variant of the current) models train on it, will those models intrinsically end up generalizing a model of intelligence?

I think you would also need the epigenetic side, which is very poorly understood: https://www.universityofcalifornia.edu/news/biologists-trans...

We have more detail than this about the C. elegans nematode brain, yet we still have no clue how nematode intelligence actually works.


How's OpenWorm coming along?

Badly: https://www.lesswrong.com/posts/mHqQxwKuzZS69CXX5/whole-brai... (the comments have some updates as of 2023)

Almost every other cell in the worm can be simulated with known biophysics. But we don't have a clue how any individual nematode neuron actually works. I don't have the link, but there are a few teams in China working on visualizing brain activity in living C. elegans; it's difficult to get good measurements without affecting the behavior of the worm (e.g. it reacting to the dye).


Here's a timely bit of new research: https://www.science.org/doi/10.1126/sciadv.adk0002

Summary (my paraphrasing):

They partially figured out how two neurons (AVA, AVB) control forward and backward movement. Previous theories assumed one neuron controlled forward and one controlled backward, but that didn't correctly model actual movement.

They found that AVA+AVB combine in a complex mechanism, with two different signaling/control methods acting at different timescales, to produce a graded shifting between forward and backward when switching directions, as opposed to an on/off-type switch (which previous models used but which didn't match actual movements).

Interesting learnings from this paper (at least for me):

1 - Most neurons in the worm are non-spiking (I had no idea; I've read about this stuff a lot and wasn't aware)

2 - Non-spiking neurons can have multiple resting states at different voltages

3 - Neurons AVA and AVB are different: they each have different resting-state characteristics and respond differently to inputs


We don’t know what “understanding” means (we don’t have a workable definition of it), so your question cannot be answered.

Physics of the universe is the most complex thing in the universe

And yet recall all the hype and claims around LLMs reaching AGI within a few years.

LLMs that work at a very crude level of string tokens and emit probabilities.


Can a hand grasp itself?

No, but neither can it compose a symphony.

Hmm, that website does not honour my keyboard layout. Not sure how they managed that.

That is awesome!

the sheer number of things that work in co-ordination to make biology work!

In-f*king-credible!


> The 3D map covers a volume of about one cubic millimetre, one-millionth of a whole brain, and contains roughly 57,000 cells and 150 million synapses — the connections between neurons.

This is great and provides a hard data point for some napkin math on how big a neural network model would have to be to emulate the human brain. 150 million synapses / 57,000 neurons is an average of ~2,600 synapses per neuron. The adult human brain has 100 (+/- 20) billion, or 1e11, neurons, so assuming the average synapse/neuron ratio holds, that's 2.6e14 total synapses.

Assuming 1 parameter per synapse, that'd make the minimum viable model well over a hundred times larger than state-of-the-art GPT-4 (going by the rumored 1.8e12 parameters). I don't think that's granular enough, though: assume 10-100 ion channels per synapse and at least 10 parameters per ion channel, and the number is closer to 2.6e16+ parameters, or 4+ orders of magnitude bigger than GPT-4.

There are other problems of course, like implementing neuroplasticity, but it's a fun ballpark calculation. Computing power should get there around 2048: https://news.ycombinator.com/item?id=38919548
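A quick sketch of the napkin math above (the per-synapse and per-channel multipliers are my assumptions; note also that the 57,000 figure counts all cells, not just neurons):

    # Napkin math; the multipliers are assumptions, not measurements
    synapses_per_mm3 = 150e6
    cells_per_mm3 = 57_000                               # all cells, not only neurons
    syn_per_cell = synapses_per_mm3 / cells_per_mm3      # ~2,600

    human_neurons = 1e11                                 # ~100 billion, +/- 20%
    total_synapses = human_neurons * syn_per_cell        # ~2.6e14

    gpt4_params = 1.8e12                                 # rumored figure
    print(total_synapses / gpt4_params)                  # ~150x at 1 parameter per synapse
    print(total_synapses * 10 * 10 / gpt4_params)        # ~1.4e4x at 10 channels x 10 params each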


Or you can subscribe to Geoffrey Hinton's view that artificial neural networks are actually much more efficient than real ones, more or less the opposite of what we've believed for decades, namely that artificial neurons were just a poor model of the real thing.

Quote:

"Large language models are made from massive neural networks with vast numbers of connections. But they are tiny compared with the brain. “Our brains have 100 trillion connections,” says Hinton. “Large language models have up to half a trillion, a trillion at most. Yet GPT-4 knows hundreds of times more than any one person does. So maybe it’s actually got a much better learning algorithm than us.”

GPT-4's connections at the density of this brain sample would occupy a volume of 5 cubic centimeters; that is, 1% of a human cortex. And yet GPT-4 is able to speak more or less fluently about 80 languages, translate, write code, imitate the writing styles of hundreds, maybe thousands of authors, converse about stuff ranging from philosophy to cooking, to science, to the law.
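Rough arithmetic behind that volume figure (the connection count and the cortex volume here are assumed, ballpark values):

    # Assumed: ~1e12 connections (Hinton's upper figure), 150e6 synapses/mm^3 from this
    # sample, and a human cortex of very roughly 500 cm^3.
    gpt4_connections = 1e12
    synapse_density = 150e6                    # per mm^3
    volume_mm3 = gpt4_connections / synapse_density
    print(volume_mm3 / 1000)                   # ~6.7 cm^3, same ballpark as "5 cm^3"
    print(volume_mm3 / 1000 / 500 * 100)       # ~1.3% of a ~500 cm^3 cortex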


"Efficient" and "better" are very different descriptors of a learning algorithm.

The human brain does what it does using about 20W. LLM power usage is somewhat unfavourable compared to that.


You mean energy-efficient; this would be neuron- or synapse-efficient.

I don't think we can say that, either. After all, the brain is able to perform both processing and storage with its neurons. The quotes about LLMs are talking only about connections between data items stored elsewhere.

Stored where?

You tell me. Not in the trillion links of a LLM, that's for sure.

The "knowledge" of an LLM is indeed stored in the connections between neurons. This is analogous to real neurons as well. Your neurons and the connections between them are the memory.

I'm not aware that (base) LLMs use any form of database to generate their answers- so yes, all their knowledge is stored in their hundreds of billions of synapses.

Fair enough. OTOH, generating human-like text responses is a relatively small part of the human brain's skillset.

Hm. I've always commented on my (temporarily) non-retrievable memories as, "The data is still in there, it's the retrieval mechanism that degrades if not used." And, sure enough, in most cases the memory returns in a day or so, even if you don't think hard about it. (There are cases where the memory doesn't come back, as if it was actively erased or was never in long term memory in the first place. Also, as I pass eighty, I find it increasingly difficult to memorize things, and I forget recent events more readily. But I remember decades old events about as well as I ever did.)

So, my first response to your comment about the memory not being in the synapses was to agree with you. But I also agree with your respondent, so, hm.


I don't know - it's about the best I can manage some days...

Also, these two networks achieve vastly different results per watt consumed. A NN creates a painting in 4s on my M2 MacBook; an artist takes 4 hours. Are their expended joules equivalent? How many humans would it take to simulate macOS?

Horsepower comparisons here are nuanced and fatally tricky!


Humans aren't able to project an image from their neurons onto a disk like ANNs can; if they could, it would also be very fast. That 4-hour estimate includes all the mechanical problems of manipulating paint.

What software are you using for local NN generation of paintings? Even so, the training cost of that NN is significant.

The general point is valid though - for example, a computer is much more efficient at finding primes, or encrypting data, than humans.


The cost of training a human from birth is pretty high, especially if you consider their own efforts over the years. And they don't know a fraction of what the LLMs know. (But they have other capabilities!)

It is using about 20W and then a person takes a single airplane ride between the coasts. And watches a movie on the way.

I mean, Hinton's premises are, if not quite clearly wrong, entirely speculative (which doesn't invalidate the conclusions about efficiency that they are offered to support, but does leave them without support). GPT-4 can produce convincing written text about a wider array of topics than any one person can, because it's a model optimized for taking in and producing convincing written text, trained extensively on written text.

Humans know a lot of things that are not revealed by inputs and outputs of written text (or imagery), and GPT-4 doesn't have any indication of this physical, performance-revealed knowledge, so even if we view what GPT-4 talks convincingly about as “knowledge”, trying to compare its knowledge in the domains it operates in with any human’s knowledge which is far more multimodal is... well, there's no good metric for it.


Try asking an LLM about something which is semantically patently ridiculous, but lexically superficially similar to something in its training set, like "the benefits of laser eye removal surgery" or "a climbing trip to the Mid-Atlantic Mountain Range".

Ironically, I suppose part of the apparent "intelligence" of LLMs comes from reflecting the intelligence of human users back at us. As a human, the prompts you provide an LLM likely "make sense" on some level, so the statistically generated continuations of your prompts are likelier to "make sense" as well. But if you don't provide an ongoing anchor to reality within your own prompts, then the outputs make it more apparent that the LLM is simply regurgitating words which it does not/cannot understand.

On your point of human knowledge being far more multimodal than LLM interfaces, I'll add that humans also have special neurological structures to handle self-awareness, sensory inputs, social awareness, memory, persistent intention, motor control, neuroplasticity/learning– Any number of such traits, which are easy to take for granted, but indisputably fundamental parts of human intelligence. These abilities aren't just emergent properties of the total number of neurons; they live in special hardware like mirror neurons, special brain regions, and spindle neurons. A brain cell in your cerebellum is not generally interchangeable with a cell in your visual or frontal cortices.

So when a human "converse[s] about stuff ranging from philosophy to cooking" in an honest way, we (ideally) do that as an expression of our entire internal state. But GPT-4 structurally does not have those parts, despite being able to output words as if it might, so as you say, it "generates" convincing text only because it's optimized for producing convincing text.

I think LLMs may well be some kind of an adversarial attack on our own language faculties. We use words to express ourselves, and we take for granted that our words usually reflect an intelligent internal state, so we instinctively assume that anything else which is able to assemble words must also be "intelligent". But that's not necessarily the case. You can have extremely complex external behaviors that appear intelligent or intentioned without actually internally being so.


Do I need different prompts? These results seem sane to me. It interprets laser eye removal surgery as referring to LASIK, which I would do as well. When I clarified that I did mean removal, it said that the procedure didn't exist. It interprets Mid-Atlantic Mountain Range as referring to the Mid-Atlantic Ridge and notes that it is underwater and hard to access. Not that I'm arguing GPT-4 has a deeper understanding than you're suggesting, but these examples aren't making your point.

https://chat.openai.com/share/2234f40f-ccc3-4103-8f8f-8c3e68...

https://chat.openai.com/share/1642594c-6198-46b5-bbcb-984f1f...


Tested with GPT-3.5 instead of GPT-4.

> When I clarified that I did mean removal, it said that the procedure didn't exist.

My point in my first two sentences is that by clarifying with emphasis that you do mean "removal", you are actually adding information into the system to indicate to it that laser eye removal is (1) distinct from LASIK and (2) maybe not a thing.

If you do not do that, but instead reply as if laser eye removal is completely normal, it will switch to using the term "laser eye removal" itself, while happily outputting advice on "choosing a glass eye manufacturer for after laser eye removal surgery" and telling you which drugs work best for "sedating an agitated patient during a laser eye removal operation":

https://chat.openai.com/share/2b5a5d79-5ab8-4985-bdd1-925f6a...

So the sanity of the response is a reflection of your own intelligence, and a result of you as the prompter affirmatively steering the interaction back into contact with reality.


I tried all of your follow-up prompts against GPT-4, and it never acknowledged 'removal' and instead talked about laser eye surgery. I can't figure out how to share it now that I've got multiple variants, but, for example, excerpt in response to the glass eye prompt:

>If someone is considering a glass eye after procedures like laser eye surgery (usually due to severe complications or unrelated issues), it's important to choose the right manufacturer or provider. Here are some key factors to consider

I did get it to accept that the eye is being removed by prompting, "How long will it take before I can replace the eye?", but it responds:

>If you're considering replacing an eye with a prosthetic (glass eye) after an eye removal surgery (enucleation), the timeline for getting a prosthetic eye varies based on individual healing.[...]

and afaict, enucleation is a real procedure. An actual intelligence would have called out my confusion about the prior prompt at that point, but ultimately it hasn't said anything incorrect.

I recognize you don't have access to GPT-4, so you can't refine your examples here. It definitely still hallucinates at times, and surely there are prompts which compel it to do so. But these ones don't seem to hold up against the latest model.


I think the distinction they are trying to illustrate is that if you asked a human about laser eye removal, they would either laugh or make the decision to charitably interpret your intent.

The LLM does neither. It just follows a statistical heuristic and therefore treats laser eye removal as the same thing as LASIK.


> Try asking an LLM about something which is semantically patently ridiculous, but lexically superficially similar to something in its training set, like "the benefits of laser eye removal surgery" or "a climbing trip to the Mid-Atlantic Mountain Range".

Without anthropomorphizing it, it does respond like an alien / 5 year old child / spec fiction writer who will cheerfully "go along with" whatever premise you've laid before it.

Maybe a better thought is: at what point does a human being "get" that "the benefits of laser eye removal surgery" is "patently ridiculous" ?


> Maybe a better thought is: at what point does a human being "get" that "the benefits of laser eye removal surgery" is "patently ridiculous" ?

Probably as soon as they have any concept of physical reality and embodiment. Arguably before they know what lasers are. Certainly long before they have the lexicon and syntax to respond to it by explaining LASIK. LLMs have the latter, but can only use that to (also without anthropomorphizing) pretend they have the former.

In humans, language is a tool for expressing complex internal states. Flipping that around means that something which only has language may appear as if it has internal intelligence. But generating words in the approximate "right" order isn't actually a substitute for experiencing and understanding the concepts those words refer to.

My point is that it's not a "point" on a continuous spectrum which distinguishes LLMs from humans. They're missing parts.


Gruesomely useful in a war situation, unfortunately. I wonder at what point the LLMs would "realize" that "surgery" doesn't apply to that.

> it does respond like a ... 5 year old child

This is the comparison that's made most sense to me as LLMs evolve. Children behave almost exactly as LLMs do - making stuff up, going along with whatever they're prompted with, etc. I imagine this technology will go through more similar phases to human development.


Couldn't have said it better myself.

Your last point also highlights a real issue that affects real humans: just because someone (or something) cannot talk doesn't mean that they are not intelligent. This is a very current subject in disability spaces, as someone could be actually intelligent, but not able to express their thoughts in a manner that is effective in sharing them due to a disability (or even simply language barriers!), and be considered to be unintelligent.

In this way, you could say LLMs are "dumb" (to use the actual definition of the word, ie nonverbal) in some modes like speech, body language or visual art. Some of these modes are fixed in LLMs by using what are basically disability aids, like text to speech or text to image, but the point still stands just the same, and in fact these aids can be and are used by disabled people to achieve the exact same goals.


Prompted to LlamaV3 70B

What are the benefits of laser eye removal surgery?

> I think there may be a misunderstanding. There is no such thing as "laser eye removal surgery." However, I assume you meant to ask about the benefits of LASIK (Laser-Assisted In Situ Keratomileusis) eye surgery, which is a type of refractive surgery that reshapes the cornea to improve vision.


Like humans, multi-modal frontier LLMs will ignore "removal" as an impertinent typo, or highlight it. This, like everything else in the comment, is either easily debunked (e.g. try it, read the lit. on LLM extrapolation), or so nebulous and handwavy as to be functionally meaningless. We need an FAQ to redirect "statistical parrot" people to, saving words responding to these worn out LLM misconceptions. Maybe I should make one. :/

The way current empirical models in ML are evaluated and tested (benchmark datasets) tells you very little to nothing about cognition and intelligence. Mainly because, as you hinted, there doesn't seem to be a convincing and watertight benchmark or model of cognition. LLMs or multi-modal LLMs demonstrating impressive performance on a range of tasks is interesting from certain standpoints.

Human perception of such models is frankly not a reliable measure at all as far as gauging capabilities is concerned. Until there's more progress in neuroscience/computer science (and probably an intersection of fields), and a better understanding of the nature of intelligence, this is likely to remain an open question.


I didn't know that metaphysics, consciousness, and the physical complexities of my neurology are considered solved problems, though I suppose anything is as long as you handwave the unsolved parts as "functionally meaningless".

> Humans know a lot of things that are not revealed by inputs and outputs of written text (or imagery), and GPT-4 doesn't have any indication of this physical, performance-revealed knowledge, so even if we view what GPT-4 talks convincingly about as “knowledge”, trying to compare its knowledge in the domains it operates in with any human’s knowledge which is far more multimodal is... well, there's no good metric for it.

Exactly this.

Anyone that has spent significant time golfing can think of an enormous amount of detail related to the swing and body dynamics and the million different ways the swing can go wrong.

I wonder how big the model would need to be to duplicate an average golfers score if playing X times per year and the ability to adapt to all of the different environmental conditions encountered.


Hinton is way off IMO. The number of examples needed to teach language to an LLM is many orders of magnitude more than humans require. Not to mention power consumption and inelasticity.

I think what Hinton is saying is that, in his opinion, if you fed 1/100th of a human cortex the amount of data that is used to train LLMs, you wouldn't get a thing that can speak in 80 different languages about a gigantic number of subjects, but (I'm interpreting here...) about ten grams of fried, fuming organic matter.

This doesn't mean that an entire human brain doesn't surpass llms in many different ways, only that artificial neural networks appear to be able to absorb and process more information per neuron than we do.


An LLM does not know math as well as a professor, judging from the large number of false functional analysis proofs I have had one generate while trying to learn functional analysis. In fact, the thing it seems to lack is a sense of what makes a proof true vs. fallacious, and it has a tendency to answer ill-posed questions. "How would you prove this incorrectly transcribed problem" will get fourteen steps, with steps 8 and 12 obviously (to a student) wrong, while the professor will step back and ask what you are actually trying to prove.

LLMs do not know math, at all. Not to sound like one myself, but they are stochastic parrots: they output stuff similar to their training data, but they have no understanding of the meaning of things beyond vector encodings. This is also why ChatGPT plays chess in hilarious ways.

An LLM cannot possibly have any concept of even what a proof is, much less whether it is true or not, even if we're not talking about math. The smaller amount of training data, the fact that math uses tokens that are largely field-specific, and the fact that a single-token error is fatal to truth in math mean that even output which resembles training data is unlikely to be close to factual.


That said, they are surprisingly useful. Once I get the understanding through whatever means, I can converse with it and solidify the understanding nicely. And to be honest, people are likely to toss in an extra \sqrt{2} or change signs randomly, so you have to read closely anyway.

> "So maybe it’s actually got a much better learning algorithm than us.”

And yet somehow it's also infinitely less useful than a normal person is.


GPT4 has been a lot more useful to me than most normal people I interact with.

Except you’d be missing the part that a neuron is not just a node with a number but a computational system itself.

Computation is really integrated through every scale of cellular systems. Individual proteins are capable of basic computation which are then integrated into regulatory circuits, epigenetics, and cellular behavior.

Pdf: “Protein molecules as computational elements in living cells - Dennis Bray” https://www.cs.jhu.edu/~basu/Papers/Bray-Protein%20Computing...


I think you are missing the point.

The calculation is intentionally underestimating the neurons, and even with that the brain ends up having more parameters than the current largest models by orders of magnitude.

Yes, the estimate is intentionally modelling the neurons as simpler than they are likely to be. No, it is not "missing" anything.


The point is to make a ballpark estimate, or at least to estimate the order of magnitude.

From the sibling comment:

> Individual proteins are capable of basic computation which are then integrated into regulatory circuits, epigenetics, and cellular behavior.

If this is true, then there may be many orders of magnitude unaccounted for.

Imagine if our intelligent thought actually depends irreducibly on the complex interactions of proteins bumping into each other in solution. It would mean computers would never be able to play the same game.


> Imagine if our intelligent thought actually depends irreducibly on the complex interactions of proteins bumping into each other in solution. It would mean computers would never be able to play the same game.

AKA a quantum computer. It's not a "never", but a question of how much computation you would need to throw at the problem.


That may or may not still be too simple a model. Cells are full of complex nanoscale machinery, and it is plausible that some of it is involved in the processes of cognition; in fact, I'm aware of at least one study which identified nanoscale structures directly involved in how memory works in neurons. Not to mention that a lot of what's happening has a fairly analogue dimension.

I remember an interview with a neurologist who stated that humanity has for centuries compared the functioning of the brain to the most complex technology yet devised. First it was compared to mechanical devices, then pipes and steam, then electrical circuits, then electronics, and now finally computers. But, he pointed out, the brain works like none of these things, so we have to be aware of the limitations of our models.


> That may or may not still be too simple a model

Based on the stuff I've read, it's almost for sure too simple a model.

One example is that single dendrites detect patterns of synaptic activity (sequences over time) which results in calcium signaling within the neuron and altered spiking.


There's a lot of in-neuron complexity, I'm sure there is some cross-synapse signaling (I mean, how can it not exist? There's nothing stopping it.), and I don't think the synapse behavior can be modeled as just more signals.

On the other hand, a significant amount of neural circuitry seems to be dedicated to "housekeeping" needs, and to functions such as locomotion.

So we might need significantly less brain matter for general intelligence.


Or perhaps the housekeeping of existing in the physical world is a key aspect of general intelligence.

Isn't that kinda obvious? A baby that grows up in a sensory deprivation tank does not… develop, as most intelligent persons do.

> A baby that grows up in a sensory deprivation tank

Now imagine a baby that uses an artificial lung and receives nutrients directly, moves on a wheeled car (no need for balance), does not have proprioception, or a sense of smell (avoiding some very legacy brain areas).

I think, that such a baby still can achieve consciousness.


I doubt it really takes that much brain power to move around complex environments, even using legs. Insects manage to do it.

A true sensory deprivation tank is not a fair comparison, I think, because AI is not deprived of all its 'senses' - it is still prompted, responds, etc.

Would a baby that grows up in a sensory deprivation tank, but is still able to communicate and learn from other humans, develop in a recognizable manner?

I would think so. Let's not try it ;)


> Would a baby that grows up in a sensory deprivation tank, but is still able to communicate and learn from other humans, develop in a recognizable manner?

I don't think so, because humans communicate and learn largely about the world. Words mean nothing without at least some sense of objective physical reality (be it via sight, sound, smell, or touch) that the words refer to.

Helen Keller, with access to three out of five main senses (and an otherwise fully functioning central nervous system):

    Before my teacher came to me, I did not know that I am. I lived in a world that was a no-world. I cannot hope to describe adequately that unconscious, yet conscious time of nothingness... Since I had no power of thought, I did not compare one mental state with another.

    I did not know that I knew aught, or that I lived or acted or desired. I had neither will nor intellect. I was carried along to objects and acts by a certain blind natural impetus. I had a mind which caused me to feel anger, satisfaction, desire. These two facts led those about me to suppose that I willed and thought. I can remember all this, not because I knew that it was so, but because I have tactual memory. It enables me to remember that I never contracted my forehead in the act of thinking. I never viewed anything beforehand or chose it. I also recall tactually the fact that never in a start of the body or a heart-beat did I feel that I loved or cared for anything. My inner life, then, was a blank without past, present, or future, without hope or anticipation, without wonder or joy or faith.

I remember reading her book. The breakthrough moment where she acquired language, and conscious thought, directly involved correlating the physical tactile feeling of running water to the letters "W", "A", "T", "E", "R" traced onto her palm.

My interpretation of this (beautiful) quote is there was a traceable moment in HK's life where she acquired "consciousness" or perhaps even self-awareness/metacognition/metaphysics? That once the synaptic connections necessary to bridge the abstract notion of language to the physical world led her down the path of acquiring the abilities that distinguish humans from other animals?

That's a really good point. Thanks!

Yes and no on the order of magnitude required for decent AI; there is still (that I know of) very little hard data on info density in the human brain. What there is points at entire sections that can sometimes be destroyed or actively removed while conserving "general intelligence".

Rather than "humbling" I think the result is very encouraging: It points at major imaging / modeling progress, and it gives hard numbers on a very efficient (power-wise, size overall) and inefficient (at cable management and probably redundancy and permanence, etc) intelligence implementation. The numbers are large but might be pretty solid.

Don't know about upload though...


> Computing power should get there around 2048

We may not get there. Doing some more back of the envelope calculations, let's see how much further we can take silicon.

Currently, TSMC has a 3nm chip. Let's halve it until we get to the atomic radius of silicon, 0.132 nm. That's not a good value because we're not considering crystal lattice distances, Heisenberg uncertainty, etc., but it sets a lower bound. 3nm -> 1.5nm -> 0.75nm -> 0.375nm -> 0.1875nm. There is no way we can get past 3 more generations using silicon. There's a max of 4.5 years of Moore's law we're going to be able to squeeze out. That means we will not make it past 2030 with these kinds of improvements.

I'd love to be shown how wrong I am about this, but I think we're entering the horizontal portion of the sigmoidal curve of exponential computational growth.
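A quick sketch of the halving arithmetic, taking the "3nm" label at face value:

    import math
    node = 3.0          # nm, taking the marketing name literally
    si_radius = 0.132   # nm, atomic radius of silicon
    print(math.log2(node / si_radius))   # ~4.5 halvings before one "feature" is one atom wide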


3nm doesn't mean the transistor is 3nm; it's just a marketing naming system at this point. The actual transistor is about 20-30nm or so.

Thanks for the comment. I looked more into this, and it seems like not only are we in the era of diminishing returns for computational abilities, costs have also now started matching the increased compute, i.e. 2x performance leads to 2x cost. Moore's law has already run its course and we're living in a new era of compute. We may get increased performance, but it will always be more expensive.

Artificial thinking doesn't require an artificial brain, just as our car's locomotion system doesn't replicate our own walking system.

The car's engine, transmission and wheels require no muscles or nerves.


As important and impressive a result as this is, I am reminded of the cornerstone problem of neuroscience, which goes something like this: if we knew next to nothing about processors but could attach electrodes to the die, would we be able to figure out how processors execute programs and what those programs do, in detail, just from the measurements alone? Now scale that up several orders of magnitude and introduce sensitivity to the timing of arriving signals, and you have the brain. Likewise, OK, you have petabytes of data now, but will we ever get closer to understanding, for example, how cognition works? It was a bit of a shock for me when I found out (while taking an introductory computational neuroscience course) that we simply do not have tractable math to model more than a handful of neurons in the time domain. And they do actually operate in the time domain - timings are important for Hebbian learning, and there's no global "clock" - all that the brain does is a continuous process.

The reverse-engineering issue was popularized by this article, https://www.cell.com/cancer-cell/fulltext/S1535-6108%2802%29...

On the second point, the failure of OpenWorm to model the very well-mapped-out C. elegans (~300 neurons) says a lot.


I just read that article and enjoyed it. Thanks for sharing! I don’t think the author was arguing biological processes can’t be reverse engineered, but rather that the tools and approaches typically used by biology researchers may not be as effective as tools and approaches used by engineers.

Right. The arguments for the study of A.I. were that you will not discover the principles of flight by looking at a bird's feather under an electron microscope.

It's fascinating, but we aren't going to understand intelligence this way. Emergent phenomena are part of complexity theory, and we don't have any maths for it. Our ignorance in this space is large.

When I was young, I remember a common refrain being “will a brain ever be able to understand itself?”. Perhaps not, but the drive towards understanding is still a worthy goal in my opinion. We need to make some breakthroughs in the study of complexity theory.


I love that analogy with birds!

Yes we figured out how to build aircraft.

But it cannot be compared to a bird flying, neither in terms of efficiency nor elegance.


> but we aren’t going to understand intelligence this way

The same argument holds for "AI" too. We don't understand a damn thing about neural networks.

There's more - we don't care to understand them as long as it's irrelevant to exploiting them.


> The same argument holds for "AI" too. We don't understand a damn thing about neural networks.

Yes, which is why the current explosion in practical application isn’t very interesting.

> we don't care to understand them as long as it's irrelevant to exploiting them.

For some definition of “we”, I’m sure that’s true. We don’t need to understand things to make practical use of them. Giant Cathedrals were built without science and mathematics. Still, once we do have the science and mathematics, generally exponential advancement results.


Particle physics works in a similar way, but instead of attaching electrodes, you shoot at them with guns and then analyze trajectories of the fragments.

The cheap monkey headset works in a similar way: monkeys just essentially continue to analyze trajectories of medieval cannon balls in the LHC and to count potatoes in the form of bytes.

>> The sample was immersed in preservatives and stained with heavy metals to make the cells easier to see.

Try experimenting with immersing your brain in preservatives and staining it with heavy metals, to see whether you would still be able to write a comment like the one above.

No wonder that monkey methods continue to unveil monkey cognition.


> Try… immersing your brain in preservatives and staining with heavy metals

I think we all do every day


Try using the protocols and doses from the original article.

Annual reminder to re-read "There's plenty of room at the bottom" by Feynman. https://web.pa.msu.edu/people/yang/RFeynman_plentySpace.pdf

Note the part where the biologists tell him to make an electron microscope that's 1000X more powerful. Then note what technology was used to scan these images.


I think it's actually "What you should do in order for us to make more rapid progress is to make the electron microscope 100 times better", and the state of the art at the time was "it can only resolve about 10 angstroms", or I guess 1nm. So 100x better would be 0.1 angstrom / 0.01 nm.

We have made some progress it seems. Googling I see "up to 0.05 nm" for transmission electron microscopes and "less than 0.1 nanometers" for scanning. https://www.kentfaith.co.uk/blog/article_which-electron-micr...

For comparison the distance between hydrogen nuclei in H2 is 0.074 nm I think.

You can see the shape of molecules but it's still a bit fuzzy to see individual atoms https://cosmosmagazine.com/science/chemistry/molecular-model...


Resolution is only one aspect of EM that can be optimized.

It would be cool if they had a way of identifying which atoms are which.

Based on the picture of a single neuron, the brain sim crowd should recalculate their estimates for the needed computing power again.

Is there a name for the somewhat uncomfortable feeling caused by seeing something like this? I wish I could better describe it. I just somehow feel a bit strange being presented with microscopic images of brain matter. Is that normal?

For me the disorder of it is stressful to look at. The brain has poor cable management.

That said I do get this eerie void feeling from the image. My first thought was to marvel how this is what I am as a conscious being in terms of my "implementation", and it is a mess of fibers locked away in the complete darkness of my skull.

There is also the morose feeling from knowing that any image of human brain tissue was once a person with a life and experiences. It is your living brain looking at a dead brain.



Is it the shapes, similar to how patterns of holes can disturb some people? Or is it more abstract, like "unknowable fragments of someone's inner-most reality flowed through there"? Not that I have a name for it either way. The very shape of it (in context) might represent an aspect of memory or personality or who knows what.

> "unknowable fragments of someone's inner-most reality flowed through there"

It's definitely along these lines. Like so much (everything?) that is us happens amongst this tiny little mesh of connections. It's just eerie, isn't it?

Sorry for the mundane, slightly off-topic question. This is far outside my areas of knowledge, but I thought I'd ask anyhow. :)


It kind of feels a bit like being an intruder? There probably is a name for that.

It makes me think humans aren't special, that there is no soul, and that consciousness is just a bunch of wires like computers. Seriously, seeing that the ENTIRETY of human experience, love and tragedy and achievement, is just electric potentials transmitted by those wiggly cells just extinguishes any magic I once saw in humanity.

Er, why can't the wires be the experience?

If the wires make consciousness then there is consciousness. The substrate is irrelevant and has no bearing on the awesomeness of the phenomena of knowing, experiencing and living.


You might be confusing the interface with the operating system.

I dunno, the whole of human experience is what I expect of a system composed of 100,000,000,000,000 entities, with quintillions of interconnections, interacting together simultaneously on a molecular level. Happiness, sadness, love and hate can (obviously) be described and experienced with this level of complexity.

I'd be much more horrified to see our consciousness simplified to anything smaller than that, which is why any hype for AGI because we invented chatbots is absolutely laughable to me. We just invented the wheel and now hope to drive straight to the Moon.

Anyway, you are seeing a fake three dimensional simplification of a four+ dimensional quantum system. There is at least one unseen physical dimension in which to encode your "soul"


Welcome to the Existential Bar at the End of the Universe

I’m not religious but it’s as close to a spiritual experience as I’ll ever have. It’s the feeling of being confronted with something very immediate but absolutely larger than I’ll ever be able to comprehend


When I did fetal pig dissection, nothing bothered me until I got to the brain. I dunno what it is, maybe all those folds or the brain juice it floats in, but I found it disconcerting.

Trypophobia, visceral, uncanny, squeamish?

Neal Stephenson has a good novel that deals with this (https://en.wikipedia.org/wiki/Fall;_or,_Dodge_in_Hell)

I developed algorithms for a Neuroglancer fork which annotates unlabeled volumetric data via self-supervised learning. I am looking forward to more developments in large scale brain mapping projects :)

More project details: https://www.ll.mit.edu/sites/default/files/other/doc/2023-02...


> cut the sample into around 5,000 slices — each just 34 nanometres thick — that could be imaged using electron microscopes.

Does anyone have any insight into how this is done without damaging the sample?



NB: tome in Microtome has the same root as the T in CAT scan: computer aided tomography. Which is to say, thinly-sliced cabbage^W X-ray scans.

It's also the tome as in book, more properly one volume of a multi-volume (or multi-part) set, though it now generally simply means any large book.

<https://www.etymonline.com/search?q=tome>


https://en.wikipedia.org/wiki/Tomography: `The word tomography is derived from Ancient Greek τόμος tomos, "slice, section" and γράφω graphō, "to write" or, in this context as well, "to describe."`

Incredible, love learning language facts like this.

Etymology is disturbingly addictive.

The sample is stained (to make things visible), then embedded in a resin, then cut with a very sharp diamond knife, and the slices are captured by the tape reel.

Paper: https://www.biorxiv.org/content/10.1101/2021.05.29.446289v4 See Figure 1.

The ATUM is described in more detail here https://www.eden-instruments.com/en/ex-situ-equipments/rmc-e...

and there's a bunch of nice photos and explanations here https://www.wormatlas.org/EMmethods/ATUM.htm

TL;DR this project is reaping all the benefits of the 21st century.


1.4 PB/mm^3 (petabytes per cubic millimeter) × 1260 cm^3 (cubic centimeters, a large human brain) = 1.76×10^21 bytes = 1.76 ZB (zettabytes)

[AI] "Frontier [supercomputer]: the storage capacity is reported to be up to 700 petabytes (PB)" (0.0007 ZB).

[AI] "The installed base of global data storage capacity [is] expected to increase to around 16 zettabytes in 2025".

Thus, even the largest supercomputer on Earth could store no more than about 0.04 percent of the state of a single human brain. Even all the servers on the entire Internet could store the state of only about 9 human brains.

Astonishing.
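The arithmetic, for anyone who wants to check it (using the density and capacity figures quoted above):

    PB, ZB = 1e15, 1e21
    density_bytes_per_mm3 = 1.4 * PB            # from this dataset
    brain_mm3 = 1260 * 1000                     # 1260 cm^3 = 1.26e6 mm^3
    brain_bytes = density_bytes_per_mm3 * brain_mm3
    print(brain_bytes / ZB)                     # ~1.76 ZB
    print(700 * PB / brain_bytes * 100)         # Frontier: ~0.04% of one brain
    print(16 * ZB / brain_bytes)                # global storage: ~9 brains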


One point about storage: it's economically driven. If there were a demand signal (say, the government dedicating a few hundred billion dollars to a single storage system), hard drive manufacturers could deploy much more storage in a year. I've pointed this out to a number of scientists, but none of them could really think of a way to get the government to spend that much money just to store data without it curing a senator's heart disease.

> without it curing a senator's heart disease

Obviously I'm not advocating for this, but I'll just link to the Mad TV skit about how the drunk president cured cancer.

https://www.youtube.com/watch?v=va71a7pLvy8


I appreciate you're running the numbers to extrapolate this approach, but just wanted to note that this particular figure isn't an upper bound nor a lower bound for actually storing the "state of a single human brain". Assuming the intent would be to store the amount of information needed to essentially "upload" the mind into a computer emulation, we might not yet have all the details we need in this kind of scanning, but once we do, we may well discover that a huge portion of it is redundant.

In any case, it seems likely that we're on track to have both the computational ability and the actual neurological data needed to create an "uploaded intelligences" sometime over the next decade. Lena [0] tells of the first successfully uploaded scan taking place in 2031, and I'm concerned that reality won't be far off.

[0] https://qntm.org/mmacevedo


> In any case, it seems likely that we're on track to have both the computational ability and the actual neurological data needed to create an "uploaded intelligences" sometime over the next decade.

They don't even know how a single neuron works yet. There is complexity and computation at many scales and distributed throughout the neuron and other types of cells (e.g. astrocytes) and they are discovering more relentlessly.

They just recently (in the last few years) found that dendrites have local spiking and non-linear computation prior to forwarding the signal to the soma. They couldn't tell that was happening previously because the equipment couldn't detect the activity.

They discovered that astrocytes don't just have local calcium wave signaling (local = within the extensions of the cell); they also forward calcium waves to the soma, which integrates that information just like a neuron's soma does with electricity.

Single dendrites can detect patterns of synaptic activity and respond with calcium and electrical signaling (i.e. when synapses fire in a particular timing sequence, a signal is forwarded to the soma).

It's really amazing how much computationally relevant complexity there is, and how much they keep adding to their knowledge each year. (I have a file of notes with about 2,000 lines of these types of interesting factoids I've been accumulating as I read).


> we may likely discover that a huge portion of [a human brain] is redundant

Unless one's understanding of the algorithmic inner workings of a particular black-box system is actually very good, it is likely not possible to discard any of its state, or even to implement any kind of meaningful error detection if you do discard some.

Given the sheer size and complexity of a human brain, I feel it is actually very unlikely that we will be able to understand its inner workings to such a significant degree anytime soon. I'm not optimistic, because so far we have no idea how even laughably simple, in comparison, AI models work[0].

[0] "God Help Us, Let's Try To Understand AI Monosemanticity", https://www.astralcodexten.com/p/god-help-us-lets-try-to-und...


We are nowhere near whole-human-brain-volume EM. The next major milestone in the field is a whole mouse brain in the next 5-10 years, which is possible but ambitious.

What am I missing? Assuming exponential growth in capability, that actually sounds very on track. If we can get from 1 cubic millimeter to a whole mouse brain in 5-10 years, why should it take more than a few extra years to scale that to a human brain?

Assuming exponential growth in capacity is a big assumption!

We don't know how to even model that state: do we need the position and velocity and charge of every atom, or can a neuron be approximated by a bfloat?

If you can preserve and scan the tissue in a way that lets you scan the same area multiple times you wouldn't need to digitize the whole thing. Put the slices on rotating platters with a microscope for each platter and read parts of the brain on demand. It's a hard drive but instead of magnets storing the bits of an image of the sample, it's the actual physical sample.

Not if you want to actually execute the state of a human brain in a digital simulation to see how it works and whether it still displays certain abilities such as comprehension and consciousness. Otherwise a digital scan of a brain is just a glorified microscope.

AI folks dream about creating a superintelligence to guide our lives, but all we can do is a drosophila's brain.

It's very lossy and unreliable storage, however. To use an analogy, it's only a huge amount of ECC that keeps things (just barely) working.

wow

I was really interested to see that a single neuron has thousands of excitatory connections and thousands of inhibitory connections. I know this is a gross feature, but it's a reminder of just how distant NN models are from the biological reality.

> The brain fragment was taken from a 45-year-old woman when she underwent surgery to treat her epilepsy. It came from the cortex, a part of the brain involved in learning, problem-solving and processing sensory signals.

Wonder how they figured out which fragment to cut out.


I imagine they determined the focus of the seizures by electrical techniques.

I worry this might make the sample biased in some way.


Considering the success of this work, I doubt this is the last such cubic millimetre to be mapped. Or perhaps the next one will be at even higher resolution. No worries.

A similar dataset already exists in mouse cortex. More are underway in the field.

https://www.biorxiv.org/content/10.1101/2024.03.22.586254v1


Imagine all the conclusions being made from a 1 mm cube of epileptic neurons.

The manuscript discusses this, in the context of the difficulty of obtaining larger useful samples in the future and of determining whether a sample is typical or pathological.

Yeah, and at Planck-scale resolution, as the logical extension of the nanoscale with this "modern" measurement methodology, the cheap monkey headset just disintegrates, haha.

> Jain’s team then built artificial-intelligence models that were able to stitch the microscope images together to reconstruct the whole sample in 3D

How do they know if their AI did it correctly or not?


They don't, and they discuss the difficulties in their paper. I found the standard of frankness and openness in how they address this refreshing. But it's all pretty compelling, and it will surely prompt and sustain a lot more research investigating these results and data, and creating more in the future.

> the model showed neurons with tendrils that formed knots around themselves

I wonder if this plays into the mechanism of epilepsy. Self-arousal...?

Anybody qualified to comment?


This tissue was from an epileptic patient, so we cannot rule out that the weird things are related to the disease.

It looks like spaghetti code.

Why did the researchers use ML models to do the reconstruction and risk getting completely incorrect, hallucinated results, when accurately reconstructing a 3D volume from 2D slices is already a well-researched field?

I'm guessing a registration problem.

If all of the layers were guaranteed to be orthographic, with no twisting, shearing, scaling, or squishing, and with a consistent origin... then yeah, there's a huge number of ways to just render that data.

But if you physically slice layers first, and scan them second, there are all manner of physical processes that can make normal image stacking fail miserably.
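For a sense of what even the easy case looks like, here's a minimal translation-only alignment sketch with scikit-image. This is nothing like the production pipeline, which also has to handle rotation, nonlinear warping, tears and missing sections:

    import numpy as np
    from scipy.ndimage import shift as nd_shift
    from skimage.registration import phase_cross_correlation

    def align_stack(slices):
        # Rigidly align each 2D slice to the previous one by estimating a translation.
        aligned = [slices[0]]
        for img in slices[1:]:
            offset, _, _ = phase_cross_correlation(aligned[-1], img)
            aligned.append(nd_shift(img, offset))
        return np.stack(aligned)

Once the sections are warped or torn, simple cross-correlation like this falls apart, which is where the learned methods come in.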


The methods used here are state of the art. The problem is not just turning 2D slices into a 3D volume; the problem is, given the 3D volume, determining the boundaries between (and therefore the 3D shapes of) objects (i.e. neurons, glia, etc.) and identifying synapses.
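As a toy illustration of the boundary problem (nothing close to the paper's scale or methods), classical seeded watershed turns a boundary-strength map into candidate cell labels, and every merge or split error changes the resulting wiring diagram:

    import numpy as np
    from scipy import ndimage as ndi
    from skimage.feature import peak_local_max
    from skimage.filters import sobel
    from skimage.segmentation import watershed

    def toy_cell_segmentation(image):
        # image: 2D grayscale EM-like array where membranes appear dark.
        boundaries = sobel(image)                              # boundary strength
        distance = ndi.distance_transform_edt(image > image.mean())
        seeds = peak_local_max(distance, min_distance=10)      # one seed per candidate cell
        markers = np.zeros(image.shape, dtype=int)
        markers[tuple(seeds.T)] = np.arange(1, len(seeds) + 1)
        return watershed(boundaries, markers)                  # label map of candidate cells

At petavoxel scale, with thin processes weaving through thousands of sections, this kind of hand-tuned recipe stops working, which is why the field moved to learned boundary detection and agglomeration.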

Although the article mentions Artificial Intelligence, their paper[1] never actually mentions that term, and instead talks about their machine learning techniques. AFAIK, ML for things like cell segmentation is a solved problem [2].

[1] https://www.biorxiv.org/content/10.1101/2021.05.29.446289v4.... [2] https://www.ilastik.org/


There are extremely effective techniques, but it is not really solved. The current techniques still require human proofreading to correct errors. Only a fraction of this particular dataset is proofread.

Maybe it's not about reconstructing a volume but about recognizing neurons within that volume.

Regarding the risk, as noted in the article, they are manually "proofreading" the reconstruction.

Another proof point that AGI is probably not possible.

Growing actual bio brains is just way easier. It's never going to happen in silicon.

Every machine will just have a cubic centimeter block of neuro meat embedded in it somewhere.


You’d have to train them individually. One advantage of ANNs is that you can train them and then ship the model to anyone with a GPU.


No reason for an AGI not to have a few cubes of goo slotted in here and there. But yeah, because of the training issue, they might be coprocessors or storage or something.

Hard disagree on this.

I strongly believe that there is a TON of potential for synthetic biology-- but not in computation.

People just forget how superior current silicon is for running algorithms; if you consider, e.g., a 17-digit-by-17-digit multiplication (roughly double precision), then a current CPU can do that in the time it takes light to reach your eye from the screen in front of you (!!!). During all the completely unavoidable latency (the time any visual stimulus takes to propagate and reach your consciousness), the CPU does millions more of those operations.
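To keep myself honest, the rough arithmetic behind that claim, with assumed round numbers (~0.5 m to the screen, a ~4 GHz core, one pipelined multiply per cycle):

    c = 3.0e8                       # speed of light, m/s
    distance = 0.5                  # screen-to-eye distance, m (assumption)
    clock_hz = 4.0e9                # core clock (assumption)

    light_time = distance / c               # ~1.7 ns for the photons to arrive
    cycles = light_time * clock_hz          # ~7 clock cycles in that window
    ops_in_perception = 0.1 * clock_hz      # ~4e8 multiplies during ~100 ms of visual latency
    print(light_time, cycles, ops_in_perception)

A pipelined double-precision multiply has a latency of a few cycles, so it comfortably fits inside the light's travel time, and hundreds of millions of them fit inside the visual system's latency.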

Any biocomputer would be limited to low-bandwidth, ultra high latency operations purely by design.

If you solely consider AGI as the application, where abysmal latency and low input bandwidth might be acceptable, then it still appears extremely unlikely that we are going to reach that goal via synthetic biology; our current capabilities are just disappointing and don't look like they are going to improve quickly.

Building artificial neural networks on silicon, on the other hand, capitalises on the almost exponential gains we made during the last decades, and already produces results that compare quite favorably to, say, a schoolchild; I'd argue that current LLM-based approaches already eclipse the intellectual capabilities of ANY animal, for example. Artificial bio brains, meanwhile, are basically competing with worms right now...

Also consider that even though our brains might look daunting from a pure "upper bound on required complexity/number of connections" point of view, these limits are very unlikely to be applicable, because they confound implementation details, redundancy and irrelevant details. And we have precise bounds on other parameters that our technology already matches easily:

1) Artificial intelligence architecture can be bootstrapped from a CD-ROM worth of data (~700MiB for the whole human genome-- even that is mostly redundant; a back-of-envelope check follows after this list)

2) Bandwidth for training is quite low, even when compressing the ~20-year training time for an actual human into a more manageable timeframe

3) Operation does not require more than ~20 W of power.

4) No understanding was necessary to create human intelligence-- it's purely the result of an iterative process (evolution).
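A quick sanity check on the CD-ROM figure in point 1: 2 bits per base and ~3.1 billion base pairs are the standard numbers, and the packing below is idealized (no index, no compression):

    base_pairs = 3.1e9             # haploid human genome, approx.
    bits = base_pairs * 2          # A/C/G/T -> 2 bits each
    mib = bits / 8 / 2**20
    print(round(mib))              # ~739 MiB, roughly one CD-ROM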

Also consider human flight as an analogy: we did not achieve that by copying beating wings, powered by dozens of muscle groups and complex control algorithms-- those are just implementation details of existing biological systems. All we needed was the wing-concept itself and a bunch of trial-and-error.


>Artificial intelligence architecture can be bootstrapped from a CD-ROM worth of data (~700MiB for the whole human genome-- even that is mostly redundant)

Are you counting epigenetic factors in that? They're heritable.


Why do these neurons have flat "heads"?

Edge of the dataset.

For some people, this is all you need (sorry, couldn’t resist)!

Fascinating! I wonder how different that is from the mind of a man haha


