I've long suspected that the brain might be using quantum computing techniques. It's a physically plausible explanation for how you can get more compute than a modern at-scale data center out of 40-60 W of power, in a volume about the size of two beer cans.
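Here's a rough back-of-envelope of that comparison in Python; every figure (synapse count, average event rate, GPU throughput and power, and the "one synaptic event counts as one FLOP" equivalence) is a loose assumption for illustration, not a measurement:

    # Back-of-envelope: brain "events" per joule vs. a GPU's FLOP per joule.
    # Every figure below is a rough assumption, not a measurement.
    brain_power_w = 20          # commonly cited ~20 W metabolic budget
    synapses = 1e14             # ~10^14 synapses, order-of-magnitude guess
    avg_event_rate_hz = 1       # assume ~1 Hz average synaptic event rate
    brain_events_per_s = synapses * avg_event_rate_hz

    gpu_power_w = 400           # high-end accelerator board power (assumed)
    gpu_flops = 1e14            # ~100 TFLOP/s mixed precision (assumed)

    brain_eff = brain_events_per_s / brain_power_w
    gpu_eff = gpu_flops / gpu_power_w
    print(f"brain: {brain_eff:.1e} synaptic events per joule")
    print(f"gpu:   {gpu_eff:.1e} FLOP per joule")
    print(f"ratio: ~{brain_eff / gpu_eff:.0f}x")
    # Even with these conservative numbers the brain comes out ~20x ahead
    # per joule; bump the synapse count or rate by an order of magnitude and
    # the gap becomes dramatic. Whether a synaptic event is comparable to a
    # FLOP is, of course, the contentious part.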
IMHO there's really only three possibilities:
(1) There exist ridiculously efficient learning, search, and other algorithms that have not been formally discovered that allow the brain to do what it does on a lot less compute than we think. P=NP would be the extreme case of this.
(2) The brain is a quantum computer or is doing something equally exotic.
(3) It's supernatural, and the brain is only like a radio receiver for something in another realm. (Or the universe is a simulation, etc., etc.)
1. Seeing as we have never made a computer that is conscious, or that can hear/see/smell/touch as well as we can, or that reaches the same power efficiency, I'd say there is absolutely an enormous amount we still don't know.
2. Life started off as the smallest thing possible (an atom/quark/something smaller) and grew upward into cells, then multicellular organisms, and so on, all the while living in the natural world and getting feedback from the best test environment there is. In comparison, our computers started off as bricks, and over time we've slowly shrunk them down to what we consider small. But we still have nothing on evolution.
3. The unknown can often appear like magic, but as the rise of neural networks in computing has shown, we are ever so slowly unraveling the secrets of the brain. Image recognition has come a long way compared to 10 years ago, and the techniques that are working in machine learning now are changing the way neuroscientists interpret the structure of the brain.
Or (4) the brain uses specialized electrochemical processes to perform computations that are not well addressed by traditional computing paradigms, which are designed to outperform the brain in areas in which it is naturally weak.
My GPU can compute things efficiently that my CPU can't, and vice-versa. Historically, analog computers have been able to compute things efficiently that digital computers could not, and vice-versa.
Why should a digital computer be particularly adept at computing the things a dense electrochemical network computes well?
Not yet, but we've tried and managed to build neural network processors with thousands of nodes and millions of interconnects.
100 billion transistors isn't a lot. The interconnect is a beast, but as we get better at layering chips this might not be a big deal.
The thing about biological neurons is they run at very, very low speeds, on the order of kHz, which means the heat normally associated with high transistor densities is a non-issue. You could create a chip so dense it's basically a cube without concern that it was going to melt down.
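As a minimal sketch of why clock rate matters so much here, the classic CMOS dynamic-power relation P = a * C * V^2 * f with placeholder numbers (and with leakage, which does not scale with frequency, deliberately ignored):

    # Dynamic (switching) power of CMOS scales roughly as P = a * C * V^2 * f.
    # All figures are placeholders for illustration only.
    def dynamic_power_w(activity, capacitance_f, voltage_v, freq_hz):
        """Classic CMOS dynamic-power approximation (ignores leakage)."""
        return activity * capacitance_f * voltage_v**2 * freq_hz

    switched_cap = 1.0e-7   # total switched capacitance in farads (assumed)
    vdd = 0.8               # supply voltage in volts (assumed)
    activity = 0.1          # fraction of nodes switching per cycle (assumed)

    for f in (1e9, 1e6, 1e3):   # 1 GHz, 1 MHz, 1 kHz
        p = dynamic_power_w(activity, switched_cap, vdd, f)
        print(f"{f:10.0e} Hz -> {p:.2e} W")

    # Dropping the clock from GHz to kHz cuts switching power by a factor of
    # ~10^6, which is why a very dense but very slow neuron-like chip needn't
    # melt. Caveat: leakage current doesn't drop with frequency and would
    # likely dominate in practice.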
The biggest problem with replicating this sort of thing in silicon is that the neurons reconfigure themselves physically, something not possible with today's VLSI technology.
I think you're very wrong in your estimates. We don't have the technology that allows for 1000 (variable-strength) connections per transistor with 100 billion transistors, not even remotely close, not at 1 kHz, not at 100 Hz. We absolutely don't know how to do that, even without the requirement that it be small.
If we did, we probably would have built something like that - Google or Apple would easily throw $100M at that project.
You're probably talking about IBM's TrueNorth (http://wikipedia.org/wiki/TrueNorth), which simulates 1M neurons with 256 connections each. So yeah, it's the right direction, but it's 9 decimal orders of magnitude away from the human brain. And that was something like a $10-20M project. If you just scale it linearly to 100 billion neurons, it would cost more than Google/Apple is worth.
Realistically, if improvements continue in an exponential fashion, maybe in 4-10 years it will become achievable without going broke.
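For what it's worth, the scaling arithmetic behind that, using the figures from the comment above and assuming (pessimistically) that cost scales linearly with neuron count:

    # Naive linear scaling of a TrueNorth-class project to human-brain size.
    # Neuron counts and cost come from the comment above; linear cost scaling
    # is a deliberately pessimistic assumption.
    truenorth_neurons = 1e6        # ~1M simulated neurons
    truenorth_cost_usd = 15e6      # midpoint of the ~$10-20M estimate

    brain_neurons = 100e9          # ~100 billion neurons

    scale = brain_neurons / truenorth_neurons   # ~1e5
    naive_cost = truenorth_cost_usd * scale     # ~$1.5 trillion

    print(f"scale factor: {scale:.0e}")
    print(f"naively scaled cost: ${naive_cost:.1e}")
    # ~$1.5e12, i.e. more than Google or Apple was worth at the time, which
    # is the point being made above. Any real attempt would obviously not
    # scale linearly, but it gives a sense of how far off this still is.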
I think we have all the technical understanding we need to attempt such a thing, but the R&D costs for trying to get this crazy process to work would be astronomical and way beyond the budget of most university labs. I'm speaking mostly in terms of theoretical ability.
We have large scale FPGA devices. Adapting this to be not just reprogrammable in a formal capacity but constantly self-reprogramming isn't a huge conceptual leap, even if it is an enormously complicated thing to prototype and get working.
This is much like how the technology behind the transistor made modern computers possible: we've just been iterating on and refining the same basic principle ever since, even though some of those iterations are very painful and expensive to get working. See the current pains around 10nm processes.
When I say we have the technology I don't necessarily mean we have the will or the budget to pursue it. As the costs come down it's inevitable someone will find a solution that's not billions of dollars, but instead mere hundreds of millions.
We're a long way from a proven, working design and process, but at least we can make such a thing, theoretically speaking.
The largest scale FPGA we "have" is probably UltraScale VU440, which you can't really buy yet. It has 4.4M logic units. Again, that's 6 decimal orders of magnitude on the number of neurons alone. Not sure about connectivity architecture.
That's maybe enough to simulate the cockroach brain.
Don't get me wrong, I really really really want this to happen. I just doubt it will happen in the next 3-5 years.
There are other challenges, the main one being creating the right connections. It took evolution a long time to create and perfect our brain, and it was a highly parallel process too.
Our current computing technologies rely on a hard boundary between two states, and on keeping signals sufficiently far from any noise boundary that an error is very unlikely even with many billions of elements operating many billions of times per second. Neurons operate very differently, and there is reason to think that they could compute at a much, much higher efficiency.
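To put a number on "very unlikely", here's a sketch of how much noise margin a digital element needs, assuming Gaussian noise and an error budget of roughly one flip per day across a billion elements switching a billion times per second (both assumptions are mine, for illustration only):

    import math

    # Probability that Gaussian noise pushes a signal across the decision
    # threshold, given the margin measured in noise standard deviations.
    def flip_probability(margin_sigmas):
        return 0.5 * math.erfc(margin_sigmas / math.sqrt(2))

    # Assumed budget: under one error per day across 1e9 elements
    # switching 1e9 times per second.
    ops_per_day = 1e9 * 1e9 * 86400        # ~8.6e22 operations
    budget = 1 / ops_per_day               # per-operation error budget

    for k in (5, 7, 9, 10, 11):
        print(f"{k:2d} sigma margin -> flip probability ~{flip_probability(k):.1e}")
    print(f"per-operation budget: ~{budget:.1e}")

    # You need roughly a 10-sigma margin to stay inside that budget, which is
    # part of why digital logic spends energy keeping signals far from the
    # noise floor, while neurons appear to tolerate (and perhaps exploit)
    # much noisier operation.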
There's the possibility that the brain leverages a form of computing that is unlike our own. Similar to how algorithm classes differ between classical and quantum computing, there may be some esoteric form of doing 1+1 that we haven't yet explored.
With regards to (1), what about DNA-based squishy computing systems that solve TSP and the like far faster and with much lower energy usage than traditional silicon could manage?
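Just to give a sense of why massively parallel chemical search is tempting for problems like that, the tour count for a symmetric TSP blows up factorially (this is pure arithmetic, not a claim about actual DNA-computer performance):

    from math import factorial

    # Number of distinct tours for a symmetric TSP on n cities: (n-1)!/2
    for n in (5, 10, 15, 20, 25):
        tours = factorial(n - 1) // 2
        print(f"{n:2d} cities: {tours:.3e} tours")

    # The count grows factorially, so brute force dies almost immediately;
    # approaches that examine enormous numbers of candidates in parallel (as
    # the DNA-computing experiments tried to do) look attractive even though
    # each individual "step" is slow.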