I think you're very wrong in your estimates. We don't have the technology that allows for 1000 (variable-strength) connections per transistor with 100 billion transistors, not even remotely close, not at 1kHz, not at 100Hz. We absolutely don't know how to do that, even without the requirement that it be small.
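For scale, here's the back-of-the-envelope throughput those numbers imply (a rough sketch; the figures are the ones above, not measurements):

```python
# Back-of-the-envelope: update throughput implied by the numbers above.
# All figures are this thread's assumptions, not measured values.
neurons = 100e9        # 100 billion units
connections = 1000     # variable-strength connections per unit
rate_hz = 100          # the lower of the two rates mentioned

synapses = neurons * connections       # 1e14 connections total
updates_per_sec = synapses * rate_hz   # 1e16 updates/s at 100 Hz

print(f"{synapses:.0e} connections -> {updates_per_sec:.0e} updates/s")
# 1e+14 connections -> 1e+16 updates/s (1e17 at 1 kHz)
```

That's on the order of a top supercomputer's entire raw FLOP budget, spent on nothing but synapse updates, with none of the memory locality a conventional machine relies on.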
If we did, we probably would have built something like that - Google or Apple would easily throw $100M at that project.
You're probably talking about IBM's TrueNorth (http://wikipedia.org/wiki/TrueNorth), which simulates 1M neurons with 256 connections each. So yeah, it's the right direction, but it's 5 decimal orders of magnitude away from the human brain in neuron count (and nearly 6 in connection count). And that was something like a $10-20M project. If you just scale it linearly to 100 billion neurons, it will cost more than Google or Apple is worth.
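The rough math behind that, using the thread's figures (the $10-20M cost is the estimate above, not an official number):

```python
import math

# Gap between TrueNorth and the human brain, using the thread's figures.
tn_neurons, tn_conns = 1e6, 256
brain_neurons, brain_conns = 100e9, 1000

neuron_gap = brain_neurons / tn_neurons                                # 1e5
synapse_gap = (brain_neurons * brain_conns) / (tn_neurons * tn_conns)  # ~3.9e5

print(f"neuron gap: 10^{math.log10(neuron_gap):.1f}")    # 10^5.0
print(f"synapse gap: 10^{math.log10(synapse_gap):.1f}")  # 10^5.6

# Naive linear cost scaling from the estimated project cost:
print(f"scaled cost: ${10e6 * neuron_gap:.0e} to ${20e6 * neuron_gap:.0e}")
# roughly $1e+12 to $2e+12, i.e. $1-2 trillion
```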
Realistically, if improvements continue at an exponential pace, maybe in 4-10 years it will become achievable without going broke.
I think we have all the technical understanding we need to attempt such a thing, but the R&D costs for trying to get this crazy process to work would be astronomical and way beyond the budget of most university labs. I'm speaking mostly in terms of theoretical ability.
We have large-scale FPGA devices. Adapting these to be not just reprogrammable on demand, but constantly self-reprogramming, isn't a huge conceptual leap, even if it is an enormously complicated thing to prototype and get working (the sketch below shows the idea in software).
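As a loose software analogy only (this is not how an FPGA actually reconfigures, and every name here is invented), the core idea of a fabric that rewires itself based on its own activity looks something like this:

```python
import random

# Toy model of a "self-reprogramming" fabric: a fixed pool of cells whose
# connection table is rewritten by the cells' own activity. A loose software
# analogy, not an FPGA reconfiguration flow; all names are made up.
N_CELLS = 64
conns = {i: set(random.sample(range(N_CELLS), 4)) for i in range(N_CELLS)}
activity = [random.random() for _ in range(N_CELLS)]

def step():
    """One update: busy cells recruit new inputs, quiet cells prune links."""
    global activity
    new_activity = []
    for i in range(N_CELLS):
        drive = sum(activity[j] for j in conns[i]) / max(len(conns[i]), 1)
        new_activity.append(drive)
        if drive > 0.6:                          # busy cell: grow a connection
            conns[i].add(random.randrange(N_CELLS))
        elif drive < 0.2 and len(conns[i]) > 1:  # quiet cell: prune one
            conns[i].discard(next(iter(conns[i])))
    activity = new_activity

for _ in range(100):
    step()
print("mean fan-in after rewiring:",
      sum(len(c) for c in conns.values()) / N_CELLS)
```

The point isn't the specific rule; it's that the routing table is data the device itself mutates at runtime, rather than a configuration loaded once from outside.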
This is much like how the transistor made modern computers possible: we've been iterating on and refining the same basic principle ever since, even though some of those iterations are very painful and expensive to get working. See the current pains around 10nm processes.
When I say we have the technology, I don't necessarily mean we have the will or the budget to pursue it. As the costs come down, it's inevitable that someone will find a solution that costs not billions of dollars but mere hundreds of millions.
We're a long way from a proven, working design and process, but at least we can make such a thing, theoretically speaking.
The largest-scale FPGA we "have" is probably the UltraScale VU440, which you can't really buy yet. It has about 4.4M logic cells. Again, that's more than 4 decimal orders of magnitude short on the number of neurons alone, even at one logic cell per neuron. Not sure about the connectivity architecture.
That's maybe enough to simulate the cockroach brain.
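For context, a cockroach brain is often quoted at roughly a million neurons (a commonly cited rough figure, not from this thread), so the cell budget works out like this:

```python
# Budget check: VU440 logic cells vs. a cockroach-scale brain.
# ~1e6 neurons for a cockroach is a commonly cited rough figure.
vu440_logic_cells = 4.4e6
cockroach_neurons = 1e6
human_neurons = 100e9

print(f"cells per cockroach neuron: {vu440_logic_cells / cockroach_neurons:.1f}")
# ~4.4, a very tight budget for modeling one neuron
print(f"VU440s needed at that density for a human brain: "
      f"{human_neurons / cockroach_neurons:,.0f}")
# 100,000 chips
```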
Don't get me wrong, I really really really want this to happen. I just doubt it will happen in the next 3-5 years.
There are other challenges, the main one being creating the right connections. It took evolution a long time to create and perfect our brain. And it was a highly parallel process, too.