
> We will wear a device which will be able to read our brainwaves and determine which word we are thinking, à la dictation

Since this thread is presumably being read by entrepreneurs making bets on the future of technology, it needs to be said that this will never happen with current imaging technology. "Brainwaves" implies EEG, and the research in this field strongly suggests that it is information-theoretically impossible to extract this information from the electrical activity on the scalp.

For this vision to become reality we need a new imaging device that has both the temporal resolution of an EEG and a spatial resolution that probably needs to be better than that of an MRI.

In summary: certain things are impossible. I can say with certainty that no algorithmic improvement will allow this to work using an EEG. I don't know whether it is physically possible to create a non-invasive imaging device that allows such a signal to be detected reliably, but it certainly does not exist today, and it seems like a leap of faith to assume that it definitely will exist at some point in the future.




I can key morse code at 40wpm with two muscles. With one hand I can chord at 120wpm. On a stenowriter I can transcribe about as quickly as most people can read - 250wpm.

I've invested an extraordinary amount of effort into improving the speed at which I can interface with a computer; I think the practical limit is about 300 baud, half-duplex.

Of course, we're trying to establish an interface with a bafflingly complex lump of grey meat, but are we really daunted by the idea of outpacing a V.21 modem?
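(Rough arithmetic behind that modem comparison, as a Python sketch; the 5-characters-per-word and 8-bits-per-character figures are the usual conventions, not measurements from the comment above.)

    # Back-of-the-envelope throughput of a fast human typist,
    # compared with a V.21 modem (300 bit/s).
    # Assumptions: 1 "word" = 5 characters, 8 bits per ASCII character.

    WPM = 250                   # stenotype transcription speed claimed above
    CHARS_PER_WORD = 5          # standard WPM convention
    BITS_PER_CHAR = 8           # uncompressed ASCII

    chars_per_sec = WPM * CHARS_PER_WORD / 60
    bits_per_sec = chars_per_sec * BITS_PER_CHAR

    print(f"{chars_per_sec:.1f} chars/s ~= {bits_per_sec:.0f} bit/s")
    # -> 20.8 chars/s ~= 167 bit/s, comfortably under V.21's 300 bit/s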


Your judgment that present technology is inadequate is based on the assumption that computers need to learn to read human thoughts.

What about the inverse: humans learning how to think in a way that a computer understands? That would be much easier, since humans learn much better than computers do, and also much safer: I would have complete control over which of my thoughts the computer can detect and interpret.


The human learning to adapt to the machine has been the way EEG-based brain-computer interfaces have been made for a couple of decades. Using machine learning to adapt the machine to the human is a much more recent development.

It is possible today to make EEG-controlled devices. They typically differentiate between a small number of real or imagined movements in the user. This is awesome, because it can allow severely paralyzed people to communicate, control a wheelchair, etc. Nevertheless, the algorithms used to do this are perfectly useless when it comes to distinguishing whatever words the user is internally vocalizing.
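(For the curious, a minimal sketch of what such a motor-imagery classifier can look like: band-power features per channel fed to a linear classifier. It runs on synthetic data; the band choice, channel count and class structure are illustrative assumptions, not a description of any particular system.)

    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    n_trials, n_channels, n_samples, fs = 200, 8, 512, 256  # 2-second epochs

    # Fake EEG epochs for two imagined movements (e.g. left vs right hand).
    X = rng.standard_normal((n_trials, n_channels, n_samples))
    y = rng.integers(0, 2, n_trials)

    # Inject a class-dependent amplitude difference in the mu band (~10 Hz).
    t = np.arange(n_samples) / fs
    mu = np.sin(2 * np.pi * 10 * t)
    X[y == 0, 0] += 2 * mu    # class 0: stronger mu rhythm on channel 0
    X[y == 1, 4] += 2 * mu    # class 1: stronger mu rhythm on channel 4

    def bandpower(epochs, fs, lo=8.0, hi=12.0):
        """Log band power per channel from an FFT periodogram."""
        freqs = np.fft.rfftfreq(epochs.shape[-1], d=1.0 / fs)
        psd = np.abs(np.fft.rfft(epochs, axis=-1)) ** 2
        band = (freqs >= lo) & (freqs <= hi)
        return np.log(psd[..., band].mean(axis=-1))

    features = bandpower(X, fs)               # shape: (n_trials, n_channels)
    clf = LinearDiscriminantAnalysis()
    print(cross_val_score(clf, features, y, cv=5).mean())  # well above chance

This kind of pipeline separates a handful of coarse classes; nothing in it scales to decoding arbitrary internally vocalized words, which is the point of the comment above.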


The keyboard is not very good at determining which words I'm internally vocalizing either, yet it still seems to work. The point I'm trying to convey is that maybe we can learn to transmit words using some form of brain reader, one that measures something other than vocalization.


Doesn't have to be "brainwaves". The brain has a few outputs that can be hijacked (e.g. a computer with a neural interface that appears to be another muscle in the body). I don't know whether the bandwidth of these outputs is sufficient for interesting communication; we've evolved to take in far more data than we produce.

Edit: It seems that more direct methods of neural interface are already plausible: http://www.technologyreview.com/biomedicine/37873/


It doesn't actually need to be noninvasive. If an invasive procedure is useful enough and can be made safe, eventually it will be ubiquitous.


The problem with an invasive interface is upgrading it.



