Hacker News
Natural image reconstruction from brain waves (biorxiv.org)
117 points by soofy on Nov 5, 2019 | 39 comments



I'm highly skeptical. I mean, a hash function that has four output states also maps anything to one of those four states. That doesn't mean it's some next-level classifier.

The problem here is EEG. EEG bandwidth is not enough to capture that much information. There is far too much noise introduced by the skull and muscles. It's most likely physically impossible to do something like this with EEG.

What's likely happening here is that there's some large scale oscillations that are sufficiently unique to discern the images from each other. This does not mean they are reproducing the images. I am highly skeptical of the methods used here -- they are almost certainly flawed.

I, too, once had dreams of conquering the planet with EEG when I was a grad student. I quickly learned that physics makes this infeasible. Anyone who is serious about BMIs is studying invasive BMIs and how to make them as safe as possible. Going inside the brain is unavoidable, I'm afraid.


About a decade ago, when I was still in school, I did some work in brain-machine interfaces, as did a friend. I built an EEG from scratch, worked on the DSP and amplification to make it all work, and also had access to a much more expensive state-of-the-art machine. While I didn't work directly on my friend's project, at the time they came to the conclusion that non-invasive neural processing (something topical like an EEG, no surgical implants) could extract about 1 bit per second of useful information -- the noise-to-signal ratio was about 1000:1. When most people read the raw data from an EEG they don't realize they can't even see the actual signal: they're seeing eye movements, facial muscle twitches, and other noise artifacts that overwhelm it. I'm guessing the technology has improved a lot since then (I'm in another field now), but it's hard to imagine it gaining the orders of magnitude in resolution necessary for this to be viable.
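For what it's worth, the ~1 bit/s figure is at least in the right ballpark for a Shannon-Hartley back-of-the-envelope, assuming roughly 100 Hz of useful EEG bandwidth and the quoted 1000:1 noise-to-signal ratio (both numbers are my illustrative assumptions, not measurements):

```python
import math

def shannon_capacity(bandwidth_hz: float, snr: float) -> float:
    """Shannon-Hartley channel capacity in bits per second."""
    return bandwidth_hz * math.log2(1 + snr)

# Assumed values: ~100 Hz of useful EEG bandwidth, SNR of 1/1000.
capacity = shannon_capacity(100, 1 / 1000)
print(f"{capacity:.2f} bits/s")  # on the order of 0.1 bits/s
```

So even an idealized Gaussian-channel view of the setup lands within an order of magnitude of that 1 bit/s estimate.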


Exactly. I have always wondered how brain-wave measurements could avoid being overwhelmed by facial muscle signals.


If you have multiple different measurement points, which all have this overlapping-signal problem but at different strengths, couldn't you hypothetically build a model that "solves" for these different weights and untangles the signals?
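A minimal sketch of that idea, assuming (unrealistically) that the mixing is linear and the per-sensor weights are already known -- with as many sensors as sources, untangling is just inverting the mixing matrix:

```python
import numpy as np

# Two hypothetical sources: a "brain" rhythm and a "muscle" artifact.
t = np.linspace(0, 1, 500)
sources = np.vstack([
    np.sin(2 * np.pi * 10 * t),           # 10 Hz brain rhythm
    np.sign(np.sin(2 * np.pi * 3 * t)),   # square-ish muscle artifact
])

# Each sensor sees both sources, at different strengths.
mixing = np.array([[1.0, 0.8],
                   [0.3, 1.0]])
measured = mixing @ sources

# With known weights, recovery is solving a linear system.
recovered = np.linalg.solve(mixing, measured)
assert np.allclose(recovered, sources)
```

In practice the weights are unknown, so techniques like ICA have to estimate the unmixing from the data itself, and measurement noise means the recovery is never exact.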


Yes, but... In the end, you're still reconstructing pieces of information from something that was almost destroyed. Picture it this way: there are amazing deconvolution algorithms that can "undo" all sorts of noise and lack of focus -- but the end result, however good relative to the original "bad" data, isn't nearly as good as a well-taken image to begin with.

Disclaimer: I work in image processing, so the example may be a bit obvious to me.


Isn't what I described more like reconstructing a picture from many copies that were each destroyed in a unique fashion?


Yes, that'd be a better analogy. My point was that, even if you had the best reconstruction in the world, having to reconstruct from a degraded source is worse than working from a good source to begin with.


From a practical perspective they don't have to be perfect or as good as the original, not even close, just good enough. "Good enough", though, is also extremely hard to achieve, assuming it's even possible with this technique.


There is at least one 'affordable' fNIRS device coming to market that looks promising: https://foc.us/fnirs-sensor/ There's also a paper on using machine learning to help identify the signal, specifically about pain: https://www.nature.com/articles/s41598-019-42098-w Say, for example, you were making an insurance claim for neuropathic pain; this kind of information could be very important.


Instead, it will be repurposed for lie detectors and 'terrorist mindset detectors' in airports.


This is amazing. A not-hotdog for pain would definitely be useful!


Example of how it may be overfit: Waterfalls are loud, audio regions of our brain may activate in response to waterfalls. Classifier reads that to predict waterfalls.


Great example! Similar thing probably holds for moving main limbs. A good EEG should be able to pick up when you think about moving your hand or leg. I doubt a good EEG could distinguish more than a few dozen patterns. Most experiments have trouble with even a handful of patterns. Still could be useful, but just very limited.


fMRI seems to avoid many of the bandwidth issues EEG has, at least from a theoretical if not practical position.

With enough receive antennas and processing power, you can get almost unbounded 3D resolution.


I wonder if fMRIs will one day become a home fixture like washing machines.


This model is incredibly overfit.

Video: https://youtu.be/nf-P3b2AnZw

Watch how it has preconceived notions of these scenes. It frequently fails to reconstruct the correct scene from video, and it also turns completely blank input into one of the scenes it was trained on.


Imagine the shitshow this will cause once law enforcement adopts this.

Currently, eyewitness criminal sketches are still drawn by artists, so they are naturally low fidelity.

That will change once you can generate a photo of a face (like https://thispersondoesnotexist.com/) based on your brain waves.

This will be disastrous on so many levels. The eyewitness might not have a good sample of a minority race. The GAN dataset itself might also only be trained on celebrity faces so it doesn't know how to generate anything else (e.g., a teen).

But it will be deceptively high resolution so police will rely on it.

If you have a generic face your life is fucked.


OR they will just find out that this method can just as easily produce a picture that someone who is good at visualizing simply made up in their mind and is actually "looking at" in their mind's eye, making this technique useless as a form of truth-seeking machine.


As someone in this thread pointed out, the model is overfit.

I did my thesis on EEG signals, also with a very idealistic view of what I could do with them, only to find that even the most basic of tasks is hard to classify -- even finding motor cortex movement-intention signatures (whether you want to move your left or right hand).

This work will not make it into the real world at this stage; it is badly done and almost certainly has multiple flaws in the implementation, rendering it unusable in practice.

So, don't get too stressed out about this; if it happens, it will be about 20-30 years from now. And keeping in mind how slowly law enforcement technology moves forward (aren't most of them still using Windows XP and Vista?), I would count on more like 30-50 years.


Reminds me of the Crocodile episode from Black Mirror:

https://www.vox.com/culture/2017/12/29/16808458/black-mirror...


My research group is doing the same thing but with music. Music may be more promising than images because of the Frequency Following Response -- a sort of direct resonance effect in the brain in response to sound.

We have 24 subjects listening to 12 songs in random order, with 128-channel EEG sampling at 1000 Hz. We can then label all these data points with the musical features present at the time the data is collected.
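The labeling step is conceptually simple -- here's a sketch with made-up array shapes and hypothetical per-second features (not our actual pipeline):

```python
import numpy as np

fs = 1000          # EEG sampling rate, Hz
n_channels = 128
seconds = 10       # hypothetical clip length

# One subject listening to one song: channels x samples.
eeg = np.random.randn(n_channels, seconds * fs)

# Hypothetical musical features extracted once per second
# (e.g. loudness, spectral centroid).
features = np.random.randn(seconds, 2)

# Repeat each feature row so every EEG sample carries a label.
labels = np.repeat(features, fs, axis=0)
assert labels.shape[0] == eeg.shape[1]
```

The real alignment has to account for neural response latency, but the idea is the same: every EEG sample gets tagged with what the music was doing at that moment.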

We don't have a public repo yet, but we are sharing data.


I don't think their model is working, and I'm not sure it ever will. Simply reading brain waves -- a by-product, as I understand it, of the actual neuron activity -- couldn't possibly give you an accurate result.


The end results are much, much better than I thought they would be. Luckily, I think it would be easy to fool the training by thinking about a totally different image to the baseline one. Idk if that would stand up to rubber hose cryptanalysis, but there’s got to be a way that can.


The end results show an overfit model. It's not predicting that specific input out of an option space of everything; it's essentially predicting that mode (out of the 4) and probably capturing things like "if brain's audio regions are active, it's a waterfall, because waterfalls are loud and trigger that".


I read an article some time ago about the use of SQUIDs (https://en.wikipedia.org/wiki/SQUID) to map the activity of a single neuron non-invasively. There was a lot of hype at the time for brain-computer interfaces based on that, but then, as with many technologies that were "just 5 years away", those 5 years came and went with nothing delivered.


There’s a great movie called Until The End Of The World that centers on this kind of technology. Once the scientists get it to work, they realize that they can record and play back their dreams, and they become addicted to watching them.


The Lena source image resulting in a "reconstruction" of some other random woman = model overfitted AF. Feed it a dead fish and it will keep generating "reconstructions".


It's interesting to see that the reconstructed Lenna is a high-quality reconstruction, but of a generic woman.


See the figure title:

> an original face image replaced by an image sample due to publication policy


Ah, I was having difficulty reading the text due to formatting.


What would a world look like where all thoughts are public?


Dystopian, because this is very high tech, which naturally concentrates its use and abuse among super-rich nation states and corporations, while the rest of humanity is watched like caged animals with no scope for any kind of opt-out or pushback.


There would be a pressing need to ban tin foil because uhhh it’s bad for the environment


* a LOT more weird porn

* we do not need passwords

* eventually, humans will be more empathetic

* a new educational system


> * we do not need passwords

Probably the opposite. All passwords are machine generated/stored. Everyone uses an HSM.



What would a world look like where all thoughts are public?

When all the thoughts a public has are good thoughts, it would be a beautiful world.

So what's the first thing any innovation should bring upon us? Being good, thinking good.

What would a world look like if all innovations did good for the public?

People's thoughts would become good.


You would still likely need sophisticated equipment to read them. Psycho-Pass explores some aspects of that question.


Twitter on steroids.



