Hacker News
Reading mind and Visualizing thoughts using AI (thinkml.ai)
36 points by umermirzapk on Sept 4, 2020 | hide | past | favorite | 9 comments



So high-level MRI data features are correlated with reported thoughts. This sounds like interesting research, but they are far away from any sort of usable system.

This reminds me of people getting excited by the natural language advances of BERT and other transformer models because they think the models actually understand language, when the truth is that these models are just very good at predicting words around a point of interest in text.

That said, the article is worth reading.


> these models are just very good at predicting words around a point of interest in text

Well, yes, but quantity has a quality of its own. Train it with enough data and you get emergent behaviour.


The way I read the output here is that a much larger network attached to something with much better resolution will absolutely work. I wonder if fMRI will ever really have enough resolution to get usable information back; the images from the ‘simple’ pictures seem to indicate yes. Presuming this was done inside a giant tube, it’s not a very appealing idea unless someone has severe disabilities.

Overall I guess this paper makes me bullish on Neuralink. I eagerly await the answer to the question of whether we all see colors the same way or not.

Also, somewhat randomly, the photos of the DVD hardware made me think about whether different brains will process them differently. My wife has no interest in DVD players, and I believe from our twenty years together that her brain spends almost no time visually assessing them. I would be very interested to compare outputs of our two brains on different topics and see which ones we each have better specificity on.


This feels pretty sensationalist given the methodology and results, but it's an interesting avenue of research nonetheless. I'm curious about the limitations of fMRI or any other "external" data collection. I'm not eager to see things like Neuralink be effective, but it feels more likely in some ways.


This seems to be a review of several different studies on analyzing brain activity and isn't just about the first study shown, so while the original title is a little sensationalist, I think the edited title is pretty accurate to what the article is about. Some of the other studies include one at Carnegie Mellon about phrase prediction, an experiment at the University of Oregon about facial recognition, Neuralink, thought reproduction at a Russian corporation and a Moscow institute, etc.


Assuming this article itself isn't autogenerated (which, given the sheer number of typos, it probably isn't), the author grossly overestimates what the paper(s) report.


I don't think the author is overestimating. I remember when this paper came out it was printed/aired all over the media as The Great AI That Can Read Your Thoughts (TM).

I think he's just piling onto the hype and showing what it really was.


It's cool, but from a library of 25 images.


I wouldn't dismiss it so easily. 25 images can give you an alphabet. That could be useful for people with locked-in syndrome.
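To make the point concrete, here is a minimal sketch of the information-capacity argument. It assumes a hypothetical decoder that can reliably distinguish the 25 images (the names `IMAGE_LIBRARY`, `codebook`, and `decode` are illustrative, not from any paper): one selection carries log2(25) ≈ 4.6 bits, nearly a full letter, and a pair of selections yields 625 codes, plenty for letters, digits, and punctuation.

```python
import math

# Stand-ins for the 25 reliably decodable images (hypothetical labels).
IMAGE_LIBRARY = [f"image_{i:02d}" for i in range(25)]

# One selection from the library carries ~4.64 bits of information.
bits_per_selection = math.log2(len(IMAGE_LIBRARY))

# Map ordered pairs of images to an extended symbol set: 25 * 25 = 625
# possible pairs, far more than the 95 printable ASCII symbols we assign.
symbols = [chr(c) for c in range(32, 127)]  # printable ASCII
codebook = {}
for i, a in enumerate(IMAGE_LIBRARY):
    for j, b in enumerate(IMAGE_LIBRARY):
        idx = i * 25 + j
        if idx < len(symbols):
            codebook[(a, b)] = symbols[idx]

def decode(selections):
    """Turn a sequence of (image, image) pair selections into text."""
    return "".join(codebook[pair] for pair in selections)
```

So even a 25-image vocabulary, decoded one image at a time, is enough channel capacity for a simple speller; the hard part is the decoding accuracy and speed, not the symbol count.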



