Hmm, that sounds like a nod in the right direction, but from a quick initial skim it looks more like parallelizing the problem than abstracting it. I've got to read more about it - thanks!

While Minsky & Papert's book Perceptrons was enormously destructive, I think there is something to their general concept of the Society of Mind: that multiple sub-computing 'agents' collude to produce real cognition.

We aren't doing conscious reasoning about the edges detected in the first couple of layers of our visual cortex (which we can't really even access, 'tho I think Picasso maybe could). We're reasoning about people, objects, abstract concepts, whatever, many layers up. The first layers are highly parallel - different parts of the retina connect to different parts of the visual cortex, which start to abstract out edges, zones, motion, etc., and then synthesize objects, people, and so on.
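
To make that analogy concrete: the "parallel edge extraction in the first layers" step is basically the first convolution of a vision model. A toy numpy sketch, with a made-up 8x8 image and a standard Sobel kernel - every patch is processed independently, and downstream stages only ever see the edge map, never the raw pixels:

    import numpy as np

    # Toy sketch of "the first layers extract edges in parallel": a Sobel-style
    # kernel is applied to every 3x3 patch independently (embarrassingly parallel),
    # and later stages only see the resulting edge map, never the raw pixels.
    # The image is just a made-up 8x8 left-to-right brightness ramp.

    sobel_x = np.array([[-1, 0, 1],
                        [-2, 0, 2],
                        [-1, 0, 1]], dtype=float)

    image = np.tile(np.linspace(0.0, 1.0, 8), (8, 1))

    edges = np.zeros((6, 6))
    for i in range(6):
        for j in range(6):
            edges[i, j] = (image[i:i + 3, j:j + 3] * sobel_x).sum()

    print(edges.round(2))  # uniform response everywhere: the ramp is one big vertical edge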

I think we need to take a GPT and a Stable Diffusion and some yet-to-be-built 3D spatial machine learning/reasoning engine, start combining them, then add one or more layers that synthesize over what they produce, and maybe that'll get closer to reasoning...
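
Something like this shape, hand-waving wildly - all the class names below are hypothetical stand-ins, not real GPT / Stable Diffusion APIs. Each 'expert' just maps its modality into a shared embedding space and an untrained synthesis layer fuses them; it's the wiring I mean, not an implementation:

    import numpy as np

    # Hypothetical sketch of "combine specialized models, then synthesize":
    # three stand-in experts (language, vision, 3D/spatial) each produce an
    # 8-dim embedding, and a synthesis layer reasons over the concatenation.
    # Nothing here calls a real model; it's only the architecture.

    DIM = 8

    class LanguageExpert:
        def encode(self, text):
            # stand-in for a language model's hidden state
            rng = np.random.default_rng(abs(hash(text)) % (2**32))
            return rng.standard_normal(DIM)

    class VisionExpert:
        def encode(self, image):
            # stand-in for image features: per-channel means, padded to DIM
            feats = image.mean(axis=(0, 1))
            return np.pad(feats, (0, DIM - feats.shape[0]))

    class SpatialExpert:
        def encode(self, points):
            # stand-in for a 3D module: point-cloud centroid + extent, padded to DIM
            feats = np.concatenate([points.mean(axis=0), np.ptp(points, axis=0)])
            return np.pad(feats, (0, DIM - feats.shape[0]))

    class SynthesisLayer:
        def __init__(self, n_experts):
            # one random linear map over the concatenated embeddings; in a real
            # system this would be a trained cross-modal model, possibly several deep
            self.w = np.random.default_rng(0).standard_normal((DIM, n_experts * DIM))

        def reason(self, embeddings):
            return self.w @ np.concatenate(embeddings)

    experts = [LanguageExpert(), VisionExpert(), SpatialExpert()]
    synth = SynthesisLayer(n_experts=len(experts))
    fused = synth.reason([
        experts[0].encode("a red cube on a table"),
        experts[1].encode(np.zeros((16, 16, 3))),
        experts[2].encode(np.random.default_rng(1).standard_normal((100, 3))),
    ])
    print(fused.shape)  # (8,) -- one joint representation the next layer up could reason over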



