Hacker News

"Thought experiment" is just a rebranding of "some shit I made up". Calling it an "experiment" belies the fact that an experiment involves collecting observations, and the only thing you're observing here is the speculation of your own brain. No part of your thought "experiment" is evidence for your opinion.

> They can learn to understand "Teacup as a Shape" completely independent of any texture, lighting, background, etc.

Or maybe they can't. I don't know, and neither do you, because neither of us has performed this experiment (an actual experiment, not a thought experiment). Until someone does, this is just nonsense you made up.

What we do know is that human infants aren't blank slates: they carry millions of years of evolutionary "training data" encoded in their DNA. So even if what you say happens to be true (and as I said, you don't know that it is), that wouldn't prove an AI can learn the same way. This is analogous to what we already do with AIs when we encode, for example, token processing directly in the code rather than trying to have the AI bootstrap itself up from raw bytestreams with no prior structure.

You could certainly encode more data about teacups this way to close some of the gap between synthetic and real-world data (e.g. telling the code to ignore color in favor of shape), but that is itself adding implicit data to the dataset: the claim that shape matters more than color when identifying teacups. And that same implicit data is just as useful against real-world data, so the same code trained on a real-world teacup dataset will still outperform the same code trained on a synthetic dataset when operating on real-world inputs.
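To make that concrete, here is a toy sketch (not any real system's code; the threshold and images are made up) of what "encoding shape over color in the code" looks like. The prior lives in the preprocessing function, not in any dataset:

```python
import numpy as np

def shape_only(image: np.ndarray, thresh: float = 0.5) -> np.ndarray:
    """Reduce an RGB image to a binary silhouette.

    This hard-codes the prior "shape matters more than color":
    any pixel bright enough in any channel counts as 'object',
    so two teacups differing only in hue become identical inputs
    to whatever model is trained downstream.
    """
    return (image.max(axis=-1) > thresh).astype(np.uint8)

# Two toy "teacups": same silhouette, different color.
red_cup = np.zeros((4, 4, 3)); red_cup[1:3, 1:3, 0] = 1.0
blue_cup = np.zeros((4, 4, 3)); blue_cup[1:3, 1:3, 2] = 1.0

# After the prior is applied, the model literally cannot see color.
assert np.array_equal(shape_only(red_cup), shape_only(blue_cup))
```

The point is only that the "shape > color" assumption is data smuggled in through code, and it helps regardless of whether the training images are synthetic or real.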

This isn't a thought experiment: it's basic information theory. A lossy function that samples its input is at most as accurate as that input.
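That bound can be illustrated with a toy example (the "scenes" and sampling step are made up for illustration): once a lossy sampling step maps two distinct inputs to the same observation, no downstream function, however clever, can recover the distinction.

```python
def lossy_sample(signal: list[float], step: int = 2) -> tuple[float, ...]:
    """Keep every `step`-th sample. Information in the skipped
    samples is destroyed, not merely hidden."""
    return tuple(signal[::step])

# Two distinct "scenes" that happen to agree at the sampled positions.
scene_a = [1.0, 0.0, 1.0, 0.0]
scene_b = [1.0, 9.0, 1.0, 9.0]

# They collapse to the same observation...
assert lossy_sample(scene_a) == lossy_sample(scene_b)

# ...so ANY function of that observation returns the same answer for
# both scenes. No downstream classifier can tell them apart.
def some_classifier(obs: tuple) -> int:
    return hash(obs) % 2

assert some_classifier(lossy_sample(scene_a)) == some_classifier(lossy_sample(scene_b))
```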

But no image AIs I know of work this way, because it would be a very limiting approach. The dream of AI isn't recognizing teacups; it is (in part) recognizing all sorts of objects in visual data, and color is important for recognizing some object categories.

Frankly, it's clear you lack the prerequisite background in information theory to have an opinion on this topic, so I would encourage you to admit you don't know rather than spread misinformation and embarrass yourself. If you want to know more, I'd look into Kolmogorov complexity and compression and how they relate to AI.
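For what it's worth, Kolmogorov complexity itself is uncomputable, but compressed size is the standard computable stand-in (an upper bound, up to a constant). A toy sketch of the idea, with made-up example data: highly regular data compresses to almost nothing, while varied data does not.

```python
import os
import zlib

def compressed_size(data: bytes) -> int:
    """zlib-compressed length: a crude, computable upper-bound
    proxy for the Kolmogorov complexity of `data`."""
    return len(zlib.compress(data, 9))

# A perfectly regular "synthetic" byte string...
synthetic = b"teacup" * 1000
# ...vs. incompressible bytes standing in for messy real-world variety.
varied = os.urandom(6000)

# The regular string has far lower (apparent) complexity.
assert compressed_size(synthetic) < compressed_size(varied)
```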

I won't be responding further because it's not worth my time to educate people who are confident that their random speculations are facts.




> "shit I made up".

Never read past that. I bet nobody else does either. lol. You're just desperate to be as offensive as possible without crossing the threshold where you'll get flagged.



