
That's not how AI works. It would distort it to match the examples it's been trained on.



Exactly. The AI shows an abstraction of an abstraction (to the AI it's all just bits and patterns), but the thing that's immediately fascinating about the painting is that it is echoing someone's in-the-flesh experience of a wild pig so long ago. The stuff about "humans' ability to express abstraction" is very much in second place for me.


the replies to this comment are some of the least thought-through takes you could ever come up with. do you think generative AI has not been trained on images of pigs?


which would be precisely the idea.

I think you may be struggling with the word "works". And "examples". And perhaps even "trained".

vague image + entire corpus of human imagery = ?


Or perhaps they are not struggling with the word "reconstruct".


It sounds to me like people dislike or are sceptical of generative AI, not that they're so expert in the area that they can give a valid opinion on whether this is possible or useful.

Part of the technique that diffusion models use is literally to take a vague image of something and then use what they have learned from training data to build it into something clearer. This whole comment chain is so confidently wrong it's unbelievable.
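For anyone curious what that looks like in practice, here is a minimal image-to-image sketch using the Hugging Face diffusers library. The checkpoint name, input file, prompt, and parameter values are illustrative assumptions, not a claim about what any restoration project actually uses; the point is only that the model fills in a vague input from its learned prior.

    # Minimal img2img sketch with the diffusers library (illustrative only).
    # A "vague" source image is partially noised, then denoised under a text
    # prompt, so missing detail is filled in from the model's learned prior.
    import torch
    from PIL import Image
    from diffusers import StableDiffusionImg2ImgPipeline

    pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5",  # assumed checkpoint; any SD img2img model works
        torch_dtype=torch.float16,
    ).to("cuda")

    # Hypothetical input file: a faded photo of the painting.
    init_image = Image.open("faded_pig_painting.jpg").convert("RGB").resize((512, 512))

    result = pipe(
        prompt="a wild pig, prehistoric cave painting style",
        image=init_image,
        strength=0.6,        # how much of the original is replaced by the learned prior
        guidance_scale=7.5,  # how strongly the prompt steers the denoising
    ).images[0]

    result.save("restored_guess.png")

Note the trade-off in the strength parameter: the higher it is, the clearer the output, and the more of that clarity comes from the training data rather than from the original image, which is the distinction the rest of this thread is arguing about.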


Making something clearer is totally different from reconstructing it.

If an artist's rendition of what it might have looked like would be useful, then genAI may be a cheaper way to produce one. But we wouldn't call an artist's rendition of what it might have looked like "reconstructing".

And you couldn't be much more wrong about how much I like and use generative AI.



