Nice idea, it might find some quirk of the face detector though, and exploit that, instead of producing a truly face-like image.
On the other hand, I vaguely recall a talk by VS Ramachandran where he claimed that in essence that's what Picasso and many others did also, i.e., you might see a painting that objectively doesn't strongly resemble a face, but it hits the right spots in your brain's face detector, so your brain says to itself "now that's what I call a face". A bit like how colorful candy triggers "ripe fruit - eat it!".
For the standard Haar-based Viola-Jones detector, all the parameters needed for it to say "yes" to a detection are known ahead of time if the detector is already trained.
The features consist of adding and subtracting sums over different rectangles within the image and comparing the result to a pre-defined threshold (this is called a "weak classifier" in this context). This is done for roughly 30-50 different weak classifiers, and if all of them pass, a face is declared there.
Therefore, it should be relatively easy to find a set of rectangles that would satisfy these conditions.
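To make the rectangle-sum-versus-threshold idea concrete, here is a minimal sketch of a Haar-like weak classifier. The integral-image trick is the standard one from Viola-Jones; the toy image, rectangle layout, and threshold are made up for illustration, not taken from any real trained cascade.

```python
def integral_image(img):
    """Cumulative sums so that any rectangle sum costs only four lookups."""
    h, w = len(img), len(img[0])
    ii = [[0] * (w + 1) for _ in range(h + 1)]
    for y in range(h):
        row = 0
        for x in range(w):
            row += img[y][x]
            ii[y + 1][x + 1] = ii[y][x + 1] + row
    return ii

def rect_sum(ii, x, y, w, h):
    """Sum of pixels in the rectangle at (x, y) with size (w, h)."""
    return ii[y + h][x + w] - ii[y][x + w] - ii[y + h][x] + ii[y][x]

def weak_classifier(ii, pos_rect, neg_rect, threshold):
    """Pass if (bright rectangle - dark rectangle) exceeds the threshold."""
    return rect_sum(ii, *pos_rect) - rect_sum(ii, *neg_rect) > threshold

# A trained detector chains ~30-50 of these; a window counts as a face
# only if every thresholded feature passes.
img = [[10, 10, 0, 0],
       [10, 10, 0, 0]]
ii = integral_image(img)
print(weak_classifier(ii, (0, 0, 2, 2), (2, 0, 2, 2), 5))  # True: left half brighter
```

Since every threshold is fixed once the detector is trained, finding an image that satisfies all the inequalities is a constraint-satisfaction problem rather than a search in the dark.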
My guess, from having worked with face detectors for quite a while, is that it wouldn't be nearly as cool as what painters do -- the false detections would either look mundanely face-like, or be completely off-the-wall when they happen to hit a pathological case.
"In this series of images, all pulled from a single stone, Picasso visually dissects the image of a bull to discover its essential presence through a progressive analysis of its form."
Really great project - it would almost have convinced me that computers can actually create art, if I didn't know there was a great coder behind it all :)
One interesting observation - I turned up the output resolution without changing the target fitness and started noticing that the (larger) results didn't look nearly as much like faces to me. At first I thought there must be a technical reason for this, until I realized that it was all psychological - the smaller images hold less information, and therefore my brain has to try harder to fill in the gaps, tricking me into believing that the low-res faces have more detail than they actually do. To prove this to myself, I scaled down the larger images and - sure enough - the (imagined) faces became much more believable.
I think these faces are quite beautiful and evocative. So far, I've gotten a sad old man, the west wind, a hound, a hag, and a young woman with flowing hair.
Yes, this is my favorite part - the emotion that can be projected onto these random mishmashes of triangles is astounding! I've gotten a stoic old American Indian chief, a scary demon baby, and spitting images of both Patrick McGoohan and Fat Marlon Brando. The author should really store copies on the server and add a gallery of some hand-picked favorites!
You could do it, but it's generated client-side, so I believe the server doesn't have direct access to that. (I guess I am asking: is there a communication layer I haven't noticed which transmits the "good enough" images back to the server?)
I feel like this is a very generalizable process: using randomization and a fitness metric to randomly generate and refine something. Seems like it should have a name and a few dozen theses written about it. Anyone know?
Also, this seems to be a very clear simulation of how we think evolution and natural selection work together. Fun.
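The generate-score-keep loop described above can be sketched in a few lines. This is the simplest evolutionary strategy (sometimes called stochastic hill climbing or a (1+1)-ES); the target string and fitness function here are made up purely for illustration, standing in for the project's triangle genome and face-detector score.

```python
import random

TARGET = "HELLO"
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"

def fitness(s):
    """Count positions matching the target (the project's analogue is
    the face detector's confidence)."""
    return sum(a == b for a, b in zip(s, TARGET))

def mutate(s):
    """Randomly change one character, like jittering one triangle."""
    i = random.randrange(len(s))
    return s[:i] + random.choice(ALPHABET) + s[i + 1:]

random.seed(0)
best = "".join(random.choice(ALPHABET) for _ in TARGET)
while fitness(best) < len(TARGET):
    candidate = mutate(best)
    if fitness(candidate) >= fitness(best):  # keep only if no worse
        best = candidate
print(best)  # converges to "HELLO"
```

Selection (keep the fitter variant) plus random mutation is exactly the natural-selection analogy: no single step "knows" the goal, yet the population drifts toward it.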
That one's pretty neat too, although in that context (specific target image) it seems like you could do it more directly than black-box optimization. The problem could be posed as something like: given N (overlappable) polygons, what is the way of coloring and arranging them that most closely matches the target image? That should be directly specifiable in a mathematical-optimization framework, and then you'd vary N to get different aesthetic properties.
With the face example, black-box optimization might be the only practical choice, though, since the face-detection component is probably not easy to express in a nice mathematical form.
Very nice project!
I was able to fork it to make the output directly usable by imagemagick. My goal was to generate a lot of images, then crop them to get only the "facial" part, and then stack them with overlay to obtain more human faces.
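For the crop-and-stack step, something along these lines should work with ImageMagick. The filenames and crop geometry are illustrative guesses, not taken from the fork itself; adjust them to your output size and where the face tends to land.

```shell
# crop the central region where the detected face usually sits
# (128x128+64+32 is an example geometry, not the project's actual values)
magick face_001.png -crop 128x128+64+32 +repage cropped_001.png

# average a batch of cropped faces into one composite image
magick cropped_*.png -evaluate-sequence mean composite.png
```

Averaging many near-faces tends to smooth out the triangle artifacts, which is presumably why the stacked results look more human.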
I wondered if the author really expected me to watch this for the rest of my life. Then I switched from my mobile phone to my notebook and noticed a "slight" speed-up.
It would be way more efficient to build a face with parameterized measurements (much like character-builders used in video games) and then randomly jitter the parameters.
I remember hearing in the commentary for The Incredibles that they used a single face model as the basis for everything. I imagine they did this to avoid having to rig individual models for animation every time they made a new one, plus they could do random faces for the minor characters.
Efficiency is an issue. An algorithm designed for face generation can produce thousands of faces in a few seconds, while this takes a lot of computer time to generate just one decent one. Even if you pre-render, you're still paying for all that compute time.
Super cool AI/CV mashup there - I've always loved watching GAs at work. We're thinking of coming up with a bunch of cool demos like this to describe different aspects of Computer Vision to those just starting to learn. For example, visualizing Principal Component Analysis or training an SVM for image classification.
A possible optimization worth trying with this project would be to generate just half the face and then mirror it before running the facial detection fitness function. Seeing as how your subject matter is generally symmetrical the algorithm as it is now is spending a lot of time just finding symmetrical images.
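A minimal sketch of that mirroring idea: evolve only the left half of each row, then reflect it before handing the full image to the fitness function. The function name and pixel-grid representation are illustrative; the actual project's genome is a list of triangles, so it would mirror triangle coordinates instead.

```python
def mirror_left_half(half):
    """Given the left half of each pixel row, return the full
    symmetric row by appending the reversed half."""
    return [row + row[::-1] for row in half]

# The search then operates on half as many free variables; the
# detector only ever sees the mirrored, fully symmetric image.
half = [[1, 2],
        [3, 4]]
print(mirror_left_half(half))  # [[1, 2, 2, 1], [3, 4, 4, 3]]
```

Halving the number of free parameters should roughly square-root the search space, at the cost of never finding the slightly asymmetric faces that real photographs have.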
It really takes very little time to reach the default target fitness, but anyway, I like the idea of searching for visual patterns, e.g. faces, in random data. Thanks for that.
Human brains have definitely evolved to allow face recognition and distinction from infancy. There are people with "face blindness" (prosopagnosia) who have difficulty distinguishing the characteristics of even their own face. I have heard someone with face blindness describe telling two faces apart as being like telling apart two similar river rocks.
It's trained pattern recognition. When you see a face from your own race, you've already seen many others like it, so you've trained yourself to see the details that distinguish people. When you see a face from a race you're not accustomed to, your pattern matcher says "hey, it's that _race_" and then stops, because it hasn't learned to distinguish details within that race.
An interesting study about this[1] shows that children exposed to other races (adopted and moved to another region) lose their initial training.