
You talk about the ML being reinforced over time by user actions, so you must have suffered from the 'cold start' problem in the beginning. How did you address this? Did you initialize it with your own expert-curated rules/weightings? Something else?

Have you done any automated mining of the elements in existing "good" company logos (e.g. Nike, Airbnb, etc.)? I could see doing something manual too, like the Music Genome Project or Netflix's internal curated movie-characteristics tagging. You'd go through these gold-standard logos assessing qualities like contrast, amount of whitespace, whether the brand name intersects the artwork, etc., and then augment your model with these style rules.
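
For example, a couple of those qualities (contrast, whitespace) could probably be pulled straight out of a rendered logo file. A rough sketch, assuming the logos exist as image files; the feature definitions and the 0.95 threshold are just illustrative:

    # Sketch: extract two of the style qualities mentioned above
    # (contrast, amount of whitespace) from a rendered logo image.
    from PIL import Image
    import numpy as np

    def style_features(path):
        # grayscale, scaled to [0, 1]
        img = np.asarray(Image.open(path).convert("L"), dtype=np.float32) / 255.0
        return {
            "contrast": float(img.std()),               # RMS contrast
            "whitespace": float((img > 0.95).mean()),   # fraction of near-white pixels
        }

    print(style_features("nike_logo.png"))  # hypothetical file name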

Any insights you'd be willing to share would be appreciated!




Yeah, the ML is still in its infancy, so we're almost in that cold-start phase right now. To address it, I built some extra randomness into the algorithm so the initial logo designs can just get the user thinking, and then we can get a better sense of their taste as they favorite logos. Without that randomness, the algorithm tries and fails horribly at creating logos.
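
Roughly, the "extra randomness early on" idea is an exploration/exploitation blend. A minimal sketch, not our actual code, with hypothetical names (candidates, score_logo) and a made-up decay schedule:

    import random

    def pick_logos(candidates, score_logo, n_favorites, k=12):
        # exploration rate decays as the user favorites more logos
        explore = max(0.1, 1.0 - n_favorites / 50.0)
        n_random = min(len(candidates), round(k * explore))
        picks = random.sample(candidates, n_random)
        # fill the rest with the model's top-scored candidates
        ranked = sorted(candidates, key=score_logo, reverse=True)
        picks += [c for c in ranked if c not in picks][: k - len(picks)]
        return picks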

I would love to analyze 'good' logos. I would imagine it's really hard to analyze actual pixels, though. We just analyze actions and pre-created objects (e.g. a font choice).


Analyzing pixels is something ML has gotten much better at recently :) Even if you have a very simple recommendation engine running behind the scenes, you can throw in some convolutions to turn your data into a feature space, much like word2vec does for words, which would give you a lot of power to generalize. Even plugging in a pre-trained network like VGG16 (instead of training one from scratch) could give you a great head start on this.
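
To make that concrete: run each rendered logo through an ImageNet-pretrained VGG16 with the classifier head removed and keep the pooled activations as an embedding, a bit like word2vec vectors for images. A minimal sketch with Keras, assuming the logos are exported as PNGs (file names here are made up):

    import numpy as np
    from tensorflow.keras.applications.vgg16 import VGG16, preprocess_input
    from tensorflow.keras.preprocessing import image

    # ImageNet weights, no classifier head, global average pooling -> 512-d vector
    model = VGG16(weights="imagenet", include_top=False, pooling="avg")

    def embed(path):
        img = image.load_img(path, target_size=(224, 224))
        x = preprocess_input(np.expand_dims(image.img_to_array(img), 0))
        return model.predict(x)[0]

    # cosine similarity between two logo embeddings
    a, b = embed("logo_a.png"), embed("logo_b.png")
    sim = float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))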


Actually, maybe not, if you have a large dataset that you are willing to share.



