There is a big gap before 2012. For example, there were projects featured at Cybernetic Serendipity Music 1968 [1], and Harold Cohen's AARON [2]. That said, I'm not sure they strictly belong to machine learning.
I’d love to see a proof of concept of a network that generates a Bootstrap front-end and connects it to a backend. Bonus points if the model applies attention to visualize its focus on the front-end markup as it builds the back-end.
Neural networks that accomplish demanding categorization tasks rely on multiple layers at varying precisions. It's entirely possible to seed a neural network with good names for elements (like a CSS file) and layouts. It's a neat idea, but dynamic pages would require more finesse than generating a static sheet: if you wanted instantaneous transitions, there would have to be some sort of state-caching for them.
In the short term, this approach will struggle to compete with WYSIWYG editors. But as soon as models can match them in output, they’ll improve much faster: WYSIWYG editors have a ton of code to maintain, while a model is comparatively simple to improve.
Ha, it reminds me of what Andrej Karpathy said: "Kaggle competitions need some kind of complexity/compute penalty. I imagine I must be at least the millionth person who has said this." It would be interesting to collaborate/compete on more creative tasks and have different metrics for success.
So true. Another reason to put constraints on Kaggle competitions is the production environment. How many winning models have actually been used in production? I suspect the number is near zero. High accuracy at high latency makes an ML/DL artefact unusable in production, because from the user's point of view speed is worth much more than the difference between 97% and 98% accuracy.
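To make that concrete, here's a toy sketch of the tradeoff (all numbers invented; assumes single-threaded serving where throughput is just the inverse of per-request latency):

```python
def useful_predictions_per_second(accuracy: float, latency_ms: float) -> float:
    """Correct answers delivered per second: accuracy times serving throughput."""
    throughput = 1000.0 / latency_ms  # requests/s for one worker
    return accuracy * throughput

# A lean model at 97% accuracy and 10 ms per request...
fast = useful_predictions_per_second(0.97, 10)    # ~97 useful predictions/s
# ...versus a heavy winning ensemble at 98% accuracy and 500 ms.
slow = useful_predictions_per_second(0.98, 500)   # ~1.96 useful predictions/s
```

By this (admittedly crude) metric, the 1-point accuracy gain costs you a ~50x drop in what users actually get served.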
[1] https://cyberneticserendipity.net/
[2] https://en.wikipedia.org/wiki/Harold_Cohen_(artist)