Non-Photorealistic Rendering Using a Painting Robot [pdf] (uni-konstanz.de)
74 points by lichtenberger on April 25, 2019 | 12 comments



I've tinkered with 'hobby-grade' acrylic painting with robots before [1], and this is a wonderful paper. Getting the visual feedback on brush strokes working well is impressive on its own. The line "Real paint strokes interact in a complex way that is very hard to simulate on a computer" really hits home.

Always nice to see art robotics progressing.

1: http://transistor-man.com/bluebot_revival.html


Charles "Chuck" Csuri at OSU worked on plotter-based painting. He was known as one of the very first computer graphics artists. I had the fortune of meeting him in the 1990s.

https://en.wikipedia.org/wiki/Charles_Csuri


There was a neat demo at the Science Museum in London in either the late '80s or early '90s: they had an industrial robot arm (wearing a beret, of course) hooked up to a video camera and some variety of edge detection, and it painted portraits of museum visitors in a sumi-e style, with a brush. I've still got mine from when I was 7 or so kicking around somewhere, and from what I recall it was a decent likeness.


Very cool. Has anyone tasked a painting robot with reproducing images generated by a neural network? I've seen plenty of both, but nothing so far that merges the two.


The problem is that current convolutional neural nets extract textures, which don't necessarily map to the brush strokes that would produce the same physical rendition. A new net would have to be trained on the mechanical process of painting.

I built a robo painter that captures brush stroke movements: https://www.instapainting.com/blog/research/2015/08/23/ai-pa...

With enough input data of that kind, a neural net could be trained to produce physical results.


However, how do you measure the quality of the painting accurately? I think Oliver Deussen mentioned something like that as future research in their first paper on e-David :-)


Painting through that motion-capture robot isn't exactly like painting with a brush in hand... I've tried creating prototypes with analog motion-capture arms that the artist can physically guide. The artist-guided version would be the source of truth used for training, and you can evaluate the results with cameras.


I hypothesize that a camera watching the robot paint and a conv net with the right loss function (the tuning of which would be the hard part) could do the trick.


By the way, great work... maybe you could learn from each other!? I guess they are very open. I studied at the University of Konstanz and saw their first version back in 2012, when I finished my master's on a completely different topic, I might add ;-)



What about DeepMind's SPIRAL? They didn't just prototype it in software but ran it on an actual robot arm, IIRC.


Or have painting robots reproduce images generated from a neural network, then feed those images back into the neural network. Rinse and repeat. I imagine the results would be similar to running a sentence through Google Translate over and over again.



