I have one question about this. I'm sure it's completely explainable and honest, but it comes across as suspicious.
In the image labelled "Illustration of the model’s deep learning framework architecture", the input face has a strange line drawn underneath the chin. It seems like an odd thing for a human sketcher to include, and it makes the person look like they have a double chin.
Yet in the output shown at the end of the pipeline, it appears as a shadow. I didn't go into the article suspicious, but this immediately made me wonder whether, for some of these sketches, a face-to-line-drawing network was used for some sort of reverse process (see the rough sketch below for what I mean).
The image does appear in a part of the article discussing their learning methods, though, so I'm probably missing something important. But given that they "are working to release their code", it doesn't do much for confidence.
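To be concrete about what I mean by a "reverse process": something as simple as running each training photo through an automatic photo-to-sketch step, even crude edge detection, to produce the "hand-drawn" inputs. This is purely a hypothetical illustration of the idea, not anything the paper describes; the file names and thresholds below are made up.

    import cv2

    # Hypothetical illustration only: turn a face photo into a sketch-like
    # line drawing with plain Canny edge detection. This is not the paper's
    # method, just the kind of "reverse" step I'm speculating about.
    photo = cv2.imread("face_photo.jpg", cv2.IMREAD_GRAYSCALE)
    blurred = cv2.GaussianBlur(photo, (5, 5), 0)
    edges = cv2.Canny(blurred, threshold1=50, threshold2=150)
    sketch = 255 - edges  # dark lines on a white background, like a pen sketch
    cv2.imwrite("face_sketch.png", sketch)

Run over a whole dataset, something like this would give you perfectly aligned "sketches" that a generator can invert almost trivially, which is the kind of shortcut that would explain the chin line turning into a shadow.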
Adding a line where you want there to be a shadow in the output seems like something you could learn from trial and error while messing with a model. It somewhat weakens the paper's accomplishment if the sketches aren't drawn by naive users, but it's a lot more defensible than generating the inputs as you suggest.
Agreed. It just looks a bit strange and doesn't help to instil confidence in the paper. My first guess would be that they've used some kind of reverse photo-to-sketch step somewhere in the learning process. As it's a preprint, hopefully comments like this will help them strengthen the paper and release their code!