This is just a confusing misuse of the word "relativistic", based as it is on _relative_ probability and having little to do with philosophical relativism and nothing to do with relativity in physics.
"Relative probability" is only correct for standard GAN, not for other GANs which don't estimate a probability.
My first choice of name was "Critic's difference", since it's literally C(x_r) - C(x_f), the difference in critic scores, but it felt really unclear; it doesn't give the reader any sense of what it is about. Relativism/Relativistic is better, since the point is that it doesn't really matter whether the data looks real; what matters is how realistic real data is relative to fake data (and vice versa). The frame of reference is important here.
It's not even a probability in the standard GAN, is it? Since you are taking the difference before the sigmoid, it can't really be interpreted as a probability until the range is clamped. "Critic's difference" or "Critical difference" would perhaps be a better term.
Since sigmoid(C(x_r)) = p(x_r is real) and sigmoid(C(x_f)) = p(x_f is real), the sigmoid of the difference expresses some probability that x_r looks more real than x_f, or vice versa (depending on whether it is C(x_r) - C(x_f) or C(x_f) - C(x_r)). I'm not sure whether there is a rigorous probabilistic interpretation of the difference, but it looks so simple that there may well be one. I couldn't find one in the paper.
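To make the distinction concrete, here's a minimal sketch in plain Python. The critic scores are made-up numbers, not from any real network; the point is just the difference between squashing each score separately versus squashing the difference:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical raw critic outputs (pre-sigmoid "realism" scores).
c_real = 2.0   # C(x_r)
c_fake = -1.0  # C(x_f)

# Applied individually, the sigmoid turns each score into p(x is real):
p_real = sigmoid(c_real)   # ≈ 0.88
p_fake = sigmoid(c_fake)   # ≈ 0.27

# The relativistic formulation instead takes the sigmoid of the
# *difference*, which reads as p(x_r is more realistic than x_f):
p_rel = sigmoid(c_real - c_fake)   # ≈ 0.95
```

Note that p_rel is not derivable from p_real and p_fake alone without going back through the logits, which is why the pre-sigmoid scores matter.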
Many of the words used by the machine learning community are gratuitously unrelated to what they mean in other fields. If you hear a ML person use a term and you don't know the ML meaning, you probably need to ask for clarification.
This is true of every field of science, and even every social science, I've known. The same terms also have different meanings in different fields, and the standard use of a term within a subfield can differ from its usage in another subfield. There are even variations in usage within communities of the same subfield (e.g. inside different ML communities).
I agree -- though I'm not sure how to make the right name for this technique. Maybe "relative-probability GAN" is the best.
I also wondered before clicking what a relativistic GAN would be: maybe as the activity of a neuron becomes larger and larger, it becomes harder and harder for it to continue? But that's already true of sigmoidal activation functions.
I almost doubt there exists a single noun in the English language that is not yet the name of some git package, platform, programming language, or text-editor.
It's from the same GAN though: C(x_r) and C(x_f) come from the same neural network. It measures how realistic real data is compared to fake data (and vice versa), as determined by C (the discriminator without its activation function).
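A sketch of how both scores flow through one critic, assuming the relativistic standard GAN losses as described above (the toy critic and sample values are placeholders, not the paper's architecture):

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def critic(x):
    # Stand-in for a neural network's pre-activation output;
    # a real critic would be a learned function of the image x.
    return 0.5 * x

c_r = critic(4.0)   # C(x_r): score for a real sample
c_f = critic(-2.0)  # C(x_f): score for a fake sample

# Relativistic losses: the discriminator pushes C(x_r) - C(x_f) up,
# the generator pushes C(x_f) - C(x_r) up -- both through the SAME critic.
d_loss = -math.log(sigmoid(c_r - c_f))
g_loss = -math.log(sigmoid(c_f - c_r))
```

The key design point is that there is one network C; "relative realism" falls out of comparing its outputs on real and fake inputs, not of training two separate discriminators.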
Thanks for correcting me! I commonly call a group of people "guys" even if they are women, this is something I need to break the habit of. Will strive to be gender neutral in the future.
Interesting, especially the performance on bigger images, and it looks like a low-effort modification of many standard GAN losses. Seriously, I want to give this a try right now. What do GAN researchers think about this paper?
It should also be appreciated that it comes with code and a short blog post.
I don't see any sense in attacking the author's credibility based on the choice of blogging infrastructure. The author wanted to convey a message and present his findings, and he achieved that. That should be the only thing that counts.
I guess the best argument (even if from a totally different field) against opinionated choices of web technology would be the website of Berkshire Hathaway [1]. They invest in highly sophisticated companies but still use a website that has always provided the service they demanded.