Hacker News
Jpeg2png: Silky smooth JPEG decoding – no more artifacts (2016) (github.com/victorvde)
187 points by todsacerdoti on July 10, 2020 | 109 comments



Knusperli is similar: https://github.com/google/knusperli

Instead of trying to smooth the entire image, it reduces blocking artifacts by finding plausible coefficients that reduce discontinuities at block boundaries (JPEGs are encoded as a series of 8x8 blocks). "jpeg2png gives best results for pictures that should never be saved as JPEG", but Knusperli works well on normal photographic content, since it only tries to remove blocking artifacts.

> A JPEG encoder quantizes DCT coefficients by rounding coefficients to the nearest multiple of the elements of the quantization matrix. For every coefficient, there is an interval of values that would round to the same multiple. A traditional decoder uses the center of this interval to reconstruct the image. Knusperli instead chooses the value in the interval that reduces discontinuities at block boundaries. The coefficients that Knusperli uses, would have rounded to the same values that are stored in the JPEG image.
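The interval logic in that quote is easy to sketch. A toy illustration (not Knusperli's actual code; the function and the example quantizer value are made up):

```python
def interval_for(stored_multiple: int, q: int) -> tuple:
    """All DCT coefficients that round to the same stored multiple of the
    quantizer q lie in an interval of width q centered on that multiple."""
    center = stored_multiple * q
    return (center - q / 2, center + q / 2)

# A traditional decoder reconstructs with the interval's center:
lo, hi = interval_for(stored_multiple=3, q=16)   # (40.0, 56.0)
center = (lo + hi) / 2                           # 48.0, the stored multiple itself

# Knusperli may instead pick any value in the interval that reduces block
# discontinuities; every such value quantizes back to the same JPEG data.
smoother = 41.0
assert lo <= smoother <= hi
assert round(smoother / 16) * 16 == 48
```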


> Instead of trying to smooth the entire image, it reduces blocking artifacts by finding plausible coefficients that reduce discontinuities at block boundaries

This is essentially the same thing, but with slightly different regularizers. In both cases, an image is found whose JPEG compression is identical to the given one, by selecting the "most regular" image among the large family of such images. Jpeg2png minimizes the total variation over the whole image, and Knusperli only along the boundaries of the blocks. The effect is quite similar in the end.
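To make the comparison concrete, total variation is just the sum of absolute differences between neighboring pixels; a minimal sketch (anisotropic TV on a tiny grayscale grid, illustrative only):

```python
def total_variation(img):
    """Sum of absolute differences between horizontally and vertically
    adjacent pixels of a 2D grayscale image (anisotropic TV)."""
    h, w = len(img), len(img[0])
    tv = 0
    for y in range(h):
        for x in range(w):
            if x + 1 < w:
                tv += abs(img[y][x + 1] - img[y][x])
            if y + 1 < h:
                tv += abs(img[y + 1][x] - img[y][x])
    return tv

# A flat region has zero TV; a sharp block edge contributes heavily.
assert total_variation([[5, 5], [5, 5]]) == 0
assert total_variation([[0, 9], [0, 9]]) == 18  # two horizontal jumps of 9
```

jpeg2png penalizes this sum over every pixel pair in the image, while Knusperli's regularizer only counts the pairs straddling 8x8 block boundaries.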


The effects are quite different, since reducing total variation also reduces texture, which is important to perceived quality. Here's a direct comparison-- note how knusperli retains slightly more detail instead of oversmoothing: https://mod.ifies.com/f/200710_lena_decode_comparison.png


Good point. Unfortunately, the Lena image is quite bad for comparison: there's almost no texture (except for the blue kerchief). I wonder what would happen with a heavily textured image; say, white noise. Would Knusperli smooth it only along the block boundaries and leave it alone elsewhere?


I wish people would stop using that image as an example. Not because of its content (what a stupid beat-up that was) but because the source file is of such terrible quality that it isn’t representative of the images we routinely handle today. And the extreme colour cast makes it useless for judging skin tones.


The jpeg2png output should have noise (grain) simulated back in.


Use the "-w 0.0" switch with jpeg2png; it gives slightly sharper results.

Also try with: https://github.com/ilyakurdyukov/jpeg-quantsmooth with "-q6" (default setting without luma-aware chroma upsampling)

My guess is that quantsmooth won't handle squares in the background well, but will be better at the edges (without excessive blurring).

Update: I tested it. The image is about 10% quality, and jpeg-quantsmooth doesn't handle quality that low. At 25% quality the result is fine; at 20% and below, it's not good.

Perhaps the synthesis of all three projects combined into one (taking best of each) could give a great result.

So then...

jpeg2png: overblurs, and is slow

knusperli: only deblocks; doesn't remove other artifacts

jpegqs: fails to deblock low frequencies (below 25% quality)


I was going to mention Knusperli as well. It's worth mentioning that the "no more artifacts" claim in the title depends heavily on what you understand to be an artifact. With modern, good quality JPEG encoders (e.g. mozjpeg), blocking only appears when the file is heavily compressed, or after a lot of generation loss. A decoder like this can't recover information that isn't there, obviously. So at more moderate levels of compression, where there's no noticeable blocking but the information inside the blocks is distorted or missing, it's not likely to help you a lot.

(As a citation, I can only offer the fact that I've frequently tried to clean up JPEG images I've found online, and Knusperli is usually my first step. It's never a sufficient step, though.)


"For pictures that have been reencoded to JPEG multiple times I recommend shred." (https://www.gnu.org/software/coreutils/manual/html_node/shre...)

That's the funniest thing I've read in a while.


That reminds me of a quote about chlorine trifluoride:

"...It can be kept in some of the ordinary structural metals—steel, copper, aluminum, etc.—because of the formation of a thin film of insoluble metal fluoride that protects the bulk of the metal ... If, however, this coat is melted or scrubbed off, and has no chance to reform, the operator is confronted with the problem of coping with a metal-fluorine fire. For dealing with this situation, I have always recommended a good pair of running shoes."


I've always wondered if there was something you really could do to neutralize a reaction that "really wants to happen" like that. Presuming you're in a well-equipped chem lab and have access to all sorts of non-mundane substances... maybe you could pour liquid helium on the fire? (It'd splatter from the Leidenfrost effect, but the flying Helium droplets wouldn't harm anything in any chemical sense, "merely" a physical "being very cold" sense. So, suit up against frostburn, and move away any glassware to ensure it isn't shattered by the opposite of unequal heating.)


ClF3 will react until there is nothing left to fluorinate. In practice, this means it'll react with most neutral elements, oxides, and chlorides. Certain oxides might act to poison the reaction, slowing it down by absorbing free radicals. Mainly, you wouldn't even want to be close enough to such a reaction to be able to douse it.


I don't know about chlorine trifloride, but that is one of the strategies used for reactive substances, like molten sodium: https://youtu.be/rAYW9n8i-C4?t=2m


Ah, Ignition! quotes are always great. Including test engineers in the list of things ClF3 is hypergolic with gets me every time


This is a poor man's version of the deblocking filter you find in recent video standards, but JPEG was too early to get it.

The advantage of those over this is that the encoder knows that the decoder is going to run the filter, and it can use that in its perceptual model to choose an encoding which looks most like the original after deblocking. Video codecs also use the deblocked version as reference frames. So you see these artifacts in mpeg-2 video and jpeg, but not more recent ones.


I'd say it's the newer formats that have poor man's deblocking, because they literally apply blur to blocky images instead of reconstructing an optimal gradient.

In newer codecs you can notice that sometimes blocks of 8 or 16 pixels get blurred with a 4px-wide blur, which is barely enough to cover up the edges, but not enough to mask square-ish shapes underneath.


Now we just need an adversarially trained neural network to fill in detail [1] and we have a 21st century JPEG decoder!

[1] Implemented for a custom compressor: https://hific.github.io/


And one day, a law enforcement officer will use it on a bad quality JPEG image and one of the innocent persons whose photos were used to train the NN will be arrested.


Nexus-6 just needs to get better at hiding from the law.


here's one that upscales extremely well:

https://github.com/TianZerL/Anime4KCPP


This looks like it would be super useful in making phone photography look better. I can’t seem to find it; is there a way to run this yourself?


This strikes me as conceptually similar to the problem of printing floating point numbers. A naive algorithm will convert the binary floating point to decimal and print that out, with potentially many trailing digits that are more or less garbage. The more human-friendly algorithms figure out the shortest/simplest decimal that would be rounded to the same binary floating point quantity, and prints that instead.
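Python's float repr is a handy demonstration of the human-friendly approach: since Python 3.1 it prints the shortest decimal string that round-trips to the same double.

```python
x = 0.1 + 0.2                 # not exactly 0.3 in binary floating point
print(repr(x))                # 0.30000000000000004

# The shortest string still round-trips to exactly the same double:
assert float(repr(x)) == x
assert repr(0.3) == '0.3'     # the double nearest 0.3 prints as just "0.3"
```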


You and the rest of this thread's participants may be interested to know that the problem was solved a while ago; you want the "Dragon4" algorithm from "How to Print Floating-Point Numbers Accurately" [0]. Indeed, the problem is difficult, and having implemented the algorithm, I can confirm that it has earned its reputation. Numeric algorithms are hard and the only reward is accuracy.

[0] https://lists.nongnu.org/archive/html/gcl-devel/2012-10/pdfk... or use your search engine



Yes, I've long wanted a floating point print function that understands that 3.999999999999 should be printed as "4".


My $.02: I’d prefer it to say 3.9999999... rather than 4, so students learn of the inaccuracy of floating point. If one wants rounded output, one should round before printing.


3.999... vs 4 is a bad example, because 4 is exactly representable, and if your calculation gives you a result of 3.999... [garbage digits omitted] then it means there was roundoff error along the way and your computed result is something that is strictly not equal to 4.

A better example would be something like 1/5, which for almost every purpose is better printed as 0.2 instead of 0.20000000298023223876953125. The latter is absurdly long and gives the misleading impression of precision far in excess of the ~7 decimal digits that single-precision floats are capable of representing.

The difference between 0.2 and that long string is due to a slightly different cause than a difference between 3.999... and 4: the latter is likely due to information loss during calculations and may be reduced and sometimes avoided entirely with careful ordering of calculations and using the right rounding modes. But 0.2 can never be exactly represented, even as the input to a chain of calculations. The loss of accuracy is an unavoidable first step of trying to put 0.2 into a binary FPU register.
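This is easy to demonstrate with Python's Decimal, which exposes the exact value a float literal actually stores (doubles here; the 0.20000000298... string above is the single-precision value):

```python
from decimal import Decimal

# The double nearest to 0.2, written out exactly:
print(Decimal(0.2))
# 0.200000000000000011102230246251565404236316680908203125

# Yet the shortest round-tripping representation is still "0.2":
assert repr(0.2) == '0.2'
assert float('0.2') == 0.2
```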

Students should learn about the pitfalls of floating point arithmetic. But preferably in a way that doesn't leave them with the impression that it is a non-deterministic process that always leaves you with trailing garbage that needs to be ignored.


I sort of like the idea of avoiding exactly representable floats (you typed 4 but maybe you get 4-ε) to remind programmers that errors creep into most expressions and compound, and tight error bounds require numerical error analysis.


I wish we had float hardware that tracked epsilon for you. Floats would be two values, with epsilon being updated on each operation.

Then a comparison would take epsilon into account instead of just a bit match.


Students can be taught using the standard library printf() or whatever.

It's not as easy as "just round", though, because you don't know where the digits start to repeat. A smart function would also print 4.1249999999999 as 4.125.

A fancy unicode version could even include the mathematical notation to print 3.333333333333 as 3.3 with a line over the trailing 3. Or 4.2525252525 as 4.25 with the bar over the 25. Might be asking too much of a simple print function though, this might end up being an extension of the halting problem.
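Incidentally, for a known rational this doesn't run into the halting problem: the repetend length is bounded by the denominator, so long division with remainder-cycle detection always terminates. A sketch (hypothetical helper for non-negative fractions; parentheses stand in for the overline):

```python
def repeating_decimal(num: int, den: int) -> str:
    """Render num/den in decimal, marking the repetend with (...)."""
    integer, rem = divmod(num, den)
    digits, seen = [], {}
    while rem and rem not in seen:
        seen[rem] = len(digits)          # remember where this remainder appeared
        d, rem = divmod(rem * 10, den)
        digits.append(str(d))
    if not rem:                          # terminating decimal
        return f"{integer}." + "".join(digits) if digits else str(integer)
    start = seen[rem]                    # cycle begins where the remainder recurred
    return f"{integer}." + "".join(digits[:start]) + "(" + "".join(digits[start:]) + ")"

assert repeating_decimal(10, 3) == "3.(3)"
assert repeating_decimal(421, 99) == "4.(25)"
assert repeating_decimal(33, 8) == "4.125"
```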


Sounds like you want a computer algebra system, not a floating point format algorithm.


I don't think I'm the only one. I was thinking as I wrote the response that such a system might have to include a whole lot more math than it looks at the surface, which is probably why it doesn't exist as a general solution.

But maybe there could be some heuristics that get it right 99% of the time (with a "don't use this library for serious number crunching" disclaimer).


I've used various HP scientific and graphing calculators that have functions to convert a floating point number to a symbolic representation of a rational number, or a fraction of pi. These functions are approximate, and their tolerance for how large the denominator of the fraction may get is determined by the current setting for how many decimal digits the display of numbers is rounded to. In practice, this works pretty well, and they implemented this capability long before they had a full CAS on any of their calculators.


Maybe. I'm having difficulty thinking of a case where you would want to know that a number was 1/2 pi (as opposed to 1.57...), but you wouldn't want to know about 3/19 pi.


It sounds like what you want to do is print the number with the smallest denominator (from the range of numbers represented by this bit pattern). A quick search turns up a plausible formula for this: https://math.stackexchange.com/questions/2494774/questions-c...
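Python's standard library already offers something in this spirit: Fraction.limit_denominator returns the closest fraction with a bounded denominator (via continued fractions rather than the linked formula):

```python
from fractions import Fraction

x = 1 / 3                       # stored as the nearest double, not exactly 1/3
exact = Fraction(x)             # huge numerator/denominator, straight from the bits
small = exact.limit_denominator(1000)

assert small == Fraction(1, 3)
assert Fraction(0.2).limit_denominator(1000) == Fraction(1, 5)
```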


But 3.9999999..... === 4. It is literally the same number.


Your floating-point calculator won't give you infinite trailing nines, but it may very well give you something like 3.99999976158 (approximately the largest float below exactly 4) or 3.99999999999999955591 (approximately the largest double below 4).
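The double-precision figure can be checked directly with math.nextafter (Python 3.9+); the single-precision neighbor would need struct or numpy:

```python
import math

below = math.nextafter(4.0, 0.0)     # largest double strictly below 4.0
assert below < 4.0
assert 4.0 - below == 2.0 ** -51     # one ulp in the [2, 4) binade
print(below)                         # a long run of nines just under 4
```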


"jpeg2png gives best results for pictures that should never be saved as JPEG. Examples are charts, logo's, and cartoon-style digital drawings."

Interesting value proposition.

There may be equivalents for other misused file types/formats. The obvious one is the inverse case of photos saved as GIFs, but I imagine audio and text (eg. PDF malapropisms?) are possible too.


I just tried it with a text heavy screenshot, a common scenario where the JPEG encoding produces a poor result with lots of speckle around each letter.

The result is very good. The speckling is effectively eliminated without smearing the letters.

JPEG Encoded, 25% quality: http://jubei.ceyah.org/overcompressed/crop-jpg.png

JPEG2PNG: http://jubei.ceyah.org/overcompressed/crop-converted.png


I find the text on the reference decoded image easier to read, despite the colorful speckles surrounding the words. JPEG2PNG doesn't smear the letters a lot, but does appear to slightly deform them. Consider the leading "a" in "add license, readme", whose upper half seems to be closer together after the JPEG2PNG run.

However, now I'm wondering whether my impression that the reference decoded text is easier to read is a form of overfitting. I have spent the past 20 years reading low-quality JPEG screenshots on the internet, maybe the perceptual centers of my brain have developed an unblocking algorithm?


That's a really interesting suggestion. I'm seeing something similar: the jpeg2png output looks cleaner and at first glance more pleasant to look at, but the reference seems both sharper and easier to read. It somehow feels easier to ignore the more obvious flaws in the reference, and being used to them sounds like a plausible explanation to my layman ear.


Why has the whole picture shifted? That can't be good.


They're both cropped out of a full screen grab.


I'd say the issue is the program here: a compressor could have heuristics to detect PNG-favored images vs. photos. I can't imagine we couldn't detect the easy cases reliably (since images usually fall clearly into one category or the other). Crop the image to 64x64, try PNG, and if the result is sufficiently small continue; otherwise convert to JPEG?


And then someone uploads this image to twitter/facebook, and it gets triply re-encoded and resized with the worst algorithms known to man :(


https://mobile.twitter.com/NolanOBrien/status/12045574989900... Twitter now doesn't cripple uploaded images, but tries to preserve all information except EXIF data.


Kudos to Twitter!

Now if only they stopped cropping images to rand()xrand() in timelines while pretending they employ advanced ML algorithms to do so :)


It's not random, though. They likely show the most contrasted area (maybe with some face detection).


That's what they claim. The result seems to be no better than rand() :)


This is really cool, but in other comments I'm reading that good JPEG encoders and decoders depend on knowing each other.

That is, you can design a better JPEG encoder for a known reference decoder (picking the encoding whose decoded version best matches the original), and you can likewise design a better JPEG decoder for a known reference encoder.

But now that there are both encoders and decoders trying to be more efficient and non-blocky respectively... it feels like a mess. Even more so because a decoder never knows which encoder was used, and an encoder never knows which decoder.

Is there any solution to this mess? Is there a best practice here? Since most JPEGs are probably viewed on the web, do all browsers share the same decoder, so that effort is best spent on the encoder? Or, even while encoder improvements are made specifically for the algorithms browsers use, can improvements to decoders still be made that remain improvements, instead of e.g. "overfitting" and actually making things worse?

This just feels like a bit of an "improvements race" on both sides which leaves me a little dizzy in trying to understand it...


The obvious improvement is to use a better format than JPEG. Maybe something that can include information about the encoding process so the decoder can use that instead of guessing. Or maybe even a non-lossy format so we can focus more on data compression and less on visual compression.


Interesting project, but its name sells it incredibly short.


For the first image I much prefer the jpeg decoded image to the de-noised one. The de-noised one just looks like it was sent through a low-pass/median filter and loses contrast, detail, and texture. If you flip between the full-sized images the jpeg2png looks like vaseline on the lens.

The naming is also odd. I don't imagine it would do well with text (the sort of image I associate with PNG) that was JPEG encoded and then decoded with it.


It says it is specifically targeted at images that should have been png in the first place.


The interesting thing is that advanced encoders might create JPEGs that produce odd results with this decoder.


Yes, this is definitely an interesting question. Earlier this year I implemented a version of trellis quantization for some compression experiments that I've been tinkering with in my spare time. My code considers more possibilities than just rounding to the nearest quantization step when it deems that the bit rate savings after entropy encoding may be worth the additional image loss (e.g., it might even completely zero a coefficient). That would violate this decoder's assumption that the original DCT coefficient must have been within a limited range around the quantization step.
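A toy version of that rate-distortion decision might look like the following (the cost model and bit estimates are invented for illustration and bear no resemblance to mozjpeg's actual trellis):

```python
def choose_level(coeff: float, q: int, lam: float = 1.0) -> int:
    """Pick a quantized level minimizing distortion + lam * rate, considering
    both the nearest level and zero (toy model, invented numbers)."""
    nearest = round(coeff / q)

    def bits(level: int) -> float:       # crude rate proxy: zeros are nearly free
        return 0.5 if level == 0 else 4.0 + abs(level)

    def cost(level: int) -> float:
        return (coeff - level * q) ** 2 + lam * bits(level)

    return min({nearest, 0}, key=cost)

# With a high enough rate weight, a small coefficient is zeroed even though
# plain rounding would keep it -- violating the decoder's interval assumption.
assert round(9 / 16) == 1
assert choose_level(9.0, 16, lam=20.0) == 0
```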

I know that mozjpeg [1] features trellis quantization for JPEG encoding. I wonder how this decoder would do with that?

[1] https://github.com/mozilla/mozjpeg


It goes fine with mozjpeg. I tested a little, didn't find any problems.


Surprised no one has tried to build a GAN to do this (remove JPEG artifacts).

Denoisers in ray tracing are quite a success, and this strikes me as quite similar.

In fact, now that I think of it, you probably don't even need a GAN, since you can create an effectively infinite training set for quasi-free.


I've seen these projects on github. I don't remember the names, only that waifu2x upscaler has an NN filter for JPEG artifacts.


It's long past time to retire Lena as the canonical test image in CG.


Seriously. People wonder why there aren't enough women in CS.

Things like the canonical test image being a sexy "come hither" look, coming from a Playboy centerfold, definitely don't help.

And it's not enough to say "well the rest of the centerfold isn't shown". Come on. We all know the source of the image.

And "tradition" is far less important than CS being a welcoming environment for all people.

Computer science is a profession. Let's use images that are appropriate for a professional setting.


As a woman in CS I couldn’t care less about Lena.

Let’s focus on real things, not on these phony issues!


Passing an indifference test, however, should hardly be the right bar to clear for a field of research.


My point is: people (often, males) like to speak in the name of all women when it comes to Lena.

It’s offensive, it keeps women out of CS, it’s a symbol of the patriarchy, etc...

As a woman, I am saying this is not the case, at least not for THIS woman: I’ve never felt excluded because of Lena, my CS studies were not impeded by Lena in any way, and I have actually used Lena’s image for a small project.

Of course I would accept someone saying that the image offends someone (better if this point is substantiated by evidence), but I absolutely reject people speaking in the name of all women and saying that Lena offends all women. This is not true!


Finding a person who is not offended does not mean an image does not contribute to a particular environment, or is not an example of it. The image need not be offensive to most, or even many, for that to be true.

Also, be careful not to attack a strawman: the issue is of course not just the image.

As a man I experienced plenty of unpleasant conversation about women, conversation which might be referred to as men-talk or some such. Also, I'm not competitive at all, while many men would identify that as manly behavior. One of the few women in my university program confided that this (usually unfounded) self-assuredness was very annoying and a huge turn-off, a feeling we both shared, but would have been regarded as an essential ingredient to participating in that program.

Of course, such issues are also present in the reverse. My wife had similar conversations but about men amongst some female colleagues. Fortunately, we've met enough people to not have to settle for such juvenile views/conversations. I can absolutely see how a woman would find it very difficult to be comfortable in such an environment however, when I didn't even feel part of that mildly macho culture.

Just because something is 'anti-women', does not mean it can only bother women. Our cultures associate many things to gender, which in my view is a leftover of centuries past and we would do well to remove that sort of association. Maybe then, a picture of Lena wouldn't represent a (sub)culture so accurately and be therefore an indicator of the problem.


I have my evidence that Lena is not harmful to women, at least to this woman. It’s one data point, but better than zero.

You are making a much stronger claim without evidence. How is Lena harmful? How many women did Lena drive out of CS?

Do you have some proof, some data, besides neo-Marxist crap such as intersectionalized misogynistic oppression?

It reminds me of that video that showed that college students are outraged by stereotypical Mexican Halloween costumes, but actual Mexicans are fine with it.

Are you fighting for women, or for your ego?


If you disqualify my experience but hold your own as a valid data point, I guess we are done.


I don't have any hard stats, but I'm going to try to talk about my experiences. I took a programming class in highschool - it was almost all boys at the time. This resembles my programming classes in college, and the professional environments I've worked in. Crucially, none of us back in highschool had ever even heard of Lena when we signed up for the class, and I doubt very many of the girls who decided not to take it had heard of Lena either.

It feels to me like there's some deeper reason for the gender disparity in tech than Lena. I don't have any issue with changing the picture whatsoever - I'm sure we can find some other picture to use as a baseline (big buck bunny?). But I would be willing to bet that it will do precisely nothing to change the fact that there aren't very many women in CS.

I can think of many other factors that seem more plausible to me. Me and my friends were into minecraft - at the time, you installed minecraft mods by overwriting files inside the minecraft.jar. If you wanted to set up a minecraft server, you were given some command-line program to run and you had to set up port-forwarding in your router. Just doing this stuff makes you more comfortable with computers, and makes the jump to "I want to start programming" seem much smaller than someone who has never stepped outside Chrome and MS Office. And PC gaming is much more popular among young men than young women, so this avenue of becoming comfortable with the computer is going to be much more accessible to men. Not to mention, if you like games, eventually you'll want to make one - I think every PC gamer has at least thought about installing Unreal and trying to make their dreams into reality. I think if we actually want to increase how many women are in CS (and nothing would make me happier), this is the kind of stuff we should be thinking about, not whether image processing programmers use a picture of an attractive woman with bare shoulders too often.


I don't think the argument is that stopping using Lena is a silver bullet. The argument is that the image reflects a particular culture, one that not only women, but women in particular, on average find uninviting. Personally I have similar feelings about the sexy calendars in some car workshops. It's just a bit too much irrelevant display of one's preferences.


[flagged]


Of course women can handle it. We're all adults.

But is it appropriate in context? Is it welcoming? If you have a choice of jobs, is it the kind of environment you prefer?

Or is it contributing to an image of "we're a bunch of immature boys who think it's funny to use a Playboy centerfold, har har, wait till you hear the sexist jokes we tell at lunch".

It's not the end of the world. But a welcoming and inclusive work environment is the sum of hundreds of little things. This is one of those things.

And come on -- the old line "what's the matter, can't you handle a joke?" is the oldest line in the book for "defending" sexist behavior. Expecting women to just "handle something" is not the right approach. We can be better than that.


> is it appropriate in context?

Depends on whether you would consider an image very similar to the many, many others that get compressed to JPEG to be "appropriate" (i.e. relevant, representative), or whether you are asking as a prude.

> Is it welcoming?

I can't see anything unwelcoming about that picture, at least not in any other way than many people's flattering profile pictures on social media. It's just a head of an attractive woman, shot by a professional photographer.


It is from a Playboy centrefold with full nudity, although that part is cropped out of the commonly used version


How is that relevant?


I guess the flip-side is that CS is so welcoming and inclusive that tech conferences will invite former sex workers and celebrate them. No slut-shaming from the techies!


"Booth babes" are not a mark of progress.


Parent comment was talking about Lena, the woman in the picture being discussed.


I was referring to Lena being invited to conferences like IS&T's[0] or the ICIP[1]. At photos of both events she is rather older and more modestly dressed (unlike booth babes).

[0] http://www.lenna.org/lenna_visit.html

[1] https://www.flickr.com/photos/icip2015/22077638933/


[flagged]


I don't think I'm the one being shrill here. ;)

Unfortunately, your type of attitude is precisely the problem. You deny a problem exists, attack anyone who suggests fixing it, and resort to questioning character and intelligence.

I'm sorry that this topic is making you angry. But sometimes it's important to hear other voices. In your case, instead of making a wager, why don't you simply talk to some of your female friends. You don't need to "poll" them, just have a friendly conversation. Don't ask them whether they can "handle" it, but what they would prefer.

Your eyes might be opened.

I'm talking from actual diverse workplace experiences with issues like this. You don't appear to be, or else I don't think you'd be saying the things you are.


A woman just commented that it's not a problem: https://news.ycombinator.com/item?id=23797658


Ah, tokenism at its finest.


Ah, so you’re suggesting we ignore the opinion of the only woman who showed up in this thread? Ironic, isn’t it?


Hi, I'm a woman too and you can read my opinion on the matter in a nearby comment.

https://news.ycombinator.com/item?id=23798157

I don't suggest that you take my opinion as the final word, either. It's a complex issue, and nuanced discussion is necessary and should not be vetoed by a single voice.

And to be sure, I largely agree with the woman in question: there are much bigger problems at hand. But as stated, the effort level required to discontinue using lena.jpg is zero.


My first programming job was for a media processing company, and we used the Lena image, and its origin was well known, and we were mostly young men, so we all found copies of the entire shoot, and they were occasionally involved in office pranks. None of us learned it from HN/Reddit because those weren’t things in 2000.

I cannot imagine it was a welcoming environment for women.


https://www.losinglena.com/

As a woman in tech, it's actually the discussions surrounding this proposal that I find most telling. The use of Lena is a waterline. Most women (my personal standpoint bias) probably find it inoffensive at face value, and vaguely grating when they learn that it's a centerfold. It's a reminder of a time when porn in the workplace was rather common; one component of a baseline of inescapable sexual harassment.

We've made progress, and sexual harassment is less socially acceptable now. Some of us would like to erase that waterline, because it's a reminder of an ugly past. But what of the people, mostly men, who cling furiously to using that image? They make me uncomfortable, because it's an indicator of where they stand on pervasive sexual harassment.

Unlike changing terms like master/slave etc, this is a zero-cost proposal (where those are very nearly zero themselves, but more pervasive). Replace the image with something that's got high contrast and color variation. I'd say you can even keep the old name so it won't impact your tooling.


I hope I can weigh in here as a woman (and I do not think my opinion carries any special weight). I don't mind the image, but I definitely do not cling to it - in fact, I wouldn't mind if it's gone at all, and I would treat as suspect someone who continually argues for its place in tech. Engineering is engineering, and porn is porn.

That doesn't stop the bizarre campaign linked in your post from being rather hyperbolic. The entire premise is that by removing this one image from common use, "millions of women" (their own phrasing in the trailer) will be empowered to pursue and feel welcomed in tech.

The presence of the image (some arguments can be put aside for a moment[0]) is a symptom, not (as far as I can tell) a cause. A campaign like this (and what methods it uses and to whom it is addressed is not clear) would serve mainly to give a false sense of victory over sexism in tech. Getting the image unused isn't a "small win"; I'd say it's completely detached from the battle. A total inversion of the problem, almost comically.

[0] Often people argue for things out of sheer principle, not caring much for the specifics of the matter. This is especially common, in my experience, in tech circles. However, there are interesting questions raised vis-a-vis the intersection of meaning, intention, and purpose. It is suspect to cling to 'original meanings' and intentions, and on the basis of that argument, some could well make the argument that the image is empowering as an inversion of traditional morality against sexual expression which still holds sway in conservative groups today. Just as a slave from the 19th c. would understand 'slave' in Git to refer to them, the Victorian puritan would consider the cropped Lena an abhorrent and obscene reference. They would be happy to see the terminology and image gone, but totally miss out on their situational context.


> Women can handle it in the same way they can handle a cloudy day

Going with that, then why create that cloudy day when it would take very little effort not to?

And what message does it send to actively defend creating those cloudy days?


>> a man not racing over to hold the door open.

It's telling that you think this is something that would even annoy most women, especially in a professional non-dating context. The women I know would feel creeped out if someone did that at work. Honestly, even in dating if you're "racing" to open the door it looks desperate. When done naturally some women like it and some think it's archaic.


I'm sorry you have to endure such an antagonistic social environment. Where I live (midwest US), it's considered polite for all people to hold the door open for all others, regardless of gender.

I'm a male and if I worked in a place where women sneered at me for holding the door for them, I guess I would start only holding the door for men.


Holding the door open for all genders is widely seen as polite and fine, for whoever arrives at the door first, man or woman. It's an optional courtesy.

Having a man "race over" to hold a door, and doing so because they're a woman where he wouldn't for another man, in a professional context, is creepy and weird.

Do you see the difference? What you're describing in the midwest is fine. But it's not what the parent was responding to. The "antagonistic social environment" with "women sneering" you're describing is a total straw-man of your own imagination.


OK, time for my pet theory on this.

It's not the picture itself that creates an obstacle. It's that emotionally-aware people want to go into a career where they like being around their co-workers.

So if they get the impression that many of the people in the field are completely tone-deaf about basic stuff, they start asking themselves questions like, "Since I have limited time on this planet, why would I want to spend it where a good percentage of the people around me are insufferable asshats?" And then they pick some other field. Not because they can't handle it. Because they know they can do better for themselves.


Fortunate for us that there is a similar race to apologize for erotic photographs in tech demos, as if it's not an obvious anachronism.

Really, defending something so clearly out-of-place is deeply suspicious. Maybe everyone can handle it but why are we being asked to?


[flagged]


Why is it never a sexy/revealing male photo?


It's not as common as Lena, but I've seen a lot of papers use a picture of Fabio


It is astonishing that we can't even agree that the photo is erotic.

Or that there's a meaningful difference between media you choose to consume for entertainment (which may be honest-to-goodness porn and that's _fine_) and irrelevant eroticism casually sprinkled in tech demos that don't explicitly involve sex.

I wish we could at least be real about the facts in evidence.


Why? It's like "lorem ipsum" at this point.


If lorem ipsum were softcore erotica...


What if I told you it actually is?

> Quis autem vel eum iure reprehenderit, qui in ea voluptate velit esse, quam nihil molestiae consequatur, vel illum, qui dolorem eum fugiat, quo voluptas nulla pariatur?

Translation: Who has any right to find fault with a man who chooses to enjoy a pleasure that has no annoying consequences, or one who avoids a pain that produces no resultant pleasure?

This text could very well be the first page of a softcore erotica (or even hardcore!). The rest of the text is not usually visible. But then again, neither is the rest of lena!


Then... you'd be wrong.

It's a philosophical argument against Epicureanism. Which has as much in common with softcore erotica as the phone book.

Not sure what your point is.


This is a really big stretch in interpretation but also in plausibility, since Latin is a dead language.

But if it were true, then you'd have it your way: it'd also be inappropriate in professional settings that don't naturally necessitate it.


> This is a really big stretch in interpretation

According to Wikipedia[0], "[t]he placeholder text is taken from parts of the first book's discourse on hedonism." The first book being the first book of Cicero's De finibus bonorum et malorum.

[0] https://en.wikipedia.org/wiki/De_finibus_bonorum_et_malorum


[flagged]


To be honest I'm sure it isn't erotica, just as the Holy Bible is not despite verses like Ezekiel 23:20. I must also confess I did find the revelation a little amusing. It certainly says something about men (or perhaps just the men of the '70s, from when both originate).


I knew other commenters would bash you for proving their point, but this is quite the best comeback I have seen in a long time.


I would humbly note that, by volume, almost all of what generic lossy image compression algorithms are applied to is in fact some form of erotica. A better compressor specifically in the domain of pornography—if widely applied—would probably do more to save total global Internet bandwidth usage than, say, better compression for YouTube videos would.


Fish have no concept of water in very much the same way as you not seeing the problem here.


How about you respect women in tech more and stop assuming they're so fragile that they need men to change an industry so they don't get their feelings hurt.

Women are people, they don't care.


Woman here, you are absolutely right, I couldn’t care less about Lena!


Came here to make the exact same comment, agree completely!


Lena and Big Buck Bunny for life!


Welp... I had no idea this was a centerfold photo and never gave it a second thought until now. I appreciate the heads up, op.

