BPG Image Comparison (xooyoozoo.github.io)
357 points by rayshan on Dec 16, 2014 | 123 comments



I think the big question is whether people can put BPG pictures online without getting sued.

edit

Why am I being downvoted for saying this? Do the people downvoting me realize that BPG is using HEVC which is patented by many entities?


Of course you can't use this compression technique without being sued. And that is the key reason software patents are really bad.

You may hear a lot about trolls and grossly invalid, obvious patents making millions in licensing and extortion by legal fees, and I agree those things are bad. They are very bad. But the real problem is the apparently valid patents.

Monopolizing a programming technique is a long-term barrier to invention that provides little or no incentive to create anything new compared to all the invention it prohibits. It's the same in any field where experimentation doesn't require billion dollar research teams and each new idea is built on thousands of others. No new work you do in video, audio, or image processing or encoding will be legal without a mountain of licensing agreements. Anything beyond common established software applications will be unlicensable and effectively outlawed. Google had to fight for years with a giant legal team to get permission to release WebM and you don't have that kind of power.


> It's the same in any field where experimentation doesn't require billion dollar research teams and each new idea is built on thousands of others.

So how little money do you believe has been spent creating MPEG, H.264, H.265, and all the other codecs? Do you think these were created by a handful of people in a garage over a weekend?

Meetings for these codecs drew more than 300 people, and those were just representatives of larger groups. This is research that does require billions of dollars and thousands of experts working for years. That's not going to happen as a charity or by weekend hackers. Linus isn't going to get frustrated at the size of video files and take a week off to come up with a better codec.

Software patents created these video codecs by funding the research and development, and patents aren't long-term barriers as many of the early formats have expired or are expiring now. You want to use MPEG-1? Go ahead. Or if you want to leverage hundreds of thousands of man-hours experts put into improving on it for the next generation then for a few more years you'll need to pay for it to cover those costs. That's exactly how patents are supposed to work.


> That's not going to happen as a charity or by weekend hackers.

To be honest that's exactly what happened with Vorbis (1 FOSS guy) and Opus (2 FOSS guys, 1 Skype guy).

The MPEG process is full of politics and massive overheads.


Those are audio codecs, not video codecs. Xiph worked on Theora for 7 years and never got close to H.264, and Daala is promising but HEVC is amazing right now.


Theora and Daala are both hamstrung by having to work around minefields of patents on processes essential to video processing.


Interestingly, the Vorbis project only really took off after mp3 started facing licensing issues. From http://en.wikipedia.org/wiki/Vorbis

> Intensive development began following a September 1998 letter from the Fraunhofer Society announcing plans to charge licensing fees for the MP3 audio format.

Who says patents don't promote innovation? :-)

(I'm only half-joking - invention forced by working around patents has long been a post-facto justification of patents.)

Also, nit, it wasn't all done by just "1 FOSS guy". From the same wikipedia link:

> Chris Montgomery began work on the project and was assisted by a growing number of other developers.


In my case, a substantial amount of my work on Opus was done literally from my basement (sorry, I didn't have a garage).


> Meetings for these codecs drew more than 300 people, and those were just representatives of larger groups...

That's easy to do when your goal is to have as many different people and patents involved as possible, rather than simply develop an efficient codec using an efficient process.


To be fair, a lot of the research that led to these codecs was done by publicly funded research institutions.


Does spending big money and organizing meetings with many people give you the right to prevent others from inventing their own codecs? Does Linus sue other OS makers because they make OS kernels too and therefore must (by definition) infringe on his rights?

And according to my math, I should already be able to use MPEG-2 (publication date 1996), AC3 (1995) or MP3 (1992), with MPEG-4 ASP (1999) getting there soon.


> Does spending big money and organizing meetings with many people give you the right to prevent others from inventing their own codecs?

No, making a novel and non-obvious contribution to the arts gives you the right to apply for a patent on your invention, which then gives you the right to prevent others from using that specific invention. Spending big money and organizing meetings may or may not be involved.

Nothing prevents others from inventing their own codecs.

Now commercializing those codecs may be another matter entirely, probably requiring resolving licensing issues depending on how much they rely on other patented methods. Ostensibly, pools like MPEG-LA exist to make this easier.


> Now commercializing those codecs may be another matter entirely, probably requiring resolving licensing issues depending on how much they rely on other patented methods. Ostensibly, pools like MPEG-LA exist to make this easier.

That would be fine. However, confusing the issue, claiming infringement by these new codecs in general without providing any proof, and otherwise trying to destroy the adoption of these new codecs is not OK. MPEG-LA and other rights-holders conspiring to damage the uptake of alternative codecs did cross this line.


> That's not going to happen as a charity or by weekend hackers. Linus isn't going to get frustrated at the size of video files and take a week off to come up with a better codec.

That's kind of what he did with Git.


Distributing HEVC content is royalty-free, even commercially[1] - however, royalties come into play with distributing encoders and decoders, and it's the latter that could cause problems (similar to the case of Firefox and H.264 decoding support).

[1] http://www.mpegla.com/main/programs/HEVC/Documents/HEVCweb.p... - page 7, HEVC License Terms


How could distribution of the encoded images be infringing a patent? It's the distribution and possibly use of the encoders and decoders that would be within the scope of patent law, not the output itself. The encoded image isn't the invention.

If I own a patented 3d printer, would you be worried that those patents could restrict my ability to distribute the objects printed by it?

The javascript decoder would certainly be covered by the patents as much as any other decoder, so it doesn't solve the patent-related barriers to adoption even if it takes care of the practical issues with using the format.


Err, because MPEG-LA and the other patent holders that didn't join the pool know that Fabrice Bellard didn't license the patents from them, so you used an unlicensed encoder.

If you stick that polyfill on your website, they can also sue you for distributing the decoder to everyone who downloaded the javascript file.


IANAL, and this might be a simple case of letter vs. spirit, but my understanding was that shipping the decoder's source is allowed, so with a JS implementation you're theoretically safe. This was one of the arguments around Broadway, the JS H.264 decoder.


I'd argue there's a difference between shipping source code, and actually embedding it in your page via a <script> tag. Once you embed the script into your page for execution, you're building a derivative work -- not just publishing technical details.

Otherwise JS would become a universal patent loophole, which I'm pretty sure a judge would find fault with.

(IANAL also.)


The usual technique is to claim the encoding technique, the decoding technique, and the data structures expressed by and expressing the technique. There are also equivalents and the doctrine of induced infringement.

The patent courts and the ITC have looked very unfavorably on First Amendment claims, on defenses based on lack of infringement within the USA, and on arguments that technologies with alternative non-infringing uses shouldn't be liable.

The Supreme Court is less extremist than patent courts on insisting that patents pre-empt the Constitution, but the Supreme Court takes very few cases and you'd already have spent millions to get there and -- most importantly -- the patent courts largely don't consider Supreme Court cases to be necessarily binding precedent next time around.


Not sure what the score looks like for your post, but this is certainly the big question for me too, considering that one of the primary factors in the terrible stagnation of widely used media formats (which has consequences as terrible as people commonly and unironically using GIFs -- fuckin GIFs!!! -- to share "videos") has been serious patent concerns associated with most new contenders.


From the original link for the BPG format http://bellard.org/bpg/:

Some of the HEVC algorithms may be protected by patents in some countries (read the FFmpeg Patent Mini-FAQ [https://www.ffmpeg.org/legal.html] for more information). Most devices already include or will include hardware HEVC support, so we suggest to use it if patents are an issue.

The mini-FAQ contains some relevant information.


I wasn't aware, but BPG is a Fabrice Bellard[1] project.

Bellard is possibly the best example of the 100x programmer that I'm aware of.

[1] http://blog.smartbear.com/careers/fabrice-bellard-portrait-o...

[2] A subset of things he started: LZEXE, FFMpeg, QEMU. He's also held the record for the calculation of the largest known prime and the most digits of Pi, wrote the first x86 emulator in JS that could boot Linux, won the International Obfuscated C Code Contest, etc., etc.


In all fairness, there are tons of programmers out there doing an incredible job, wrapped up in company IP or some patented tech. Their name is often overshadowed by the name of a startup, or diluted among other devs who contributed about 5% of his/her genius.


True.

Oh. Wait...

How many companies can point to software as impactful as FFMpeg and QEMU? (Don't forget that most Linux based non-VMWare virtualization solutions use QEMU at least in part).


> How many companies can point to software as impactful as FFMpeg and QEMU?

Microsoft, Apple, Google, Amazon, Facebook, etc...

> How many individuals can point to software as impactful as FFMpeg and QEMU?

Tougher question.


> Microsoft, Apple, Google, Amazon, Facebook, etc...

I almost put this in myself, but I assumed that people would fill in the blank themselves and realize that they are comparing the output of one person to that of AN ENTIRE FORTUNE 500 COMPANY. Judging by the downvotes I guess I should have spelt it out. Mea Culpa.


This is absurd.

"Newton is the best example of the 100x scientist."

"Christopher Lee is the best example of the 100x actor."

"John Paul II is the best example of the 100x pope."

"Van Gogh is the best example of the 100x painter."

Saying 100x assumes an expected level of output. None of those people did what was expected of them; they did what they loved.


It's not expected, it's average. Thousands or millions of painters do what they love and never become Van Gogh.


And thousands become better than Van Gogh, but the surrounding culture has changed and their work matters far less.


I'm concerned that highlighting his accomplishments every time one of his projects comes up gives the wrong impression to the newer programmers reading the site, or even to experienced programmers.

You, reader, can be a Bellard. There's nothing "special" about what Bellard has done, in the sense of it being beyond your abilities. You just have to believe in yourself, along with having a willingness to work hard most days. But working hard is easy when you find an interesting problem.

Bellard is able to do so much because knowledge is like compound interest: The more you know, the more you can learn. Bellard has been saving up for a long time.

Competition is sometimes motivational, and if you insist on looking at it like a competition, then realize that every day Bellard devotes to relaxation is a day you can catch up to where he's at. Competition is what motivated me, when I first started. And once you realize there's nothing magical about what they know, you realize you can sprint very hard towards where they're at, which is exciting.

But once you start down that path, competitive spirit tends to transcend into something else entirely. You stop comparing yourself to others. You focus on putting one foot in front of the other, taking the next logical step towards your goal. Repeat for N days, and suddenly people are highlighting the projects you've done.

But what's your goal? Well, that's up to you.

So pursue your interests! You can do it.


I'd love to believe this, but it's my day job. By the time I've poured my brain 9–5 into what my employer wants, I just feel too drained to do what I want.

It's a vicious cycle of desire and fatigue.


Been there. It sucks. There's no easy way around that.

What ultimately worked for me was to capture a question. Find a question that fascinates you.

The ability to do that, for me, came down to resisting the urge to come home and zone out in front of Netflix. It's tempting after an 8-10 hour day.


> too drained

Drained, or scared?


I'm pretty sure there is no need to troll this comment.

Even if your point was logical, it would be debatable.


I believe the comment was meant to motivate, or provoke a bit of self-reflection; not for trolling.

Many people are unwilling, or can't, or look for excuses not to take the first step, because fear[1][2] might be (secretly) hindering them.

I was offered a great job and I was afraid/unwilling to take it because of the responsibility involved (even though I would've loved the job), and I was afraid I wouldn't be up to the task or my own expectations (now I realise I would've been more than capable).

Although my issue was a consequence of a much bigger emotional problem, I wouldn't ever have resolved it if it weren't for a tactless and stubborn friend who forced me into helping myself.

Just my two cents.

--

[1]: Atychiphobia

[2]: or Jonah's complex


Yes. I realize the comment was short and blunt, but the intent behind it was simply to question motivations behind "working for the man" and "doing what brings you joy". I encourage everyone to consider the risks and fears associated with building something new and revolutionary. Sometimes that takes digging deep and facing your fears!


You slipped in the blank slate fallacy here: the belief that people are born exactly equal, that there is zero genetic component to IQ and programming capabilities, and that all that is needed is "working hard" and "believing in yourself".

This is far from universally accepted.


There is nothing incompatible between telling people to work as hard as possible and make the most of themselves, and the idea that some people are more gifted than others.

"blank slate fallacy" ... Steven Pinker probably has to pay a marketing person a quarter every time someone says that. I should construct a straw man, position myself as a iconoclast against that straw man, and then rake in the TED/book deal $$$ instead of working hard like a chump.


> You, reader, can be a Bellard.

In the sense that you, reader, can write ffmpeg, that's not really true.

In the sense that you, reader, can achieve your maximum potential in any given field if you commit yourself wholeheartedly to doing so, then yes, it is.


> In the sense that you, reader, can write ffmpeg, that's not really true.

Of course you can write ffmpeg. What's so special about ffmpeg? Research how it works, then write it yourself.

Maybe it seems pointless to do that since ffmpeg is already written, but it's not. You'll learn an amazing amount from the experience.

Putting "mythical 100x programmers" onto a pedestal is mistaken.


Some people can dunk a basketball with no training [1]. Some people can dunk a basketball with a little training. Some people can dunk a basketball if they make it the primary focus of their lives and train hard towards that goal. And some people will never be able to dunk a basketball no matter how hard they try [2].

But one thing is certain, if you train hard at basketball, you will get better at it.

[1] http://en.wikipedia.org/wiki/Manute_Bol

[2] http://en.wikipedia.org/wiki/Stephen_Hawking


Or... you can invest the most precious possession you have, that is, your remaining time on this planet, in having a life ;) That doesn't go against achieving the full potential mentioned above; it's just that narrowing your efforts and energy to what is a day job for most people here... let's say most people have other priorities in life.


I agree. The classic talk on this subject is Richard Hamming's "You and Your Research." [0] I think there may be video online.

[0] http://www.cs.virginia.edu/~robins/YouAndYourResearch.html


But I can't be a Bellard and keep up w/ HN, reddit, facebook and twitter.


Heh, I can tell you for a fact that Fabrice hangs out on forums as well :) His personal fave is still Slashdot...


I liked this message better when it was said by Rhonda Byrne.


Compare the original vs BPG in the soccer picture, then look at the green strip on the shirt of the red team player and slide over it. It's magical how perfectly the compression makes the red dots disappear.


Another great one is the Irish Mannequin. I thought BPG was blowing up the edges, because every other format smoothed them out. But no, when you compare to the original, all those interesting edges (hair, hanging fabric, top of the lips) are supposed to be very visible.

I really like the results!


Yes, it seems like BPG is smarter in finding interesting parts of the image.


If you look at the Air Force Academy Chapel, the horizontal window bracing is almost completely eliminated when comparing BPG to the original.


JP2K-Medium seems to preserve a semblance of the dots.


Yep, I really liked the look/size tradeoff of JPEG 2000. It's too bad it hasn't caught on, particularly for cameras as an alternative to JPEG.


That looks like aggressive chroma subsampling/quantization.

Image encoders must allocate the bitrate among the channels: luma (Y) and chroma (Cb, Cr). Most of the time it is better to allocate most of the bitrate to the luma, because that's what the eye is most sensitive to. However, it can create artifacts like this.
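For illustration, here's a minimal numpy sketch of what 4:2:0 chroma subsampling does; the function names and the crude nearest-neighbour upsampling are my own simplifications, not anything BPG or WebP actually specifies:

  import numpy as np

  def subsample_420(ycbcr):
      """Keep luma (Y) at full resolution; average chroma (Cb, Cr) over 2x2 blocks.

      ycbcr: float array of shape (H, W, 3), with H and W even.
      Returns (y, cb_small, cr_small); the chroma planes are H/2 x W/2.
      """
      y, cb, cr = ycbcr[:, :, 0], ycbcr[:, :, 1], ycbcr[:, :, 2]

      def pool(c):
          # Average each 2x2 block: this is where small saturated details
          # (like red dots on a green strip) get smeared away.
          h, w = c.shape
          return c.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

      return y, pool(cb), pool(cr)

  def upsample_nearest(c_small):
      """Crude nearest-neighbour upsampling back to full resolution (what a viewer sees)."""
      return np.repeat(np.repeat(c_small, 2, axis=0), 2, axis=1)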


The issue I had was with how the cheeks lost their colour, when compared with JPEG.


This conversation showed up a couple of weeks ago...

Any new "image compression format" isn't going to gain widespread traction unless is can displace JPEG in terms of hardware support and ubiquity in cameras.

I was part of the JPEG-2000 effort. The company I was with at the time had technology around how blocks were encoded (it gave up block independence for a 20+% gain in encoding efficiency -- the downside would be an intermediary step to reconstruct the JPEG stream).

None of those solutions took off.

Explain to me how, beyond a hobbyist market, BPG will make a splash.


It’ll make a splash if (0) the legal issues are sufficiently sorted out so that adopters don’t feel threatened using the format (1) the polyfill works everywhere, as it seems to, without too substantial a slowdown, (2) it saves enough bandwidth that major websites start sending BPG instead of JPG to at least some web clients, (3) one or more major browsers decide to build in native support, and maybe an extra boost (4) it manages to support use cases that other formats don’t handle well (e.g. photographic/naturalistic images with full alpha channel) and carves out a niche as the main format for those particular types of images.

I can imagine Safari and IE building support for this format entirely as a counter to Google’s WebP, since they already need support for HEVC as a matter of course.

Many current websites send very heavily compressed JPEG images to save bandwidth, and just live with the big hit to image quality. If they had a similar-filesize alternative that they could use for some or all clients that didn’t degrade image quality, and still was guaranteed to work smoothly, it’s plausible they might adopt it.


Who is going to re-encode their images en masse? Content companies? Photo sites?


Facebook. Which is probably a large percentage of the images going over the web. Heck, they could start by just doing it on mobile.


Nobody will re-encode anything on photo sites; just newer content will be supported. As for FB, their internal re-compression of JPEGs is annoying as hell. Fine details taken with a D750 become blocky, almost cell-phone-quality images. Solution? Upload PNG.


It doesn’t need to be done by anyone en masse. It can easily be phased in gradually.


JPEG-2000 is an actual standard. I forget how well supported it is. That said, not many images on the web show up so encoded.

I don't think a format requiring JS to decode and negating the whole parallel download/decode thing has a chance regardless of the elegance.

It's not an imaging format discussion in this case.


JPEG 2000 is an actual standard but to get it working you need to license a decoder from someone or build a new one, you can’t just piggyback on some existing HEVC decoder. JPEG 2000 doesn’t currently work in every browser, and there have been at least a couple of security vulnerabilities in the decoder (Kakadu I think?) that OS X ships by default and enables for Safari. In many cases where there’s no native browser or direct plugin support you can get it working with a slow, buggy, resource-hogging Flash implementation, but I’ve had pretty poor experience as a user on sites that incorporate one. I don’t believe there are any pure-Javascript implementations.

As far as I can tell, this new format already has a better client compatibility story than JPEG 2000 right off the bat, as well as better quality for the same file sizes.



> Any new "image compression format" isn't going to gain widespread traction unless is can displace JPEG in terms of hardware support and ubiquity in cameras.

I believe part of the point of BPG is that it's an HEVC subset. If cameras can shoot HEVC, they have BPG hardware support.


If an American company incorporated Bellard's reference decoder into a commercial product that is distributed in binary form to clients, would that company need to license the HEVC patents to avoid being sued by an HEVC patent holder?


Almost certainly yes; you would need to license the many patents covering H.265 and H.264.

However if you look at the history of JPEG and MP3, which were also encumbered with patents, the public domain basically won due to the sheer number of violations being too large to take down or even force licensing.

It will be interesting to see if any product (i.e. a browser) actually puts any of these algorithms in its codebase.


Yes. Part of the idea seems to mostly be that if you have HEVC support via the OS or licensing, you get BPG support without any additional legal hassles.


Jaws dropped. Maybe having the option of adding 1-2% of luminance noise after decompression to disguise the plastic feel would make it look even better. It would look a bit different every time you load the image, though.

I would love to be able to use this today.


It doesn't necessarily have to look different: use a PRNG with a fixed seed, possibly stored as part of the image itself. Since there's nothing crypto-relevant here, the PRNG can be a simple one and its exact parameters specified as part of the standard.
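To make the idea concrete, a minimal sketch of seeded post-decode grain; numpy's seeded generator stands in for whatever simple PRNG the format would actually specify, and the seed/strength parameters are hypothetical fields a file could carry:

  import numpy as np

  def add_grain(luma, seed, strength=0.015):
      """Add ~1-2% deterministic luminance noise after decoding.

      luma:     uint8 array (the decoded Y plane)
      seed:     value carried in the file, so every decode looks identical
      strength: noise amplitude as a fraction of full scale (0.015 = 1.5%)
      """
      rng = np.random.default_rng(seed)  # stand-in for the spec'd simple PRNG
      noise = rng.uniform(-1.0, 1.0, size=luma.shape) * strength * 255.0
      return np.clip(luma.astype(np.float64) + noise, 0, 255).astype(np.uint8)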


The x265 encoder might not have Psy-RDO enabled -- http://x264dev.multimedia.cx/archives/37

It penalizes "loss of detail" like that, biasing towards (inaccurate) noise over (accurate on average) absolute pixel values (PSNR).


I find BPG isn't nearly as plastic as WebP in this comparison.


BPG does a much better job at smaller sizes: its artifacts are way less jarring than JPEG’s (they just look like smudges).

However, to my eye, Large is the only acceptable quality level on these examples, and at Large the difference between JPEG and BPG looks almost imperceptible. Considering the cost of polyfilling, I would stick with JPEG.


> However, to my eye, Large is the only acceptable quality level on these examples, and at Large the difference between JPEG and BPG looks almost imperceptible.

I feel we must be looking at different examples, or must have very different definitions of “imperceptible”. I find that there is a very substantial difference in high frequency detail between mozjpeg and BPG in nearly all of these examples at the 'large' size, as well as many images where mozjpeg produces highly objectionable artifacts even at the 'large' size and BPG does not.

Speaking for myself, I would prefer to use a higher quality setting than any of the ones demonstrated even with BPG for most of my personal purposes. For my typical use cases, the 'large' BPG versions of most of these images are at the edge of acceptable quality, and every one of the other compressed versions falls far short of acceptable.†

Many other people have use cases where precise image replication isn’t as important though, and at every size BPG seems like a dramatic step up over the competition.

One thing I hope gets worked out before this goes mainstream is color management / color profile support. In Safari on my Mac there are several images where the rendered BPG image is obviously not correctly applying the image’s color profile.

† Kakadu’s JPEG2000 encoder and WebP at that 'large' size also seem noticeably better than mozjpeg, but they still seem to wipe more high frequency detail and produce more artifacts – especially edge artifacts – than BPG. On some images packed with very high contrast fine detail (e.g. Vallée de Colca), WebP performance and BPG performance seems pretty similar and a bit better than Kakadu; in general compression is hard in these cases so the “large” size ends up being pretty large.


I have to strongly disagree. For several of the images there are artifacts and noise in the large JPEG that looked much better in a medium BPG (which are about 60% the size of the JPEG).

Also, saying "Large is the only acceptable quality level," means that you are only considering only a very narrow use-case


When I clicked it I thought this was supposed to be a test damning BPG somehow, but I get the impression from other comments that I am NOT supposed to see this: http://i.imgur.com/XDXOB2w.jpg

All the other image formats work fine.

Any idea why? Windows Chrome 41.0.2243.0 dev-m (64-bit)


I see it too. Identical version: 41.0.2243.0 dev-m (64-bit) and also on Windows. I see it here as well: http://bellard.org/bpg/lena.html which explains "The BPG images are decoded in your browser with a small Javascript decoder." You can find more on the format at: http://bellard.org/bpg/

Someone filed a bug today: https://code.google.com/p/chromium/issues/detail?id=442599

Fixed in Canary 5 days ago: https://code.google.com/p/chromium/issues/detail?id=439743


OT ... but I find Canary to be much more usable and stable than Dev.


I get the same bug, and I find it pretty attractive, really, from a glitch art standpoint... Too bad it's apparently already fixed in Chrome :)


The "Production" picture [0] is quite challenging for all encoders. The original has fine details like text that are smoothed out of existence even in the "Large" profiles - though to its credit BPG preserves more of it than the others. (Also, relative to other high-detail pictures this one has an unusually low filesize.)

[0] http://xooyoozoo.github.io/yolo-octo-bugfixes/#production&bp...



Maybe Bellard is reading these comments. I made a very unpopular remark about his use of Lena.jpg, and while I believe it's mostly a coincidence, he used an image set without Lena.jpg.

If you're reading, Bellard, hi!


Hmmm. While I understand you may not like the source (for the young'ns: Playboy), it's a standard image that has been used to compare compression methods for 40+ years, so it serves a specific purpose.


I understand your point, but I believe some traditions must be put to rest at some point as we grow wiser. On the other hand, Wikipedia says the Lena image was remastered in 2013, so it may lose its historical significance if the remastered version is used from now on. (It's one of those moments where making something better actually makes it worse.)


I can't really think of any benefit outweighing the negative message, unless you'd want to compare to historic images or some other convoluted scenario.


Ignore the haters - I read your comment and upvoted it. It is everyday sexism and role-confirming behavior, and it is off-putting to women and making them feel unwelcome. But I think people are not aware of it and mean no harm, which is why it's good you pointed it out.

I do think some people can only relate to the argument when you propose to replace Lena with a seductively pretty half-naked (gay?) man, though.


Some people use the Fabio test image: http://4.bp.blogspot.com/-jUHXZ0jW4Lc/T1t3W0ex2BI/AAAAAAAAFe...

I don't see the problem with using either one and think the controversy is ridiculous.


I'm trying to figure out why it's inherently sexist. The woman in the photo posed for it, doesn't object to its use, gave a presentation about herself at an imaging conference, and had a career helping handicapped people work on computers. Image compression algorithms are tested against faces since that's what we take the most pictures of. This one took hold for whatever reason.

The reflexive hate for anything "role-confirming" is sexist in denying legitimacy to any man/woman in that role.

Fine change it to a half-naked guy. I don't know why if he's gay it'd make a difference, but sure.


I'll try to answer this as best I can; since I'm new to this feminism thing, there's bound to be stuff that I'll miss.

Well, first of all, it's very nice of her to approve the use of the image, and of Playboy too for not going after their rights. But I think it's far-fetched to see her approval as a positive message. There are women out there contributing to the objectification of women too, for whatever reason they have. (One weird side of this is people think women are shielded from criticism of their own objectification, but it's a delicate matter to say the least. One has the right to objectify themselves, so it's hard to say something without getting in the way of their right to self-expression.) There is more than one side to this issue; but it's not about her consent, it's about how it might be contributing to the boys' club image of tech.

It's easy to see that we have a problem with the lack of women in tech. I find this very depressing, since they're just as able as men are, and it looks wasteful to dismiss half the population. I think we're getting better each day, but it does not happen magically. People fight for it and will keep fighting until there's no discrimination based on sex. I accept that the Lena image is one of the minor issues, but I still see it as one of the factors that drives women away. This boys' club image of tech gives the implicit message that women are not wanted here.

I know there's no nudity in the image and one needs to research to find its origins in Playboy so it seems unlikely to come across it. But the real reason here is if we're willing to combat sexism, it will give women comfort that we're willing to change tradition to be more welcoming. I believe actions speak louder than words, and if the reason to keep the tradition is not all that important, we should do it.

(I also don't think it's fair to compare it to an image of a naked guy, since it's unlikely to drive boys away from tech. It's not just naked guy vs. naked girl, the context and the message makes a world of difference.)


More relevantly, it is a good test image. It's a woman with a bare shoulder! Nobody would have a problem with it if it were an attractive man in that image instead (sexuality irrelevant).


Ok, so would you be comfortable if this http://imgur.com/coB85tW was the standard test image then?


Yes.

Although there is a lot more going on in the original that justifies its use as a test image; varying levels of subtlety of detail (the two parts of the top of the hat, DoF, and especially the feathers or whatever you call it) as well as the human in it.

original being referenced: http://bellard.org/bpg/lena.html


I can't imagine many people would be uncomfortable with that


The author of this comparison is a HN user, but he isn't Fabrice Bellard.

edit: https://news.ycombinator.com/user?id=xooyoozoo


Oops, my bad for not checking. Thanks for clarification!


Very nice compression! As others have mentioned, BPG seems to be a bit better at picking which details to save. It's an improvement over mozjpeg - even at "tiny" size, the usual (and annoying) JPEG/MPEG-style block noise seems to be eliminated. (No offense intended to mozjpeg, which is working from an old spec.)

--

Speaking of image formats, it might be nice to have a new format where the container is designed to only allow a minimum of features. Something like a file with only [MagicNum, xsize, ysize, xdpi, ydpi, gamma, <compressed pixels>, CRC32].

The idea is that it should NOT support "metadata" like comments or EXIF. As we've seen, there is a problem with getting most people to understand that they should probably strip EXIF/etc. before uploading, and it would be easier to have a format where we could say "only upload .foo pictures" instead of "run $tool to strip EXIF". A browser plugin could even auto-convert every image while uploading.
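A rough sketch of what such a minimal container could look like; the magic number, field widths, and the use of zlib for the pixel payload are all made up for illustration:

  import struct, zlib

  MAGIC = b"FOO1"  # hypothetical magic number
  HEADER = struct.Struct(">4sIIIIf")  # magic, xsize, ysize, xdpi, ydpi, gamma

  def write_foo(path, width, height, xdpi, ydpi, gamma, raw_pixels):
      """raw_pixels: raw pixel bytes. No comments, no EXIF, nothing else."""
      body = HEADER.pack(MAGIC, width, height, xdpi, ydpi, gamma) + zlib.compress(raw_pixels)
      with open(path, "wb") as f:
          f.write(body + struct.pack(">I", zlib.crc32(body)))

  def read_foo(path):
      with open(path, "rb") as f:
          data = f.read()
      body, crc = data[:-4], data[-4:]
      if struct.pack(">I", zlib.crc32(body)) != crc:
          raise ValueError("CRC mismatch")
      magic, width, height, xdpi, ydpi, gamma = HEADER.unpack(body[:HEADER.size])
      if magic != MAGIC:
          raise ValueError("not a .foo file")
      return width, height, xdpi, ydpi, gamma, zlib.decompress(body[HEADER.size:])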

Unfortunately, it would face the same problem as any new image format: nobody wants to use a new format until it is already popular.


Do you think the difference (especially for medium and large versions) is worth using a patent encumbered format?


It's interesting how bad mozjpeg looks compared to classic jpeg compression tools like jpegoptim, given how many blogs have been writing about Mozilla creating the "ultimate" jpeg encoder. Comparing it with jpegoptim reveals that at the small-medium-large sizes mozjpeg produces a result with more visual artefacts.

  wget http://xooyoozoo.github.io/yolo-octo-bugfixes/comparisonfiles/Original/Ricardo_Quaresma-L-,_Pablo_Zabaleta-R-Portugal_vs._Argentina,_9th_February_2011.png -O src.png
  convert src.png -quality 100 -sampling-factor 1x1 full.jpg
  jpegoptim -s full.jpg -S85 --stdout > s85.jpg
  jpegoptim -s full.jpg -S50 --stdout > s50.jpg
  jpegoptim -s full.jpg -S30 --stdout > s30.jpg
  jpegoptim -s full.jpg -S18 --stdout > s18.jpg


How are the file sizes?


Exactly 18-30-50-85 kB if you use the -S syntax.


Whoops, I assumed that was an arbitrary "quality" setting. Thanks.


opera 15:

[16/12/2014 06:25:05] JavaScript - http://xooyoozoo.github.io/yolo-octo-bugfixes/ Inline script thread

Uncaught exception: TypeError: Cannot convert 'file' to object

Error thrown at line 5, column 4 in <anonymous function>() in http://xooyoozoo.github.io/yolo-octo-bugfixes/js/splitimage2...:

    for (i = 0; i < file.length; i++)
called from line 1, column 0 in http://xooyoozoo.github.io/yolo-octo-bugfixes/js/splitimage2...:

    (function() {


Fruits: Mozjpeg medium is still worse than BPG and is nearly double the size; it leaves a lot of artifacts in the cloudy areas. Found it very convincing.

I think the results depend a little on the kind of input, but some of the pictures were rather convincing. WebP seems to do better than JPEG.


A key observation is that WebP biases towards smoothness (which certainly produces pleasing images!) at the cost of sacrificing detail. mozjpeg produces images with numerous macroblocking artifacts (ick) that seem to preserve more of the original image detail, from the cases I looked at.


Note that the decoder is 68K compressed. That's not much, and of course can be cached, but considering that the improvement (from WebP, anyway) seems to be most notable for the <50K images, it may not be terribly practical...

Considering that the page says it's based on a Daala comparison page, it would be interesting to stick a Daala snapshot in the mix for comparison using the same images (reencoding to PNG like it does for WebP, I'm not suggesting porting Daala to emscripten). A quick browse of the original page shows Daala sometimes beating x265, sometimes losing to it, but I guess both x265 and the Daala {encoder, codec} may have been improved in the last six months.


I would love to see how it does with scanned documents for preservation... for example:

- http://www.digibis.com/digibib-demo/i18n/catalogo_imagenes/g...

- http://www.digibis.com/digibib-demo/cartografia/es/catalogo_...


Seems to give screwed-up results for the BPG in Google Chrome Dev (Version 41.0.2243.0 dev-m, 64-bit), but works in my copy of Firefox.


Same results here too - corrupted in Chrome 41, works ok in Firefox. Filed a bug: https://code.google.com/p/chromium/issues/detail?id=442599


And as reported in the above bug, fixed 5 days ago. We're living in the almost bleeding edge here ;-) https://code.google.com/p/chromium/issues/detail?id=439743


Why does the comparison use x265 instead of the reference HEVC encoder? Fabrice Bellard recommends that as it produces better quality encodes at the expense of CPU time.

I'm interested in seeing the result of using 4:4:4 with BPG. If you have the originals to compress, of course (meaning, the original is not chroma subsampled).


He either made his judgement based on non-obvious criteria or, more likely, looked at x265 at the "wrong" moment. Right before libbpg was announced, x265 had several commits fixing large 10bit and 422/444 bugs.

As far as intra-prediction goes, x265 should be better than the HM by at least a few percent MSE-wise and around 10% SSIM-wise[0].

[0] https://github.com/strukturag/libde265/wiki/Intra-Prediction...


Integration with that library would be great too. Thanks for the link!


Subjectively, the best overall trade-off between quality and file size seems to be BPG Large. Great comparison.


Does anybody have a comparison of the original image and one decoded from BPG, with the differences highlighted? E.g. using ImageMagick's compare command. It would be interesting to see in which areas BPG differs from the original.
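Not aware of a ready-made one, but here's a minimal sketch of how to generate such a comparison yourself, assuming the BPG has already been decoded back to a PNG (e.g. with bpgdec) and using Pillow instead of ImageMagick; the filenames are hypothetical:

  from PIL import Image, ImageChops

  def highlight_diff(original_png, decoded_png, out_png, gain=8):
      """Save an amplified per-pixel difference map between two same-sized images."""
      a = Image.open(original_png).convert("RGB")
      b = Image.open(decoded_png).convert("RGB")
      diff = ImageChops.difference(a, b)  # absolute per-channel difference
      heat = diff.convert("L").point(lambda v: min(255, v * gain))  # exaggerate small errors
      heat.save(out_png)

  # highlight_diff("original.png", "bpg_decoded.png", "diff.png")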


I just wanted to point out that BPG in "tiny" mode changes the image quality in a way that is strangely reminiscent of film. A lot of details get blurred out, but in a pleasant way.


It's interesting to see the comparison at a fixed output size. It would also be interesting to see a comparison at a) fixed encoding time, b) fixed decoding time.


Makes me happy that the JS method used doesn't require me to do the usual <picture>/<source> markup and provide multiple formats to support older browsers.


Impressive. What are the differences for decoding/load time? Especially on mobile devices this is a big deciding factor.


I want to see a comparison with a normal JPG picture.


You can compare between JPG and BPG by choosing "Mozjpeg" on one side and "BPG-x265" on the other. There are many pictures to choose from. Or did I misunderstand your question?


BPG is definitely better for tiny pictures.


Yay! Another new image thing for 1 browser to support and the rest to need polyfills for!


Check the discussion from 11 days ago. I think your concerns are misplaced. BPG ships a polyfill... and it works. That's what you're seeing live on the parent page. Previous discussion: https://news.ycombinator.com/item?id=8704629



