Image optimization decreased website's page weight by 62% (freshman.tech)
315 points by ayoisaiah on July 16, 2018 | 105 comments



I'm a bit surprised there's no mention of image optimization proxy / services like thumbor[0] (which is open source). Instead of pre-processing all your images, it lets you worry about it later. You can compose different transformations and filters (e.g. add a watermark, resize, crop etc). This is especially useful when things on the website change. It lets you keep the original at full size, and transform them as you need.

There are some commercial services in this space, as well as other similar open source services.

If you're looking for a quick way to get thumbor up and running with docker, I'd plug https://github.com/minimalcompact/thumbor

[0] https://github.com/thumbor/thumbor
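
For a sense of how the on-the-fly API looks: thumbor encodes the transformations in the URL. A minimal sketch (hostname, port and source image are placeholders, and "unsafe" skips URL signing, so don't use it in production):

  # resize to 300x200 with smart cropping, fetching the original from example.com
  curl -o thumb.jpg \
    "http://localhost:8888/unsafe/300x200/smart/https://example.com/photo.jpg"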


You can also run it on AWS as a Lambda function with an S3 backend.

AWS even created a CloudFormation script to deploy it with a few clicks of a button:

https://docs.aws.amazon.com/solutions/latest/serverless-imag...


This definitely looks interesting for using with S3/Lambda, but it's not up to date with the latest thumbor version[0], and I'm not sure it's the most cost-effective option at larger scales.

In any case, my main point was that there are plenty of great options for resizing/manipulating images on-the-fly rather than pre-processing :) how you choose to deploy it is secondary.

[0] it's providing thumbor 6.4.2, and the latest is 6.5.1 (I'm pretty sure it's possible to upgrade though)


Similarly, if you're still living in the past and / or want to maintain your own server instead of pass it through a 3rd party like that or a CDN, a few years ago Google's mod_pagespeed (module / plugin for both apache and nginx) did image optimization and a lot of other optimizations, all transparently - page caching, creating sprites, inlining small images (as base64), above-the-fold CSS optimization, etc.

I'd still recommend it if you've an old-fashioned webserver (like idk, random wordpress installation) and don't want to pay a third party. I don't know if it's still being maintained or updated though, I haven't heard much of it since its release.


>Similarly, if you're still living in the past and / or want to maintain your own server instead of pass it through a 3rd party like that or a CDN

You acknowledge some people want to "maintain their own server", but it's worth pointing out that for high-traffic orgs there are some good reasons for doing so. Maybe you want to do this at the CDN origin rather than at the CDN, so you can stay CDN agnostic / have a multi-CDN strategy. This doesn't mean you are living in the past. It means you want CDNs to compete for your traffic and not have one lock you in with crazy price leverage over you.


> Similarly, if you're still living in the past

you mean like when privacy was still a thing?


If you have a public-facing site, how is using something like Imgix a privacy issue?


You're letting a third party track your users and their usage of your site.


Alternatively, this also applies when you are living in the future, and making highly static websites: maintaining your own servers and small CDN gives you a cost and speed advantage.

Bandwidth is cheap - except on cloud services!


> Similarly, if you're still living in the past and / or want to maintain your own server instead of pass it through a 3rd part

So, users' privacy is now officially a thing of the past?


As someone else mentioned, it's much easier to just use ImageOptim if you're on a Mac. There's also a CLI with an accompanying NPM package that includes it along with ImageAlpha and JPEGmini: https://www.npmjs.com/package/imageoptim-cli

But one thing I'd caution is that webp is not a panacea for image optimization. It's only supported in Chrome. If you want to fully leverage next-gen image formats cross browser, you'll also need JPEG 2000 and JPEG XR...and even if you do all of that, you still won't get support for Firefox.

There's also srcset and lossy compression, which are also viable options: https://userinterfacing.com/the-fastest-way-to-increase-your...


The market fragmentation is why I prefer to use JPEGs more intelligently than authoring multiple versions of the same image for each browser. It feels like the 90s all over again. Using fewer images, smaller images, and caching them has worked well for me.


I'm at the point now where I've realized that nothing is ever the solution all the time. It's a little frustrating.

I used to use the rule of thumb that JPEG was for images, and PNG for charts, and things with text.

But these days I go through each image I get from the art department and optimize it myself in Photoshop, cycling through a series of presets.

To my surprise, depending on the image, sometimes an 8-bit PNG will end up smaller than a JPEG, and provide better visual quality.

Naturally, your mileage will vary; at participating locations; not valid in Alaska, Hawaii, or Puerto Rico; no cash value; batteries not included; do not taunt Happy Fun Ball.


Coincidentally, when I was a freshman in high school in the late 90's this was a topic our instructor drilled into us. I remember trying to shave off every little kilobyte for .gif and .jpg files and to make my personal website load as quickly as possible with a reasonable amount of quality over a modem.

From my perspective, it seems that everything has gotten way more bloated and there is an assumption that everyone has unlimited data and bandwidth. I used to have a 1 GB data cap on my phone that I would blow through in a couple of weeks just from reading news websites. For example, bloomberg.com shouldn't need to make ~300 requests and download 18 MB of data just to load the front page.


Your comment made me bring out DevTools, because I thought the number of requests you cited (~300) sounded a bit high.

Boy was I wrong!!!

  - 500/565 requests
  - 7.0 MB transferred
  - 9.86 seconds to load
Holy crap!!!


This is weird. In Chrome incognito, I get ~400 requests (without scrolling), and in Firefox private browsing, I get ~120 requests before scrolling, and ~190 after. What in the world? I disabled all the extensions in Firefox too.


Doesn't Firefox private browsing have a tracking blocklist active by default?


Wow, I didn't know that! I thought the tracking protection was just about the DNT header and I didn't expect that to make much difference here. However, even disabling that, I only get ~200 requests. There's still a lot missing... is it still blocking things under the hood?


It may also be that different resources are loaded based on the user agent, to track the user in different ways, display the page correctly on different browsers, or polyfill JavaScript differences.


I just re-ran it in Chrome incognito mode and got the following:

  - 400 requests (adds about 1 per second after that)
  - 6.8 MB transferred
  - 6.1 seconds
Then I ran it AGAIN, and got this:

  - 307 requests
  - 4.6 MB
  - 9.03 seconds
You're right - it is WEIRD!


"Install our app!"


Try dailytelegraph.com.au for a good shock!

I'm on mobile now, but last I checked it was around 700 HTTP requests on a desktop without adblock.


Checked it on Pingdom, from Melbourne: it failed after one minute of trying to load, with 585 requests and 3.4 MB.

I always find it difficult to imagine what kind of process leads to that result. The developers can't be happy about it, surely. It has to be a case where management and everyone else throws things they want into the project, a big consensus party, and you end up with a bloated monster at the end.


In my experience this often means that management is not measuring the right things or that the engineering team is super junior and not empowered to influence strategy.


That's truly incredible. It took several long seconds on a usually very quick internet connection. At least they made the page reflow nicely as it loads, but it's still ridiculous.


I've been debugging a GIS application lately. We have a remote customer & a corporate connection to them that has a lot of latency. Since most GIS sites work by sending hundreds of square map tile images, there ends up being a lot of requests, especially for maps with many layers.

As helpful as decreasing image size was, switching to HTTP/2 was even better. Highly recommend giving HTTP/2 a try if you're dealing with a lot of file requests & latency. A small test of 150 images went from 6 seconds down to 1 second in our tests.
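
If you want a quick check of whether a server already negotiates HTTP/2 before doing a full test, curl can tell you (hostname is a placeholder, and this assumes curl was built with HTTP/2 support):

  # prints "HTTP/2 200" as the first response line if HTTP/2 was negotiated
  curl -sI --http2 https://example.com | head -n 1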


I can corroborate the HTTP/2 point. Even on my blog, with about 10 requests and 500 KB (mostly images), I noticed a clear improvement (~200-500ms quicker load) by upgrading from HTTP/1.1 to HTTP/2 and doing nothing else.


I was just debugging our new corporate intranet portal. Same thing, tons of requests, 17MB page load.

Turns out the intranet team had decided to add a panel for Yammer updates. It doesn't help that the Yammer implementation has errors. For example, the X-XSS-Protection header returns 1; mode=block twice in the same header.

If there is anyone here from the Yammer team, let me know and I'll show you.


A long time ago I "automated" the optimization of images on my site by running `optipng` in a cronjob. Every file that touches my server gets optimized.

I wrote about it here: https://ma.ttias.be/optimize-size-png-images-automatically-w...

Benefits:

- Don't have to think about it

- Optipng is really good at reducing PNGs to their bare minimum

Downsides:

- Doesn't resize images (if a 1024x768 is displayed as a 10x8, it'll still download the 1024x768)

- Only does PNG

- If your images are stored in git (and you didn't pre-optimize before committing/deploy), you can get merge conflicts

Still, better than nothing.
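
A minimal sketch of that kind of cron job, in case it helps; the path, schedule and optimization level here are assumptions, not the exact setup from the post:

  # nightly at 3am: losslessly re-optimize any PNG modified in the last day
  0 3 * * * find /var/www -name '*.png' -mtime -1 -exec optipng -o2 -quiet {} \;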


I wrote a similar article a while ago, also explaining which sizes work best on retina displays and how to generate all the necessary images easily -> https://www.codecamps.com/responsive-images/

It helped increase the Google PageSpeed Insights rating from 56/100 up to 99/100.


The node package Sharp does a great job of optimizing images and letting you manipulate them for different sizes, color and channels.

http://sharp.pixelplumbing.com


If using gulp you can use gulp-responsive which uses sharp internally.

The automatic cropping methods are pretty cool and work really well, too.


There’s jpegoptim and imagemagick to resize.
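
A hedged sketch of that combination (filenames, max dimensions and quality are illustrative): resize with ImageMagick's mogrify, then let jpegoptim squeeze the result.

  # shrink anything larger than 1600px on its longest side, then optimize in place
  mogrify -resize '1600x1600>' photo.jpg
  jpegoptim --max=80 --strip-all photo.jpg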


The git conflicts issue sounds like a perfect use case for git pre-commit hooks.


Would you get the merge conflict if you mark the *.png extension as 'binary' in .gitattributes?


  <picture>
    <source srcset="sample_image.webp" type="image/webp">
    <source srcset="sample_image.jpg" type="image/jpeg">
    <img src="sample_image.jpg" alt="">
  </picture>

I didn't know you could wrap images in a <picture> tag and browsers (except for IE) will automatically download the .webp version if they support it. I used to do this via JavaScript. I like on-demand scaling where you pass scaling parameters in the URL, such as: /200x200/sample_image.jpg.
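
The .webp variant referenced in a snippet like the one above can be generated ahead of time with Google's cwebp encoder (the quality value is just an example):

  # encode a lossy WebP at quality 80 alongside the original JPEG
  cwebp -q 80 sample_image.jpg -o sample_image.webp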


No need to do on-demand scaling via JavaScript either! The srcset attribute is also available in <img> tags and lets you define differently sized images for different viewports: https://developer.mozilla.org/en-US/docs/Learn/HTML/Multimed...


If you’re developing on a Mac, ImageOptim can handle all of the image compression (JPG, PNG, etc): https://imageoptim.com/mac

For SVGs, svgo (brew install svgo) usually produces the best results for me.
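
A typical invocation, in case it's useful (filenames are placeholders; --multipass just repeats the optimizations until the file stops shrinking):

  # optimize an SVG in multiple passes and write the result to a new file
  svgo --multipass icon.svg -o icon.min.svg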


I'd second ImageOptim on Mac or PNGGauntlet on Windows https://pnggauntlet.com . I'd be careful using pngquant as well, as it will only ever give you an image with 256 or fewer colours, or your original image back. PNGGauntlet and ImageOptim will optimise your PNGs losslessly.


I tend to use SVGO’s Missing GUI: https://jakearchibald.github.io/svgomg/. (Great demonstration of service workers for offline support, too.)


For Linux users I made https://trimage.org


Thanks, I'll try it.


On Windows, I only need to optimize images occasionally and tend to use the free version of https://kraken.io There are probably better solutions, but this is very easy and no install needed.

On Mac, you're definitely right that ImageOptim is the best.


What would you recommend for an iOS app written in Swift that has to optimize images drag-and-dropped on it by the user?


Because it suits their workflow?

Because it adheres to the company's IT policies?

Because they like to review results instead of relying on some black box automation?

Because they like to work in-house and not on someone else's cloud?

Just off the top of my head. I'm sure there are a thousand other reasons.


svgcleaner is supposed to be faster and have a better compression ratio than svgo: https://github.com/RazrFalcon/svgcleaner

It also has a GUI.


svgcleaner looks not to support any lossy compression at this stage. Granted, svgo’s lossy compression is very pathetic, but for many images it will have enormous impact. (I say very pathetic, because it’s pretty much just rounding of numbers to N decimal places—no awareness of perception or how different scales may mean that one decimal place means radically different things in different parts of the image, no path simplifications using Visvalingam-Whyatt or similar, &c.)

Being focused on theoretically lossless compression techniques, svgcleaner's GUI doesn't look to include rendering of the SVG, and focuses on batch operation. For svgo there is Jake Archibald's SVGOMG, which renders the SVG and so lets you see the impact of the lossy compression employed.


After fighting with the options, failing to find a good default, and still consistently getting larger images, I fail to see how svgcleaner is better.


I think it's only with lossless compression that it's better.


svgo has a gui online called SVGOMG.


I just ran a quick comparison on two big ecommerce sites I'm familiar with. I know for a fact they performed as much optimization as they possibly could in their respective file formats.

The WebP and .jpg files both had similar dimensions, picture detail complexity, and dpi. The WebP format came out to a 50% smaller file size.

I didn't have enough of a sample and/or tests though.

I personally don't think image optimization with WebP should be a thing though. The lack of universal browser support is one issue, and the lack of native support on Windows is another.

Two things IMO are most important about image optimization for image-heavy sites. One is lazy loading (e.g. a lazyload.js frontend library), by specifying a class on the backend for images past a threshold browser height. Most notably this is used on many ecommerce sites, though on analysis Amazon doesn't seem to be using it.

Next would be sprite compression of common social links. A great example of this is Amazon, actually; check out this image I extracted from their webpage.

https://images-na.ssl-images-amazon.com/images/G/01/gno/spri...


I think with images, just as with videos, there will never be a consensus; some people will insist on certain proprietary formats, forever. At least there are standards now, so that you can provide fallbacks - if you as a website owner would e.g. prefer people use WebP because it compresses better than JPG, then you can offer both, without forcing one or the other on them.

I wouldn't mind a solution on the server-side though, where in your html you just put an img tag and the server determines what the best format is based on browser support. Of course, that would mean there'd be a header (or some smart user agent analysis) for every image request, and you'd like to keep that overhead to a minimum.


I've been using this great imagemagick script for optimizing images the past few years. Works like a charm. Any images that are going to be served from my websites first get optimized via the script.

  smartresize() {
      mogrify -path $3 -filter Triangle -define filter:support=2 -thumbnail $2 \
          -unsharp 0.25x0.08+8.3+0.045 -dither None -posterize 136 -quality 82 \
          -define jpeg:fancy-upsampling=off -define png:compression-filter=5 \
          -define png:compression-level=9 -define png:compression-strategy=1 \
          -define png:exclude-chunk=all -interlace none -colorspace sRGB $1
  }
Usage:

  smartresize image.png 300 outdir/
  
Looks like I must have found it here https://www.smashingmagazine.com/2015/06/efficient-image-res...


Thanks. I noticed a gain of 20% on my images with this method.


If you are using webpack, I highly recommend imagemin-webpack-plugin [0] (although I might be a bit biased as I created it...)

It will run a slew of image optimizers by default using imagemin, and has support for a wide range of others.

It also supports caching and optimization of images that aren't being directly imported through webpack (thanks to some awesome contributors), so it's a great way to set it and forget it and never have to worry about accidentally sending 3 MB images to your users.

[0] https://github.com/Klathmon/imagemin-webpack-plugin


That's a nice one, thanks for sharing! Makes sense for those who already use Webpack


If you are using Ruby, I can recommend the image_optim[0] gem together with image_optim_pack (that packs the binaries), it is maintained by a great person I only know by the name of his handle "toy".

I used to give him a few dollars per week when Gratipay was still up and running, sadly I don't know of an alternative now.

[0] https://github.com/toy/image_optim


Curious as to what Gratipay was, I did a little research. It turns out the maintainers of Gratipay handed all of their assets over to Liberapay[0]. Liberapay appears to be a fork of Gratipay. This might be what you have been looking for!

[0] - https://liberapay.com

I have no affiliation with Liberapay, just did some curious research.


Another option, if your images are simple enough, is to use https://github.com/fogleman/primitive to convert them to SVG. It might not be worth the effort though, as the space savings could be too small to justify the artifacts produced. Neat effect for small numbers of shapes, though.


Another approach is to design the site with SVGs as the priority format. Use flat cartoon like graphics in place of JPG photographs, whenever possible.


Sometimes I visit a project and drop all public images (logos, icons, stock photos, whatever) in the project in imageoptim. 75% reduction in file-size for some images is not uncommon.


One of the nicest additions to the GitHub Marketplace is a bot that will optimise your images automatically: https://github.com/marketplace/imgbot

(No connection to me - just think it's a great idea, and totally free.)


Another GitHub bot[0] was shared earlier. This one looks similar. Thanks for sharing!

[0]: https://www.shrink.sh/


Making sure that grayscale/monochrome images are set as such (oftentimes they aren't) can also shave the size down. I use imagemagick for that.
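
For reference, a hedged ImageMagick one-liner for that (filenames are placeholders; exact colorspace handling can differ between ImageMagick versions):

  # re-encode as a single-channel grayscale JPEG
  convert photo.jpg -colorspace Gray photo-gray.jpg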


This applies to any image that will never be seen in its original state - e.g. photos that are opacified or dulled down to act as background elements.

We had a site outsourced (because I found Shopify to be ...'tricky'... to meaningfully work with on a PC) and the developer was darkening 'hero' images on the homepage shout in CSS, rather than specifying that we simply pre-process them in something like Photoshop before uploading.

When I realised (I wasn't particularly hands-on by this point) I was livid, so I changed the code and we knocked a few hundred KB off the front page. Our site is necessarily image heavy, so any gains anywhere are useful.

When I first started I had a ceiling of about 120 KB for the entire page - images and all - so today's internet is a weird and foreign land to me.


Set up metrics that you'd like to hit for your pages. When I was a kid putting up a fan site for games (back on Geocities), I was mostly stuck on a 28.8k modem, and aimed to have it finish loading in 30s.

Now, I was quite brutal with the compression and probably should have backed off a bit to avoid artifacts.

It might be nice to have an automated test where the system under test limits the bandwidth, tries to load your pages, and checks against times you'd like to hit.


Is there a robust, automated, and universal process for gauging image optimization results and comparing them to a required image quality level? I've always done it manually, and I have a keen sense of image degradation as I increase the compression.


I personally have been working around web technologies and performance for the last 14 years. I understand where this article is coming from and I actually have to give it some credit - it has a good click-bait title - but it's an old solution. I'm tired of reading about this type of solution; there are thousands of articles exactly like this one.

Reducing the size of your images is just the first step. There are many things you need to consider in order to make your website faster:

- Format: deliver the images in the right format for each browser (e.g. using WebP for Chrome)

- Size: What size for each image? What happens on mobile, tablet, desktop and the different screen sizes and pixel ratios (it's not only retina or not)?

- Quality: Is your image being resized by the browser? Are you using raw files to generate the optimized images?

- Thumbnails: Are you also generating thumbnails for listings or smaller versions of your images? How are you mapping those thumbs to your original images? Do you need to use a database?

- Storage: Where are you going to store those images?

- Headers: Caching static assets is key for recurrent users. Are you using Apache or Nginx? Is your setup working well?

- CDN: Are you using a CDN to deliver those assets? CloudFlare is great but it's not the fastest way to deliver images. What about setting the right configuration for that CDN? How much are you going to spend?

So what's next? Going to one of the API services to optimize images and reading their 500 pages of docs just to resize and crop an image? Adding complex plugins to your backend and creating a high-dependence integration?

I mean, if you like to add more dependencies to your project, maintain more code, spend hours rebuilding scripts and running cron jobs to update your images, go for it.

That's why about 9 months ago we started working on a new concept, solving all these problems with a service that integrates as easily as a lazy-loading plugin and solves EVERYTHING about image optimization (and yes, everything that you're talking about in every comment on this HN post).

Don't get me wrong, we have a lot to improve and there are many details of our product that we need to polish, but we believe we've built a solution that solves perfectly all the most important parts of image optimization and delivery. It's not about reducing the image size by 1KB more, it's about everything else and understanding the big picture.

We love feedback and our backlog is prioritized based on our customer needs, let us know what you think.

Here's the link to our startup website: https://piio.co


Author here. Thanks for the feedback. I agree that what I've done is nothing revolutionary, but the vast majority of websites out there don't do this sort of thing. That's one of the reasons why the average page weight continues to grow every year [0].

Moreover, this solution is good enough for people with small blogs just like mine. Anyone who needs something more involved can use your service or other alternatives.

As for responsive images, I implemented that on my site too although I didn't mention it in the article. I plan to write a follow up article on that topic soon.

On a side note, you might want to bump up the font size of the navigation links on your site. They're too faint.

[0]: http://idlewords.com/talks/website_obesity.htm


Totally agree on the page weight and that we're missing something. Increasing page weight while connection speeds are also increasing is only bad when the former grows more rapidly, and I believe that's the situation we're in.

Would love to connect to chat about this and for sure I'll read the follow-up article.

Thanks for the feedback too!


Some things you didn't mention are domain resolution latency and caching, browser request parallelization, delivery protocol selection and reducing the raw number of content requests. Basically you are entering CDN territory here.


That's why I added the CDN part; I didn't want to go too deep on that issue. As you're saying, the CDN is a big part of image delivery, even more important than having a great algorithm to compress those images. We've found that after passing a certain threshold, e.g. the image being < 200 KB, the download time isn't that significant, so the size doesn't matter that much; the TTFB (time to first byte) is crucial.

There's also a difference between performance, speed and actual perception of speed, but that is very hard to measure and maybe it's an issue for another topic.


After seeing people forget to do basic optimization steps on images at our respective jobs, a friend and I built https://www.shrink.sh. The goal of this tool is to create a catch-all system. It also saves you from installing tools that slow down your builds or deploys even more and that you would need to maintain forever.


For one of my websites (a static page with some pretties), I challenged myself to remove as much cruft as possible without degrading the experience.

I used Fontello to strip out unnecessary FontAwesome icons and uncss to remove unused Bootstrap styles, replaced some Bootstrap JS with vanilla JS and made use of SVGs (optimised with SVGOMG) for backgrounds and the logo.

The resulting site is a total of 178 KB when viewed in Chrome (down from over 1 MB), including Bootstrap, analytics, some screenshots, a custom font and an animated logo. There's plenty more I could do to trim off size, but I had more important things to do.

There are so many ways to make webpages smaller and more efficient, and it can be a really fun learning experience.


imgproxy (https://github.com/DarthSim/imgproxy/) is one I've not seen mentioned; it is probably the fastest image processor I've used yet. It uses libvips (https://github.com/jcupitt/libvips/wiki/HOWTO----Image-shrin...), which not only handles resizing and other basic image needs but also optimizes, while being very lightweight on memory and CPU cycles compared to most other implementations.


There's some unexamined hooey in this post. For example, you can't really compress a JPEG, but you can re-encode it to a lower quality, which can have a dramatic effect on the file size. That's what mozjpeg is doing for this person.
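
For reference, with mozjpeg's command-line tools that re-encode might look something like this (quality value and filenames are illustrative; the decode/re-encode round trip is lossy):

  # decode to PPM, then re-encode at quality 70 with optimized Huffman tables
  djpeg original.jpg | cjpeg -quality 70 -optimize > smaller.jpg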

But they could have just done the same thing in Photoshop, Preview, MS Office image tool, etc. JPEG is a standard; file size does not depend on what tool you use to create it. It's strictly dependent on the image itself, and the render settings you choose. Same with PNG.

In fact, you'll get better quality for the file size if you go directly from the original image straight to your final resolution in one step. Rendering to high-quality JPEG, then re-rendering on the server to shrink the file size, will give you worse image quality than just going straight from the original file to the final in one render.

WebP looks promising but is not yet well-supported. Most sites can go a long way just by caring about, testing for, and adjusting image rendering defaults to optimize for file size.

EDIT to add a bit more:

If you are optimizing images as part of a pre-deploy build process, you can use whatever library you want. The only thing that really matters is your choice of format (JPG or PNG), and the render settings. Or, you can hand-optimize the images and drop them into your repo to deploy as-is.

If you're running a CMS where non-developers are going to be uploading images through an admin UI (like Wordpress), your CMS should be using a server-side library to render optimized versions of the images that get uploaded, then serving the optimized versions. You can adjust the settings of the server's image library, although that might require a plugin or module, or custom code, depending on the CMS.

Missing this is a common killer mistake in page load times. I visited a site the other day that served a 16 MB JPEG file for the "hero" image on the homepage. My guess is that it was the JPEG straight out of a high-resolution camera.

This is also good for user privacy, as the server-side rendering should remove IPTC and EXIF data that would get served with the original image.


> JPEG is a standard; file size does not depend on what tool you use to create it.

That’s simply not true. The specific choice of cosines can make a huge difference on compression while having nearly zero perceived visual difference. Most encoders however take a naive approach whereas something like Guetzli does an amazing job of compressing JPEGs way better than Photoshop ever could.


Guetzli is not appropriate as a general-purpose tool for optimizing website images. That's not really what it's designed for.

I guess I should specify that I'm trying to give practical advice for people who think the linked blog post is instructive. For the vast majority of people, the simple act of thinking about, selecting, and testing the available settings in popular image optimization tools is going to have a far greater effect than the small optimizations (and sometimes big tradeoffs) that might come from cutting-edge stuff like Guetzli.

The reward per effort of going from "not optimizing my images" to "purposefully optimizing my images using common tools" is typically much bigger than the step from the latter to "using the absolute best possible tool for each image."


> Guetzli is not appropriate as a general-purpose tool for optimizing website images. That's not really what it's designed for.

What is it designed for? I downloaded and compiled it, and it seems to work quite well for the photographs on my website. The README says:

> Guetzli is a JPEG encoder that aims for excellent compression density at high visual quality

https://github.com/google/guetzli/


Guetzli is for minimizing file size at the highest quality levels for JPEGs. Essentially, it's for nice looking photos.

It only goes down to quality 84 and it takes a looonngg time to optimize. As a point of comparison, the author of the linked blog post dropped his JPEG quality to 70 and was happy with it. A JPEG at 70 (or lower), if you're happy with the look, has a good shot to be even smaller than the smallest Guetzli output.

Generally speaking, the easiest gains in JPEG file size will probably come from just dropping the quality down and down in tests, and deciding what you can live with. But if you have to have the best quality, and have plenty of resources/time for encoding, then maybe Guetzli will be a good fit.
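
For anyone curious, the Guetzli CLI itself is straightforward (84 is its minimum allowed quality; filenames are placeholders):

  # slow, high-quality JPEG encode; expect it to take a while on large photos
  guetzli --quality 84 original.png optimized.jpg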


In that case you should probably still compress from an uncompressed original though.


> JPEG is a standard; file size does not depend on what tool you use to create it. It's strictly dependent on the image itself, and the render settings you choose.

This is not correct. For both JPEG and PNG the compression ratio can depend on the tool.

For JPEG the reason is that the quantization tables are not fixed. The quantization tables dictates how information is thrown away, and as such is responsible for the lossy part of JPEG.

The JPEG standard merely contain recommended quantization tables, however a lot of research has gone into deriving better quantization tables, especially image-dependent (tailored) ones that can provide better compression for the same image quality, or better image quality for the same compressed size.

For PNG the standard defines five pixel filters used to transform pixel values into something more compressible. However it leaves the encoder free to decide which filter to use and when. Thus a simple encoder is free to use the "None" filter for everything, ie don't do any filtering.

In addition, the PNG standard allows for additional pixel filters to be registered as extensions. Thus encoders with more advanced pixel filters could potentially compress better than an encoder supporting only the standard filters.


For PNG it absolutely does depend on the tool.

Here's why: PNG has a pre-processing step to turn pixel data into bytes, then a general purpose compressor is used on the bytes. The algorithm used is picked per row, and if you pick the right one the compressor will have a better shot at small output, but which algorithm is best for each row of your image?

The popular libpng reference implementation contains a weak heuristic to pick algorithms that might do well, but a tool can do much better... Or it can do much worse. Early PNG support in Adobe Photoshop just picked "do nothing" for every row, resulting in huge PNG files.


Interesting... can you ballpark the percent improvement folks would expect to see between the most well-known image tools or libraries available now?


I suggest you just try running optipng on some random PNG files you have on your HDD. Results vary greatly, but I've seen savings between 10 and 20% on files coming from GIMP, where I always set the compression to 9 (maximum). I've never touched Photoshop in my life, so no idea what you can tune there or what its defaults are, but general experience shows that a lot of people either just don't bother and go with the defaults, or go all crazy clicking and manipulating everything without knowing what's going on.

I agree with you that for JPEG there is no silver bullet, but like that other comment here suggests, you could perfectly just run optipng on max settings in a cronjob and call it a day, because you know there will be absolutely no quality loss if you don't explicitly request it.


Here's a blog post that compares them. Apparently the difference can be pretty extreme:

http://daleswanson.blogspot.com/2013/08/a-comparison-of-png-...


> For example you can't really compress a JPEG

You can compress (or rather, optimize) an existing JPEG if the Huffman tables were suboptimal (which they often are). It usually gives a few percentage points of decreased file size.

You can also drop metadata (e.g. EXIF) in many circumstances, which saves a bit more.
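
A hedged example of that kind of lossless pass with jpegtran (filenames are placeholders):

  # rebuild optimal Huffman tables and drop all metadata, without re-encoding pixels
  jpegtran -optimize -copy none original.jpg > optimized.jpg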


You can save another 7.6% on that png by passing it through advpng+pngout. ImageOptim is fantastic for this: https://imageoptim.com/mac


We at Gumlet also provide image optimisation which just works: https://www.gumlet.com It's also accompanied by client side javascript library: http://github.com/gumlet/gumlet.js


Cloudinary is another option to get optimized images/videos without having to manually optimize everything. Definitely a good option if you have lots of user uploaded content. Running an optimization script within your app will likely use more resources than it's worth unless you're already operating at scale.


If you use Akamai's CDN you can get per-browser auto image optimization (size, quality, format) as an add-on service via Image Manager.


2016 Google developers conference had something on this : https://m.youtube.com/watch?v=r_LpCi6DQME


We recently moved all our images to S3 and created a Lambda function which compresses the images using Guetzli. It's very slow but the results are good.


This article like many others is full of fallacies.

Image formats are not used wisely: [1] is PNG not JPEG and [2] is JPEG not PNG.

> I found that setting quality (mozjpeg) to 70 produces good enough images for the most part, but your mileage may vary.

You can get away with this setting for hidpi sizes but 1x will look horrible [1]. If you care about quality, the mileage is actually 75-95.

> (Pngquant) quality level of 65-80 to provide a good compromise between file size and image quality

Again, it may only be applied to hidpi sizes, and it will easily ruin any gradients or previously quantized images.

Pngquant is a great color quantization tool but it does not actually perform any lossless PNG optimizations, which can save you at least 5% more, and up to 90% in some cases.
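
In practice that means chaining pngquant with a lossless optimizer on its output. A rough sketch (filenames are placeholders; the quality range is the one quoted from the article):

  # lossy: quantize the palette to a new file, then lossless: squeeze the result
  pngquant --quality=65-80 --output image-q.png image.png
  optipng -o2 image-q.png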

All of these tools will also blindly strip metadata (but it's not guaranteed!) along with color profiles and Exif Orientation, resulting in color shifts and image transformations respectively.

Most importantly, none of them are good enough for automatic lossy compression. Guetzli is the closest but it still has some severe issues [3]. I'm also trying to build a real thing, and it is hard.

> there’s value in using WebP formats where possible

WebP lossless and WebP lossy are quite different formats. WebP lossy being always 4:2:0 is not a good replacement for JPEG [4] especially at higher quality. On the contrary WebP lossless has evolved into a decent alternative for PNG including lossy [5].

Proper responsive images would give you considerably smaller page weight and improve performance on mobile devices. BTW Google treats oversized images as unoptimized [6].

[1] https://freshman.tech/assets/dist/images/http-status-codes/e...

[2] https://freshman.tech/assets/dist/images/articles/freshman-1...

[3] https://github.com/google/guetzli/issues

[4] https://research.mozilla.org/2014/07/15/mozilla-advances-jpe...

[5] https://twitter.com/jyzg/status/958629795692150790

[6] https://developers.google.com/speed/pagespeed/insights/?url=...


Hate to ask a dumb question, but what are the best tools/strategies to optimize images?


I just did convert -strip -quality 40 for a website I had to make.
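
Spelled out with placeholder filenames, that's roughly:

  # drop metadata and re-encode at quality 40
  convert input.jpg -strip -quality 40 output.jpg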


I use Kraken on every site I do for this reason.


There are better ways that place usability first. By usability I mean that there is nothing for the content creator to do and nothing for the frontend developer to do.

I use mod_pagespeed - there are versions for nginx and Apache that do all of the heavy lifting.

With mod_pagespeed you can get all of the src_set images at sensible compression levels. All you need is to markup your code with width= and height= values for each img.

With this in place the client can upload multi-megabyte images from their camera without having to fiddle in Photoshop etc. It just works and the hard part is abstracted out to mod_pagespeed.

By taking this approach there is no need to use fancy build tools. However, a background script to 'mogrify' your source images is a nice complement to mod_pagespeed, if you want your images to be in Google Image Search then 1920x1080 is what you need.

The really good thing about taking the mod_pagespeed route is that you do get 'infinite zoom' on mobile, e.g. pinch and zoom and it fills in the next src_set size. Keep going and you eventually get the original, which you have background converted to 1920x1080.

There is also the option to optimise images perceptually, so you are not just mashing everything down to 70% (or 84%).

On your local development box you can run without mod_pagespeed and just have the full resolution images.

Or you can experiment with more advanced features such as lazy_loading - this also comes for free with mod_pagespeed.

If you want your images to line up in nice squares then you might add in whitespace to the images. Maybe taking time in Photoshop to do this. However, it is easier to just 'identify' the image height/widths and to set something sensible for them, keeping the aspect ratio correct. Then you can use modern CSS to align them in figure elements to then let mod_pagespeed fill out the src_sets.

Icons and other images that are needed are best manually tweaked into cut down SVG files and then put into CSS as data URLs, thereby reducing a whole load of extra requests (even if it is just one for a fiddly 'sprite sheet').

Oh, a final tweak, if you are running a script to optimise uploaded images and to restrict max size then you can also use 4:2:0 colour sampling. This is where the image still has the dots but the colours are 'halved in resolution'. This is not noticeable in a lot of use cases and particularly good if you are using PNGs to get that transparency.
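
For what it's worth, the ImageMagick flag for that is -sampling-factor; a sketch of such an upload-script step (paths, sizes and quality here are illustrative, not a prescribed setup):

  # cap dimensions and apply 4:2:0 chroma subsampling to uploaded JPEGs
  mogrify -resize '1920x1080>' -sampling-factor 4:2:0 -quality 82 uploads/*.jpg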

As mentioned, mod_pagespeed reduces project complexity by offloading the hard work to the server, keeping cruft out of the project and keeping the build tools out of the way. It can also be configured to inline some images and plenty else to get really good performance.

Mileage may vary if the decision has been made to use a CDN where such functionality is not possible. However, if serving a local market then a faux CDN is pretty good, i.e. a static domain on HTTP2 where the cache is set properly and no cookies are sent up/down the wire to get every image.

https://www.modpagespeed.com/doc/filter-image-optimize https://www.modpagespeed.com/doc/filter-image-responsive


> How Image Optimimization decreased my website's page weight by 62%

Title needs spellcheck.


Would save 3% too.


$ echo "How Image Optimimization decreased my website's page weight by 62%" | wc -m

67

$ echo "How Image Optimization decreased my website's page weight by 62%" | wc -m

65

$ echo "scale=3; (67-65)/67" | bc

.029

Math checks out, sir.


But:

    $ echo "" | wc -m
    1
You probably want to add the -n flag to echo. It doesn't change the validity of the statement though.


bc; A new utility I did not know about, thanks! #themoreyouknow


Optimisation


Venkatraman Santanam.

He made deep learning improve thumbnail representation. Facebook (or Google) showed up and gave him a number with 6 to 8 zeros to the left of the decimal, preceded at the far left by a $, in O(1).

https://arxiv.org/pdf/1612.03268.pdf



