Hacker News
We increased our Lighthouse score by making our images larger (rentpathcode.com)
180 points by vlucas on July 6, 2021 | 83 comments



When a measure becomes a target, it ceases to be a good measure.

Which is to ask: So they improved their Lighthouse score, grats, but did they improve or reduce their user experience? Where's the data on that?

This is of course caused by Google's "good intentions" gone awry. They originally created Lighthouse to help better inform developers, so that developers could create better user experiences.

But Google is now [ab]using Lighthouse as a target for Google Search page-position preference. Given this situation, developers in turn code towards Lighthouse (even into its bugs/quirks) because they need that higher ranking, even if it ultimately hurts the UX.


I was surprised that when I tested my canvas library website using the new Lighthouse metrics it scored 92 on performance (and 99 on accessibility, which is the metric I most care about). The website[1] includes a massive animated banner in a <canvas> element, which requires loading two large (yet responsive) images alongside processing etc. to get the animation playing. The LCP measure was 0.7s.

Checking web.dev's LCP page[2] explains my luck - Lighthouse does not (currently) consider <canvas> elements, or their asset requirements, when calculating the LCP score. They do include: "<img> elements; <image> elements inside an <svg> element; <video> elements (the poster image is used); an element with a background image loaded via the url() function (as opposed to a CSS gradient); and block-level elements containing text nodes or other inline-level text elements children."

I am certainly not suggesting that everyone starts loading their image-heavy banner content into <canvas> elements to help improve their LCP scores as I fully expect Google to address such attempts to game the system at some point in the (near) future.

[1] - https://scrawl-v8.rikweb.org.uk/

[2] - https://web.dev/lcp/


I agree with the premise here, but we literally added ~100px to the width of our really small thumbnail images that were hard to see at most modern screen resolutions. People come to the website to view property photos. They were literally smaller than a 256x256 map tile. The size of them should have been increased long ago.


Well, I'm glad that worked out in your case, but that's just getting lucky, no? Sounds like in most other scenarios it would backfire.


That's not the take I got from reading the write-up. There was logic in the decision to increase the size that was not difficult to understand. I'm not familiar with what Lighthouse is/does/why/etc., but they described the situation well enough for this to make total sense on why it worked. Maybe they are lucky to have people working for them who can work the problem? That's me stretching to accept 'lucky' as the reasoning at all.


It only worked because Google Maps loads images in 250px tiles. If Google Maps decides to update the tile size tomorrow, the fix regresses. Arguably the entire map is one piece of content, and that should be measured as the LCP, but more importantly it shows the measure is just very brittle and easily gamed, as other comments in this thread already attest to.


Does setting the size inline with width="250" in the embed affect the LCP even if G decides to default to 512px? I'm actually surprised there's not a ?width=250 value in the G embed url


> There was logic in the decision to increase the size that was not difficult to understand.

I agree, but the logic doesn’t make sense if your goal is ‘optimizing the user experience’ as opposed to ‘optimizing our lighthouse score’.


Apologies, I wasn't specific enough in expressing what I meant.

You understood it to mean "they didn't know what they were doing and got lucky", which is a valid interpretation but not what I was going for. They definitely do know what they're doing.

What I meant was more along the lines of "the metric to optimize for was no longer strongly correlated to the original goal of improving the user experience. They were lucky that in this scenario optimizing for the metric aligned with an improved user experience."


Now here's me doing some back-bending playing devil's advocate. If the image being optimized for UX resulted in the wacky scoring situation that lowered their ranking in search, would that also not be considered a bad UX if it's harder for users to find you? The team noticed the lower rankings, then investigated. The investigation found that their image optimization was causing the bad LCP, which in turn lowered the ranking, so a quick change fixed the bad LCP. So it's not like they were chasing a metric for metric's sake. They did the correct optimization first. So again, I think calling it lucky is a bit harsh in this case. However, I totally get where you are coming from and know that a lot of places will luck their way into things without ever knowing the why.


> If the image being optimized for UX resulted in the wacky scoring situation that lowered their ranking in search, would that also not be considered a bad UX if it's harder for users to find you?

If you insist, but even then that is fixing the symptom instead of the cause.


Don't hate the player, hate the game? These people obviously tried to keep their UI/UX in mind by originally using the optimized image. That wound up screwing them in a way not normally thought about in UX optimizations. They published their findings so that others that don't normally think in this manner can be aware of their findings.

They are the player.

The game is the bullshit G does, and all of the other vendors that latch onto the bullshit G does that provide hoops for devs to jump through.

That's the game.

Who's really needing the ire of internet readers?


It's not that I'm disagreeing with you, it's that I can not agree with you because we're not even arguing about the same thing here. I don't know what else I can say to clarify that.


In this particular case the developer had wondered if chasing Lighthouse was a good idea, but realized that they had already made a similar change on another site for different reasons. Better Lighthouse scores were just a bonus.


It's been annoying seeing these metrics creep into marketing reports and seeing devs wasting hours "fixing" the issues. Page speed is definitely an issue but I rarely see these stories accompanied by actual user speed data from the site. Google Analytics captures this stuff so you can actually compare pages per session, time on page, scroll distance, etc to paint, load, and render times to "quantify" what users are actually seeing when they use your product.
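For anyone wanting to pair those Lighthouse numbers with real user data, a minimal browser-side sketch is below; it collects LCP and CLS via PerformanceObserver and beacons them to a hypothetical /perf-metrics endpoint (the endpoint name and payload shape are assumptions, not anyone's real setup).

```ts
// Minimal real-user measurement sketch. The /perf-metrics endpoint is hypothetical.
let cls = 0;
let lcp = 0;

// LCP entries are re-emitted as larger elements paint; keep the latest value.
new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) {
    lcp = entry.startTime;
  }
}).observe({ type: 'largest-contentful-paint', buffered: true });

// Accumulate layout shifts, ignoring ones caused by recent user input (per the CLS definition).
new PerformanceObserver((list) => {
  for (const entry of list.getEntries() as any[]) {
    if (!entry.hadRecentInput) cls += entry.value;
  }
}).observe({ type: 'layout-shift', buffered: true });

// Flush when the page is hidden (tab switch, navigation, close).
document.addEventListener('visibilitychange', () => {
  if (document.visibilityState === 'hidden') {
    navigator.sendBeacon('/perf-metrics', JSON.stringify({ lcp, cls, page: location.pathname }));
  }
});
```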


Well, they went with a simpler approach earlier, with AMP. AMP couldn't make any literal guarantees about UX or performance, but compared to the mobile article-reading experiences that came before it, it often did seem to be an improvement.

And of course, people hated it, for numerous reasons. You could obviously just make pages faster and less annoying by using plain HTML. The problem isn't that nobody knew this; it's that most of the big publications DGAF, and they were the entire point of AMP as far as I can see.

There are many reasons to dislike the way AMP was executed, even if they did eventually rectify many of the serious issues... but the AMP cache concept in combination with the soured reputation probably spelled the end of it no matter what would happen next. AMP for E-mail doesn’t really have any of those problems, so I suspect AMP will probably live on in e-mail.

But Google does definitely want to push sites towards better performance and users towards better performing sites; they simply have a mutual interest in this, I don’t think most people would dispute that. Even just in general, but especially when new Android phones are not packing the same punch as new iPhones are... so that scroll jank is going to hit a lot harder.

Lighthouse is generally good. It actually discourages a lot of things that people already thought were bad ideas, even ideas from Google. It tends to score plain HTML pages very well, and if it does catch a snag on something you can usually at least work around it. It provides decent diagnostics for how to optimize a web page. It does have some issues, and that will contort people into doing otherwise silly things. But if the website in question chooses a metric in something like Lighthouse in a way that is detrimental to their UX, chances are they just never gave a shit about UX to begin with.

While I don’t want to come off as a shill, and I also don’t want to come off as accepting Lighthouse just because it’s “less bad” than AMP, I just think that the “abuse” here is less substantiated. Although there are edge cases where Lighthouse can be silly, it does generally force you to make leaner pages. I think it makes a decent measure of performance. As long as a few points on Lighthouse is not make-or-break for SEO scores, and I doubt it’s that sensitive given the variability per-run, I think this is actually a good thing.

Search engine metrics are always imperfect. Anyone familiar with Wikia/Fandom can understand how its subpar content can overshadow much better community-driven websites due to its SEO position, and that is even in spite of what an awful, slow, ad-ridden pile of shit it is. I’d be delighted to see it Page 2’d over Lighthouse scores. But that’s just my feelings on the matter. I’m sure there’s more to it than that.


The LCP metric is particularly brittle. It's concerning that Google is linking it to search ranking, thereby ensuring everyone caters to it.

In our case, our hero image (formerly the LCP element Lighthouse picked up) is an animated image illustrating our product. It starts animating very fast for most of our audience and finishes loading in the background as it continues animating.

However, the Lighthouse LCP timestamp is not the time which the image starts animating, instead it's the time the animated image _completely finishes_ loading. So even though the animation starts almost right away and doesn't stutter, our LCP was several seconds or more.

We "solved" it by making the animation bounding box size slightly smaller and some text boxes on the page slightly larger so the LCP was tied to the text box loading.


There's a similar unsolved LCP issue about progressive images. LCP is currently completely unaware of progressive rendering, so even when an image is rendered at a good-enough quality, it doesn't count as a "paint" at all.


Wow, it's not even like progressive JPEGs are anything new, they have been around forever (decades?)!


Yes, decades. I saw them in the 90s. They needed that feature to compete on the web during the dial-up era because GIF already had that feature, and it was well-established and featureful.


Unbelievable. I have a website with an intro animation element that loads instantly and lasts 7 seconds. Guess what my LCP is? 7 seconds. The combination of power and neglect from google here is astounding.


My LCP is about 0.7 seconds in real life with no cache, but 4s on web.dev due to a custom font that actually loads almost instantly (100ms or so). Their metrics are bugged out.


You could also split the animation up and seamlessly switch.


This is a different thing, but it actually is possible for larger JPEGs to be faster (smaller file size) than smaller JPEGs.

Back when 4x "retina" resolution displays first hit the market, someone observed that you could take a 4x-sized image, crank up the JPEG compression (from, say, 80% quality to 40%), and the resulting JPEG artifacts would visually disappear. The file size of the larger, more-compressed image is frequently smaller than that of the smaller, less-compressed one. And the render quality of "squishing" a 4x image down to 1x on a lower-resolution display similarly looks fine.

The images on the article in question are ironically broken, of course, but anyway I've been sampling images this way for years now. https://alidark.com/responsive-retina-image-mobile/
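A rough way to reproduce that comparison yourself, sketched with the sharp library (assumed to be installed; the file name, widths, and quality values are illustrative only):

```ts
import sharp from 'sharp';

// Compare a 1x image at conventional quality against a 2x image at much lower
// quality; when the browser scales the 2x image back down, the artifacts
// largely disappear, and the file is often smaller.
async function compare(input: string): Promise<void> {
  const small = await sharp(input).resize({ width: 800 }).jpeg({ quality: 80 }).toBuffer();
  const large = await sharp(input).resize({ width: 1600 }).jpeg({ quality: 40 }).toBuffer();
  console.log(`800px @ q80: ${small.length} bytes, 1600px @ q40: ${large.length} bytes`);
}

compare('hero-original.jpg').catch(console.error);
```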


Lossy compression is fascinating because of how we can make certain optimizations based on the human eye, not necessarily on keeping the data exactly intact. JPEG seems to have stood the test of time so far, but new image formats like AVIF are definitely interesting.


In the inverse, smaller images can be bigger. There's a piece of software that I interact with a lot that's always trying to save space by crushing PNGs down to smaller dimensions, but in the process can introduce a lot of new complexity to the data (by turning what were crisp pixel edges into blurs), hurting PNG compression and increasing file size.


Yup, that makes sense.

Really I find the whole topic just infuriating… so much of the bullshit we've been dealing with since the nineties, from browser incompatibilities to javascript stagnancy to CSS layout (grid, flexbox) has gotten immeasurably better over the years. I never would have imagined that we'd still be stuck with JPG/PNG forever, though. (I hope that WebP is the savior but people still seem to be waffling on it.)


Using the terms "small" and "big" to refer to the weight of an image is confusing... it is better to use "light" or "heavy".


How do you measure the "weight" of an image? Sounds quite confusing to me.


There seems to be a misconception that the Lighthouse score is linked to the page experience search rank update. Higher Lighthouse scores won't give you better SEO. Only better results from field data (albeit only Chrome field data) will impact your search results [1].

Lighthouse is only intended to be a guide (name checks out) for developers to identify potential opportunities to improve real-user performance. Core Web Vitals is how Google has decided to align lab and field data in a more unified way. Historically, this has been pretty difficult, particularly with interactivity measurements. For example, Total Blocking Time (TBT) is a lab proxy metric for First Input Delay (FID) — they don't measure the same thing. The team at Google has frequently communicated that the only true way to know is from measuring on real users [2].

While the metrics aren't perfect, they are taking in feedback to adjust how metrics are measured and weighted, such as with the windowed CLS update [3]. I for one have found the tools and browser support for measuring performance to have improved significantly, even in the last few years. Kudos to the Web Perf community, who I'm sure would appreciate any feedback.

[1]: https://support.google.com/webmasters/thread/104436075/core-...

[2]: https://web.dev/vitals-tools/#crux

[3]: https://blog.webpagetest.org/posts/understanding-the-new-cum...


KPIs/Goodhart's Law in a nutshell... In theory Lighthouse is a nice resource for finding ways to optimize your site, but since it's tied to page rank, we are encouraged/incentivized to eschew best practices and use Lighthouse practices instead.


I've posted here numerous times on how you can make your site worse and increase your score. It's frustrating that Google is pushing such a broken set of metrics for SEO. It's easy to trick but often in ways that make the user experience worse.


A good question is why Google is using only one tile to measure LCP for sites that have maps taking 50% of the viewport.

Is it just a coincidence, or did they design it that way to avoid ruining LCP metrics for large sites using Google Maps extensively (Airbnb, rent.com, etc.)?


Is this really a bug though? It's a bit silly, but in a very real sense they added more fast-loading, eye-catching content.


It's interesting to me that the final "here's the new improved layout" screenshot is 2000px wide. If you have to have such a large window to not feel cramped just to make some random metric happy, then I think you've failed.

The page itself also really seems to fight me moving around the map due to how it manages state and the URL stack. No doubt my browser isn't the same as a normal users, but it strikes me they're chasing the wrong goals here.


We are actually working on this! Just because we did some small fixes doesn't mean we aren't working on something bigger :). The current app is a full isomorphic SPA with React + Redux. It's a beast, and around 80% of the code is completely unnecessary on the client. Our initial approach was little tweaks and performance wins like this, but those are diminishing returns with a JS payload size so large.

Next week we are rolling out our new property detail page for Rent.com that is mostly server-rendered, with a few interactive React components injected and hydrated on demand as you scroll. It brings the app bundle size down from about 1.5MB (yikes!) to around 140KB. It's been a huge, long effort and is a completely new app architecture. It of course increases our score way more than just 17 points :)


Can you share what you guys used for hydrating on demand as you scroll?


We wrote our own custom Babel plugin plus some runtime functions that handle this. It changes the way we import our components, and we have to choose between async and server-rendered components at import time, so that is the main trade-off.

We will be sharing more about this approach along with an open source library soon. We have been working on it for the better part of a year.
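This isn't RentPath's actual plugin, but for anyone curious about the general shape of "hydrate when it scrolls into view", here is a sketch using IntersectionObserver plus a dynamic import; the data attributes and the ./widgets/ path are hypothetical, and a real build would need the bundler to understand that import pattern:

```ts
import React from 'react';
import ReactDOM from 'react-dom';

// Hydrate a server-rendered component only once its container scrolls into view.
function hydrateOnVisible(container: HTMLElement): void {
  const observer = new IntersectionObserver(async (entries) => {
    if (!entries.some((e) => e.isIntersecting)) return;
    observer.disconnect();
    const name = container.dataset.component!;        // e.g. "PhotoCarousel" (hypothetical)
    const props = JSON.parse(container.dataset.props ?? '{}');
    // Fetch the component's JS only now, then attach React to the existing HTML.
    const { default: Component } = await import(`./widgets/${name}`);
    ReactDOM.hydrate(React.createElement(Component, props), container);
  });
  observer.observe(container);
}

document.querySelectorAll<HTMLElement>('[data-component]').forEach(hydrateOnVisible);
```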


The first thing rentpath should do is stop scrolljacking.


I wouldn't be surprised if this is because it reduced the Cumulative Layout Shift (CLS).

Having larger images defined reduces the likelihood you'll have layout shifts while the page is painting, which greatly impacts your overall score.

https://web.dev/vitals/


Didn't the web fix this with <img width="xxx" height="xxx" /> long ago? Page authors know how big the image is going to be on page load, so renderers can just allocate the space for the image there. No need for layout shifts.
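Yes, and the same idea carries over to images added from script: set width and height before the image loads and the browser reserves the box up front. A minimal sketch (the dimensions, path, and selector are made up):

```ts
const img = document.createElement('img');
img.src = '/photos/listing-123-thumb.jpg'; // hypothetical thumbnail URL
img.width = 330;   // known display size, set before any bytes arrive
img.height = 220;  // so the layout box is reserved and nothing below it shifts
img.alt = 'Property thumbnail';
document.querySelector('.results-list')?.appendChild(img);
```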


This is correct. Weirdly, Lighthouse will ding you if you set the dimensions in CSS rather than HTML, even if the CSS is loaded first and blocks rendering, thereby preventing layout shift.


I'm pretty sure it just warns you if the width and height aren't set, but it doesn't affect your score unless the layout actually shifts. You're fine if you've set the size in CSS instead


A lot of web designers don't do this any more


Yeah, I have been looking at a major brand's site, and the thing that triggers "poor" pages seems to be CLS, based on what GSC is telling us.


The end result was that increasing the size of our thumbnail images raised our performance score by reducing our LCP score.


... LCP is specifically mentioned in the 2nd paragraph of the article, in addition to the conclusion quote:

> The end result was that increasing the size of our thumbnail images raised our performance score by reducing our LCP score


Clicking one marker on the map triggers almost 100 network requests. Seems crazy; why push all the images for all properties when I only need the first, then lazy-load the rest when I interact with the slideshow?

Not sure why sorting then reloads the data, as it looks to sort the matched results that are already on page.


Higher score, not faster site nor improved UX.


If they’re using loading=lazy it might actually be faster and a better UX, because fewer images load initially. But the same could of course be achieved with increased font size and whitespace.
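As a sketch of that idea, loading only the first few thumbnails eagerly and marking the rest lazy (the thumbnail fields and dimensions here are made up; the attribute belongs in the emitted markup, since flipping it from script after the parser has started fetching is too late):

```ts
interface Thumb { src: string; alt: string }

// Render property thumbnails so only the first few load eagerly; the rest
// wait until they approach the viewport.
function renderThumbs(thumbs: Thumb[], eagerCount = 3): string {
  return thumbs
    .map((t, i) =>
      `<img src="${t.src}" alt="${t.alt}" width="330" height="220" ` +
      `loading="${i < eagerCount ? 'eager' : 'lazy'}">`)
    .join('\n');
}
```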


LCP is garbage. If you have top navigation and left navigation (think Jira, Gmail, etc) their sum could be your LCP even though none of your actual content has loaded. Good for your LCP score, bad if you want to use it to actually measure performance


What, this is improving your "score", but not actually improving user experience or perceived load time, is it?

It's just gaming the score?

An odd thing to brag about? or am I missing something?


What alternative approaches did you try?

Would loading a placeholder image for the map and transitioning it in when ready help?


Unless I recall incorrectly, LCP will finish when changes to that area are finished. So if a placeholder is used, it will be pointless.

loading=lazy can actually negatively impact LCP


That was my thought: load a fast, low-res image and lazy-load the map.


What is the point of a lighthouse score?


Better rankings in Google's organic search.


Also, better perceived performance by humans.


Except it's not.


Time to include a hidden 10000x10000 white PNG to all your webpages I guess.


This is kafkaesque. "Here is the stupid thing we did to game the stupid metric and make it happy"


This "stupid thing" was the right thing to do for end users. They come to our website to see pictures of properties they are looking for. We just made them ~100px wider. They were literally smaller than a 256x256 map tile. It also matches the image thumbnail sizes of its sister site as well (which were already larger), which was mentioned specifically at the end of the article (I doubt you made it that far).

Beyond that, page speed is a ranking factor in Google, so it's absolutely not "stupid" to pay attention to it and make small changes that improve it: https://searchengineland.com/google-speed-update-page-speed-...


Why was it right for end users?

From what I can tell, it just wasn't wrong for end users.


I don't think this was the right thing to do for end users. The largest perceived element on the page is still the map on the right, and it takes as much time to load as before the change.

They are just gaming the metric by changing what the LCP measures, with no added value to the users.


The whole map, yes - and it still is the largest element on the page by far. The LCP score was being based on each map tile.


> was the right thing to do for end users.

Only because the company made the change anyway on a sister site, for an unrelated reason, and hadn't bothered to make the same change here.


Because a 230px thumbnail is hard to see and was unnecessarily small on most modern devices and screen resolutions. It's the reason we changed it in ApartmentGuide.com too.


"Google's stupid system reminded us to propagate a change we'd already made on another site for pure usability reasons" is not at all a bad thing but I think the general tenor of the responses here makes it clear that the blog post didn't make it obvious that was a reasonable tl;dr.

(I thought I'd read to the end but hadn't realised that was the tl;dr until I read your comments, FWIW)


Why is making the site slower the right thing for end users?


It didn't make the site measurably slower. We specifically tested this. We already optimize in other ways and lazy-load most of the property images. The only ones that load right away are the first 3 property images in the list.


You have a test showing that after sending X bytes, Y additional bytes take 0 time to arrive?


This is impossible to achieve in the real world. Load times have a lot of variance. Our tests were performance neutral after the change.

In a vacuum yes more data = longer load, but that's not how it plays out in a real browser. Things load in parallel, things block in certain parts but not others, etc. So yes, you really can sometimes send more bytes with no impact on total load time. It just depends on what they are, where they are, and how they load.


I read your article several times, because I was incredulous at the tone that was being used in the writing.

This is an extraordinary Goodhart's Law example.


For the last half decade, if your users are accessing your website through a search query that isn't your site name, you have the wrong audience

SEO is a fool's errand that should be relegated to the history books

People click through from social media and chat apps; they don't type search queries


> For the last half decade, if your users are accessing your website through a search query that isn't your site name, you have the wrong audience

Hard disagree on this. How am I supposed to discover rentpath.com if I don't know I'm looking for it? But if I'm looking to "connect with prospective renters by offering powerful digital marketing and leasing solutions," like the marketing copy on their landing page says, I can easily see myself landing on rentpath.com.

Tell me: did you literally type "hacker news" into google in order to discover this site?


You discover it because it's already going viral in your ads and feeds

Or the search query you did use was shared with the data broker, and now what's going viral to you already matches what you would have looked for

Not advocating for this; it also drives traffic and is what's happening right now

It's not important that rentpath or some other service you never heard of was what you picked; only their ad spend and ongoing virality are important, as they rely on a predicted distribution of the population to be shown their service and click through


> Or the search query you did use was shared to the data broker and now whats going viral to you already matches what you would have looked for

This sounds like a conspiracy theory. Search engines generally aren't sharing your queries?


Uhm ok, 2010 called

It's not important which specific service shares what when they all use the same concepts, or when an analytics package they added happens to be doing that without the developer's/executive's specific knowledge


Google, DuckDuckGo and Bing are all using 1P scripts.


I've got an adblocker and no 'feeds', but here you are acting like my money doesn't spend.


I run a niche news site (not in English). Our traffic from search engines for keywords related to our niche isn't insignificant.


Nope, not based on my experience.


Not really; they don't say if it's larger dimensions or image payload.

You can have a physically larger image that has a smaller file size.



