This is pure politics. The AOM wants proof of loyalty, and shipping a competing image format threatens that loyalty pact.
It’s also incrementalism at its finest. “We already shipped AV1; adding single-frame decode is minor overhead” is the likely rationale. But it misses the point that images are gazed at, while video is in motion and can get away with a lot of quality sins that images can’t. Worse, AV1 is full of optional features. Sure, a client could implement them, but how can you know? Even the lauded animated AVIF is horribly broken across the ecosystem because of this optionality, and there isn’t any way to tell a priori whether the browser supports animated AVIF. Or 4:4:4 AVIF. Or YCoCg AVIF. It’s just downright broken as an image format, falling to the lowest common denominator.
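For what it’s worth, the only workaround is probe-by-decoding: hand the browser a tiny sample of each variant and see whether it decodes. A minimal sketch, assuming you embed real 1x1 sample payloads (the constant below is a placeholder). Note it still can’t answer the animated case, since a browser that decodes only the first frame fires onload all the same:

    // Probe whether this browser can decode a given AVIF variant by
    // attempting to decode a tiny sample image of it.
    function canDecodeAvif(sampleBase64: string): Promise<boolean> {
      return new Promise((resolve) => {
        const img = new Image();
        img.onload = () => resolve(img.naturalWidth > 0);
        img.onerror = () => resolve(false);
        img.src = "data:image/avif;base64," + sampleBase64;
      });
    }

    // Placeholder: a real base64-encoded 1x1 4:4:4 AVIF sample goes here.
    const AVIF_444_SAMPLE = "...";
    canDecodeAvif(AVIF_444_SAMPLE).then((ok) => console.log("4:4:4 support:", ok));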
While I agree that it is too early to turn to a conspiracy theory, the supposed rationale against JPEG XL is also pretty unsubstantiated and it does feel that the Chrome team doesn't want to share the actual rationale for whatever reason.
Google's rationale is very weak, yes. But it makes no sense to say that this is the AOM pushing Google to do this as a proof of loyalty. Why wouldn't the same apply to Apple and Mozilla? In fact, if deprecating JPEG-XL is something the AOM wants, wouldn't you expect the browser vendors (all of whom are AOM founders) to all conspire to kill JPEG-XL? The right conspiracy theory here isn't even that AOM is forcing Google to do this as a proof of loyalty; the conspiracy theory which at least makes some semblance of sense is that Apple, Google and Mozilla are working to kill JPEG-XL to help their own image format win. But then it's just a group of companies acting in their own self-interest and not some coercion, so I suppose that story isn't as sexy.
Any time the AOM is mentioned people keep talking about how evil AV1 is and how evil Google is for pushing AV1 through YouTube, and those comments are consistently among the most upvoted comments. I'd say it's safe to say there's a decent amount of hatred being expressed, and I just don't understand it. It's almost as if a decent chunk of the HN audience has a vested interest in eliminating open patent-free media codecs.
> HN audience has a vested interest in eliminating open patent-free media codecs.
Well, first, because in the case of AV1, a self-claimed patent-free video codec is not, and never was, patent-free. (They stopped claiming to be one around 2019.)
Second, the "AV1 is evil" or "evil Google" sentiment (I don't think they said it outright, but you could argue they implied it) you were referring to is very recent, to the point that it was this month on HN. But yes, it has grown quite a bit in the past year or so.
Third, because part of the HN audience actually cares about quality and trade-offs. One could argue VP9, being a royalty-free codec, is actually a better fit than AV1. And despite being so-called open, Google isn't very receptive to criticism of AV1, whether of the codec itself or its development model.
Had Google (or the AOM) actually worked on AV2 [1] with all of this in mind, maybe there wouldn't be as much backlash.
[1] AV2 was originally promised to be out by 2020.
I don't understand why you and everyone else put so much focus on Google. AV1 is developed by the AOM, so why would Google be receptive to criticism of AV1? Why is it Google's responsibility to develop AV2? People keep directing (well-deserved) criticism of Google towards AV1/AOM, and in your comment you're directing criticism of AV1/AOM towards Google. Are Amazon, Cisco, Intel, Mozilla, Apple, Microsoft, Nvidia, ARM, Netflix, Samsung, Facebook, etc. truly so insignificant in this that AOM and Google are basically one and the same? Why would that huge group of absolutely enormous companies let themselves be bossed around by Google?
Regarding patents: my understanding is that AV1, like VP9 but unlike H.265, is unencumbered by patents, and you can use AV1 in products and FOSS projects without paying a license/royalties. Is this incorrect? If so, why has nobody updated Wikipedia? They still list AV1 as a royalty-free codec. If it truly is just as patent-encumbered as H.265, then I agree with all the criticism, and our best bet is to stick with the very first version of H.264 (whose patents will expire next year) and JPEG. Because surely any patents which apply to AV1 also apply to its predecessors VP8 and VP9.
>Why would that huge group of absolutely enormous companies let themselves be bossed around by Google
Well, first, AV1 is practically 90% VP10, i.e. On2, hence Google. Second, those companies didn't contribute to AV1's development. They are merely supporting it.
>unencumbered by patents
"Unencumbered by patents" is not patent-free, which is what confuses a lot of people (I guess, or I hope). Google has lots of patents on AV1, even when the creators of a technique don't want it to be patented [1]. The AOM members are in the group together so they don't sue each other over patents, and will sue (cough, "defend against") anyone who challenges them on patents.
We also watch on the CBC Gem Apple TV app. It seems to have everything we want... live events, replay events, "PrimeTime" coverage where they switch between popular events with commentators. We even watched USA vs ROC men's indoor volleyball last night. There were no commentators, just the live feed of the match. It was odd to listen to an empty gym with no commentary but at least we were able to watch.
5.25" floppy disks are more likely to suffer physical damage at this point in history than a 3.5" disk. This would also contribute to the supply-demand curve biasing toward 3.5" (on top of the ubiquity argument).
3.5" disks have a more durable design: the hard plastic shell, the spring-loaded shutter and, most importantly, the center hub that rests on the enclosure housing and prevents the magnetic medium from sagging.
When 5.25" disks are stored on end, they sag over time causing them to physically be unreadable. You have to store them flat. 3.5" are (mostly) resistant to this sag and therefore will be more likely to survive long term.
That's what I was wondering: if you came upon a cache of 5-1/4" floppies, what are the odds of them being readable? How well will the magnetic media hold over, say, three decades?
(While I'm sure there are lots of reasons you might want a floppy drive, the case I have in mind is "Hey, here are our old financial records from the 80s" or "I wonder what's on this disk in grandma's attic".)
I can see the slider getting snagged, but I don't see how that can destroy the drive. At least for the model drive I have the slider clears way before the floppy gets near the head. Worst I can see happening is the floppy getting jammed inside, but that should be easy enough to fix given a screwdriver.
Let’s call a spade a spade. The only real world problem that WebBundles (and Signed Exchanges) really solve is to allow AMP to impersonate your website.
Google wants all the click data and the click through navigation data about users (by way of passive logs) so they can sell more ads.
There are no other real world problems that web bundles solve.
The real-world problem web bundles solve is distributed caching. Right now sites have to pick one or a few CDNs and establish a trust relationship with them that allows them to impersonate your site.
Web bundles change this relationship so that anyone can cache sites if it benefits them to do so. If you share a link on Twitter or Facebook or Discord or Slack, they can cache the page on their servers and deliver it through the connection you already have open to them.
Web Bundles also open the door for network-local caches that don’t require MitM or trusting the cache.
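To make the trust model concrete, here is a conceptual sketch of the check a browser performs. The field names and layout are illustrative, not the real SXG wire format; the point is that the signature binds the bytes to the origin, not to the connection, so it doesn't matter who delivered them:

    // Illustrative only: this is the shape of the guarantee, not the
    // actual signed-exchange format.
    interface SignedResponse {
      payload: Uint8Array;   // response bytes, fetched from ANY cache or peer
      signature: Uint8Array; // produced with the origin's private key
      expires: number;       // Unix ms, chosen by the publisher at signing time
    }

    async function verifySigned(
      sxg: SignedResponse,
      originPublicKey: CryptoKey
    ): Promise<boolean> {
      // Hard TTL: a stale signature is rejected, so a malicious cache
      // cannot keep serving old content past the publisher's window.
      if (Date.now() >= sxg.expires) return false;
      // The signature covers the content itself, so the untrusted
      // deliverer can't tamper with it. (In the real format the signed
      // bytes also cover headers and the expiry.)
      return crypto.subtle.verify(
        { name: "ECDSA", hash: "SHA-256" },
        originPublicKey,
        sxg.signature,
        sxg.payload
      );
    }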
It's not without your control. You don't have to use bundles or signed exchanges. You can use bundles without signed exchanges. You can bundle only some resources, and leave plenty of things like dynamic content, comments, ads, etc. unbundled.
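If memory serves, the reference wbn package on npm makes the partial-bundling point concrete. Treat the exact API below as an assumption (it has changed across versions; check the package docs), but the shape is roughly:

    import * as fs from "fs";
    import * as wbn from "wbn"; // reference Web Bundles implementation; API assumed from memory

    // Bundle only the static shell; dynamic content, comments, ads etc.
    // remain ordinary network fetches, exactly as before.
    const builder = new wbn.BundleBuilder("https://example.com/");
    builder.addExchange(
      "https://example.com/",
      200,
      { "Content-Type": "text/html" },
      "<html>...</html>" // static page shell
    );
    builder.addExchange(
      "https://example.com/app.css",
      200,
      { "Content-Type": "text/css" },
      "body { margin: 0 }"
    );
    fs.writeFileSync("site.wbn", builder.createBundle());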
This feels like such a weird stance. I can’t imagine someone saying something to the effect of “I don’t want my DNS records just circulating without my control.” This isn’t like AP giving CNN republishing rights; this is getting a magazine from the stand at the convenience store rather than having to go to the Condé Nast corporate HQ.
Like, it’s your site, exactly as it would be if it were delivered by your server, just delivered by someone who already had a copy on hand rather than fetching a new one every time. This is what HTTP proxies used to do, and what DNS caches and browsers still do. TLS broke web caches because TLS secured the connection instead of the content.
For sure, but the big value prop is better speed and less load on your own servers when your content primarily comes from Twitter, Google, IG, Facebook, Reddit, etc. Small sites can use this to not need a CDN and avoid the hug of death.
If it doesn’t come with a benefit to you then it’s all good.
HTTP caches were always problematic from a business perspective. Great for downloading large binaries (installs) but problematic when they don’t expire as expected, or if content needs to change for contractual reasons.
I mean, you’re the one who gets to decide how long the signature is valid for, just like you can choose your TTL in DNS. And a malicious cache can’t continue to serve stale content, because browsers will reject it. You get a hard guarantee that your TTL will be respected.
Links on the page are the same as before signing, so the only actual problem is not being able to change/delete the documents hosted elsewhere immediately.
Yea, but the web server delivering them is now Google. Google now gets the access logs and, using the persistent TLS socket, can follow the user's activity. Sure, the content is signed, but the delivery is no longer private.
It doesn't seem like this would materially change the information Google receives. The status quo is that Google knows (via redirect links) what search results I click and when. It doesn't technically know what data the website will send me, but normally it's the same as Google's cached copy. It doesn't know what resources my browser will block, but in a bundle scenario, my browser is free to ignore resources even if they must be transmitted as part of a bundle.
> using the persistent tls socket can follow the users activity
Even if this caused browsers to keep idle sockets to Google alive more often, what information is there to be gained from an idle socket?
CDNs are a known commodity with business relationships. You can’t have an unknown CDN in the mix. They are an extension of your infrastructure, and you can control whether or not they are in the path. The key here is that there is also a legal and business relationship.
For those who aren't initiated in the world of T1D, there is some amazing research coming out of the Faustman lab (MGH). There is both a promising potential cure (the BCG vaccine) and research indicating that islet cells _do_ regenerate for decades after diagnosis. Islet cells are the part of the pancreas that generates insulin, which is needed to store/save sugar. That means the pancreas is constantly trying to repair the damage from the immune system.
Let that sink in.
Many diabetics suffer from 'random' lows or highs that can't be explained. Not because they aren't doing the right things - because they are - but more likely because their body is bringing islet cells online, producing extra insulin, then the immune system promptly kills them and knocks off the extra production. It's a war within the body!
This is why Loop is sooo amazing and needed. You need a closed loop system that monitors and calibrates to these kinds of bio and environmental changes. Unexpected sprint for two blocks to get to class in time? no problem. Unexpected insulin production in the blood stream? no problem. This project is truly hero work.
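To give a flavor of what "closed loop" means here, a toy sketch of the standard correction-bolus arithmetic follows. This is purely illustrative, not Loop's actual algorithm and certainly not dosing guidance; every constant is made up:

    // Toy illustration of the closed-loop idea. NOT Loop's actual
    // algorithm and NOT dosing guidance; all constants are invented.
    interface GlucoseReading {
      mgPerDl: number;
      timestamp: number;
    }

    const TARGET_MG_DL = 110; // hypothetical target glucose
    const ISF = 50;           // mg/dL dropped per unit of insulin (varies per person!)

    // Re-run every CGM interval (~5 min). If the body suddenly produces
    // its own insulin, readings fall and the loop backs off automatically.
    function correctionUnits(reading: GlucoseReading, insulinOnBoard: number): number {
      const needed = (reading.mgPerDl - TARGET_MG_DL) / ISF - insulinOnBoard;
      return Math.max(0, needed); // never dose for a low; suspend delivery instead
    }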
As a spouse to a T1D, life is sometimes scary. I, like many partners, always have a backup plan in the back of my mind for that fateful day when an extreme low isn't caught in time. It's scary.
I for one, look forward to life with a bionic partner.
Totally agree. To be clear, I wasn’t trying to overhype the ‘cure’ but rather emphasize that the problem is way more complicated than many believe because of the islet regeneration. That’s why I’m a believer in the tech we have now because it’s the most viable path to long term management.
I'll get my hopes up when something is released. For now it's Loop and some other things. One trend that I don't like is fully automatic without a way to do manual override of all things. The variation between people and needs and even the same person is too great and hasn't been codified. Loop with Omnipod is somewhat on this side currently, but it is still new too.
A few clarifying points:
* the author specifically said -1% conversion. If this were AWS, that would mean a daily reduction of $6 million in revenue!
* the author specifically says this is for IE users. This implies two things: a) WebP is not an option, and b) we are likely also talking about a lower class of hardware (mom & dad machines). I would expect WebP on older hardware to carry the same kind of performance tax, but it's harder to identify; at least with IE/Edge there is an implied age of hardware.