"Your website has been banned for illegal activity and all content has been deleted. The suspension is immediate and indefinite. Please consider using another Internet"
Also, their algorithms mistook a live stream of the Notre Dame fire for the 9/11 attacks. How can a live stream be a past incident?
You are taking the term “live stream” to mean something actually happening at that moment in the real world. All a “live” stream actually is to Youtube or FB or any other streaming service is incoming RTMP packets. Youtube matches incoming streams against its ContentID database just as it does for normal uploads; I would bet that the same thing would have happened if the same footage had been uploaded normally after the fact. People would use unlisted streams to broadcast pirated content otherwise.

I faced a similar false claim once where my playing Super Mario World on a real-life SNES was matched to some major-label song, and since it was my third strike (all false) my account got banned from using Youtube Live entirely. In the end I’m kind of glad that happened, because it led to me developing my own minimal self-hosted stream site for small friends-only streams, including things Youtube would give legit claims for. My tiny VPS wouldn’t stand up to thousands, or probably even dozens, of simultaneous viewers the way Youtube or the other big names can, but that doesn’t matter at all for me.
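For the curious, a minimal self-hosted ingest server really can be tiny. Here's a sketch using the node-media-server npm package; that's just one possible stack, not necessarily what I used, and all the settings below are illustrative:

```typescript
// Minimal self-hosted RTMP ingest + HTTP playback server (sketch).
// Point OBS or ffmpeg at rtmp://<host>/live/<stream-key> to broadcast.
import NodeMediaServer from "node-media-server";

const nms = new NodeMediaServer({
  rtmp: {
    port: 1935,        // standard RTMP ingest port
    chunk_size: 60000,
    gop_cache: true,   // lets late joiners start from the last keyframe
    ping: 30,
    ping_timeout: 60,
  },
  http: {
    port: 8000,        // viewers watch via HTTP-FLV (e.g. with flv.js)
    allow_origin: "*",
  },
});

nms.run();
```

Nothing about this scales to thousands of viewers, but for a handful of friends it runs fine on a tiny VPS.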
"YouTube Live is an easy way to reach your audience in real time. Whether you're streaming a video game, hosting a live Q&A, or teaching a class, our tools will help you manage your stream and interact with viewers in real time."
Youtube Live is supposed to be for streaming live events. My point is that their algorithms incorrectly decided a live stream was a past event. Even Google/Youtube acknowledged it was incorrect to tag the live event.
The experience on iOS remains profoundly buggy. The URL bar doesn't hide properly, scroll-to-top doesn't work, rotation is busted, text selection is wonky, reader mode is disabled. How I wish I could disable this monstrosity.
I partly agree with this. If you run a WordPress blog, then EasyEngine [1] and OpenLiteSpeed [2] can really boost your site performance.
Performance will still suffer greatly if you run some cancerous theme with endless JavaScript calls. But both of the mentioned "engines" have changed the way I see blogging with WordPress.
Best of all, this is accessible to your average user as well. DigitalOcean can spin you up an OLS instance in a minute or so...
No it isn't. It's designed to make the user experience better on sites that frequently host Google ads (and also often contain a ton of bloat, 3rd party js, poorly constructed DOMs, awful CSS, etc).
The only way Google could proactively "solve" this problem was by creating a "standard", and then also offering to absorb end user traffic for sites that adopted the standard. FWIW, AMP is an open standard not solely owned or contributed to by Google.
7. Looking at the last few merged PRs, nearly everyone involved is a Google employee. I realize this could be a coincidence, but I'm not going to analyze the whole repo.
8. The TSC is 3/7 Google employees.
Regardless, until Google issues a legally binding release of the project to an independent organization it is owned by Google. The TSC and AC could be removed at Google's whim.
Why is that the only way? Seems like they could easily have achieved the same result by significantly penalizing sites based on load time and number of external requests.
A fast load time when a page is indexed does not guarantee a fast load time when it is served up to the actual viewer. Serving the page from cache is the only way to guarantee that the page will still be fast when the user wants to view it.
Because users want relevant search results much more than fast websites. Google already factors in a website's performance in their rankings, but weighing it too much over content relevance will make search results worse.
If they actually cared that much about making the results "relevant", they wouldn't mix a bunch of irrelevant suggestions into the results page, each marked with "missing: <query_term>" pointing out exactly how they ignored part of the user's request.
By that logic, then, what's the point of AMP if Google is saying page load speeds aren't really that big of a factor? Why go through the trouble of deriving a whole new subset of HTML?
> Because users want relevant search results much more than fast websites. Google already factors in a website's performance in their rankings, but weighing it too much over content relevance will make search results worse.
What's the difference between influencing positions and visibility based on AMP support vs overall page performance?
If visibility is influenced by AMP, then Google benefits, users of Google services likely benefit, web developers suffer, users not using Google services to view the content continue to suffer (because companies will keep maintaining two versions of the website: a bloated version with 100 external tracking requests that gets shared on twitter/reddit/facebook/hn/etc, and an AMP version that only appears on Google's services), and the internet as a whole suffers. Whereas if visibility were influenced by page speed plus external requests, then everyone would benefit.
- AMP is a transparent and unambiguous standard that leaves no uncertainty as to whether you are somehow "performant enough" to qualify for the simple but limited visibility boost (referring to the news carousel)
- AMP prevents important usability problems beyond performance, like page content jumping
- AMP can enable advanced/extreme performance optimizations by default that are somewhat rare in practice (eg. only loading images above the fold), aren't really possible to do safely/properly without a spec like AMP (eg. preloading content before the user clicks the link without unpredictably disrupting the website's servers; see the sketch after this list), or are sometimes avoided due to cost (eg. fast global caching with Google's impressive CDN). Important for users in the developing world.
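To make the preloading point concrete, here's roughly what a site has to do on its own today: a minimal hover-prefetch sketch (the names and values are illustrative, and this is the non-AMP analogue, not the AMP mechanism itself). Note that every hover hits the origin server directly, which is exactly the unpredictable load that a spec plus shared cache avoids:

```typescript
// Prefetch a link's target when the user hovers over it (sketch).
document.querySelectorAll<HTMLAnchorElement>("a[href]").forEach((a) => {
  a.addEventListener(
    "pointerenter",
    () => {
      const link = document.createElement("link");
      link.rel = "prefetch"; // a hint; the browser may ignore or deprioritize it
      link.href = a.href;
      document.head.appendChild(link);
    },
    { once: true } // prefetch each link at most once
  );
});
```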
Addressing your other points:
- Users who don't use Google services don't suffer. AMP is not Google-exclusive, all the major search engines (like Bing, Yahoo, Yandex) are stakeholders in the AMP standard and are free to support AMP. AFAIK there is nothing in the AMP standard that favors Google over other search engines or any other platform that might support AMP.
- Not sure how web developers suffer more from AMP. I'd think web developers would suffer more from trying to wrangle their bloated website performance independently rather than use a standard toolkit that enforces best practices and enables difficult/expensive optimizations out of the box.
- It's not clear to me how the internet as a whole will suffer, but I suspect this is just general hyperbole and not a specific point.
as long as google is using their grip on the web to drag crappy websites kicking and screaming into having acceptable load times, i'm okay with it.
yeah, there's a few sites out there that are faster than amp. but most of them are not, and before amp the trend was certainly not to make anything lighter or faster.
AMP pages loaded through Google search with hot cache load slower than some of the websites I've developed when loaded with cold cache.
It's absurdly slow, uses tons of unnecessary JS, and it is a privacy nightmare: now I can't just use server-side, GDPR- and ePrivacy-guideline-compliant analytics anymore, but either have to give up analytics entirely or have to use privacy-obliterating Google Analytics.
And if a user ever loads the page with JS disabled (which all my sites are designed to support), AMP breaks and just shows nothing at all for over 8 seconds.
> AMP pages loaded through Google search with hot cache load slower than some of the websites I've developed when loaded with cold cache.
Basically this. AMP sets a hard upper bound for how fast your webpage can be. Have a purely static HTML+CSS blog but want to get the page rank boost from AMP? Just add reams of unnecessary Google Javascript to what should be a very simple site.
> AMP pages loaded through Google search with hot cache load slower than some of the websites I've developed when loaded with cold cache.
On a mobile device in India? Nonsense. Your page load time is dominated by latency, which the AMP user doesn't see because it is preloaded from near caches.
> uses tons of unnecessary JS,
Which of the JS is unnecessary? The JS to load images allows AMP not to preload images below the fold, which is absolutely necessary for speed and for being friendly to data plans.
> now I can't just use server-side GDPR and ePrivacy guideline compliant analytics anymore
Explain. You still get first-party tracking that gets fired when the user clicks through to your page, and you can get user consent via data-consent-notification-id.
> And if a user ever loads the page with JS disabled
In that case, it's the SERP's fault for showing the AMP page instead of the non-AMP page. In the normal JavaScript-enabled scenario, the SERP would be stupid to show your non-AMP page.
> On a mobile device in India? Nonsense. Your page load time is dominated by latency, which the AMP user doesn't see because it is preloaded from near caches.
My test device is a Huawei Ideos X3 on a 56 kbit/s throttled 3G connection. The same effect also applies with a Pixel 1 on the same connection, or with either of the devices on a modern 3.9G LTE connection. (Tested on the O2 network in Germany; my sites work reliably better than AMP even, and especially, while on a train. If you've ever tried using O2 on the intercity train between Hamburg and Münster, you know that every third-world country has better internet than that; I've seen 8 kbit/s with 13 seconds of latency there.)
> Which of the JS is unnecessary? The JS to load images allows AMP not to preload images below the fold, which is absolutely necessary for speed and for being friendly to data plans.
AMP uses megabytes of JS for that purpose; I do the same in under 1 kiB (even including an IntersectionObserver polyfill). And my CSS is much, much smaller as well. That's part of why I get 100/100 in all PageSpeed and Lighthouse tests, including when simulating mobile connections, while AMP pages get only 60/100.
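The core of that lazy-loading approach fits in a few lines. A minimal sketch (the data-src convention and the rootMargin value here are illustrative, not my exact code):

```typescript
// Lazy-load images: real URLs live in data-src and are promoted to src
// only when the image approaches the viewport.
const observer = new IntersectionObserver(
  (entries, obs) => {
    for (const entry of entries) {
      if (!entry.isIntersecting) continue;
      const img = entry.target as HTMLImageElement;
      if (img.dataset.src) img.src = img.dataset.src; // start the real load
      obs.unobserve(img); // each image only needs to be loaded once
    }
  },
  { rootMargin: "200px" } // begin loading slightly before the image scrolls in
);

document
  .querySelectorAll<HTMLImageElement>("img[data-src]")
  .forEach((img) => observer.observe(img));
```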
> Explain. You still get first party tracking that gets fired when the user clicks to your page and can get user consent via data-consent-notification-id.
I want JS-free analytics that do not require tracking or any consent (GDPR allows collecting some information without consent, and the same goes for the yet-unreleased ePrivacy directive, with which AMP is not compliant anyway).
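As one example of what I mean, page views can be counted from the web server's own access log, with no client-side code at all. A sketch (the log path and format are assumptions; only aggregate counts are kept, no IPs or identifiers):

```typescript
// Consent-free, server-side page-view counting from an nginx-style access log.
import { createReadStream } from "node:fs";
import { createInterface } from "node:readline";

async function countViews(logPath: string): Promise<Map<string, number>> {
  const views = new Map<string, number>();
  const lines = createInterface({ input: createReadStream(logPath) });
  for await (const line of lines) {
    const m = line.match(/"(?:GET|HEAD) (\S+) HTTP\/[\d.]+"/);
    if (m) views.set(m[1], (views.get(m[1]) ?? 0) + 1); // path tally only
  }
  return views;
}

countViews("/var/log/nginx/access.log").then((views) => {
  for (const [path, count] of views) console.log(count, path);
});
```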
What? Where are you pulling these numbers from? Also, what do you mean by hot cache? I'm starting to suspect that you don't even understand that the AMP page (the JavaScript for sure, and often the entire HTML and above-the-fold images as well) is already on the user's device, while your page is not.
If the AMP version takes longer to load than the time between the search results loading and the user clicking on it, then the AMP version will still have a visible load time.
Obviously, this part is affected by whether the AMP JS is in the cache or not.
Still, my own page can often load in less time than just this user-visible part of loading the AMP version.
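The arithmetic behind that is simple. As a toy model (the numbers are illustrative):

```typescript
// With preloading, the user-visible wait is whatever part of the load
// hasn't finished by the time the user clicks the result.
function visibleLoadTime(totalLoadS: number, dwellBeforeClickS: number): number {
  return Math.max(0, totalLoadS - dwellBeforeClickS);
}

// A 3 s preload with a 1.5 s dwell on the results page still shows 1.5 s
// of loading, while a 0.8 s preload would feel instant.
console.log(visibleLoadTime(3.0, 1.5)); // 1.5
console.log(visibleLoadTime(0.8, 1.5)); // 0
```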
AMP works best when the user visits almost only AMP pages (so the resources stay in cache), and the user has a high-latency high-bandwidth connection.
But that's true almost nowhere in the world; in reality, most people have relatively low latency and low bandwidth.
Your claims disagree with the facts on the ground, where latency is the main factor affecting page load time. This is the driving force behind CDNs, HTTP/2, QUIC, and pretty much every speed optimization that people have been working on in the past few years. https://www.afasterweb.com/2015/05/17/the-latency-effect/
Your claim that your page loads faster also reeks of wishful thinking. Pretty much every AMP page I have loaded from a SERP loads instantly, not just fast. For someone on a worse connection, the page will have started loading from near caches before the user clicks on the link, versus not having started loading at all from a far server. In the rare case where the AMP JS is not in the browser cache, it will be after loading the first result.
As mentioned, I've done testing on actual devices on actual high-latency, low-bandwidth connections, hundreds of times. That's the "facts on the ground".
If you say pretty much every AMP page you've loaded has been instant, please post the specs of the devices and network you've been using for testing.
Additionally, if the latency between the device and the nearest server is over two seconds, the latency to a far server as well as the click latency don't even come into play anymore at all; instead, the number of connections needed becomes much more important, and bandwidth also becomes a much larger factor.
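A back-of-envelope sketch of why (the numbers are illustrative, not measurements from my tests):

```typescript
// Rough page-fetch model on a high-latency, low-bandwidth link: connection
// setup and request/response round trips, then the raw transfer time.
function fetchTimeS(
  rttS: number,       // round-trip time to the server
  roundTrips: number, // e.g. TCP + TLS handshakes + the request itself
  kBytes: number,     // resource size
  kbps: number        // link bandwidth in kbit/s
): number {
  return rttS * roundTrips + (kBytes * 8) / kbps;
}

// On a 2 s RTT, 56 kbit/s link, each fresh connection (~3 round trips) costs
// ~6 s before the ~7 s transfer of a 50 kB resource even begins, so the
// number of connections and the bandwidth dominate, not server distance.
console.log(fetchTimeS(2.0, 3, 50, 56).toFixed(1)); // "13.1"
```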
Your claim that HTTP/2 would have worked towards better latency on bad connections is also false; on bad mobile connections HTTP/2 actually increases latency, which was a major reason for QUIC aka HTTP/3 in the first place.
> As mentioned, I've done testing on actual devices on actual high-latency low-bandwith connections, hundreds of times.
And as I've mentioned, you've been testing the wrong thing by not understanding the whole point of AMP (safe preloading).
> instead the number of connections needed becomes much more important
A page preloaded from an AMP cache needs at most one TCP connection, and usually zero if it uses QUIC (which runs over UDP instead).
> and bandwidth also becomes a much larger factor.
Which also works in AMP's favor, because the device doesn't need to load your custom JavaScript or potentially unoptimized images, just the tiny HTML and the optimized above-the-fold images. The weight of what remains (and thus what bandwidth costs you) is tiny, which is why bandwidth is a relatively unimportant factor.
> On bad mobile connections HTTP/2 actually increases latency
You're mixing up dropped packets with high latency. That's neither here nor there because Google's and Cloudflare's AMP caches both use QUIC — my point was that latency is the key factor that all modern web speed technology has attacked, including AMP.