Totally anecdotal evidence, but I was in a rural NY house served by DSL for the past 6 months. The DSL has consistent packet loss between 4 and 6%. The only video service that could handle this level of packet loss well was Amazon Prime. Netflix couldn't even load its browse screen until about two weeks ago, when something changed and suddenly Netflix could handle the high packet loss as well as Amazon Prime.
Separate anecdote - I worked on an in-flight satellite wifi project and I was surprised at how well both YouTube and Netflix worked over a medium-bandwidth/high-latency connection.
Granted, we had specific QoS/traffic shaping to improve reliability without gobbling up all the bandwidth (streaming Netflix was an advertised feature of the wifi service), but it still seemed like magic.
When Plex rolled out its auto quality/auto bandwidth adjustment, it actually worked very well over airplane satellite wifi as well. I watched a few things from my own server.
I'm amazed that service allowed streaming though...
For a good few years, a lot of airlines had Netflix and Hulu throttled on their free WiFi services, but not Twitch, so I’d just watch videogames on all my flights. My theory (which I really believe is true) is that they just hadn’t heard of it, and hadn’t blacklisted it!
YouTube has gotten way better in the past couple of years. When they first launched DASH streaming, it was terrible on high-latency international connections. If a US-based content creator uploaded a video and you were the first to view it in your region, you could actually notice how it was populating the CDN and it was unwatchable without disabling DASH and using the old-fashioned buffered player. These days it's flawless for me in nearly every situation.
Things like the Delta “gogo in-flight entertainment” do store their movies on the plane, but people will want to watch their Netflix/Prime/etc content on the plane as well.
There’s “inflight entertainment” where all the movies/shows are indeed stored locally on the plane, with either seatback or custom/white label streaming app for BYOD.
But in addition they were advertising streaming Netflix and YouTube over the satellite WiFi.
This sounds like an MTU issue. TCP takes care of mere (e.g. probabilistic) packet loss OK. MTU issues have actually crept back up because TLS exacerbates any underlying MTU problems. IPv6 doubly so (when any hops - especially yours - don't follow path MTU discovery requirements).
TCP doesn't take care of packet loss, exactly. What TCP does is make sure your data is eventually delivered, even if you have 99% packet loss. On the flip side, that means that if TCP can't deliver a single packet (say, one out of a billion), the whole stream stalls on that one packet...
Which is why TCP is a horrible choice for any streaming service and a horrible choice for lossy connections, and I would be quite surprised if Netflix relied on it. UDP is the perfect choice for streaming, since video decoders can handle packet loss pretty well. The rest you can achieve with a good tradeoff between Reed-Solomon coding and keyframe frequency.
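For illustration, a toy version of that idea (a sketch of FEC in general, not anything Netflix is known to run; real systems use Reed-Solomon codes, and the XOR parity here is their single-loss special case):

```go
package main

import "fmt"

// Toy forward error correction: for every group of k equal-length data
// packets, send one XOR parity packet. The receiver can rebuild any
// single lost packet in the group without a retransmit. Reed-Solomon
// generalizes this to surviving multiple losses per group.

func parity(packets [][]byte) []byte {
	p := make([]byte, len(packets[0]))
	for _, pkt := range packets {
		for i, b := range pkt {
			p[i] ^= b
		}
	}
	return p
}

// reconstruct rebuilds the single missing packet from the survivors
// plus the parity packet.
func reconstruct(survivors [][]byte, par []byte) []byte {
	missing := append([]byte{}, par...)
	for _, pkt := range survivors {
		for i, b := range pkt {
			missing[i] ^= b
		}
	}
	return missing
}

func main() {
	group := [][]byte{[]byte("pkt0"), []byte("pkt1"), []byte("pkt2")}
	par := parity(group)
	// Pretend the network dropped pkt1; rebuild it from the rest.
	fmt.Printf("recovered: %s\n", reconstruct([][]byte{group[0], group[2]}, par))
}
```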
Because they all want to run through HTML5 web browsers, re-use the same TLS as everyone else, and not write a ton of new code.
When QUIC gets big, they'll probably switch to UDP - not because it's better on every connection, but because it will be popular and it will be better on lossy connections. But for now TCP does work fine.
That's why youtube-dl can rip video without implementing tons of weird proprietary protocols - It's just HTTPS. Otherwise these video sites wouldn't run at all in Firefox.
I'm not sure this statement is generally true for Netflix's use case.
UDP provides no out-of-order packet handling, which _needs_ to be handled for video streaming (see the sketch after this comment). UDP is by default unbuffered throughout transport and tends to cause greater stress to client systems, since they need to respond per packet rather than per traffic stream (IP+port combo). As a client developer, you end up reimplementing 90-95% of what TCP gives you out of the box, at great development and QA cost. You also drain battery on mobile devices with all the interrupts you're causing doing UDP. The upside of a UDP-based implementation is that the latency from server to client display is usually much less (tens of milliseconds vs hundreds to thousands), but the trade-offs involved are almost never worth it for a static media streaming site like Netflix.
Even dynamic media streaming sites like Twitch rarely dip into UDP server-client implementations unless there are some unusual requirements.
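To make the "reimplementing TCP" point concrete, a minimal sketch of just the reordering piece (names are illustrative; a real client would still need retransmit/FEC, congestion control, and RTT estimation on top):

```go
package main

import "fmt"

// A reorder buffer that delivers datagrams to the decoder in sequence
// order - one sliver of what TCP normally does for you.

type reorderBuffer struct {
	next    uint32            // next sequence number to deliver
	pending map[uint32][]byte // out-of-order packets parked here
}

func newReorderBuffer() *reorderBuffer {
	return &reorderBuffer{pending: map[uint32][]byte{}}
}

// push accepts one (seq, payload) datagram and returns every payload
// that has become deliverable in order as a result.
func (rb *reorderBuffer) push(seq uint32, payload []byte) [][]byte {
	if seq < rb.next {
		return nil // duplicate or too old; drop it
	}
	rb.pending[seq] = payload
	var out [][]byte
	for {
		p, ok := rb.pending[rb.next]
		if !ok {
			break
		}
		out = append(out, p)
		delete(rb.pending, rb.next)
		rb.next++
	}
	return out
}

func main() {
	rb := newReorderBuffer()
	datagrams := []struct {
		seq     uint32
		payload string
	}{{1, "B"}, {0, "A"}, {2, "C"}} // arrives out of order
	for _, d := range datagrams {
		for _, p := range rb.push(d.seq, []byte(d.payload)) {
			fmt.Printf("deliver %s\n", p) // prints A, B, C in order
		}
	}
}
```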
Aren’t MTU issues typically only up to a router? As in, even if the parent had a different MTU than Netflix uses, it wouldn’t matter, since their router or the ISP’s router would transform packets between the appropriate MTUs?
And if this is true, then how could it be that Amazon works without problem and Netflix doesn’t?
It's not unusual for a server to also be a router in a layer 3 link aggregation setup. It's extremely common for IPs to be load-shared amongst servers using ECMP. If each server is connected to 2 Top-of-rack (TOR) switches and advertises the route to the shared IP through both TORs, you can very easily have ICMP probes used for PMTU take the wrong route and be dropped. The result is a TCP session with a default MTU that may not work along all traversed paths and will suffer from fragmentation.
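If you want to check whether your own path hits this, here is a rough Linux-only Go sketch (requires golang.org/x/sys/unix; the host and sizes are arbitrary) that forces the DF bit on a UDP socket so oversized sends fail with EMSGSIZE instead of fragmenting:

```go
package main

import (
	"fmt"
	"net"

	"golang.org/x/sys/unix"
)

func main() {
	conn, err := net.Dial("udp4", "example.com:9") // discard port; illustrative
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	// IP_PMTUDISC_DO: reject datagrams bigger than the route's cached
	// MTU with EMSGSIZE instead of fragmenting them.
	raw, _ := conn.(*net.UDPConn).SyscallConn()
	raw.Control(func(fd uintptr) {
		unix.SetsockoptInt(int(fd), unix.IPPROTO_IP,
			unix.IP_MTU_DISCOVER, unix.IP_PMTUDISC_DO)
	})

	// Caveat: the kernel's MTU cache is only populated by incoming ICMP
	// "fragmentation needed" messages. If those are dropped (the ECMP
	// scenario above), every size here "succeeds" locally and the big
	// packets simply vanish in transit - which is exactly the bug.
	for size := 1472; size >= 1200; size -= 8 { // 1472 = 1500 - IP/UDP headers
		if _, err := conn.Write(make([]byte, size)); err == nil {
			fmt.Printf("payload of %d bytes accepted (MTU >= %d)\n", size, size+28)
			break
		}
	}
}
```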
It’s typically all HTTP requests; nowadays with HTTP/3 we are back to using UDP, but apart from real-time video conferencing etc. I don’t believe many streaming services use anything other than HTTP.
If I had to guess they probably had timeouts that were too aggressive. Client timeouts are a very hard problem because it is difficult to tell the difference between "working, but slowly" and "something went wrong, the best bet is to try again".
Back in the day we used to have timeouts based on individual reads/writes, which often better answer the question "is this HTTP request making progress?". However, the problem with these sorts of timeouts is that they don't compose well, so most people end up having an end-to-end deadline.
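A sketch of the two styles in Go, assuming a raw TCP read loop (names and sizes are made up): the idle timer resets on every bit of progress, while the caller's context deadline caps the whole operation, which is the composition problem in miniature:

```go
package stream

import (
	"context"
	"io"
	"net"
	"time"
)

// readAll drains a connection with a per-read idle timeout, while still
// honoring an end-to-end deadline if the caller's context carries one.
func readAll(ctx context.Context, conn net.Conn, idle time.Duration) ([]byte, error) {
	var buf []byte
	chunk := make([]byte, 4096)
	for {
		// Idle timeout: refreshed per read, so a slow-but-alive peer
		// is tolerated while a stalled one is cut off.
		deadline := time.Now().Add(idle)
		// Compose with the overall deadline: never outlive it.
		if d, ok := ctx.Deadline(); ok && d.Before(deadline) {
			deadline = d
		}
		if err := conn.SetReadDeadline(deadline); err != nil {
			return buf, err
		}
		n, err := conn.Read(chunk)
		buf = append(buf, chunk[:n]...)
		if err == io.EOF {
			return buf, nil
		}
		if err != nil {
			return buf, err // idle timeout, deadline, or real failure
		}
	}
}
```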
I doubt Netflix is doing anything tricky with UDP anywhere in their stack.
QUIC doesn't count because it's not tricky.
I'd love to see a source for this but seeing as YouTube works great over regular HTTP and TCP, I doubt anyone else is out in the weeds trying some custom UDP solution and reinventing wheels.
Slightly unrelated, but does the packet loss happen all the time, or only when close to the maximum throughput of the line?
Used to have similar problems with an ADSL line, but found if I limited the line (both up and down) I could find a magic number where the packet loss went away. (Well, most of the time :))
Though it did need to be tuned for different times of day, i.e. high-congestion times needed it to be lower.
Though technically it shouldn't be your problem :(
This is normal if your router doesn’t prioritize control traffic. A rate limit allows all the ACKs to normally leave your network instead of getting queued up.
Or your router isn't responding correctly to traffic controls, or the ISP isn't sending them correctly? I know that with one provider I had in the deep past, the allowable packet size was smaller than what most devices default to, and they weren't correctly advertising the maximum size their routers supported in the appropriate ICMP messages. Eventually I figured out that I could force my router to a smaller allowed packet size, and that at least decreased packet loss substantially going upstream, even if whatever misconfiguration of the ISP's was still confusing and eating downstream packets.
I'd believe it. When you know that there is going to be packet loss (whether from the user's spotty internet or from internal load-shedding), building your applications to be as resilient as possible to it makes sense. The infrastructure experimentation platform mentioned in the article is probably helpful for sniffing out potential trouble-spots in applications.
Any chance there weren't any line filters on the POTS equipment? I haven't had DSL in years but when I did I had to have filters on any telephone devices connected to the same line.
How did you measure 4 to 6% packet loss? Do you have scripts to ping some server and you are collecting packet loss data? I would like to collect such data for my home network and am curious.
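Nothing fancy is required; here is a minimal Go sketch of the kind of collector you could run (target host, sample size, and interval are arbitrary; it shells out to the system ping and scrapes the iputils-style summary line):

```go
package main

import (
	"fmt"
	"os/exec"
	"regexp"
	"time"
)

// Matches the "X% packet loss" summary that Linux iputils ping prints.
var lossRe = regexp.MustCompile(`(\d+(?:\.\d+)?)% packet loss`)

// sampleLoss sends count pings and returns the reported loss percentage.
// ping exits nonzero when packets are lost, so the error is ignored and
// the output parsed regardless.
func sampleLoss(host string, count int) (string, error) {
	out, _ := exec.Command("ping", "-c", fmt.Sprint(count), host).CombinedOutput()
	if m := lossRe.FindSubmatch(out); m != nil {
		return string(m[1]), nil
	}
	return "", fmt.Errorf("no loss summary in ping output")
}

func main() {
	for {
		loss, err := sampleLoss("8.8.8.8", 100) // ~100 packets per sample
		if err != nil {
			fmt.Println("sample failed:", err)
		} else {
			fmt.Printf("%s loss=%s%%\n", time.Now().Format(time.RFC3339), loss)
		}
		time.Sleep(5 * time.Minute)
	}
}
```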
There has been the good kind of capitalism going on between the video streaming services for a while now: earlier on, I remember Netflix was way better than Amazon, but Amazon has upped its game since.
When deciding what mechanism to employ to load shed, you should keep in mind the layer at which you are load shedding. Modern distributed systems are composed of many layers. You can do it at the load balancer, at the OS level, or in the application logic. This becomes a trade-off: the closer you get to the core application logic, the more information you have to make a decision; on the other hand, the closer you get, the more work you have already performed and the more it costs to throw away the request.
You may employ techniques more complex than a simple bucketing mechanism, such as acutely observing the degree to which clients are exceeding their baseline. However, these techniques aren't free. Even the cost of simply throwing away the request can overwhelm your server - and the more steps you add before the shedding part, the lower the maximum throughput you can tolerate before going to 0 availability. It's important to understand at what point this happens when designing a system that takes advantage of this technique.
For example, if you do it at the OS level, it is a lot cheaper than leaving it to the server process. If you choose to do it in your application logic, think carefully about how much work is done for the request before it gets thrown away. Are you validating a token before you are making your decision?
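A minimal Go sketch of that last point, with hypothetical names and limits: the shedding decision is made before any token validation or other per-request work:

```go
package main

import "net/http"

// shed wraps a handler and rejects requests once maxInFlight are already
// being served. The check costs one channel operation, done before any
// parsing, auth, or allocation on behalf of the request.
func shed(maxInFlight int, next http.Handler) http.Handler {
	slots := make(chan struct{}, maxInFlight)
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		select {
		case slots <- struct{}{}: // capacity available
			defer func() { <-slots }()
			next.ServeHTTP(w, r) // token validation etc. happens in here
		default: // over capacity: reject before doing any work
			w.Header().Set("Retry-After", "1")
			http.Error(w, "overloaded", http.StatusServiceUnavailable)
		}
	})
}

func main() {
	api := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		// ...validate token, hit the database, render...
		w.Write([]byte("ok"))
	})
	http.ListenAndServe(":8080", shed(512, api))
}
```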
You touch on the key thing that people sometimes overlook. Whatever you are doing to serve errors has to be strictly less expensive than serving successes. If your load shedding error path does things like logging synchronously to a file (as you might get from a logging library that synchronizes outputs for warnings and errors, but not information), taking a lock to update a global error counter, or formatting stack traces in exceptions, it's possible that load shedding will _cause_ the collapse of your service instead of preventing it.
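As a sketch of how cheap the rejection path can be made (all names hypothetical; raw-connection style just to make the costs visible):

```go
package shed

import (
	"net"
	"sync/atomic"
)

// A deliberately cheap rejection path: the 503 is preformatted bytes,
// the only bookkeeping is a lock-free atomic counter, and nothing here
// logs synchronously, takes a lock, or formats a stack trace.

var shedResponse = []byte(
	"HTTP/1.1 503 Service Unavailable\r\n" +
		"Retry-After: 1\r\n" +
		"Content-Length: 0\r\n" +
		"Connection: close\r\n\r\n")

var shedCount atomic.Int64 // export via a metrics endpoint, not a log line

func reject(c net.Conn) {
	shedCount.Add(1)      // no global mutex on the error path
	c.Write(shedResponse) // no formatting work per rejection
	c.Close()
}
```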
+1. Additionally, if you end up in a scenario where you don't even have enough capacity in a given layer to fail quickly, your only options are to either increase capacity or throttle load pre-server (either in the network or in clients).
A lot of websites will now fail requests early based on a timeout, forcing users to refresh the page. I have to wonder if ad-based sites enjoy this behavior, because it could lead to more ad impressions. Talking about you, Reddit.
I think you’re talking about SPAs specifically. Many have race conditions in frontend code that are not revealed on fast connections, or when all resources load with the same speed/consistency. Open the developer console next time it happens; I bet you’ll find a “foo is not a function” or similar error caused by something not having initialized yet and the code not properly awaiting it. If an SPA’s core loop errors out, loading will be halted, or even a previously loaded or partially loaded page will go blank or partially so. Refreshing it will load already-retrieved resources from cache and often “fixes” the problem.
You see it in backend code too. For example Golang's context.WithTimeout is used to time out http requests and database calls that may be taking too long. This is particularly irksome with microservices where multiple services are running timeouts that interfere with one another.
It is becoming de rigueur to quell 99th-percentile latency spikes (i.e. 1 in 100 requests taking substantially longer) by terminating the requests, which may not always be in the best interest of the user, even if it is convenient for the devops teams and their promotion packets.
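The interference mentioned above is easy to demonstrate with Go's own context package; durations here are made up:

```go
package main

import (
	"context"
	"fmt"
	"time"
)

// A child context can only shrink the deadline it inherits: if service A
// calls B with a 100ms budget and B grants C "500ms", C really has <100ms.
func main() {
	a, cancelA := context.WithTimeout(context.Background(), 100*time.Millisecond)
	defer cancelA()

	// Service B thinks it is granting half a second...
	b, cancelB := context.WithTimeout(a, 500*time.Millisecond)
	defer cancelB()

	d, _ := b.Deadline()
	fmt.Printf("C actually has %v\n", time.Until(d).Round(time.Millisecond)) // ~100ms

	<-b.Done()
	fmt.Println(b.Err()) // context deadline exceeded (A's timeout fired)
}
```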
Thanks for sharing. I wasn’t aware that was a thing; it seems to be a form of manipulating the appearance of performance rather than actually boosting it. We log all slow calls so we can find out what they’re running up against: knowing only that a call exceeded a 5ms p99 is a pretty poor internal signal, but being able to trace which calls took 15s or 75s (vs those that took less but would also have been killed) is extremely helpful.
Perhaps probabilistically terminating calls would work better? I assume the decision has to be made ahead of time with timeout contexts, if they’re anything like cancellation tokens, so even if you give just 5% of all your inbound requests a deadline 10000x as long, you’ll still get some useful info to work with.
As a user, I would absolutely hate it. I somehow frequently run into pockets of badly written or badly architected code that cause some of my requests to take a minute or more to be fulfilled on an otherwise responsive server - if I had to retry “just” twenty times for one to go through, I’d lose my mind.
Well I hope it’s clear that this is just malpractice. Nobody should set their deadline to their p99 latency unless the result of the call is completely irrelevant to the success of the top-level request. Deadlines should be set to a huge amount of time, much longer than your tail latency but sufficiently less than infinity to protect your backend from running out of resources with too many requests in flight. For example if your p99 latency is 1ms you might set your timeout to 60s or something like that.
It's surprising to me how slow reddit is on mobile. If only there was a way of serving content so that the browser can start to render before the full payload has been served.
I’m wondering if it’s more of a “Chaos Injector” component/service that reads configuration data from the Chaos Control Plane on what to target, with parameters on how/when to do so. That would make the arrow make sense in my mind given it sounds like that’s a solid pattern for scaling these data/control plane flows: https://aws.amazon.com/builders-library/avoiding-overload-in...
This. It's an internal system called ChAP, Chaos Automation Platform. It has the ability to target failure down to specific RPC calls in single instances, using platform components that services consume as the mechanism for doing that injection.
Seems like pretty standard browser/app handover behaviour to me, although the app not working is a massive fail and should - hopefully - flag up automatically as a critical issue on Medium's side.
Obvious suggestion but not made in snark: uninstall the medium app? I’ve had to do that for lots of poorly developed apps or apps developed not in sync with the web frontend.
Edit: it is a bad link and I can see why this would happen if you had the Medium app installed. It’s a “branded” Medium post (i.e. appears on the Netflix-owned domain) but clicking the link redirects you to medium.com then redirects you back to the cname.
If that were easy, we’d have solved email/comment/Twitter/Reddit spam and trolling back in the 1990s.
Anyone who has run a service that allows user content of any type knows that the miscreants are endlessly creative and you can expend infinite resources but some will slip through.
I still see the daily spam and phishing email in my Gmail, despite the tens/hundreds of millions Google has invested in filtering.
How is it corporate-speak? Sounds just like standard thoughtful naming. If I was working on a module that did this I would be happy to name it this even if it never got mentioned in any corporate context.
So you can drop all that traffic and the users are unimpacted... So why not just always drop that traffic and don't even bother writing the code to implement those features that clearly nobody cares about?
It's not that nobody cares; they're just less important. Obvious example is analytics. It helps Netflix deliver a better experience to their users, (video watchers and product placement advertisers -- whoever) but it isn't mission critical to making sure users can watch their content. If you miss some log data, that's not a huge deal. That doesn't mean you want to stop logging.
Some of the things they mentioned were also user impacting, like not being able to select a video's language, but less critical. You obviously still want that feature, but it's less important than being able to watch at all.
"unimpacted" isnt the case - it makes clear that some traffic is more critical-path than others and thus the prioritization.
"Clearly nobody cares about" - what? The whole point here is "people care most about video streaming" and less about the metadata etc that they lower in priority.
How much better would the world be if the Netflix engineering team tackled real-world problems instead of making sure we can all binge watch Stranger Things? Such a smart group of people.
How do you know what a real-world problem is or isn’t? Avoid the intellectual trap of trying to socially engineer the world to your tastes. Watching stranger things might be more important than you realize.
Most companies have smart engineers. The difference is how they prioritize tasks. Most companies spend a significant amount of time building new features. Netflix is pretty low in terms of number of features. Their core product is uptime and reliability. So the result is that they _have_ to reinvent scalable solutions because the rest of us are accepting production issues for more time to build out features. What you see is the result of laser focus on uptime. Not even AWS cares about uptime as much as netflix.
A local start, given Netflix has real estate in LA: use these skills to develop better streaming services for users with unreliable internet, and from there better tools for distance learning for LAUSD. Lots of students are falling behind thanks to poor, unreliable connectivity, due to the expense of broadband internet. In March, 17% of LAUSD families had no internet at home. Today, more have bought internet plans, but given the job losses suffered by the working poor, these are probably the cheapest internet packages available. Given how JS-heavy these online education sites are, this is a poor experience for students who are already dealing with the stresses of being at the bottom of the economic totem pole in a high-cost-of-living city.
So you are suggesting Netflix as a company should enter a completely unrelated location-specific government-related industry (which has its very own specific regulations and domain-specific issues), just because they have smart employees whose technical expertise domain sorta overlaps with the kind of engineers who would be useful to solving problems of that other unrelated industry?
Might as well suggest that HFT finance firms enter a business of providing fast and reliable internet service to rural areas, because their employees have an extremely high expertise in providing bleeding-edge insanely responsive internet service from the exchanges to their offices (not kidding at all, they legitimately drilled through mountain ranges[0] and set up microwave towers just to get an edge over competitors[1])
Thank you to the engineers and developers!