I'm glad they raised the broadband minimum upload speed from 3 to 20 Mbps. Spectrum's 300 Mbps broadband is only 10 Mbps up. Let's see how fast they change their statement that "an upload speed of 3 Mbps is enough for most households" [1].
Zoom recommends 3.8Mbps for uploading 1080p video. Being able to do 1080p video conferencing is an expectation of broadband. Raising the definition of broadband to meet this basic requirement is clearly a win.
I do think 25 Mbps up is more than what's required for a large number of households (>33%). I advised family that 15 Mbps was adequate when they needed to do office work from home. Perhaps the only good thing about this is that it will force ISPs to upgrade their infrastructure, but I'm worried the cost of that will be passed on to consumers at a time when inflation has already taken a toll on budgets.
I've been coping with 6 Mbps upload speeds from Xfinity for years now. This morning I got an email from them saying they were raising my speeds at no cost, and after a modem restart I'm now getting ~20 Mbps up.
This whole time I've been told that Xfinity couldn't provide decent upload speeds because of infrastructure reasons, but now that's been proven a lie.
CATV was designed as a one-way pipe. There are filters everywhere in the network that block inbound traffic, because there wasn't much reason for cable boxes to communicate with the wider network. As a result, the bandwidth on the cable that was actually usable for uploads was orders of magnitude smaller than for downloads. As Comcast basically rebuilds their network (fiber to the edge), they're bypassing and removing all of those filters, which frees up plenty of frequency for uploads.
Designed, maybe, but it's not like the cabling is directional.
I thought one of the reasons DOCSIS was straightforward to roll out was that it only really needed bi-directional amplifiers. Though in the early days of cable internet, some setups used a telephone modem for your uplink.
And if you already had Hybrid-Fibre Coaxial infrastructure (which was rolled out for noise reasons in the pre-data analog days) you were already a step ahead of the game in terms of segmenting nodes/worrying about CPE noise.
Upload is crap because they have to dedicate channels to it at the cost of download channels, and upload has more overhead because of the coordination required.
70s/80s cable TV companies basically won the lottery by being able to re-use their existing last-mile plant for high-value data. Telcos, not so much.
It’s not a lie – they can’t provide upload speeds anywhere near their download speeds. 20 Mbps vs 6 Mbps is still quite small compared to 1000 Mbps. DOCSIS 3.1, the latest and not fully deployed standard, goes up to 10 Gbps down and 1 Gbps up.
It looks like DOCSIS 4.0 is actually a path forward to symmetric upload/download speeds over the same cables
I didn't say that asymmetric upload speeds being required was a lie, but until today Xfinity wouldn't allow me to pay for >6 Mbps upload without going up to some sort of business-tier package. Now, the day that the FCC sets 20 Mbps up as the minimum, I magically get that exact number for free.
25 Mbps up may be adequate in a pinch, but it’s laughably outdated. Symmetric down/up enables a lot of great use cases, including seamless backup. I briefly lived with AT&T fiber's symmetric 1 Gbps (actually more like 940 Mbps at the router, but close enough). It was a game changer, and losing it definitely undid a bunch of great use cases. If you WFH, it’s even more important.
I've seen urban condos in the US offering 7 Gbps to the home. The high end is there.
After I first experienced symmetrical gigabit fiber, I have only lived in residences that offer it -- it is a prerequisite for me when choosing a domicile, much like trash pickup and electricity. My argument isn't that we shouldn't offer faster internet; we should have more places that offer it. My question is: what is the minimum viable upload speed for residential service?
Yep. I've never had more than 35Mbps up from home, and it sucks. When talking about WFH, a lot of people focus on the uplink bandwidth needed for 1080p video calls (another commenter said Zoom recommends 3.8Mbps), but there's a lot more to it.
I might be building some software locally and uploading it to a cloud server to test it. The built artifact might be tens or hundreds of megabytes and take several minutes to upload.
This isn't even solely a software developer thing. Someone who does video production certainly needs to send around large quantities of data as well. I'm sure we could come up with examples from other industries.
It's pathetic how limited the coax infra is in the US these days. Supposedly this will be improving soon with DOCSIS 4.0, but c'mon, it's 2024...
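Those upload times are easy to sanity-check with bits-over-rate arithmetic (the 500 MB artifact size and the two link speeds are just illustrative numbers):

```python
# Transfer time for a build artifact at a given upload speed:
# megabytes * 8 bits/byte, divided by megabits per second.
def upload_seconds(size_mb, up_mbps):
    return size_mb * 8 / up_mbps

# A 500 MB artifact on a 6 Mbps uplink vs a 35 Mbps uplink, in minutes:
print(round(upload_seconds(500, 6) / 60, 1))   # -> 11.1
print(round(upload_seconds(500, 35) / 60, 1))  # -> 1.9
```

At the 6 Mbps upload several commenters describe, even a modest artifact costs you ten-plus minutes per push.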
It’s also a good opportunity to remind family that bandwidth is cumulative, which is non-obvious to quite a few people I’ve talked to who aren't super internet savvy. What I mean is: if Zoom uses 3.8 Mbps, then at most two people can comfortably be on Zoom calls simultaneously in a 15 Mbps household while other people are doing regular things -- three if the calls are the only thing happening in the household.
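A quick sanity check of that arithmetic, using Zoom's 3.8 Mbps figure and an assumed 4 Mbps allowance for everything else in the house:

```python
# How many simultaneous video calls fit in a household's uplink.
# 3.8 Mbps is Zoom's 1080p recommendation; the 4 Mbps "background"
# allowance for other household traffic is an assumption.
def max_concurrent_calls(uplink_mbps, per_call_mbps=3.8, background_mbps=4.0):
    usable = uplink_mbps - background_mbps
    return max(0, int(usable // per_call_mbps))

print(max_concurrent_calls(15))                      # -> 2 (with other usage)
print(max_concurrent_calls(15, background_mbps=0.0)) # -> 3 (calls only)
```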
The technology for symmetric upload/download exists, and many fiber-based companies offer it at the same price as service on older infrastructure. It is not more expensive to offer symmetric service, but it may require modernizing the infrastructure.
For example, in the same region of Massachusetts (around Worcester) I previously paid $40/mo for 300/300 Mbps to Verizon FIOS but now after moving I pay $40/mo to Spectrum for 500/15 Mbps.
> I do think 25 Mbps up is more than what's required for a large number of households (>33%). I advised family that 15 Mbps was adequate when they needed to do office work from home.
Yet my experience when streaming high-def video is much better in places with symmetric fiber and much higher upload bandwidth (>100 Mbps) than it ever is in places with coaxial cable.
TCP ACKs going back upstream are needed even for downloads, and if the bandwidth in that direction is full or congested, those tiny ACKs may not get through and the downstream may stall. Linux QoS can help, but only if your router is Linux and you can run the appropriate commands on it.
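A back-of-the-envelope for how much upstream a download's ACKs consume (assuming 1500-byte data segments, ~64-byte ACKs, and one delayed ACK per two segments -- all illustrative round numbers, not measured values):

```python
# Estimate the upstream bandwidth eaten by TCP ACKs while downloading.
# Assumptions: 1500-byte segments, 64-byte ACKs, one ACK per two
# segments (delayed ACKs). Real traffic varies, but the ~1/50 ratio
# shows why a saturated uplink can stall a fast downlink.
def ack_upstream_mbps(down_mbps, mss_bytes=1500, ack_bytes=64, segs_per_ack=2):
    return down_mbps * ack_bytes / (mss_bytes * segs_per_ack)

print(round(ack_upstream_mbps(1000), 1))  # gigabit down -> ~21.3 Mbps of ACKs
```

Under these assumptions, a gigabit download wants roughly 20 Mbps of clean upstream just for acknowledgments, which is more than many cable upload tiers provide.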
>Zoom recommends 3.8Mbps for uploading 1080p video. Being able to do 1080p video conferencing is an expectation of broadband
You can't even get 1080p on Zoom unless you're on Business, Education, or Enterprise; Pro and free users are limited to 720p and 360p respectively. Moreover, most people don't even have 1080p webcams, much less webcams where you can tell the difference between 1080p and 720p. That's not to say there aren't other reasons for wanting >3.8 Mb/s up, but "video conferencing" is a poor one.
Depends what you're watching. A videoconference with a mostly fixed background and infrequently shapeshifting face or a screenshare won't chew up much.
I get, realistically, like 30Mbps up with Spectrum with their 1Gb plan. At my last place they had a smaller competitor that did 1Gb up and down, so they changed existing 1Gb plans to the same. Now I live 8 minutes away from there and it's not an option! But competition is coming to my building soon so I can only hope.
For coaxial cable, I assume they will still allocate the same 50Mbps upload between 200 houses, and then let you get a 25Mbps burst for a few seconds by taking it from others.
Sometimes the issue is legacy CPEs that couldn't access channels that newer ones could (or couldn't bond as many). That's why sometimes a modem upgrade could "unlock" higher speeds.
My Comcast gigabit internet is only 25Mbps up. I'm pretty unhappy about it, as somebody who regularly has to push large docker images and wait forever each time.
On coax/cable lines like Charter/Spectrum, the upload speed limit is due to protocol/DOCSIS limitations. Supposedly the new version of DOCSIS fixes this/allows for symmetrical service, and Spectrum has been rolling it out for a while now. Still waiting for it where I'm at...
You may want to scroll down to the “throughput” section, instead of looking at the theoretical max.
The theoretical max up and download aren’t additive. It’s like saying that an egg carton can hold a max of 12 white eggs or 12 brown eggs. Good luck putting 24 eggs in it.
The DOCSIS standards carve up the available throughput, using some of that for upload and some for download.
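A toy version of that carving, using ballpark DOCSIS 3.0-era per-channel rates (~38 Mbps usable per 256-QAM downstream channel, ~27 Mbps per 6.4 MHz upstream channel -- illustrative figures, not exact for any particular plant):

```python
# The cable's usable spectrum is divided into fixed channels, and the
# operator assigns most of them to downstream. Per-channel rates below
# are rough DOCSIS 3.0-style numbers, used only for illustration.
def docsis_capacity(down_channels, up_channels,
                    down_per_chan_mbps=38, up_per_chan_mbps=27):
    return down_channels * down_per_chan_mbps, up_channels * up_per_chan_mbps

down, up = docsis_capacity(32, 4)  # a common 32x4 channel-bonding config
print(down, up)  # -> 1216 108
```

And that ~108 Mbps of upstream is shared by every household on the node, which is why per-subscriber upload caps end up so low.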
I got the same message, also from Xfinity. It does sort of make me wonder: If the FCC declared broadband to be 100/50, would I have 50mbps upload speeds this morning?
To me, this seems to be a clear positive outcome of the FCC's change: Xfinity had this capacity, the FCC raised their standards, and now all(?) Xfinity customers have increased upload speeds for zero additional cost. Seems like pretty much a best-case result of this metric increase.
Edit: Formerly 200/10, now 300/20. Email arrived 10am today.
I didn't get a message from Xfinity and a speed test showed no increase, but checking my plan on their website it said 300 mbps whereas it used to be 200 mbps. A modem reboot later and I'm getting 355 down/24 up on a speed test site, and 373 down/25 up on "what my ethernet card sees" test [1].
When the plan was 200 mbps I'd get around 240 down on speed tests. Xfinity has always been 10-20% faster for me than whatever the sticker on the plan says.
I'm not sure, unless they actually say it is, that this is in response to the FCC. In the almost 20 years I've had Xfinity I've had numerous speed bumps like this, much more often than the FCC bumps the speed in their definition. My plan went from 100 mbps to 200 mbps sometime in the last few months for example.
[1] Run netstat once a second to get the byte counts in and out per second, multiply by 8 to get bits.
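The same measurement can be scripted; here's a minimal Linux-only sketch that reads the kernel's interface counters from /proc/net/dev instead of parsing netstat output (the "eth0" interface name is an assumption -- adjust for your machine):

```python
import time

def iface_bytes(iface):
    """Return (rx_bytes, tx_bytes) for a Linux interface from /proc/net/dev."""
    with open("/proc/net/dev") as f:
        for line in f:
            name, sep, rest = line.partition(":")
            if sep and name.strip() == iface:
                fields = rest.split()
                # Field 0 is rx_bytes, field 8 is tx_bytes.
                return int(fields[0]), int(fields[8])
    raise ValueError(f"no such interface: {iface}")

def mbps(prev, cur, seconds=1.0):
    """Convert two (rx, tx) byte-counter samples to (down, up) in Mbps."""
    return tuple((c - p) * 8 / seconds / 1e6 for p, c in zip(prev, cur))

def sample_loop(iface="eth0", interval=1.0):
    """Print throughput once per interval (runs until interrupted)."""
    prev = iface_bytes(iface)
    while True:
        time.sleep(interval)
        cur = iface_bytes(iface)
        print("down %.1f Mbps, up %.1f Mbps" % mbps(prev, cur, interval))
        prev = cur
```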
What's different about this speed bump is that suddenly upload and download are closer together than was ever possible before. When we moved in a few years ago, there were no options that provided >10 Mbps up but <=300 Mbps down. Now all of a sudden I'm on a 300 Mbps down plan with 20 up.
I'm skeptical that they just coincidentally matched the new minimum upload speed after years of insisting that residential plans could make do with 6.
In my experience, xfinity will often do speed increases and market them, but leave me on a "grandfathered" plan, which leaves me with the same speed as before, and usually some creeping "service fees". Eventually, as I get annoyed with the increasing service fees, I login and look at their new plans and end up enjoying a nice speed increase and a lower monthly bill.
Cox _just_ sent out a similarly-phrased email (Subject: Enjoy this gift of faster internet, <me>) saying they bumped my upload speed from 35->100mbps. I didn't expect that given the new minimum upload speed is 20mbps
Verizon bumped me to gigabit for no good reason a few years ago. I think I had only signed up for 50 initially. I clocked it at 800/800 but I'm not going to complain.
ISPs aren't statutorily required to meet the FCC's bare-minimum speeds. It's just that doing so lets them be counted as broadband, which might qualify them for incentives to deploy broadband to underserved areas.
Also I believe that an ISP is counted as broadband by the FCC if they have some plans available that meet the FCC's minimums. They do not have to meet them with all plans.
I mean, "we're chasing the easy carrot on the stick we coulda always given customers" instead of "we're avoiding being punished" isn't that much better a framing.
I actually just changed my modem today because I was using Comcast's modem/router combo and wanted to use my own... I'm still at about 50-60mbps. I guess I will contact them again.
I talked to them and they say I am still on the 50/20 plan... they replied saying I would need to pay more to get 100 Mbps. I don't need it, so I won't.
The idea of usage caps was intensely controversial on sites like this a number of years back, when download activity was mostly about grabbing "Linux distributions" from torrent sites. Today, with download activity mostly driven by Netflix and other video streaming services, caps are mostly untenable (within reason). In a market where so many households have an expectation of essentially unmetered video streaming, I'm not sure there's much demand for 10-year-old bandwidth caps at $10-$20 less per month (which were probably more expensive at the time, even with the caps). I don't get cable TV or a landline, and I pay less than I did 10 years ago.
ADDED: In a context, people develop expectations about what should be metered and what should be unmetered at least up to a point. You want a metered service where most people, at least in the US, have developed an expectation of an unmetered one. And I doubt you're asking for what actually passed for unmetered telecoms 20-30 years ago.
Data caps have always been a scam because they don't match the economics of providing service. Idle capacity goes to waste, and there is always idle capacity during off-peak hours (that's what off-peak hours are), so deterring usage during those times is wasteful and spiteful.
You could have on-peak data caps, but then you'd have people who e.g. watch an hour of streaming a day hitting them because they do so at the same time as everybody else does, and then the people clamoring about someone else torrenting "all day long" would find themselves hitting the cap, because their usage is during peak hours and what somebody else is doing after midnight or before 4PM is irrelevant.
The better solution is to simply sell connection tiers with different speeds. If you pay for 200Mbps, the network is designed to be able to provide you with 200Mbps during peak hours. Not that you have 200Mbps dedicated but rather that under typical actual aggregate load, there will be 200Mbps of capacity available for someone on your service tier to use. Meanwhile if you try to download something at 3AM, you might get 1000Mbps, because why not? Nobody else is using it. But someone who wants to pay less can buy less expensive slower service -- slower, but only during peak hours. If the person who pays for 50Mbps wants to download something at 3AM, they can get 1000Mbps too, but at 7PM they're getting 50Mbps instead of 200.
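A sketch of that tiering policy (the peak-hour window and the 1000 Mbps link rate are assumptions for illustration, not anything an ISP actually publishes):

```python
# The proposed scheme: your paid tier is a guaranteed floor during peak
# hours, not a hard ceiling, and off-peak you may borrow idle capacity.
# Hour boundaries (5 PM - midnight) and the link rate are assumptions.
def allowed_mbps(tier_mbps, hour, link_mbps=1000, peak=range(17, 24)):
    if hour in peak:
        return tier_mbps   # peak: you get what you pay for
    return link_mbps       # off-peak: idle capacity is free to use

print(allowed_mbps(50, hour=19))  # 7 PM  -> 50
print(allowed_mbps(50, hour=3))   # 3 AM  -> 1000
```

The design point is that capacity is only scarce at peak, so that's the only time pricing needs to constrain anyone.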
The actual reason cable ISPs like data caps is that it discourages people from using video steaming services (which eat up the cap even when used off-peak) rather than subscribing to cable TV (which does not).
Capped data is ass. As someone who couldn't get uncapped service until about a month ago when I moved: if someone tried to legislatively require data caps, I would be on their ass trying to make them stop SO fast.
With how many 9's of availability, and at what latency?
Frontier is advertising 3 nines average network-wide availability for their new fiber network. Three nines is completely unacceptable, but our neighborhood barely got one nine last year.
Based on their current behavior, there's zero chance of them providing the bandwidth they're advertising for the new build-out, even when the network is completely up and completely idle. Currently, if we try to sign up, they say we can get <<1 Mbps (bits) theoretical. However, the internet coverage maps claim they provide broadband service to our address, so they must be claiming >25 Mbps to the FCC.
Why isn't 3 9's acceptable for residential home internet? That's less than 9 hours of downtime a year, 44 minutes/month.
Sure, I'd like 4 or even 5 9's, but I don't want to pay for it -- each additional 9 is exponentially more expensive to guarantee, and few people are willing to pay for that much reliability for their own network gear.
The commercial internet services with 4 or 5 9 SLAs have multiple redundant lines to the building to guarantee this and they charge handsomely for it. Otherwise a downed telephone pole or something will automatically bring the service down for a day or more.
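The cost-of-each-nine argument is easy to make concrete by converting availability into allowed downtime per year:

```python
# Convert availability "nines" into the downtime budget they allow.
# Each extra nine cuts allowed downtime by 10x, which is why each one
# costs disproportionately more to guarantee.
HOURS_PER_YEAR = 24 * 365

def downtime_per_year_hours(nines):
    availability = 1 - 10 ** (-nines)
    return HOURS_PER_YEAR * (1 - availability)

for n in (2, 3, 4, 5):
    print(n, "nines ->", round(downtime_per_year_hours(n), 2), "hours/year")
```

Three nines works out to about 8.8 hours a year (the "less than 9 hours" above); five nines leaves barely five minutes.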
BTW I run a serious network at home (Proxmox cluster, all enterprise gear, virtualized router with HA, UPS power) and only target 4 nines on the LAN. System updates and things like server crashes making the router fail over to another node cause around 30 minutes of network downtime a year. If a core or floor switch fails, it'll easily take me over an hour to swap in a spare and configure it. I don't bother trying for that fifth nine, and I'm a very demanding user.
Now if one of you have a consistent 5 nine home system I'd be curious to learn more about it. I'm guessing that would practically be a datacenter setup with CARP, dual switches, etc. - or you're just really really lucky.
5 nines is reasonable for electricity if you live in an urban area with no extreme weather or other major natural hazards. When I was still living in Helsinki, blackouts were basically a once-in-a-decade thing, and they didn't last long.
Today in California, I'm getting 3 nines in an urban area. Those living a couple of blocks further towards the suburbs are not so lucky. And many people in the mountains don't get even 2 nines.
In 2023 our availability only had one nine. We had power ~96% of the year. And it wasn’t like it was a lot of hour long outages that added up or something. Longest was 8 days.
I am not in a dense urban area, but I am connected directly to a line connecting a couple of larger towns along a major highway, so we’re generally restored fairly quickly. During the 8 day outage, quite a few people in the major city nearby were actually still without power when mine was restored.
Maybe unsurprisingly, my internet has actually been vastly _more_ reliable.
In my Eastern European country the SLA from the (state owned) power company is no more than 6 hours of down time per event, after which they need to pay compensation. Within highly populated areas it's shorter.
Admittedly, the most remote place is less than 1.5 hours from the nearest reasonably sized town, so we aren't as remote as some parts of the US. The purpose of this was to force them to make their network more reliable by upgrading old equipment, burying overhead lines (lots of trees used to fall on lines during winter), and ensuring redundancy.
I don't have town sewer but being without water is very rare. Being without electricity for maybe the total of day a year? Not rare. (Though semi-rural albeit on a major road.)
ADDED: I don't. But people having backup generators in snow country is not a weird thing.
The DSL network piggybacked on the analog voice network which, at least in this country, was built to a higher standard. But I don't want to go back to the days of the 1990's where I was paying $40/month (around $100/month in today's dollars) for an analog phone line.
It was reliable, so reliable that "as reliable as dial tone" was something to aspire to, but also costly and limited in capability. Even if I could get DSL today, my fastest speed would top out around 12mbit/sec due to my distance to the CO, which doesn't even meet the current definition of broadband.
Traditionally, it was unacceptable for landline service, since it's safety critical. (You need to be able to call 911.)
I poked around to try to figure out what the current requirements are, and it looks like they've mostly been scrapped. "Central office backup power" is required, which matches my experience attempting to get terrestrial internet or cell service during power outages.
If we limit it to issues that the ISP owns, then I'd say yes.
What does the ISP own that can cause 9 hours' worth of outages throughout the year? The only thing I can think of is the ISP being targeted by a massive DDoS attack.
I guess I'm really just spoiled. In the last 10 years I think I've only really experienced like 1 or 2 outages caused by my current ISP and they were resolved within the hour. My assumption is that like most people, all the ISP equipment is buried so the only things that can really go wrong are either at the colo or the home (like a router that dies). Having a line cut is simply a hyper rare event.
On the other hand, my overhead power lines have caused most of my outages. An idiot crashing into a telephone pole has made my power less reliable than my internet.
Not every year, but Rogers in Canada managed to black itself out nationwide for the better part of a day (from about 5 AM until midnight): digital TV, corporate circuits, home internet, mobile, third-party internet providers that used their lines, their own employees, and cellular.
Their CTO was on holiday and unreachable: their phone went into SOS mode, and they assumed the problem was just their own phone.
The Nashville AT&T explosion didn't take out the data lines, but took out the electric service. Then the gas company cutoff the gas lines so their backup generators couldn't run (wiki says it was due to fire/water damage, so maybe I have that part wrong). The facility didn't have roll-up generator connections, so they had to figure out how to wire them in at a big time cost.
I live in semi-rural Canada and get better service than that, but I would be fine with more downtime.
OTOH, I frequently have scheduled power outage times that exceed that metric so it sort of doesn't matter, and that's ignoring unscheduled power outages (a necessary part of life in a temperate rain forest with above ground power). I've taken to keeping a small generator handy to power the modem.
Our Internet + phone service go down if there's a power outage, which implies the cell network also goes down in practice. The tower might be up and on backup power, but it either has nothing to talk to or it's completely overwhelmed.
I would be annoyed at losing a day or so of work, if the outage was all at once - but nine hours a year is tolerable. I wish my ISP was that reliable. Hell, I wish my power company was that reliable. Where I live it's common to have a number of outages longer than eight hours every year. Last year a moderate storm caused power outages for a couple of days in August affecting thousands.
Generously, if you work 2000 hrs/year, that's less than 25% of a year's total hours, so it's only 2 hrs of work downtime on average. If you need better connectivity than that, you should pay for a higher SLA or make sure you can tether a cell connection or something.
I doubt that's true for most jobs. But yes, there are jobs where it is, and some of us (e.g. me ;) are just finicky about network connectivity. In which case I highly recommend having a fallback connection, with a different modality than the main one. For most of us, that means LTE. Tethering is fine if you can live with <5 min outages, but sure, you can also do automated failover.
But it's not a level of service that I think is reasonable to expect from a single ISP connection. A single construction incident will burn the annual SLO budget, and there's nothing the provider can do about that.
(And of course, it'd still be nice if telcos ran their backends competently. Looking at you, AT&T)
Depends on how the 9 hours of downtime is distributed. It could be 6,480 five-second outages (packet loss). At 10 waking hours a day, there are 3,650 waking hours in a year; if those 6,480 five-second dropouts all occur during waking hours, you're looking at nearly 2 dropouts every hour, which is going to suck.
I had a bad connection because of bad coax cable, and it's infuriating because each dropout causes long delays while TCP recovers.
Imagine all your streaming video or calls froze for a few seconds every hour for days on end. 9 hours in one continuous outage is fine, constant short outages are infuriating.
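The dropout arithmetic above, spelled out (the 10 waking hours per day is the parent comment's assumption):

```python
# The same 0.1% downtime budget feels very different depending on how
# it's sliced: one 9-hour outage vs. thousands of 5-second dropouts.
downtime_s = 9 * 3600   # ~three nines of downtime over a year, in seconds
dropout_s = 5           # length of each micro-outage
dropouts = downtime_s // dropout_s
waking_hours = 365 * 10  # assumption: 10 waking hours/day

print(dropouts)                           # -> 6480 five-second dropouts
print(round(dropouts / waking_hours, 1))  # -> 1.8 dropouts per waking hour
```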
I'd be fine with 99.9% availability to my house. That means it's down for at most 9 hours per year. That's probably on par with how much my electricity goes out during an average winter.
But what they mean by "99.9% network-wide availability" and what I mean by "99.9% availability to my house" are very, very different things.
Don't live in the PNW then. Soil conditions make undergrounding utilities very expensive, and there's lots of trees near power lines. My first winter we had a 60 hour stretch with no utility power. The CenturyLink DSL remote terminal has no batteries, so if I lose utility power, I also usually lose internet (but I've had two power outages due to a downed line between the RT and my house so far). Most of the cell towers have batteries, but only 2-4 hours worth and then I've got no comms. My house has a backup generator, but it doesn't serve the well, so if I want water, I've got to drag a portable generator out there. A lot of my neighbors have generators, but not all of them.
Oh, and then there was the time a building contractor accidentally took out the underwater cable connecting CenturyLink to Seattle... turns out they don't have multiple paths for that, that was about 48 hours of no internet for me there too. I remember some cell towers didn't work, but I can't remember if it was by tower or by carrier.
I think there is a gap in awareness of this depending on where people live. My rural hometown has frequent power outages, so many people have backup generators to at least keep the furnace running.
The town got popular with city people going remote during the pandemic, many of whom bought houses. I was visiting my parents a couple of years ago when the power went out for 18 hours or so during a deep cold snap. I was shocked that none of the new neighbors–who had recently completed huge costly renovation projects–had bothered to install an automatic generator.
Yeah. I don't know the exact numbers but having power out for 9 hours a year is not a freak event where I live in semi-rural New England and my impression is that it isn't for my colleagues in North Carolina either. Branches and trees come down on powerlines. I had a big tree come down in a windstorm a few months back. Lucky it didn't hit anything of mine or my neighbor or might have been a couple days.
You can and should challenge the map then. The FCC actually has a means to do that and does respond. Ideally get an email follow up stating speeds, frontier has a habit of refusing to create a paper trail.
I did this about 18 months ago, and surprisingly, Spectrum came out and actually put a new line up my road. I'd asked them off and on for years to get broadband. Last time I'd been quoted 12k to cross the little creek they stopped at and service the dozen or so houses on my side. But for whatever reason, this time they did it for free (well, normal hookup costs) after I challenged them on the FCC site.
Glad to hear that. I've been fighting to get Frontier to service my neighborhood and the best they could tell me is that there is some hidden ticket and they have no idea what the real status is.
Frontier is labeled as a fiber provider for my address on the New York State PSC Broadband map and I made sure to provide some feedback to them about that claim.
Would love to have even just a single alternative option besides satellite internet. I'm only 10 miles outside of the downtown area.
Ah, so one with huge amounts of input from lobbyists.
It is ridiculous that nutrition labels are not forced to standardize on something like x/100 grams (even Freedom units would be acceptable). Plus a x/package. None of this 3/8 serving size nonsense.
I don't see three 9's as being unreasonable for home internet at all, getting more reliability than that is non-trivial and most people have redundancy in the form of internet on their phones or can buy a second line.
If you're able to reroute then you should almost always be able to keep Internet service running, just slower. Not many connections are so weak that they can only carry voice.
I live in the Netherlands and have a symmetric 1 Gbps connection for 50 EUR/month, and I had no idea how privileged that kind of internet speed really is until recently. My ISP recently told me that they now support symmetric 8 Gbps links for domestic connections for 70 EUR, which is tempting, but even I have to admit it's overkill for my use cases.
I've recently been on vacation back in my home country (Indonesia), where my parents have some $100/month 50 down / 5 up internet package, and I was struggling with how slow that is. Things I always took for granted, like pulling Docker images, would take forever compared to back home, and doing anything while on a conference call was a non-starter.
I'm always amazed when I read about US internet. With the whole tech sector being what it is there, you'd think they'd be world-class in terms of internet speed/stability/price, but it seems basically comparable to the internet you get on a remote island in Indonesia, which is mind blowing to me.
I wonder if this is the reason Comcast suddenly decided to double my internet speed today (they were forced to and then took credit because they're such good guys):
> We've increased your internet speeds to show you our appreciation
>You can now enjoy 2x faster upload speeds and improved download speeds for smoother connections when you’re working, gaming and streaming.
>We’re excited to share this increase to your current internet package at no additional cost. Not seeing the faster speeds yet? Restart your gateway. Enjoy!
Got it too. Strikes me as extremely disingenuous. I feel like they shouldn't even be allowed to represent this as a decision they made out of the goodness of their hearts. Makes me hate them even more, which is pretty amazing because I had thought we already bottomed out.
It's always funny-sad when I try to open a link on fcc.gov or congress.gov and get blocked by an impassable Cloudflare wall. You'd think these services would be important enough to have a non-default Cloudflare configuration.
Here's a mirror of the text I grabbed from another computer/IP for anyone else getting blocked:
FCC INCREASES BROADBAND SPEED BENCHMARK
Annual Agency Assessment of High-Speed Internet Service Deployment Establishes New Standard to Better Reflect the Broadband Needs of American Households
--
WASHINGTON, March 14, 2024—The Federal Communications Commission today adopted its annual assessment of whether advanced telecommunications capability is being deployed in a reasonable and timely fashion across the U.S. In addition to deployment, the Report considers broadband affordability, adoption, availability, and equitable access, when determining whether broadband is being deployed in a reasonable and timely fashion to “all Americans.” The Commission’s Report, issued pursuant to section 706 of the Telecommunications Act of 1996, raises the Commission’s benchmark for high-speed fixed broadband to download speeds of 100 megabits per second and upload speeds of 20 megabits per second – a four-fold increase from the 25/3 Mbps benchmark set by the Commission in 2015.
The increase in the Commission’s fixed speed benchmark for advanced telecommunications capability is based on the standards now used in multiple federal and state programs (such as NTIA’s BEAD Program and multiple USF programs), consumer usage patterns, and what is actually available from and marketed by internet service providers.
The Report concludes that advanced telecommunications capability is not being deployed in a reasonable and timely fashion based on the total number of Americans, Americans in rural areas, and people living on Tribal lands who lack access to such capability, and the fact that these gaps in deployment are not closing rapidly enough.
Using the agency’s Broadband Data Collection deployment data for the first time rather than FCC Form 477 data, the Report shows that, as of December 2022:
· Fixed terrestrial broadband service (excluding satellite) has not been physically deployed to approximately 24 million Americans, including almost 28% of Americans in rural areas, and more than 23% of people living on Tribal lands;
· Mobile 5G-NR coverage has not been physically deployed at minimum speeds of 35/3 Mbps to roughly 9% of all Americans, to almost 36% of Americans in rural areas, and to more than 20% of people living on Tribal lands;
· 45 million Americans lack access to both 100/20 Mbps fixed service and 35/3 Mbps mobile 5G-NR service; and
· Based on the new 1 Gbps per 1,000 students and staff short-term benchmark for schools and classrooms, 74% of school districts meet this goal.
The Report also sets a 1 Gbps/500 Mbps long-term goal for broadband speeds to give stakeholders a collective goal towards which to strive – a better, faster, more robust system of communication for American consumers.
Action by the Commission March 14, 2024 by Report (FCC 24-27). Chairwoman Rosenworcel, Commissioners Starks and Gomez approving. Commissioners Carr and Simington dissenting. Chairwoman Rosenworcel, Commissioners Carr, Starks, Simington, and Gomez issuing separate statements.
GN Docket No. 22-270
My parents to this day still don't have real broadband access. The best they can get is a bad 4G signal in their rural area. Even Starlink only reaches speeds of 40 Mbps there. They only live about a mile from a fiber line, but AT&T won't run it over because it would only serve maybe 10-15 homes, and there are no cable lines. I think this is more common than people think.
What's wrong with 40 Mbps? I honestly don't need more than 20. The only time I even notice faster speeds is when my podcasts download.
I'm curious what types of situations require more than 40mbps, and why the inability to do those tasks would mean that someone doesn't have "real broadband access". I was pushed from 50 to 75 a while back and seriously hardly notice it.
That's 40mbps at peak; if you're streaming, say, HuluTV to 2-3 devices and also running your work environment, you're straining that connection. It's also, legally speaking, -not- high speed internet.
Thanks for the example. We definitely would never have 3 hulu streams simultaneously while someone else was "running a work environment". At most we'd have 2 simultaneous streams, and we've never had buffering or noticeable degradation when that happened.
I used that example because it's a real-life problem my parents have faced: you're paying for DirecTV and also Starlink? That's going to be closer to $300+/mo. If you just had high speed internet and HuluTV (or similar) you'd be paying a lot less. Many people that live in rural areas simply can't afford both.
Your parents aren't able to simultaneously stream with 40mbps?
The example was 2-3 simultaneous streams plus a "work environment" (a zoom call?). My family of 4 doesn't ever have that much usage, which is why I don't enjoy being forced to pay for 75mbps — Comcast's lowest tier. We cap out at two simultaneous streams, which is probably the same for your parents. It has literally never been an issue — even when we had 25mbps.
Meanwhile, my parents, in a not really rural area (less than 5 minutes to a town), are capable of reaching ... 0.2 MBps. And that is just in theory, actual speeds may vary.
Do consumers care or differentiate “broadband” anymore?
Why do I feel like they might just pivot to not using the word “broadband” at all anymore and use some internal branding, like the stupid 10G crap Comcast tried? Thankfully they were told to stop, but if they're more careful…
The definition appears to be more for the government than marketing.
> Raising the speed metric is important because it helps the commission determine which areas in the country are receiving adequate internet speeds, and if more government funding is necessary. In 2015, the FCC raised the metric from 4Mbps/1Mbps to 25Mbps/3Mbps. But since then, US senators, government watchdogs, and FCC officials have urged the commission to raise the metric even higher, citing the US’s growing reliance on internet services and apps.
Most people don't need 100mbps. Netflix recommends 5mbps for HD and 25mbps for UHD. As a country, let's make sure that everyone has the ability to watch Netflix in HD on 5 TVs simultaneously in one home, or to watch UHD on a single TV. But if there's a home out there that currently gets 50mbps but the infrastructure for 100mbps isn't in place, do we really want the government throwing money to solve that (non) problem?
The goal shouldn't be to have just enough internet. 25mbit/3 is incredibly slow for 2024. A single 4K stream saturates a 25mbit connection. 3mbit upload is super low too since a 1080p zoom call is around 3mbit/s upload. When you get close to using that upload your ping and download are going to tank. The average household size in the US is 2.5 people so 25/3 is easily saturated by 2 people just going about their day. There is no reasonable justification for 25/3 at this point in time.
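To make the saturation argument above concrete, here's a rough household bandwidth budget on a 25/3 link. The per-app bitrates are ballpark assumptions pulled from the figures quoted in this thread (Netflix's ~25mbit UHD recommendation, Zoom's ~3mbit 1080p upload), not measurements:

```python
# Back-of-the-envelope budget for a 25/3 Mbps "broadband" connection.
# Per-app bitrates are assumptions based on vendor recommendations.
DOWN_CAP_MBPS = 25
UP_CAP_MBPS = 3

downstream = {
    "4K stream": 25,            # roughly Netflix's UHD recommendation
    "1080p Zoom (receive)": 3,  # assumed receive-side bitrate
}
upstream = {
    "1080p Zoom (send)": 3,     # roughly Zoom's 1080p upload recommendation
}

down_used = sum(downstream.values())
up_used = sum(upstream.values())
print(f"down: {down_used}/{DOWN_CAP_MBPS} Mbps, saturated: {down_used >= DOWN_CAP_MBPS}")
print(f"up:   {up_used}/{UP_CAP_MBPS} Mbps, saturated: {up_used >= UP_CAP_MBPS}")
```

One 4K stream plus one video call already overruns both directions, which is the point: two people "just going about their day" can pin a 25/3 line.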
> do we really want the government throwing money to solve that (non) problem?
The FCC isn't saying you can't sell internet under this requirement. Plenty of places still offer ADSL even though it is under the old 25/3 requirement for broadband. They just can't call it broadband and receive the federal money that goes along with providing broadband. The new rule requires ISPs that offer broadband and receive federal funds to meet the new standard. It forces ISPs to provide a better service to millions of Americans or lose the money they have been receiving for years. Without the new rule, ISPs get to collect the money anyway while providing an objectively worse service.
>but if there's a home out there that currently gets 50mbps but the infrastructure for 100mbps isn't in place,
What situation would that be? ADSL doesn't meet the old requirements anyway so it's not relevant, cable-based systems like DOCSIS have supported well over 100/25 for 2 decades (DOCSIS 3 can do 1gbit/200mbit and that came out in 2006 lol), and any fiber-based system can obviously do more than 100/25. Even if there were some situation where 50mbit was possible but 100/20 wasn't, it doesn't mean they lose internet or anything. The ISP is just forced to upgrade their system or not receive federal funds.
One of the reasons the big tech companies are so big is because it is so hard to roll your own NAS backup and syncing solution.
With sufficient upload connectivity and ipv6, it could be possible to make “cloud” connectivity an appliance you can buy and put in your home, so that you can be freed from having to use an external cloud provider.
Consumers don't. Monopolistic corporations filing paperwork to get their hands on tax dollars probably do. Thankfully for them, the FCC is about as toothless and captured as you can get, so it probably even only barely matters to them.
There’s a reasonable practical difference between 25Mbit/sec and 100; in particular, 25 isn’t _really_ good enough for 4K streaming.
_Today_, most consumers aren’t going to be that bothered between the difference between 100 and 1000. But that’s today; the sensible telecoms regulator thinks on a longer scale, because otherwise they just have to revisit the whole thing in a decade.
In Ireland we’re on our third try; the first saw a lot of subsidised satellite and fixed wireless in rural areas, the second, better fixed wireless (a minimum latency requirement killed off the satellite option), and FTTC; in the third, they do seem to have finally decided to stop screwing about, and are rolling out FTTH in otherwise-poorly-served rural areas.
(From a practical point of view, I do get that a state rural fibre wholesaler probably would not fly politically in the US, but this approach of repeatedly easing the minimum standard up is ultimately kinda wasteful.)
Personally, I can't remember the last piece of marketing that tried to sell me "broadband" internet. They all say "high speed" internet.
Anyway, the 15/3mbps down/up I get in the Oregon countryside is indeed high speed compared to the 5/1mbps down/up I had smack in the middle of the Los Angeles suburbs.
IMNSHO, latency is more important than raw bandwidth, and raw bandwidth for consumer Internet is more useful as a latency improvement (because the high speed drains the buffers that much faster) than for actually using the bandwidth.
If you've got actually decent Internet, or you actually do use the bandwidth, great! My experience is that only a few applications will max out my 115Mbit/s connection, and given I have semi-decent kit and a decent ISP using the full bandwidth doesn't have a big impact on latency so I can be streaming, downloading, and have multiple people video-calling and that's fine.
Looking at my stats, there's only been a five minute period in the past 24h with sustained speeds used of over 20Mbit/s.
> My experience is that only a few applications will max out my 115Mbit/s connection, and given I have semi-decent kit and a decent ISP using the full bandwidth doesn't have a big impact on latency so I can be streaming, downloading, and have multiple people video-calling and that's fine.
My experience is that people rarely feel constrained by bandwidth because everyone is fighting to optimize every last bit, often using varying techniques that sacrifice quality. Not a single video calling app that I’ve used recently feels like it’s actually delivering good video to me. They're all bitcrushed. They work for some meetings at work, but I would really love a much higher bandwidth connection for screen sharing on Discord and practically every streaming app.
Latency is also important: don’t get me wrong about that. But the fact that I have 1Gbps and I can’t get high bitrate video down kills me.
If higher bitrates lead to higher latency, most tooling will try to keep latency low enough even if that means sacrificing quality. I've got 1/10 of your total bandwidth, but still seem to maybe get better video quality? I'd attribute that to better active traffic management.
It's not just upload speeds though, don't forget also needing a static IP address (or two...) and unblocked ports, and a decent router and other networking gear. And when residential Internet goes IPv6-only (...eventually) you'll likely still need dual-stack IPv4 to avoid problems with connections from IPv4-only endpoints.
Oh, and an SLA too...
UPnP was meant to help alleviate some of those requirements but I don't think there's been any progress on that front since the spec came out in 2001...
This is not at all accurate when considering that internet access for one home is on average for 3 people, not 1, and internet is now used for work which requires video streaming and large file downloads.
That would be the situation I find myself in, and I can indeed have multiple people in the house on high quality, low latency video calls at the same time as large downloads from Steam and folk watching YouTube.
Maybe I'd get a faster download with a faster connection, but it's the low latency applications that are of top importance.
Even better: my ISP does that for me on their side, and prioritises smaller packets too.
I've got a 115Mbit/s connection for my last mile, and AAISP limit what they'll try to send down it to 95% of the line rate, so BT should never need to buffer anything.
I oppose this. 100mbps just isn't what most people need. To most readers, this probably sounds innocuous, but I have some involvement in my city's digital equity community, so I have some insight into what this translates to for local governments and community organizations. Now what will happen is someone at the county level in charge of digital infrastructure will look at a map and say "oh no, 40% of our population doesn't have broadband, and 20% don't even have access to broadband". This news will find its way to local politicians and organizations and eventually, a lot of money will get thrown at this problem. Naturally, Comcast and ATT have entire departments skilled at filling out RFPs, so they will get tons of money to expand their coverage to "communities that wouldn't be profitable to service without government grants". Meanwhile, we'll spend a lot of money on this, and the net result will be that some households can now watch Netflix in UHD on three televisions at a time instead of two.
Meanwhile, the vast majority of people have $30 wifi routers that are incorrectly configured and positioned behind a concrete wall in the garage. So 100mbps comes into their house, but their laptops, TVs and phones see 20mbps.
There's also a huge amount of government money going into rural connectivity. It seems to me that StarLink is an ideal solution to this. Rather than running lots of cable to houses that are miles apart, give people subsidies to get StarLink. But if you make legislation that StarLink isn't "broadband", then the money needs to go to the incumbents, who will want billions of dollars in subsidies to lay expensive cables to service a small number of people. If I had to guess, some of those large players might have lobbied for this change, because they want some of that $18 billion pie that the FCC authorized for rural broadband expansion [1].
Anecdotally, Comcast has a 50mbps plan in my neighborhood for $20/month. If I needed more than that, I would gladly pay for it... affordability isn't the issue, I just get very little benefit from a connection faster than that, and the faster plans still have the same 10mbps upload speeds [2]. What I would 100% definitely pay for is more reliable internet that (almost) never goes down, but there's only two options in my neighborhood and I'm not sure the alternative is better. And I'm a software engineer... I'm on my computer all day, often downloading large files, on lots of zoom calls, while my kids are watching Netflix. I doubt most people's needs are substantially higher than mine.
Starlink is giving me over 150 down and 20 up during peak hours in a waitlisted / oversubscribed area with a previous-generation modem. They more than meet the new definition of broadband.
However, it doesn't scale nearly as well as just running fiber to the home, either in number of subscribers, or in per-home cost.
Some relatives live in an area with a phone co-op, and they ran trenchless underground fiber to each home in their service area. Population density is something like 1.2 people per square mile, and median income is ~ $20K. Monthly rates are extremely low. I watched the installers. It was two people and they were covering a good fraction of a mile per day. Equipment and material costs are negligible. There are communities in the SF bay area that are outside of telco monopoly territory. Trenching fiber here (in the mountains) runs something like $10K/house, which is $27/month when you amortize the cost out over 30 years.
The equity / subsidy programs I've seen are always just handouts for large ISPs that have established monopolies and that refuse to reinvest even a small fraction of their monthly revenue into network upgrades.
Establishing municipal broadband or broadband co-ops and giving them automatic right-of-way priority over the incumbents would fix this problem overnight. Either there would be a second fiber network that didn't suck, or the incumbents would actually invest in their networks.
> Meanwhile, the vast majority of people have $30 wifi routers that are incorrectly configured and positioned behind a concrete wall in the garage. So 100mbps comes into their house, but their laptops, TVs and phones see 20mbps.
Why would they have the routers in their garage instead of in their house? Most houses that have cable have cable outlets inside the house, since the reason cable was originally put in was to support cable TV and generally people have their TVs in the house.
For a period, a lot of FTTH deployments put ONTs in the garage. As some of those networks upgraded CPE ONTs and integrated ONTs into their residential gateways, they often didn't bother actually bringing those fiber runs into the house.
I wouldn't say this is incredibly common though, it really depends on which network and when the original install took place.
Internet service providers already receive money from the government through grants, loans, and other programs. Opposing increasing the pretty low requirements on ISPs who already receive this money just doesn't make a lot of sense to me. Millions of people are getting an increase in their internet speed (including many in this thread!) at the flip of a switch. Seems like a pretty big win.
>Anecdotally, Comcast has a 50mbps plan in my neighborhood for $20/month. If I needed more than that, I would gladly pay for it... affordability isn't the issue
Nothing is stopping Comcast from still offering that plan to you. They just can't call it broadband or keep receiving money from the government. They now have to provide a slightly better service if they want to keep receiving those federal funds.
>There's also a huge amount of government money going into rural connectivity. It seems to me that StarLink is an ideal solution to this. Rather than running lots of cable to houses that are miles apart, give people subsidies to get StarLink. But if you make legislation that StarLink isn't "broadband", then the money needs to go to the incumbents, who will want billions of dollars in subsidies to lay expensive cables to service a small number of people. If I had to guess, some of those large players might have lobbied for this change, because they want some of that $18 billion pie that the FCC authorized for rural broadband expansion [1].
Sorry, this is just silly. We already run electricity to basically everywhere, so hanging some extra fiber on the poles that already exist isn't really hard. Rural electric coops exist and have been rolling out fiber across their existing power networks with great success! Starlink is not a serious solution to providing reliable, fast broadband to rural America, and it doesn't have the capacity to meet rural America's needs either. Putting aside the long-term concerns about Starlink (whose satellites are all in LEO and have to constantly be replaced), the price argument alone isn't great. The local electric coop that covers 9 rural counties near me offers a base plan of 100mbit up and down at $70 a month via fiber. Starlink is $120 a month for a speed that varies from 250mbit down to less than 40mbit. Internet speed demands are only going to increase, so we should keep in mind the ability to scale up, which is an area fiber wins at again. We already have the poles and electric coops ready to serve rural America. No need to gamble on a riskier, worse product like Starlink.
Worrying about subsidies for "incumbents" while in the next sentence suggesting subsiding for Starlink/Elon is pretty funny!
>but I have some involvement in my city's digital equity community, so I have some insight into what this translates to for local governments and community organizations. Now what will happen is someone at the county level in charge of digital infrastructure will look at at a map and say "oh no, 40% of our population doesn't have broadband, and 20% don't even have access to broadband".
GOOD. They should be looking at it! If 40% of the county can't get a basic 100mbit/20mbit internet package then something should be done to address that. DOCSIS has been able to push multi-gbit speeds over coax for a decade, so I don't buy that Comcast or whoever in your county can't provide 100/20.
> If 40% of the county can't get a basic 100mbit/20mbit internet package then something should be done to address that.
The problem isn't that they're offering higher speeds. The problem is that they're getting rid of lower speeds. I've been repeatedly forced off my old plan as Comcast has sunset them. It appears very clear that their moves are in response to FCC definition changes. I wish I could still have my 25 down service, along with my old price ($19). But I can't! They killed that service years ago, before they killed the 40 down service and the 50 down service. The lowest tier I can buy is 75 down.
I'm all for options, but it's clear that rejiggering the definition of broadband has the (perhaps) unintended side effect of eliminating low-cost options that would be fine for many families.
That is frustrating. Although nothing is stopping Comcast from offering plans lower than the FCC definition of broadband. IIRC they did this for years where they had a $10-15 plan for certain areas/people that was 15mbit download and I don't recall the upload.
I suspect the issue you are describing has more to do with Comcast than anything the FCC has done. Comcast realized they can extract more than $19 from your household and others like it so they axed that plan. The major phone companies have done the same thing in past decades. Comcast is notoriously sleazy! This is the same company that at one point required customers to pay $25 a month to get the full speed of their internet package (in addition to their monthly bill of course). Realistically it costs Comcast the same amount of money to provide you with 25/3, 40/5, or 100/20 so a slightly higher broadband definition should have 0 impact on what they charge.
We have 400 Mbps symmetric with Hotwire, and whenever I do a speedtest, I get at least that much, sometimes it bursts to 600.
But then there's something wrong with their peering or something. Youtube videos randomly buffer even at 1080p. Downloads randomly drop from 400 Mbps to 10. Sometimes the problem lasts an evening, and then goes away. I have no idea.
Question from the uninformed here:
Is this just a change to how the statistics are tallied, or does this trigger a ripple effect compelling service providers to do something differently?
Does anyone know if this will impact WiFi at hotels? Most large brands have a free & upgraded plan available during your stay. Would this mean that the free plan has to meet these requirements?
Muddling along here at 6Mbps down 0.85 up here in central London. I'm not sure what I should call it now? Narrowband? It actually works ok for most stuff I do.
I have said on several occasions that this is getting out of hand. I was perfectly happy with 20 down, and am now forced to buy 75 because that is the lowest tier that Comcast offers. They keep pushing up the minimum rate so that they can show a high rate of "compliance" with the ever-increasing broadband standard. And of course, they push the price up with every speed increase.
The prevailing sentiment on average across HN is regulate where competition is stifled (monopolistic practices by Apple, Google) or it is a public good (utility threads on PG&E come to mind, homegrown broadband, public transportation) or the law is being broken (copyright, fair use, open source violations).
Using a wide brush to paint all of HN and similarly all of gov't regulation is what leads to cognitive dissonance where there is nuance.
We are in the 45 million. When we moved here over a decade ago we were told by TDS that they were expanding their service and should be able to provide us with 50 Mbps 'soon'. 10 years later we are still at 5 Mbps (3 in real-world situations), with no plan to expand here ever. Fortunately, the local electric company got a grant and is rolling out 1 Gbps fiber to 100% of their customers. TDS couldn't do it because it wasn't 'profitable'. The electric company is fast-tracking the build-out and we should have service less than 18 months after they started.
I got 1 gig fiber to the home just this last year thanks to an electric company fiber co-op. I couldn't get DSL before because the lines were so degraded that no one offered it. Until now the only available internet was wireless, in a highly forested area.
Despite living in a city, I was one of those 45 million until pretty recently. Due to some weird network topology, I only had one option, Spectrum, and it never worked very well, and they were never very incentivized to fix it.
Now that 5G Internet exists, I am paying $25 a month for 300 Mb access and my life is better. I debated over years just because of the inability to get decent Internet at home.
I’d sure like to know this, as we’re on Optimum 100, which isn’t sold to new customers anymore.
Also of note, some uploads might actually go up substantially - a bunch of people on 100-down plans have seen uploads cut to almost nothing, e.g., 35 -> 5.
I appreciate any effort on the part of a government to exert pressure on industry to deliver a product to citizens that keeps up with the times, but I'm not sure it really matters anymore. Any connection that can stream YouTube at an acceptable resolution is 'broadband' in my eyes, while someone who's rsyncing terabytes daily would probably suck up as much bandwidth as they can get and come back for a second helping. Maybe, just maybe, everyone has learned the Nerd Words well enough that we can simply refer to connection performance by X number of megabits per second (probably not).
Good thing the FCC doesn't control Ireland, or I'd no longer be able to declare that I have broadband via satellite! Well, part of the time, anyway.
> Good thing the FCC doesn't control Ireland, or I'd no longer be able to declare that I have broadband via satellite! Well, part of the time, anyway.
I mean, you can call it what you want, but it’s under the minimum standard set by the NBP in 2012 (30Mbit, 25ms). Fortunately, if that’s the best you can get, you’re almost certainly in an NBI intervention area, so a minimum of 500Mbit via FTTH should be available to you at some point.
December 2026, pushed back twice so far. The places that have service have post-launch prices far exceeding that of Starlink (and launch prices matching Starlink). It's "National" in the sense of being funded by taxpayers, but not in the sense of being operated as a public good. Just the usual operation to funnel tax money to politicians' cousins and friends and set them up for life with a revenue stream.
I can't imagine why you'd need 4k streaming. I can't tell 4k from 1080p on my 65" from my chair, and my phone isn't 4k.
> The places that have service have post-launch prices far exceeding that of Starlink
… Eh?
“ In Ireland, Starlink costs €65 per month with a one-time installation fee of €450.”
There are a bunch of providers on NBI, but with the cheaper ones you’re looking at ~45EUR/month for 500Mbit/sec, no installation fee.
(In general, unless you’re planning to switch provider every year, you should avoid introductory offers, at least in this country; Eircom’s 75EUR after a first year at 35EUR, say. If you avoid this type of ‘offer’, though, it should be in the 45-55 range.)
This is over-regulation imo. If you support this then you can't complain about there not being any low-cost housing, because it's regulations akin to this that are increasing the costs.
How much do you pay for internet? Maybe you should be directing your ire elsewhere.
I currently pay $60/month for 600 down.
An ISP that actually struggles to provide 100mbps down at a reasonable rate, today, is simply one that refuses to update their hardware. 100mbps is not hard to achieve with semi-modern hardware.
I know of ISPs servicing remote small communities with 1gbps down at $100/month. 300 down for $30/month.
It is really frustrating. I live in a US city (close to the city center) where my options are Spectrum or AT&T. AT&T still only provides ADSL (recently updated to 30/10). 5 years ago, AT&T tore up our street and laid fiber. 3 streets over, fiber is available, but still not on my street.
It’s less a hardware problem and more a medium problem; to go over 100Mbit/sec reliably you’re realistically looking at fibre (or maybe coax, but coax isn’t particularly cost-effective or practical in the rural areas that tend to have problems in this direction.)
DOCSIS has been able to push 100Mbit/s over coax for nearly 2 decades.
Hanging fiber isn't really a complex process anymore anyways. We already run power to basically everywhere people live so hanging some fiber as you go isn't that hard. Electric coops have been receiving federal funding to hang fiber in even the most rural areas for the last 8-10 years with great success.
Oh, sure, but deploying high-speed DOCSIS only really makes sense relative to fibre in _relatively_ dense areas; it’s not a particularly sensible solution for rural ones.
Fiber is not available here, and without significant extra pressure from the government, probably never will be.
We've had internet through Spectrum (and before that Time-Warner) since we moved here a decade and a half ago. 100Mbps has been available that entire time. I don't know exactly what all the available speeds have been at any given time, but in about 2017, we upgraded to 400Mbps, which we still have. It's not always the advertised speed, but it's almost always above 100.
I think your understanding of this technology and the practicalities of implementing it may be misinformed or out of date.
I have direct family members that own and operate rural telephone companies. I'm well aware of the practicalities of maintaining technology standards.
Rural telco is heavily subsidized already. To the point where I've literally personally done a run of Internet to a mountain house in the middle of nowhere.
The issue is that bigger telcos like spectrum and time-warner, simply do not care.
There's a reason Google was able to start doing urban fiber installation (which is more expensive than rural installs) and they were able to advertise $100 1gbps speeds 10 years ago. That wasn't all due to deep pockets.
I...did you miss the part where I said I have Spectrum, in a rural area, with over 100Mbps speed?
Like, I can totally believe that there are places where they are perfectly content to shrug their shoulders and ignore entire populations (I actually know of some not too far from here). But to say that as if it's a universal constant is patently false.
Regardless of what the particular geographic parts of Spectrum you're familiar with do, it is demonstrably possible to get 100Mbps+ internet service over coax in rural areas, from them specifically. Thus, to say that you're "realistically looking at fiber" to go over 100Mbps doesn't hold up.
If you want to just bump that up to 1Gbps+, instead, your statement will hold pretty true.
Is this a relatively urban area, though? Generally, high speed coax networks these days consist of a relatively short, often shared stretch of coax, connecting back to a fibre-optic node. You’re looking at something like this: https://en.wikipedia.org/wiki/Hybrid_fiber-coaxial#/media/Fi...
If the premises served are low-density enough, you may be looking at only one or two per coax section, and at that point you’re probably cheaper just using fibre to the premises anyway.
25mbps being "broadband" is a bit of a joke; a single 4k stream can exceed that. 100mbps is about what is needed for today's "broadband" applications.
Raising the floor to 20mbps for uploads is also fairly important for more practical things, having to backup a laptop at 3mbps was wholly impractical. 20mbps is a good bump if you're running some backups overnight, or uploading a 4k video file.
I don't see any reason to raise the definition of broadband higher for the next decade though. But I should stress, the amount of money needed to make these upgrades is pretty marginal in the grand scheme of things; we're not talking about replacing a ton of last-mile wiring.
> 25->100 is more of a luxury then a necessity
> a single 4k stream can exceed that
I think this is an odd choice of a counter-example. Is 4k streaming a necessity or a luxury? How many people could even tell the difference in an A/B test between 4k and 1080p on their televisions at their viewing distances?
> 20mbps is a good bump if you're running some backups overnight, or uploading a 4k video file.
I agree that it's a good bump. Back to necessity vs luxury, how many people back up entire laptops (with or without a sense of urgency) and/or upload 4k video files?
> we're not talking about replacing a ton of last mile wiring
Of course we are, unless it's delivered wirelessly. A huge percentage of last miles are relying on DSL over lousy POTS lines. I use DSL as a backup connection because cable is so unreliable for me, and the fastest I can get in a major metro area is 25-30Mb/s against AT&T's "Internet 50" plan.
Nothing is stopping internet providers from selling service below the broadband standard, and many do. Changing the exact meaning of broadband allows the government to raise the standard of service people receive and pressure ISPs to be better. Now ISPs have to offer 100/20 at minimum, otherwise they will not be able to receive grants or other government funding.
It isn't overregulation to mandate that ISPs who take government money meet some pretty basic speed requirements. 100/20 Mbps isn't even a high bar.
Agreed. I work from home, but my upload speed still kills me at times. It is not uncommon to need to share a 500 MB+ file with a colleague, and waiting 30-60 minutes for the upload is just too slow. I still have to go into the office on days when I know I'll be doing stuff like this.
Not to mention that for my personal computers, I want offsite backups. I have to take a hard drive and personal laptop to the office every few months to seed a full backup, since seeding from home is just too slow; only the incremental backups are workable over my connection.
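For scale, 30-60 minutes for a 500 MB file implies only about 1-2 Mbps of usable upload. A rough sketch (the helper name is mine; decimal units assumed, overhead ignored):

```python
# Back-of-envelope upload times for a 500 MB file (decimal units;
# hypothetical helper, no protocol overhead).

def upload_minutes(size_mb: float, speed_mbps: float) -> float:
    """Minutes to upload size_mb megabytes at speed_mbps megabits/second."""
    return size_mb * 8 / speed_mbps / 60

print(f"{upload_minutes(500, 2.2):.0f} min at 2.2 Mbps")  # ~30 minutes
print(f"{upload_minutes(500, 1.1):.0f} min at 1.1 Mbps")  # ~61 minutes
print(f"{upload_minutes(500, 20):.1f} min at 20 Mbps")    # ~3.3 minutes
```

At the new 20 Mbps floor the same file goes up in a few minutes rather than the better part of an hour.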
Not necessarily. If your work uses AWS and only needs the smaller instance sizes, 25 Mbps might be plenty. Those instances do not have very high network performance; I don't think I've ever managed to get above 20 Mbps downloading from my employer's small AWS instances.
I've worked remotely for well over a decade. I currently have 32/8. I rarely use even half of that. The occasional large file may take a few seconds to download, but it's hardly a problem to wait 10 seconds.
It's the (IMO faulty) theory that all regulations increase prices. Sort of like "if you require all houses to meet code, you can't have a market for very low-quality houses that would decrease the prices of mid-quality houses".
The problem with this theory, particularly in the case of an ISP, is that you need an actual free and open market with a large amount of competition before it starts making any sense. When you talk about things like ISPs, which are region-locked and face low or even no competition, it's meaningless. A region-locked ISP can (and will) charge pretty much any price the market will bear, divorced from the actual cost of service.
[1: https://www.spectrum.com/resources/internet-wifi/what-is-a-g...]