Technologies I thought my son would never use (tomshardware.com)
404 points by CrankyBear on April 11, 2021 | 553 comments



Taleb talks about the Lindy effect, and some of the surprises in this article can be explained by it.

https://en.wikipedia.org/wiki/Lindy_effect

The rough heuristic is: the longer a technology has been around, the longer it will last into the future.

That is, it's NOT the case that every piece of technology lasts roughly the same amount of time, and then is replaced.

For example, a chair vs. an iPhone. Which one will be used further into the future? Almost certainly a chair.
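As a rough numerical sketch of that heuristic (an illustration only, not something from the article or this thread), the usual formulation says the expected remaining lifetime of a non-perishable technology is proportional to its current age:

    # Minimal sketch of the Lindy heuristic (illustrative only):
    # expected remaining lifetime is taken as proportional to current age.

    def lindy_estimate(age_years: float, k: float = 1.0) -> float:
        """Rough expected additional years of survival for a non-perishable technology."""
        return k * age_years

    # Example: a millennia-old chair design vs. the ~14-year-old smartphone form factor.
    for name, age in [("chair", 4000), ("iPhone", 14)]:
        print(f"{name}: roughly {lindy_estimate(age):,.0f} more years expected")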

----

Land lines have been around a lot longer than fax machines (both in the article), so they will likely outlive fax machines.

Will HTML or JavaScript last longer? Probably HTML, since it came first.

What about ASCII or HTML? Probably ASCII.

These have a "dependency stack" issue, but it applies regardless. And I think that is part of the phenomenon -- low level technologies that are widely adopted take longer to go away. Plumbing, electrical (AC vs DC), land lines, TCP/IP, BIOS, etc.

I can't find a link now, but there was a recent Kevin Kelly article about finding farming tools from an 1800s catalog still in use. I think he said that almost EVERY one was still in use, or able to be purchased, which is a nice illustration of the point. It takes a long time for old tech to die, and arguably it never does.


A nit: the telephone and fax machine developed more or less in parallel, and the first working fax predated the first telephone by 11 years!


They were sending drawings through telegraph lines for newspapers during the American Civil War. Thus, the fax is older than the phone. But in terms of general population use, of course most people encountered a phone before they encountered a fax.


Interestingly, you can still send a telegram: https://www.itelegram.com/

I guess these guys took over all the telegram infrastructure of Western Union in 2006.


Telegrams were always expensive, but the price now is outrageous. I followed the link and chose Norway as the destination country and was told that it would cost GBP 22 + 67p/word. Even worse for Mexico: GBP 53 + 67p/word.

It's cheaper to send flowers with a card!


That doesn't sound like fax. Or rather, if you lump this and fax together, you'd have to describe it as a generic technology for sending images remotely, which may actually survive the telephone.


Source?


Google "fax machine history." This is incredibly trivial to verify.


Backing up the original point of the parent post's spirit, I actually see fax lasting longer. It's special cased in a lot of regulatory structures as 'secure' and has quite a bit more use than the HN crowd might think.


Fax is the cheapest way to get a forecast if you sail offshore, through a service called "weather fax". The alternatives work over satellite with hefty monthly subscriptions. That service is the reason I bought yet another Raspberry Pi with a software-defined radio module.


This is, of course, silly but true, because fax has no endpoint authentication of any kind.


It kind of makes you feel ridiculous about the amount of work that we put into securing even systems that are in no way critical and then the world runs on unsecured mailboxes, homes which can be broken into by someone willing to kick hard enough, Social Security cards with no security features whatsoever, funds transfer that only requires your account and routing number to withdraw, and so on.


It's not that ridiculous. Securing stuff on computers is more important due to the scale at which attacks can occur. On the internet, you can gather information on millions of users in little time and without putting yourself in a place where you could easily be caught. And for a machine that isn't connected to the internet, it can still give you far more data than you could get by stealing paper that takes up the same amount of physical space.


I guess so, but there are many Internet accounts I would be way less upset about someone compromising than the other things I mentioned.


I'd say most of the parent's comment applies at the same scale. People can abuse your Social Security number or bank account and routing numbers across the Internet. Phone numbers being relatively easy to spoof and/or hijack makes fax trivial to mess with internationally as well.


I think you are understating the extent to which mailboxes and homes are secured by men with guns. Sure; there’s very little preventing someone from kicking their way into someone’s house, but if someone did so, they would very likely end up in a jail cell or even, in some places, dead.

Unfortunately, you can’t physically harm people over the internet, so different security measures must be taken.


Your odds are very good stealing from my mailbox and not that bad burglarizing my house. Maybe you have much more vigilant police where you live. What’s more, unauthorized access to computer systems is also a crime.


The odds of getting caught robbing a single mailbox are not high, but if you do it on a large scale, it starts to get pretty risky.

The odds of getting caught attacking a Russian computer from the US or attacking a US computer from Russia are essentially zero for any scale small enough to not have major foreign policy implications.


Homes are massively protected by people's goodwill, not by the judicial system. In most of Europe, and more often than you suspect in the US too, if you call the police they come more than 30 minutes later (especially in France, where they have no right to use their guns and won't risk their own safety), and if you "handle the matter yourself" you are in just as much trouble as the thief, particularly in the US, where you still have to use a lawyer to prove your innocence. No, what really keeps houses from being broken into is mostly that people don't do it (hence the value of a society where people earn enough that they aren't willing to risk a physical fight).


Most forms of security rely on the goodwill of most people in society; the entire problem consists of handling the minority of people who lack goodwill. These people exist in every society.


> Unfortunately, you can’t physically harm people over the internet

Yes how unfortunate?


I was at a DefCon talk where these researchers demonstrated pwning a network by sending a fax: https://blog.checkpoint.com/2018/08/12/faxploit-hp-printer-f...


You could put your fax machines in a locked container


I was thinking more of SS7-based attacks for remote interception.


Regulatory structures are not forever and will change.


33 years, if you go by the patent. Lots more fun facts in Tim Hunkin's excellent Secret Life of Machines:

https://www.exploratorium.edu/ronh/SLOM/0301-The_FAX_Machine...


Tim Hunkin has recently been making new videos!

https://www.youtube.com/c/timhunkin1


This message is a telegram.


Interesting! Could you expand a little on what you did?


Wow, given the answers to that thread, it seems that the HN crowd has either no notion of conditional probabilities or that their “future is better” mindset is clouding their rational judgement.

The Lindy effect makes perfect sense as a heuristic to evaluate the remaining lifetime of technologies and species at a given point in time. It's trivial to prove that it works for anything whose survival curve is convex.
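As a worked illustration of why that can hold (assuming a Pareto, i.e. power-law, lifetime distribution; the assumption is mine, not the comment's):

    \text{Assume } P(T > t) = \left(\frac{t_0}{t}\right)^{\alpha}, \quad t \ge t_0,\ \alpha > 1.
    \text{Then } E[T \mid T > t] = \frac{\alpha t}{\alpha - 1},
    \text{so } E[T - t \mid T > t] = \frac{t}{\alpha - 1},
    \text{i.e. the expected remaining lifetime grows in proportion to the age } t \text{ already observed.}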

Despite its apparent love for rationality (which I enjoy), it seems like a large part of HN is just as subjective as anyone and unable to accept things that don’t fit their mental model.


This field also seems to attract (or foster) a particular mindset that we might call "I am smart and hyper-rational", which then adds an additional set of blinkers and an enhanced ability to discard and ignore evidence, and an enhanced faith in one's own powers of deduction.

All people have that skill, of course, but this field seems to have more than its fair share of such.


> Wow, given the answers to that thread, it seems that the HN crowd has either no notion of conditional probabilities or that their “future is better” mindset is clouding their rational judgement.

I mean it's a tech entrepreneurial news aggregator on a tech incubator's website. Their whole raison d'etre is that "future is better buy my technology so I can get $$$" -- that's why this site exists.

Is it that it doesn't fit the mental model, or is it failing to read the room?


I believe that scientific rationality and engineering are part of YC’s DNA. This thread seems to prove I’m partly wrong indeed.


Humans.


humans...


It is interesting to compare the longevity of currently widely used IT tech:

Smartphones (since first iPhone) - 14 years.

Laptops (since first Apple Powerbook) - 30 years.

PCs (since first IBM PC) - 40 years.

C programming language will be 50 years old next year.

SQL will turn 50 in 2024


> C programming language will be 50 years old next year.

This principle implies the C programming language will outlive Rust, Go, Python, etc. I'm not sure how I feel about that. There's a good chance it's right, but I'm uncomfortable imagining my grandchildren learning C's weird quirks.

There are definitely codebases written in C today which will outlive me, like the Linux kernel. And most of those codebases will probably never migrate to a different language.


I think C will outlive those languages, in the sense that it will exist. But it doesn't necessarily follow that your grandchildren will have to learn it. There will be some other language like JS or foobar 2050 that's just way more relevant and popular :) But they certainly COULD if they wanted to.

The question of whether a technology simply exists vs. whether it retains its popularity is an interesting one. Chairs and forks will both exist and be popular. iPhones will exist, but they won't be popular on some time scale. So it is interesting to think whether C is closer to one or the other :)


C is the "lawn chair" of programming languages. Sure, it is neither particularly fancy, nor most comfortable - but beats having no chair :)


While C is not perfect, it is in a sense something that most other programming languages are not. It's simple and if you program in C you probably touch upon the entirety of the language on a regular basis. Because it is simple you can write a compiler for it with some reasonable effort if need be, even for completely new architectures.

In any case, I'd argue that most of the quirks are due to undefined behaviour as implemented in the compilers. And 50 years from now the language will probably have evolved some more anyway.


I'm more worried that my grandchildren will still be learning/using QWERTY than that they'll have to learn C's weird quirks :). My opinion is that it's easier to get a group of tech people together and gradually replace C (like Android Rust development) than to get people to replace QWERTY or get the US to move to the metric system.


what's wrong with qwerty?


Arguments could be made that it's inefficient or un-ergonomic, though I'm doubtful how much difference it makes on a modern keyboard. There's an oft-repeated story about the qwerty layout being designed to keep people from typing fast enough to jam typewriters. I can't speak to the legitimacy of the story, but I do collect typewriters and I can say for sure that there's a lot more pressure and key travel (and time!) required to slam a type bar into the ribbon than is needed to press a modern keyboard key.

I learned Dvorak a decade ago and loved it unless someone else had to use my computer or I had to use a shared computer. I'm typing this on a qwerty keyboard and I'm okay with that now.


> There's an oft-repeated story about the qwerty layout being designed to keep people from typing fast enough to jam typewriters.

Typewriter layouts are designed to avoid jamming, but not by reducing typing speed. They place letters that are pressed in direct succession apart, so that the levers don’t collide. Trained typists can write pretty damn fast on a typewriter.


I thought I remembered something about that, thanks. Dvorak does seem to cluster a lot of commonly used letters, which would definitely jam neighboring type bars pretty quickly unless they used a Blickensderfer-[1] or Selectric-style unified type element.

I've always wanted a Blick, their "scientific" DHIATENSOR layout intrigues me. They also made a qwerty version and I seem to remember a story that their salespeople would make you sign a waiver that you were choosing a less efficient keyboard if you bought one. That's certainly 90% marketing for their layout and maybe 10% fact, but I find it amusing (if true, but I can't seem to find a reference to it any more).

I'll bet typewriter-trained typists can really move. It's definitely a separate skill, at least for the manual typewriters in my collection which have about an inch of key travel and a heck of a lot more actuation force than the gateron red switches I use at work. :-)

Ultimately I find that the fastest layout is the one you know.

[1] https://en.wikipedia.org/wiki/Blickensderfer_typewriter


> They place letters that are pressed in direct succession apart,

... and that really reduces typing speed. When I switched to Dvorak I was amazed at how my fingers just roll on the keyboard, forming sentences faster than I can even consciously work out which order to move my fingers in. It just happens as if by itself! Just think of a word to type and it is already on the screen.


> ... and that really reduces typing speed.

That may be the effect for you, but it's not a goal. It's maybe a tradeoff, but typewriting is a trainable craft. Usual results for trained people are about 200-400 characters per minute measured over 10 minutes. Championship results are up to 900 characters.


Yeah, and all championship results are on Dvorak keyboards. [1]

[1]: http://www.recordholdersrepublic.co.uk/world-record-holders/...


There are plenty of unofficial records and leaderboards for typing speed where qwerty typists are just as capable (more so, even, due to sheer availability) as Dvorak typists at hitting 200+ wpm, and while that alone shouldn't be used to judge the quality of a particular layout, it shouldn't be ignored that there's no difference in attainable max speeds done in short bursts. That says nothing about sustained typing of course, but I strongly suspect that if you're already a competent and fast typist in qwerty you'll be comparably fast in Dvorak and vice versa.


Dvorak is actually pretty bad as a smartphone keyboard; its advantages for two-handed typing on a real keyboard become disadvantages if you type with your thumb.

Dvorak has specialised right- and left-handed layouts. Maybe it needs specialised one-finger layouts, too.


I had wondered about phone keyboards. Part of the reason I switched back was that I couldn't change the layout on my phone and got tired of switching back and forth. (I know there are probably some options now, but there weren't on iPhone 4, Kindle touch, or feature phones with physical keyboards.)

I can imagine you'd need to use both thumbs extensively or you'd be jumping back and forth. I definitely found that when I was lying on my side on the couch, propped up with one arm, qwerty was easier to use with one hand. Not that that's an ergonomic or sustainable way to type, but I was a lazy college student and I didn't care.


> I'm uncomfortable imagining my grandchildren learning C's weird quirks

What about them learning English's even weirder quirks? Does that make you uncomfortable as well?

I have come to the point that I sort of accept C's quirks, and, with the appropriate mindset, I can even love them. I certainly see myself as an old man teaching these arcane quirks to a young and innocent audience.


Also COBOL is still around and most people don't need to learn it anymore.


Most people don't need to learn C either; or any programming language, for that matter.


Sure but I think programmers is an implicit assumption here when talking about which programming languages to learn. We don't need to start splitting hairs here.

And COBOL is extremely niche, while C is still quite common and useful to know.


Isn't that the whole worse is better thing in action? People complained about C and Unix when they were new, but they picked up so much traction, and people invested enough effort into working around the flaws, that it just achieved critical mass. Same thing with JavaScript; it's never going anywhere.

Certainly higher-level languages have edged out C in many domains though.


> Laptops (since first Apple Powerbook) - 30 years.

That kind of Apple revisionism is not correct. The NEC UltraLite predated Apple clamshells by 3 years (1988), and luggables with a battery configuration were available as early as 1981 (the Osborne 1 with an aftermarket 1-hour battery add-on).


The Grid Compass was probably the first true laptop in 1982. But it ran a proprietary operating system by default and cost about $8,000 in 1982 money. The Data General-One came out in 1984 and might be a better candidate. But neither of those were really mainstream.


Don’t neglect the Model 100...

“The TRS-80 Model 100 is a portable computer introduced in 1983. It is one of the first notebook-style computers, featuring a keyboard and liquid crystal display, in a battery-powered package roughly the size and shape of a notepad or large book.”

https://en.m.wikipedia.org/wiki/TRS-80_Model_100


It wasn't a laptop in the modern sense though.


For any reason other than it doesn't fold in half?


The iPhone was definitely not the first smart phone.


Nor was the Powerbook the first laptop, nor the IBM PC the first PC. Those are all, however, arguably responsible for popularizing the technologies in something resembling their current form.


I disagree. I am of the opinion that those form factors were inevitable, and would have come along anyway with or without Apple's offerings.


How is that inconsistent with the parent statement? Even if the form factors were inevitable--e.g. someone would have decided that smartphones didn't need a physical keyboard--someone had to be first to popularize. (Not sure I agree on the laptop but the iPhone pretty clearly popularized the modern smartphone form factor.)


To say they are "responsible" for something can be read as "allowed to take credit for". Being the first to popularize something is a business achievement, which is quite meaningless compared to the technical achievement of being the first to build something.


Something that's a mainstream business success is, in general, far more interesting than progenitors that never really took off for whatever reason. They may still be important as technical achievements, but the history books notice those who took things mainstream.

James Watt didn't actually invent the steam engine. He just came up with an invention that made it a lot more efficient.


I can see this in the case of the PC and the laptop, but proto-smartphones were around for years before the iPhone and they were not developing in an iPhone-like direction. Every iPhone-like product that came after was the result of copying. I don't think it was inevitable at all.


One thing I do think is true is that Apple developed sufficient brand permission to be able to do things that were definitely outside of a lot of buyers' comfort zone and arguably needed some iteration to really nail.

I'm still inclined to think someone would have jumped to a smartphone without a keyboard. But it's also true that the iPhone had plenty of critics early on and arguably didn't fully hit its stride until the 3GS.


No, but it was the first widespread smartphone and the first one most people used. This is the only practical starting point.


Blackberry, Palm, and Windows Mobile phones were more widespread than you're giving them credit for, as were laptops considerably preceding Apple's PowerBook series.


The mind-blowing part about these kinds of numbers, for me, is always the sheer number of smartphones out there.

I mean, imagine a parallel world where those smartphones weren't designed to shove ads down your throat and where they could be used to be as productive as with a laptop, and where people could help to automate their lives on their own with it.

That would be so amazing.


Smartphones are a huge productivity tool, that's why they took off in the first place. Especially Blackberry, which offered the magic technology of accessing your email and calendar from anywhere. The ads are not an obstacle to this, especially not on iPhone.


They are far more than that. For a large number of people, the smartphone is their first and only computing device. Enabling internet access is like rocket fuel for advancing socioeconomic conditions for those in developing nations.

Entire generations have been lifted from poverty due to it.


And "first and only computing device" pales to insignificance next to "first and only bank account" which they also are, in a way. That's huge.


I don't know about you, but a consumption-only "first and only computing device" sounds incredibly dystopian. I get that they have a positive impact, but it feels like we could be doing better.


> I don't know about you, but a consumption-only "first and only computing device" sounds incredibly dystopian

Smartphones aren’t consumption-only.


Doing anything but consumption and lightweight content discovery on smartphones is basically a farce. They are not usable as general-purpose creation machines.


> Doing anything but consumption and lightweight content discovery on smartphones is basically a farce.

I disagree, even when talking about a smartphone as a human interface device and not, as was the actual context of the thread, a computing device.

Having a, say, DeX-enabled Android smartphone (or even one potentially powered by fairly traditional Linux, if one doesn't just use a major maker's device with stock software) as a computing device doesn't preclude standard desktop I/O devices from interacting with it in a manner very similar to a standard desktop PC.

> They are not usable as general-purpose creation machines.

Again, as an exclusive computing device, they are just about as much a general-purpose creation machine as any computing device is.

Even as an exclusive HID, there’s a giant excluded middle between “consumption only” and “not usable as general purpose creation machines”.


What makes a smartphone a smartphone is the ability to use it with just the touchscreen. A smartphone with desktop peripherals is just that, a desktop. What I am arguing is that a bare smartphone (aka what most of those people in the developing world can afford) cannot meaningfully serve, due to being a form factor with a very imprecise input method, as a creation platform for any sort of precise content.


> What makes a smartphone a smartphone is the ability to use it with just the touchscreen

You seem to be conflating the ability to be used with a limited interface with a restriction to being used with only a particular interface. A desktop PC has the ability to be used without a graphics tablet; it would be a much more limited creation device if it were restricted to being used without one. And the same is true of smartphones and everything that isn't a touchscreen, microphone, or other built-in human interface. But a smartphone as an exclusive computing device does not imply the limitations of its built-in interfaces, since having external interfaces is also a standard feature of the class.


While I agree with your definitions of what a computation device is, I also don't see Android being developed on Android in the near future. So many UX conceptual problems won't allow this, so I think that as long as computers are required to build smartphones, they cannot be described as "general purpose" machines.


It's still a poor version of the internet when compared to a PC (laptop/desktop).


That world can't exist. A flaw of anti-capitalist alternatives is that they're non-natural.

Any alternative where trade doesn't follow the optimal path is due to regulation / force. And in most cases, the force required to steer humans away from their nature also kills innovation.

Ergo: you can't have a miracle chip in your pocket with no one using it to sell you potato chips.


1. Nobody mentioned capitalism or alternatives in the parent comment.

2. Capitalism is not "natural" either and only started developing in the post-renaissance world.

3. Natural selection demonstrates conclusively that relying on purely natural processes to drive "innovation" often leads to highly non-optimal and harmful trends.

4. Most of the major scientific and technological innovations of the 20th century were either the direct result of or funded by "force" which I take to mean government.

5. Advertisement was not the only monetisation strategy the internet and telecommunication industry could have taken. The internet could very easily have gone down the subscription route, and the only reason anybody thinks otherwise is decades of marketing that has normalised getting everything for "free".


Laptops existed far before the Powerbook.


It really depends on how you define laptop. There was this sort of thing in the 80s, (https://en.wikipedia.org/wiki/Toshiba_T1100), of course. And there were a couple of laptop-shaped laptops from 1988 on, but they were either spectacularly compromised (the NEC Ultralite had a max of 2MB of storage, for instance) or spectacularly expensive or both. In terms of laptops that were shaped like laptops as we know them, and that were actually usable outside of very specialized applications, the Powerbook 100 and Thinkpad 700T could be reasonably claimed to be about the first.


There’s also the TRS-80 Model 100, launched in 1983


Every time someone brings up the Lindy effect I can't help but roll my eyes. It should be replaced with "survivorship bias". Almost every technology that humans used for a long time and no longer use has, tautologically, disappeared. The Lindy effect just seems to be a list of cherry-picked examples.


Based on your eye rolls and subsequent "explanation", it's clear that you don't understand the Lindy Effect. It's not about listing examples of things that have been around for a while. It's about predicting the likelihood that something will continue to be around given how long it has already been around. This effect is well studied and just a cursory glance at the Wikipedia page will give you some solid sources for more rigorous understanding.


There are no "solid sources" there. It's a bunch of books and articles.

Well-studied? By whom? In what journals?

The Lindy Effect may be true, but based on that Wikipedia article's sources, you can't make a good scientific claim for that being the case. Even if you could, you still run into all the current problems such a nebulous branch of science must contend with, such as the peer review problem and the reproducibility problem.


I think you could mount some interesting objection to the Lindy effect, but this isn't it. I'm not really sure what you're trying to say.

It's not claiming to be a scientific law; it's a heuristic for making decisions. The rest of Taleb's books are also about making decisions, not "being right" (whatever that means).

A concrete example is if I'm writing a blog, and I want people to read my posts in 5 or 10 years. Do I go with the cloud platform that just launched or an older hosting provider? This is a decision people make every day. Of course there are many people who don't care if their blog is readable in 5 years; this isn't a judgement.

The Lindy effect is not about what's "better"; it's about what lasts longer. It's also not making statements about the present, which is what survivorship bias typically means.


Another helpful angle is to consider things that aren't Lindylike. People for example -- we expect older people to die sooner than younger ones, not later. And radioactive nuclei -- we expect their ages to be irrelevant to their expected future longevity.


Yeah there are definitely some subtleties, and they would be interesting to tease out.

Older people obviously will die sooner, but having survived does give you some information. For example, your life expectancy at age 1 is 75 years, but at age 40 it's closer to 79 years.

e.g. https://www.annuityadvantage.com/resources/life-expectancy-t...


Isn't this handled by the fact Lindy-like things have to be non-perishable like ideas or technology?


Not if you are looking at motorcycle accidents data of older and younger people.


And if you are writing a blog and want it to be read in 10 years, then following Taleb's thinking it should be written in a way that could have been read 10 years ago.


Survivorship bias is when you draw conclusions about all members of a certain class of things based only on the surviving examples.

The Lindy effect is a theory about the surviving examples specifically.


I can see that, I think the Lindy effect needs some refinement.

My personal take is that there's an apex for a particular generation of technology, and that is good forever. A 1930s Farmall tractor is an example of that... there are improved modern replacements, but the 1930s model still does the job near optimally. I would guess that a non-trivial number of those tractors will be in use in 2130.

1980s/early 90s minicomputers are similar. Many of these devices are still in use today, and probably could be kept in use for decades to come.

Modern tech is a little harder because we’ve been in a rapid growth phase and the software services based world is more aligned with production than sustainment. I’d bet that trend will change in 20-30 years.


Yeah, I just watched some extremely related videos by this modern homesteader (and YouTuber! -- apparently he was on the TV show "Alone").

He says "one of the best pieces of advice I've ever gotten is: Don't trade a gun for a snow machine". This is exactly what you're saying, and it's backed up by a lot of experience living without power and water!

https://www.youtube.com/watch?v=BH15Kua5P1Y&t=918s

He also says "every one of us has to decide when to jump ship on a technology".

He says canoes peaked in the 1960s, and you can buy a used one for like $125 that's the same as what you'd buy today for thousands. Same with hand saws. He maintains old saws and chainsaws and uses them:

https://www.youtube.com/watch?v=BH15Kua5P1Y&t=746s

When you look at any kind of manufactured goods, a lot of things have reached their peak and are either poorer quality than they used to be or they're just the same quality as at their peak.

----

I found this channel via a video about building an off grid cabin from scratch for a couple thousand dollars: https://www.youtube.com/watch?v=bOOXmfkXpkM

It's good -- a lot of it is built by hand with a hammer and nails. He even says load-bearing screws are too expensive, and nails are better!

All of the advice reminds me of Taleb, because it's not necessarily trying to be "right", but rather distilling rules of thumb from practice.


My favorite example of the quality issue is the “whirly pop”, a stovetop popcorn maker.

The old one my parents had was aluminum with a metal gear. The modern version has been MBAed to death: the gear is plastic, and the lid is so thin that you could probably replicate it with 2 plies of aluminum foil. It costs more and is measurably worse in every dimension.


I recently bought a leaf blower/vac mulcher. It took me way longer than it should have to figure out that the difference between the $50 model and the $100 one was that the latter had a metal mulching blade instead of a plastic one, and that the former would likely break as soon as I vacced up a stick that was a bit too big. Thanks, random forum poster!


I only somewhat agree with respect to canoes.

Grummans are great and they're still being made (though not by the original company).

However, for recreational/tripping/whitewater use, Royalex-based canoes were better for a variety of reasons. Unfortunately the material is no longer being made because its intended use (go-karts) didn't take off to the degree planned. The company continued to make it on a more or less break-even basis, but upon a change of ownership the new owner decided to scrap it. There have been one or two efforts to make something equivalent, but AFAIK they haven't panned out.

There are still plenty of well-made fiberglass/Kevlar boats being made but they're much more fragile.


The tractor thing is quite an interesting one (not necessarily just this particular one): the older tractors ended up being so reliable that people often try to get an older one instead of splashing out on a brand new John Deere, and this annoys the manufacturers down deep to their bones.


Yeah, and it is tragic in some ways, as the thing missing from the 1930s gear is safety features.

Many preventable deaths happen every year as a result.


Of course, you'll also see people arguing that you shouldn't be driving a 10 year old car for the same reason. There's some level of tradeoff where using an older product without the latest safety features makes sense.


Actually the same video I referenced above has a section on tractors! https://youtu.be/BH15Kua5P1Y?t=829

He says it cost him $100 to get the best that has ever been made, and it's backed up by a lot of experience living off grid.


You may want to specify "lawn tractor" as that's not really what I was picturing when you say tractor.


It's exactly survivorship bias, but the contextual usage is different. Usually you use survivorship bias to discredit the relevance of an observation. You should think of the Lindy effect as survivorship bias used as a supporting heuristic for a prediction.


Except the Lindy effect does hold even when you use it as a predictor of the future, rather than just analyzing historical data.


I think the idea is that if you randomly sample a range, you have weak evidence as to the size of the range. For example, if you randomly sampled and got "2", it would be more likely that the range had a span of 0 to 4 than 0 to 100,000, though either is possible. On average your random sample will be at the halfway point of the range.

The Lindy effect is the realization that your encounter of something is like a random sample. "How old are chairs when I exist?" "How old are iPhones?"
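A minimal simulation of that intuition (an illustrative sketch, not from the comment): if you observe something at a uniformly random moment within its total lifetime, the observed age averages half the total lifetime, so a larger observed age is weak evidence of a larger total.

    import random

    def average_observed_age(total_lifetime: float, trials: int = 100_000) -> float:
        """Average age seen when sampling a uniformly random moment in the lifetime."""
        return sum(random.uniform(0, total_lifetime) for _ in range(trials)) / trials

    # The observed age clusters around half of the true span, whatever the span is.
    for span in (4, 100_000):
        print(f"true span {span}: average observed age ~ {average_observed_age(span):,.1f}")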


I think the difference is that survivorship bias applies when the difference between winners and losers is mostly due to chance. I don't think the fact that we use 4-legged chairs and not 5-legged is survivorship bias. I believe the Lindy effect's prediction that 4-legged chairs will be around a long time. Of course, whether it's survivorship or not is case-by-case.


But every standard office chair with wheels is 5-legged.


Yep, and those are a recent development and much less likely to last than 4-legged chairs. So are, for that matter, offices.


There's a good reason why office chairs have five legs/wheels where regular chairs have only four: safety.

Office chairs have a reclining mechanism. If you ever leaned back too far in an old office chair with four wheels, you would find out the hard way that four isn't enough. It's very easy to lean back comfortably and not realize you've reached the tipping point, fall back and hit your head on the floor or have a close call.

My first jobs long ago had four-wheeled office chairs, so you can guess how I found this out.

The fifth wheel goes a long way to preventing this danger.

You're much less likely to be leaning back in a regular four-legged chair so the front legs come off the ground, and if you do you'll probably be more aware that you are doing something outside the chair's normal mode of use.

And even if offices go out of style, office chairs will likely be around long past then.


This logic makes me fear for the metric system


I curse the Imperialists every time I come across the cursed measurements.


This might be obvious, but... the reason we have a special name for things that behave that way (Lindy effect) is because this is usually a terrible heuristic. Most things are "perishable", as the article puts it - past a certain age, the older a human is, the shorter their life expectancy. This is true of most things.

The Lindy effect talks about the rare cases where this isn't true.


It is just a natural case of the exponential distribution, which is widely used to model expected waiting time. I wouldn't call it 'rare'.

Absent other observations and/or priors, the best estimate of your expected waiting time is the amount of time you have already waited.

So if you've been waiting 10 minutes for something to arrive, your best estimate of how much longer you have to wait is 10 minutes.

If you've been waiting 10 years for a tech to become outdated, your best estimate for how much longer it will take is 10 years.

People usually use buses as an example, and it makes sense: if the bus is supposed to come every few minutes and it hasn't come in the last 15 minutes, then odds are something is wrong, which increases the probability that the additional wait will also be 15 minutes.


> The Lindy effect applies to "non-perishable" items

It's the 2nd sentence in the article. So not sure using examples of perishables shows it is a terrible heuristic.

It applies to ideas/cultures/technologies.


Well, yes, but not only. I think there are examples where it applies to living organisms too. E.g. a few hundred years ago, a human who lived past the age of 5 would be more likely to live to 40, afaik.

I was mostly pointing this out because a lot of people in the comments seem to be talking about this as if it's some kind of universal law, or a proposed universal law, when really this applies to only exceptional circumstances.


> Land lines have been around a lot longer than fax machines (both in the article), so they will likely outlive fax machines.

I'm less sure about this one, because of the relative difficulties of replacement. You can replace someone's POTS line with VOIP and they'll likely never notice (and this is underway). They'll notice if you take your fax machine.


>You can replace someone's POTS line with VOIP and they'll likely never notice (and this is underway). They'll notice if you take your fax machine.

The question is if fax on VoIP is still fax in that sense. At least the wikipedia article on fax mentions transmission through audio-frequency tones [1]. However if you count fax over VoIP as fax, I guess you should also count phone on VoIP as "landline".

[1] https://en.wikipedia.org/wiki/Fax


I haven't used fax for at least 15 years. Might be 20; anyway, so long ago that I can't remember when it was.


Fax machines seem long extinct everywhere except in Japan and in some hotels. Landlines probably only survive in business environments; I hardly know anybody who has and uses a landline at home. Also, both businesses and consumers mostly use VoIP-based "landlines" employing codecs which can't support fax.


Faxes have become a largely digital thing, as there are various digital fax services. Neither side needs to have a phone connection that can support it. The fax services just need to be able to communicate to each other.

I was in a radiology startup for a bit, and getting people to fill out forms on something like an iPad is still a problem. You need staff able to help them, and people damage or try to steal them. So then you end up with paperwork, and if that paperwork needs to move somewhere else, people fax it.


Fax machines are still widely used in healthcare, at least in the U.S. Can't speak for other countries, but I'd be surprised if that weren't also the case elsewhere.


My employer was moving us to a new office right before the pandemic, so it's not seen much action, but... there's a fax machine. I doubt any of us would use it, but it comes with our corporate real estate package. Just in case you need to send a fax to Japan... or some hotel?


15 years ago, the office I worked in received daily menus via fax from local delis and restaurants with their specials, in case we wanted to get lunch. I have no idea if this is still a common practice but it seemed to be at the time.

This even had the slight benefit over sending email to a random address because a fax can just be posted on a common board in the office space, rather than someone having to take the step of printing the email first.


Perhaps. Whatever, it sounds comforting and fun. I've heard they have a colour (!) fax machine in almost every home in Japan. If I moved into an office which had one, I would actually contact somebody there and have fun sending hand-drawn pictures to each other :-)


Landlines are probably more common than you think. I only got rid of mine last year. I would have kept it for backup but it just cost more than I was willing to pay. Many of us don't get great cell reception and WiFi assist isn't always perfect.


Where are the Trinitron displays and ICs made with 1 µm process?


Adding to my first reply: if Trinitron displays were gone (it appears they aren't), and some newer tech isn't, that would NOT contradict the Lindy effect.

If you already KNOW that Trinitron displays are gone, then there's no uncertainty. The Lindy effect is a heuristic for making decisions under uncertainty.

The relevant situation is if you have two things that still exist, and you want to guess which one will last longer. This doesn't override other facts about the domain -- it's SOME information in the absence of any other knowledge. It's for poker players, not scientists.

The example I gave was the new cloud blog startup vs. the old hosting platform. Which one would you put your blog on if you wanted people to read it 5 years from now? All other things being equal, I'd take the old hosting platform. But if you think the startup has a really good business model or you like the founders, maybe you choose that one. It's just common sense.

Another example might be cold-blooded crocodiles vs. a warm-blooded mouse. All things being equal, you could guess that the crocodiles will survive further into the future, since they were here first. But someone with a specific theory or expertise could also argue that the bigger animal is less likely to survive, etc.

Similarly, we already know that dinosaurs are gone. This isn't a situation where you need to act or make a prediction.

Someone who understands Bayesian statistics can probably explain it better than me, but you have to take into account existing knowledge, and update it with new knowledge. Picking out something that you know is obsolete isn't relevant.


It's a heuristic. Although I bet you can find those things in use somewhere.

This article is a little different -- "things my son would use" implies that they're still popular, not just extant. Both questions are interesting, and influenced by the same principles.

The Lindy effect is one reason I'm working on https://www.oilshell.org/, because shell is now more than 50 year old, much older than Python/JS/Ruby, etc.

Concrete example from the last few days: https://news.ycombinator.com/item?id=26746280

i.e. When people want to explain a modern cloud platform, they use shell. Go would have been more obvious, but shell is clearer. Lindy prediction: shell will outlive Go :)


I think you can argue that it's not true for cases where item A and item B are members of a broader class of things, changing from A to B incurs no or trivial cost and little fundamental change in fulfilling the purpose of that class, and there exists no immediate need to stop using A.

For example, nobody expects the 2004 Toyota Corolla to be around forever, but the gas-powered car will be far harder to kill.


I run a retro games business.

Trinitrons are still to this day the best tech for displaying old games, and people will pay over $100 for even a consumer-grade unit without SCART.

For the professional grade PVM and BVM monitors, people will pay thousands for a large (21"+) one and they sell like hotcakes.


Big CRTs are pretty much gone as far as I know, but 1-micron ICs are all over the place. They're just not interesting any more.


> The rough heuristic is: the longer a technology has been around, the longer it will last into the future.

"The term Lindy refers to Lindy's delicatessen in New York, where comedians "foregather every night at Lindy's, where ... they conduct post-mortems on recent show business 'action'"."

And no more should be read into that. There are solutions to problems which are adequate, e.g. "chair", where further changes can be expected to be modest. And since the problem isn't going away (unless someday we're told that sitting kills us and that we need to stand or lie instead), the solution won't either.

Otoh, there are technologies which simply supersede and obsolete others. E.g. UTF-8 has ASCII as a subset, and hence I don't expect to see the latter around for long.


That example proves the point. If UTF-8 exists then ASCII will exist.

It could have gone the other way: if UTF-16 was the ONLY encoding, then ASCII would be obsolete. But that didn't happen.


UTF-8 is backwards compatible with ASCII "as she is spake" but not strictly speaking with ASCII, as any ASCII control characters will break UTF-8. It also breaks any 8-bit extensions/code pages. ASCII vs HTML is a bad example though, because HTML is used globally, and although ASCII is too, this is more a historical artefact. It's not hard to imagine ASCII dying out over the next few years while HTML continues to adapt to every encoding under the sun and pure ASCII becomes used less and less ...


The C1 block isn't ASCII. UTF8 is a perfect superset of 7-bit ASCII.
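A quick sanity check of that claim (an illustrative sketch, not from the thread): every 7-bit ASCII byte sequence, control characters included, is already valid UTF-8 and decodes to the same characters.

    # All 128 ASCII code points, including the C0 control characters.
    ascii_bytes = bytes(range(0x00, 0x80))

    # Decoding as ASCII and as UTF-8 yields the identical string; neither raises.
    assert ascii_bytes.decode("ascii") == ascii_bytes.decode("utf-8")
    print("7-bit ASCII decodes identically under UTF-8")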


Nope. If you read an ASCII file with control characters in Java you’ll get an exception. Also it won’t work with the 8-bit ASCII variants. Neither are “true Scotsmen” of course, but the point still stands that HTML could yet be more durable.


I think you’re confusing Unicode and utf8. java uses Unicode but not utf8; it uses a 2 byte encoding with surrogate pairs by default.

ASCII is utf8, but it’s not utf16. ASCII will be around for as long as utf8 is.


I’m almost certain the default encoding for reading/writing files in Java is UTF-8 and similarly for the source files. I don’t think I encounter wide char data much really at all day to day ...


> Will HTML or JavaScript last longer? Probably HTML, since it came first.

Well, barely. I’d bet on JavaScript for this one; programming languages are almost immortal once they reach the popularity of JS, while HTML would be easier to replace.


Is ASCII really stîll âlîve ?


ASCII is still alive in UTF-8 and other extended encoding systems. :)


I wonder if there is any systems using 7-bit ascii in production... Or extended code pages...


EBCDIC is still in use in production.


Base64 for the win....


Technologies never truly die from becoming outdated; they just become rare. Horse buggies are still used, though rarely, and steam engines are more of a hobby now but still around.

What truly causes a technology to die is when it's kept a secret and all those who know the secret pass away without passing it on (Greek fire, ancient Babylonian batteries, Roman stainless steel). It's the one good argument for having a patent system, to keep that knowledge from being lost.


Land lines are mostly gone where I live, in Sweden. I don't really know anyone who uses them; of course they do still exist for some alarm systems and so on.

As for ripping out the physical service entirely, I don't have any numbers but I saw the lists from 2020 and 2021 and they are huge. So it is clearly happening. And they have been doing this for quite a few years.


Where I live (the Netherlands) all landlines (copper) have been replaced by fiber, I still call them landlines. I guess OP does too?

If you mean landlines for phones: the fiber can handle that, and does, but people are indeed dropping their telephone-number-for-a-house subscription in favor of individual cell phones (and my phone is generally on WLAN calling when I'm at home, so it uses the fiber as well). My parents-in-law were the last ones I knew with a house-based phone number, and they dropped it this year. That said, I think you still get a house telephone number for free with many internet subscriptions, and the ISPs' modems have ports for phones on them, so the landlines as defined still exist, but it's a matter of definition.


> Where I live (the Netherlands) all landlines (copper) have been replaced by fiber

Do you have a source for this? All I can find is this quote [0], which is a long way off from "all landlines":

> nearly 1 in 2 homes in the 25 largest cities will have KPN FTTH access by the end of 2021.

[0] https://glasvezelnieuws.blogspot.com/2021/04/kpn-op-schema-m...


Maybe I should have specified "in my city". Indeed, when the concentration of people drops, people are still on copper or on satellite or other wireless alternatives. The point is, the country is investing in fiber: landlines. Although this may change when we blanket the country with 5G towers. Already I have 60/60 Mbit in my home via 4G, approaching my current 100/100 fiber subscription. Not sure which is going to scale better in the future.


I guess time averages out the satisfaction humans have with a thing. Fads come and go and attract us toward new sensations, but over time that old thing might be the only one that has the right blend.


Let's apply this to the future: just like fiber came before 5G, we are going to lay way more fiber after 5G fizzles out.


Is it also true for copper, though?


Yes, because that's how to transmit electricity.


Even for local network cabling, PoE is working to keep copper around.


A sextant (1731) vs. GPS (1973, fully operational 1993). Most large ships still have sextants as backup.


And the US Navy, after a hiatus, has started training officers and crew on how to use a sextant, since modern warfare will probably result in GPS being either jammed into oblivion or shot down.


Calculus will die before algebra, will die before geometry.


> The rough heuristic is: the longer a technology has been around, the longer it will last into the future.

This doesn't make any sense because it maps trivially to "any technology still around will last forever."


When can we expect horse-drawn carriages to replace automobiles again, then?


It's a heuristic. A single counterargument does not disprove it. It just shows that it's not a law, which was obvious to begin with.


When? If we experience a civilisation collapse.


It's the same thing. Horse-drawn carriages are automobiles; today's car is a more advanced version. In a sense, today's laptop is a spaceship compared to the first laptop.


Technically, a car is a spaceship compared to a carriage.. one Tesla in particular.


Now I’m wondering if we should differentiate between “spaceships” and “space boats” by whether or not they hold air…


His #1 cannot and will not ever happen. The radio spectrum is a shared resource. The total information capacity of the usable spectrum, say from 100 kHz to 100 GHz, is massive, but most of it has terrible propagation and all of it can only be used once at a time. Massive MIMO helps in dense city cores with lots of independent paths reflecting everywhere, but it's still just one spectrum in practice.

Whereas with physical transmission lines, be they cables, fiber optics, or whatever, each run can re-use the entire spectrum.
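As a back-of-the-envelope illustration (a rough sketch with assumed, made-up SNR figures, not numbers from the comment), Shannon's C = B * log2(1 + SNR) shows why a single fiber run dwarfs the entire shared radio budget:

    from math import log2

    def capacity_gbps(bandwidth_hz: float, snr_linear: float) -> float:
        """Shannon channel capacity in Gb/s for a given bandwidth and linear SNR."""
        return bandwidth_hz * log2(1 + snr_linear) / 1e9

    # Assumed 20 dB SNR (linear 100) in both cases, purely for illustration.
    print(f"~100 GHz of radio spectrum, shared by everyone: {capacity_gbps(100e9, 100):,.0f} Gb/s")
    print(f"~4 THz usable band on ONE fiber run:            {capacity_gbps(4e12, 100):,.0f} Gb/s")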


I'm surprised his #1 is still TBD. To me, ethernet lines have become more important in the last 5 years as competitive gaming/esports has completely taken off. Latency is far more prevalent in gamers' minds today, I'd argue more so than bandwidth, and the first networking-related advice a gamer receives is to make sure you are on ethernet.


I see frustrated WiFi users all the time, but I've never seen a frustrated Ethernet user.

If I had a dime for every WiFi user in a video call who's "breaking up", I'd have a whole bunch of dimes.


Exactly. I have a suspicion that a large part of people complaining about "Internet issues" are faced with WiFi issues. Apparently many struggle with the concept of separate links that make up a connection (LAN vs WAN).

But the reverse mix-up also happens: hotels, restaurants and similar businesses like to boast about their "included free WiFi" when they really mean complimentary Internet access, provided via an AP.


I'd take that as a given for 99% of cases. As fibre takes over from piggybacking off copper, IME actual internet issues seem to be pretty rare: [UK, and] I haven't noticed that kind of service issue for a few years now, whereas it used to be relatively common.

Happy to keep the pretence up though -- what else is a as neutral and widely understood as "internet issues"? Whoops, seems as if my connection has dropped out! Yes, I was very interested in that discussion over sales targets that's been going on for the past half an hour, please email me a summary! Bloody internet, always dropping out!


I always laugh when I see Comcast commercials for "fastest in-home Wifi". An 802.11ac router isn't going to help anything when you can only get 10/10 bandwidth on a good day.


I love cables while I'm using them. I hate cables while I'm cleaning. It seems almost impossible to have a clean cable setup that doesn't become a rat's nest behind the desk.


My cable management is based on velcro strips nailed to the desk's edges and undersides. It keeps the cables off the ground (and also mostly off the desk surface) and looks reasonably clean from eye level: https://imgur.com/a/tRamJ65


Velcro cable ties can help.


I use a set of G.hn ethernet over powerline adapters in my (small, rented 2 bed) flat. I can get 200 Mbps with no latency, packet loss or drop outs from the modem in the living room to the office upstairs.

This is a distance of about 10 meters with 2 brick Victorian internal walls in-between. No matter how much you spend on WiFi equipment you can't get more than 20 Mbps with high packet loss here.

Even downstairs, with no walls to worry about, only 5 GHz is usable. 2.4 GHz is completely occupied by neighbours, and the lower 5 GHz channels are all crowded out because of compatibility too.

I hate WiFi with a passion.


I hate high rate data links over power lines. Power lines are not impedance controlled. At every bend, every approach to some metal in the building, they are going to radiate interference and accept/receive interference from their environment. Using powerline networking is irresponsible and rude. The fact that any are approved by the FCC at all is entirely due to contrived testing setups that are never replicated in real building wiring.

If you use real transmission line like ethernet cable (~70 ohm twisted pairs) or coax the impedance remains constant and they might even have a bit of shielding. You'll get faster, more reliable speeds and pollute the radio spectrum significantly less.


Sometimes you can't do that if you don't own the tenement you're living in.


That doesn't prevent you from running cable along the walls and under/over doors.


- It's fiddly and ugly compared to hidden cables. I wouldn't do that in my own home, I'd run them in the walls.

- It's disruptive and time consuming. It would likely take me a whole day

- I don't see why I should invest further in improving the property when I will likely move out in 1-2 years time and won't reap the long-term benefits. I've done that already in other areas.

- It's hassle. My landlord could protest that I've done a shitty job when I move out, or, even if I haven't, make me remove it.

- I need multiple WiFi adapters anyway to cover both upstairs and downstairs effectively.

- It's not as flexible (in terms of moving things around) as plugging an adapter in to any electrical outlet.

- I needed a quick solution when I moved in so I could work.


Because ethically you're almost certainly negatively affecting the people around you and legally you're almost certainly violating part 15.

You're both breaking the law and being an asshole for aesthetics and your convenience. That's why.


I'm not in the US.


Your country almost certainly has rules about emissions in the HF frequency ranges too. Even milliwatts of radiated interference at these low frequencies will bounce around the world and interfere with everyone.


Mind sharing which adapters you are using? WiFi works for my situation at the moment but would be useful to know in case I need an upgrade.


I have four adapters in my house: one TL-WPA7510 and three TL-PA7010. They are sold as pairs, so when I wanted the version with an access point built in, I had to buy another wired version. But you can add them individually to an already existing network. The HomePlug protocol is a standard, so as long as you buy the right versions you get max performance.

These things are great. I have one in the basement for the Verizon router to plug into, one on the second floor for my PS4, the AP version behind the 4K TV in the family room, and another at a desk in the guest room. The Verizon router has a lot of interference nearby, so quality is poor in some spots in the house. Quality over the powerline link is good -- it sometimes reaches gigabit speeds, and it's way more robust than WiFi extenders. The apartment I lived in previously was built like a brick shithouse, so WiFi dropped off pretty fast once you got to the other end.



Video calls too. I just got back from visiting my in-laws, since everyone is now vaccinated, and one of my projects while I was there was running ethernet lines to their work areas. They're both on daily video calls now and poor WiFi performance was driving them crazy.


> since everyone is now vaccinated

You don't need to justify yourself. I see this more and more and it's honestly just annoying.


Beamforming, phased-array antennas, sophisticated coding (CDMA), and "time slots" (TDMA) will provide a lot more availability than a raw look at the available bandwidth would suggest.

I still agree wired/optical is best for most fixed installations both LAN and WAN, but people are getting more out of wireless than I would have predicted. And the "last mile" capacity of today's technology far exceeds what people seem to want even looking forward a decade...which paradoxically suggests that wireless might be adequate in the interim for some use cases.


None of these technologies allow you to exceed the available bandwidth. They just make use of the shared bandwidth more efficient. It's still a shared medium.

Where I live, a lot of people have 3G internet because mobile data is pretty cheap and the companies advertise it as an alternative to cable. And now they all have really crappy internet. In the evening when everyone watches youtube you get a fraction of the advertised bandwidth.

With fibre, every customer gets the full spectrum. And since the frequency of light is a lot higher than radio frequency, you also get a lot more bandwidth. At radio frequency we're already hitting the physical bandwidth limits; with optical transmission there's still a lot of bandwidth left.

Thinking that radio frequency transmissions are an alternative to cable / fibre is pretty short-sighted. Data usage is going to grow, more devices are going to use data, and wireless transmission is going to seriously limit us.


> None of these technologies allow you to exceed the available bandwidth. They just make use of the shared bandwidth more efficient. It's still a shared medium.

The beam forming and such slice bandwidth availability spatially (that's what the cell network does too) so more people can use the shared medium by...not sharing it! It's not like AM radio that gets sent in all directions regardless of whether there is someone in a given direction to listen.

> With fibre, every customer gets the full spectrum.

Well by definition that's true whether fibre or copper, but that fibre or copper is itself aggregated into connections to upstream provider, so you're just kicking the problem down the road. You don't really get the full bandwidth end to end.

As I said I think fixed wireless is an acceptable transitional technology but wired makes more sense longer term for most locations.


>The beam forming and such slice bandwidth availability spatially (that's what the cell network does too) so more people can use the shared medium by...not sharing it! It's not like AM radio that gets sent in all directions regardless of whether there is someone in a given direction to listen.

Nope. In theory yes, but unfortunately diffraction is going to get you every time if you're not transmitting in deep space.


The Level 3/CenturyLink/Lumen CEO reassured panicked investors and employees, scared that their company would be worthless with 5G and beyond, by basically saying that the last mile will increasingly become the last tens or hundreds of meters, and that fiber is still the infrastructure on which these increasingly dense base stations depend. And that "edge computing" will likely live in the cabinets owned by the fiber provider.

It makes sense to me. Just add more fiber and let people access them over whatever.

My biggest gripe is that we could choose to do away with licensing fees and spectrum auctions and open up mmWave 5G to be something like WiFi, but we are short-sighted as usual.


A bit of a nitpick, but most residential fiber deployments are PONs. With a PON, a single fiber is shared among many subscribers via passive optical splitters, so its bandwidth gets divided up. It's still tons more usable bandwidth than wireless.

https://en.m.wikipedia.org/wiki/Passive_optical_network


>Where I live, a lot of people have 3G internet.

Well, yes 3G is Shared Spectrum.

>None of these technologies allow you to exceed the available bandwidth

Exceed available bandwidth of what? Per Spectrum? Shannon–Hartley theorem?

The whole point of 4G and 5G, mentioned in the GP as Massive MIMO, was that we could work around those limits with more antennas. Everything we are doing today, and aiming to do in 3GPP Rel 17 in a few years' time, was literally impossible to even imagine in the early 2000s. When Massive MIMO (or very large antenna arrays, as it was then known) was first published, people called the idea "crazy". And then there is CoMP, whether marketing decides to call it 5.5G or 5.9G, along with the distributed antennas being worked on for 6G.

There are no fundamental technical reasons why we can't have a fully wireless Internet, although there are many business and economic reasons why it may never happen.


>>Where I live, a lot of people have 3G internet.

>Well, yes 3G is Shared Spectrum.

Sorry, I think I meant 4G, not 3G. It's marketed as LTE here.

> Exceed available bandwidth of what? Per Spectrum? Shannon–Hartley theorem?

No, not Shannon-Hartley. That's just a mathematical model.

When EM waves propagate, they are subject to diffraction, which limits both the amount of information that can be transmitted per time interval, and also the spatial resolution of the transmission. Even with antenna arrays or distributed antennas you can't get past diffraction limits; you can just get closer to them.

To get around diffraction limits, you need to use higher frequencies / shorter wavelengths. (Which has the side effect that you lose the ability of signals to go around / through obstacles, so you need a lot more cell towers.)
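
To put rough numbers on that, here's a back-of-envelope sketch (the frequencies, the 0.5 m aperture, and the simple theta ~ lambda/aperture beamwidth approximation are illustrative assumptions, not any specific deployment):

    # Rough diffraction-limited beamwidth: theta ~ lambda / aperture (radians).
    # Real antennas differ by a small constant factor; numbers are illustrative.
    C = 3e8  # speed of light, m/s

    def spot_width_m(freq_hz, aperture_m, range_m):
        wavelength_m = C / freq_hz
        beamwidth_rad = wavelength_m / aperture_m   # diffraction-limited, approx.
        return range_m * beamwidth_rad

    # A 0.5 m antenna array serving users 500 m away:
    print(spot_width_m(3.5e9, 0.5, 500))   # ~86 m wide beam at 3.5 GHz mid-band
    print(spot_width_m(28e9, 0.5, 500))    # ~11 m wide beam at 28 GHz mmWave

In other words, at mid-band frequencies a beam can't isolate users who are only tens of meters apart, which is exactly why the shorter wavelengths (with their propagation downsides) are what make dense spatial reuse possible.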

We're at a point where new technologies just make different trade-offs (eg. shorter wavelengths for areas with lots of wireless clients vs. longer wavelengths for sparse areas).

With a fibre connection, you don't need to make these tradeoffs; you just need to dig up the ground and you can have as much bandwidth between two points as you want.


You are dead on. We are still transitioning from HD to 4k with 8k coming eventually. The difference in HD to 8k is 17x the bandwidth.


>The difference in HD to 8k is 17x the bandwidth.

The pixel count is 17x, not the bandwidth. Even compressed RAW sizes don't scale linearly with pixel count. I don't have any experience with 8K, but compressing / encoding 4K with HEVC or AV1 tends to be easier (at a fixed VMAF score) than the comparatively low pixel count of 2K / 1080p. I would imagine the same if not better for 8K. And that is discounting the use of much better video codecs like VVC, which brings another 40 to 50% reduction in bitrate.


Good point.


It remains to be determined.

Quadrophonic sound did not replace the "good enough" stereo even though humans can move their heads. Of course an 8K TV is just as easy to set up as a 4K one, unlike a quadrophonic stereo. And the functionality of quadrophonic records became available as "surround sound" which some people do have.

It's possible there are enough people who will appreciate the difference for 8K to become established. Personally I doubt it, but it's certainly possible.

Another possibility is that your TV watches you and provides just a 4K or 4K-ish image unless you walk close to your TV at which point it displays a higher resolution image where you are looking. Possibly with some hint-driven mix of AI upsampling and more image data.

The opposite could be true of course with 8K VR rigs in every home. I also doubt it, but it's quite possible.

The 8K-production -> 4K-delivery pipeline is well established and will remain even if my guesses above turn out to be correct. That needs a lot of bandwidth, but not at the point of viewing.


Unless you're displaying on movie theatre sized screens, 8K seems like a waste of space/bandwidth. Even 4K is generally overkill for the typical living room.

I think we're hitting the point with video resolution that music CDs hit with audio, where improvements in fidelity are largely outside the range of human perception. It's one of the reasons DVD-Audio and SACD never really caught on.


1080p at 65": 34 ppi

4K at 65": 65 ppi

8K at 65": 135 ppi

This is well within what someone with good vision can resolve at, for example, 6-8 feet.

For a personal reference, I could tell the difference in clarity at 8 ft between a 1080p 24" monitor and a 28" 4K monitor just 5 minutes ago. That is 92 vs 157 ppi, on screens a fraction of the size of a TV.

I have to imagine people making such claims either have poor eyesight, or are using "optimum viewing distance" charts as a proxy for the distances at which human vision stops being acute enough, instead of looking for themselves.

For reference, someone with good vision ought to be able to distinguish up to about 115 ppi at 8 ft, according to this source:

https://www.quora.com/What-is-the-highest-resolution-humans-...

The source seems valid despite the mixed bag that is Quora. Especially since I and many others can verify this in 10 seconds.

Edit: I wrote 43 instead of 34 by accident. 65" and 1080p = 34ppi
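
For anyone who wants to check these numbers, the ppi figures fall out of simple geometry. A quick sketch that roughly reproduces the figures above (assuming a 16:9 panel; the acuity figure is the ~115 ppi at 8 ft cited above, not my own measurement):

    # Pixel density of a 16:9 panel from its diagonal size and horizontal resolution.
    import math

    def ppi(diagonal_inches, horizontal_pixels, aspect=(16, 9)):
        w, h = aspect
        width_inches = diagonal_inches * w / math.hypot(w, h)
        return horizontal_pixels / width_inches

    for name, px in [("1080p", 1920), ("4K", 3840), ("8K", 7680)]:
        print(name, round(ppi(65, px)))   # 65" panel: ~34, ~68, ~135 ppi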


> Especially since I and many others can verify this in 10 seconds.

Try it with a high resolution video you've transcoded locally (so you're not dealing with streaming quality differences) at various resolutions. The human visual system is a lot less precise with video than it is with still images. A paused DVD frame often looks like a blocky, terrible mess, but it's perfectly fine when displayed as video.

I won't argue that if you pick your content carefully, and know exactly what to look for, you can tell the difference. But, practically speaking, most people watching a decently encoded video won't see a huge difference from 1080p to 4k at typical screen sizes and viewing distances, and will see even less difference going to 8k.

I honestly can't tell you if I'm watching 720p or 1080p content on our TV without looking at the source file info.

You're confusing "What can be perceived" with "What actually matters." I'd argue 4k was a bit of a stretch to sell new TVs, and 8k is going to be even more so.

We've hit, as someone else said, the 44kHz/16bit point of video. Encoding quality and bitrate matter a bit, just like mastering does with audio, but we're well past the point of diminishing returns.


If you can't tell the difference between 34 ppi and 65 ppi at typical viewing distances, your eyes just don't work very well, and you literally don't realize that most people see better than you, even if they need glasses to do so.

We aren't talking about anything near the limits of human perception: we are capable of perceiving 115 ppi at a typical 8 ft viewing distance, and 8K is about that limit for a 65" screen. 4K is itself about 1/4 the resolution we can perceive and 1080p is 1/16. You are saying that people can't, on average, distinguish between 1/4 of the resolution we can perceive and 1/16th.

The link I posted was the informed opinion of an electrical engineer who worked in display tech. If people really couldn't tell the difference between 4K and 1080p, shouldn't there be scientific research by now showing it? You can't point to any, because there isn't.

You are like a color blind person exclaiming there is no difference between green and red! There is.


... wait, what? 8 foot viewing distance with a 65" screen? Ok, we have very, very different definitions of "typical" here. You're using a 50% larger TV from half the distance.

I just measured our living room. We have a 43" (4k) TV at about 16' viewing distance. Just about every chart I can find on TV resolution vs viewing distance (assuming 20/20 vision) indicates that I'm in the "720p is fine" region, on the border of "You really won't notice a difference from 480p." Which is consistent with my experience. I can see the difference between 720/1080/4k if I'm right on top of the TV, but once I'm back where I actually watch from, anything I notice is more a factor of bitrate than resolution - I can certainly tell the difference between a low bitrate MPEG2 source and a high bitrate h.264 source, but that has nothing to do with the resolution.

I don't really have the motivation tonight to pull out my trig tables (I'm about to watch a movie instead, of what resolution I honestly don't know), but... at least for my case, yes, I'm saying I can't tell the difference. And it's entirely possible my glasses need updating (quite likely, honestly), but the "TV size vs distance" charts I've found say nothing of what you're claiming either. And they're by people trying to sell TVs!


Samsung says suggested diagonal size = viewing distance /2. So at 16 ft 96".

To get a more precise fix, you can break out a calculator or use this one, based on the THX recommended viewing angle:

https://myhometheater.homestead.com/viewingdistancecalculato...

It suggests that your tiny TV ought to be optimally viewed from 4.8 feet away, and that at 16 feet away you really ought to have a TV greater than 100 inches.
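
The geometry behind calculators like that one is simple enough to redo by hand. A sketch, assuming a 16:9 screen and a ~40 degree viewing angle (roughly the THX maximum; the linked calculator seems to use a slightly narrower angle, so its numbers come out a bit more conservative):

    # Recommended screen diagonal for a given seating distance and viewing angle.
    import math

    def recommended_diagonal_in(distance_ft, viewing_angle_deg=40, aspect=(16, 9)):
        w, h = aspect
        width_in = 2 * (distance_ft * 12) * math.tan(math.radians(viewing_angle_deg) / 2)
        return width_in * math.hypot(w, h) / w

    print(round(recommended_diagonal_in(8)))    # ~80" at 8 ft
    print(round(recommended_diagonal_in(16)))   # ~160" at 16 ft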

Of course most people don't have 100+ inch tvs but they also don't watch tv from 16 feet away unless they are watching a 45-60 FOOT theatre screen. Play with some values in the calculator if you want to see how other people use tvs.

People normally sit 8-10 feet from 60-75 inch TVs and use 40 inch sets for smaller rooms like bedrooms, where they are 6 feet away.

To be clear about the whole picture, you are a fellow with bad vision watching a small TV from nearly 3 times further back than the rest of planet earth.

At that distance your TV literally does already exceed average visual acuity, and for your usage you are absolutely correct.

Meanwhile the rest of us will benefit from up to 8k in the future.


> To be clear about the whole picture, you are a fellow with bad vision watching a small TV from nearly 3 times further back than the rest of planet earth.

My vision is perfectly fine, thank you very much, as you seem to insist on going on about how horrid it must be. The difference is entirely explained by the fact that I watch a reasonable sized TV from a perfectly sane distance in a living room that isn't a home theater room and has space for plenty of other activities. No idea how you get a TV 6' away in the bedroom, though, unless it's literally at the foot of the bed. I've never had one in there and never intend to.

Your linked calculator suggests that the "optimum" distance for my TV is about 6' away, which I find quite absurd (having just tried it). I'm watching movies, not programming on it.

We appear to have rather divergent views (and social group priorities, if yours has 60-70+ inch TVs at your suggested 8-10 foot viewing distance) on the nature of television viewing, and, as such, there's not an awful lot more to discuss.


I don't think it's a matter of divergent priorities. The argument upthread was that we had reached the zenith of resolution based on visual acuity. What we established is that this is almost entirely true or false depending on the person's vision, size of screen, and viewing distance. Based on how the majority of the public uses TVs, we have provably not reached such a point.

In point of fact, a medium-sized living room is 12x18 feet, with a TV on either a stand or the wall along the longer wall, and seating that is around 2 feet deep and off the wall by at least a foot. This means that in a medium-sized living room there are 8-9 feet between seat and screen. In a large living room (15x20 feet) there are still only 11-12 feet.

People actually are putting 60-75 inch screens in those rooms, which supports the position that the figures given by the calculator represent how people actually use TVs, and that HD -> 4K -> 8K will still be perceptibly beneficial given the constraints of human eyeballs.


I'm interested in sources that establish "how the majority of the public uses TVs". I've personally never seen anyone use TVs in line with TV manufacturers' recommendations on screen size and viewing distance. Maybe this is a USA cultural thing (or, conversely, maybe my own observed viewing habits are a cultural AU/NZ thing).


What about a house size and falling TV prices thing? Half of the US couldn't sit 16 feet from their TV if they tried.

1/3 of the US lives in apartments and 80% live in urban areas where housing is comparatively more expensive. Even for slightly larger houses it's common to have a living room and a family room and give up a lot of space to the individual bedrooms. It's super likely that the TV is in a room that is between 12x18 and 15x20 feet, with the TV on the long wall.

Also, for some reason sticking your TV on a stand is still pretty popular vs putting it on the wall. This means that in, say, a 14'-deep room your TV is between 0' and 1' off the wall and your rear end is 3' off the opposite wall. This puts as little as 8' between viewer and TV in the 12x18 room and as much as 12' in the larger room if they hang the TV on the wall.

How the US population is distributed is kind of interesting.

https://www.washingtonpost.com/news/wonk/wp/2015/09/03/how-s...

Regarding the question of average living space in the US: people almost universally prefer single-unit detached housing, but as the song goes, you can't always get what you want. Many, for example, would prefer to be close to urban settings with more jobs. Others can only afford to rent.

Among renters (around 1/3 of the population), half live in 111 sq meters or less.

1/3 of owners live in domiciles of 167 sq meters or less, and 60% of owners live in domiciles of 222 sq meters or less.

Combining renters and owners: 77% of Americans live in 222 sq meters or less, 54% in 167 sq meters or less, and 25% in 111 sq meters or less (with some rounding and conversion between sq ft and sq meters).

This means about 1/4 each live in:

<= 111 sq meters

112-167 sq meters

168-222 sq meters

> 222 sq meters

https://www.frugalfringe.com/calculators/compare-your-homes-...

The people paying lots for tiny apartments in the city think your 16' of open space in your living room is as unusual as you think they are with their relatively giant TVs in smaller spaces.


I'm not the person 16' from their 43" TV ;)

Interesting, but doesn't tell me much - I live in an apartment, well under 111sqm, and have a 55" TV around 12' from the couch - so still too small / too far away, according to recommendations. People with even smaller living space (at ~80sqm, my apartment is reasonably large for a two bedroom unit where I live) may simply get smaller TVs to compensate, still being "too small".

These are anecdotes. I'm curious about what people actually do, I don't know, and (contrary to your assertions about what most people do) it doesn't sound like you do either.


I showed with data how many people live in relatively small spaces, and asserted (but didn't prove) that Americans often devote much of their space budget to bedrooms, and that the houses with both a living room and a family room are the ones with more space budget to spare. This last shouldn't require a ton of proof.

My assertion is that, due to the space constraints above, the majority of people in America are 8-12' from their TV. I also assert that the same people are buying 60"+ TVs.

I linked a source that suggested that people can distinguish up to 115 ppi at 8'. If I understand correctly, that means they ought to be able to distinguish about 90 ppi at 12'.

If these assertions are correct, most people would benefit from 90-115 ppi and, based on THX recommendations, would benefit from the increasingly popular and cheap large screens that people are provably buying despite living in not-so-huge spaces. In fact, TV size purchases are trending upward based on what people can afford.

https://www.consumerreports.org/lcd-led-oled-tvs/tv-trends-t...

A 60" 4k screen is only 73 ppi meaning we aren't beyond human visual acuity yet and 4k at 75" we are down to 59ppi.

The majority of consumers will based on good visual acuity be able to distinguish between 4k and 8k in the room sizes and screen sizes where such products would be relevant and applicable and a median American would benefit.


> Samsung says suggested diagonal size = viewing distance /2. So at 16 ft 96".

Now I wonder what Samsung’s motivation for this could possibly be?


You've a reasonable point but the other poster said

>but the "TV size vs distance" charts I've found say nothing of what you're claiming either. And they're by people trying to sell TVs!

The people selling TVs actually say you ought to buy a big ass TV.


I agree with Syonyk here. And since you bring up quality of vision, I'll note that I do have 20/20 vision because I'm wearing prescription glasses.

> You are like a color blind person exclaiming there is no difference between green and red! There is.

You're arguing against a complete strawman here. The key point that Syonyk was making is this:

> I won't argue that if you pick your content carefully, and know exactly what to look for, you can tell the difference.

If you're watching a nature documentary or anything else that puts an emphasis on the visuals, people are going to notice the difference by themselves. But most of the stuff we watch doesn't focus on visuals, it focuses on narrative. If I'm watching a legal drama, I don't really care if I can count the hairs in the lawyer's nostril or not. Again, I'll probably be able to tell apart the 4K, 1080p, and 720p versions of a closeup of the lawyer's face, but even if I watch just the 720p version, it won't significantly impact my enjoyment of the narrative.


Although it's not one single person arguing, I can't help but feel like the goalposts are pretty mobile here.

First goalpost: high-dpi video is already outside the range of human perception.

> I think we're hitting the point with video resolution that music CDs hit with audio, where improvements in fidelity are largely outside the range of human perception.

Second goalpost: OK, it's not, but it's really close.

> I won't argue that if you pick your content carefully, and know exactly what to look for, you can tell the difference. But, practically speaking, most people watching a decently encoded video won't see a huge difference from 1080p to 4k at typical screen sizes and viewing distances, and will see even less difference going to 8k.

Third goalpost: OK, it's actually clearly discernible, but I mostly care about the story anyway.

> But most of the stuff we watch doesn't focus on visuals, it focuses on narrative. If I'm watching a legal drama, I don't really care if I can count the hairs in the lawyer's nostril or not.

Believe it or not, some people said the same about HD vs SD in the beginning. It was complete bullshit then too. I too enjoy the story and psychological drama, but it's not books on tape. Film and TV are a visual medium, where being able to see the world more clearly adds to the experience.


Well of course presenting three separate people's ideas, some of whom are refuting each other, as one continuous argument will come out looking ridiculous.


The first guy is a gentleman with bad eyesight watching a 43" TV from 16 feet away. If he watches it from any further away he won't be able to tell if it's on. The second is inaccurate. The third is a complete cop-out.


Not sure about that. With large screens at home (55" and over) I can see a difference between FHD and 4K, enough to know I am having network problems when the streaming service starts to downscale the resolution of a 4K video.


That could be the bitrate more than the resolution. Streams are highly compressed. It would be interesting to see whether you prefer a 1080p Blu-ray over a 4K Netflix stream (I suspect yes).


> The difference in HD to 8k is 17x the bandwidth.

Not practically. Netflix will just compress it down to 3 Mbit anyway, like they do with 4K, but be able to call it "8K content!" and (probably) charge a premium for it.


This comment betrays a lack of understanding of some of these multiplexing technologies. They minimize wasted bandwidth due to signal collisions and interference, but they don't increase the overall bandwidth available.

Let's take TDMA as an example. TDMA means that instead of using the available bandwidth continuously, each participant only gets to use that bandwidth for a fraction of the available time. Saying that TDMA helps us increase the available bandwidth is like saying queueing up at the restroom will "provide a lot more availability than purely looking at the number of stalls".

CDMA is more complicated but it's still a similar story. Look at this diagram of a CDMA signal:

https://en.m.wikipedia.org/wiki/File:Generation_of_CDMA.svg

Notice how the "data" actually being transmitted is only a couple of bits, but the CDMA signal includes many more transitions. CDMA is essentially using N bits of transmission bandwidth to send a signal bit of data signal, the benefit being that if multiple signals interfere it's possible to extract one of them using some complicated math. It's like if 10 people were sharing a phone line, and instead of taking turns talking they all spoke at the same time, but repeated themselves 10 times so you could pick up enough snippets from one speaker to understand what they were saying if you concentrate hard enough.


TDMA/CDMA don't bypass information theory, though. Shannon always wins in the end.

You can improve the situation with spatial beamforming, polarization isolation, mmWave bands, etc, but there eventually comes a point where the added complexity doesn't win over just hiring a few guys with a backhoe to run fibre down the street.
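
For a sense of scale, the Shannon-Hartley limit itself is a one-liner (illustrative numbers only, not any particular radio or deployment):

    # Shannon-Hartley capacity: C = B * log2(1 + SNR), per spatial stream.
    import math

    def capacity_bps(bandwidth_hz, snr_db):
        snr = 10 ** (snr_db / 10)
        return bandwidth_hz * math.log2(1 + snr)

    # e.g. 100 MHz of spectrum at 20 dB SNR:
    print(capacity_bps(100e6, 20) / 1e6)   # ~666 Mbit/s

MIMO multiplies that by the number of usable spatial streams, but every stream is still bound by the same formula and shared among everyone in the cell.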


Not sure why this is responding to my comment, but I was basically making that same point.


Oops, I meant to respond to the grandparent post. Sorry!


No worries!


You can add in quadrature amplitude modulation. Reusing bandwidth is a fascinating field. It's not as simple as 1 bit per Hz.


Current laser comm systems are getting multiple bits per photon - it seems that the communications field will keep pushing the limits until there is simply no business case, and then push some more. Really, the only limit is Shannon's limit, but that's only concerned with data rate. The modulation schemes that sit on top of the raw data are where all the capacity magic happens.


I assume you're referring to something like this: https://www.sciencedaily.com/releases/2017/02/170203102740.h... but this isn't a scalable transmission system since it's based on single photon detection, and single-photon detection is not a high bandwidth communications ability because you have to discriminate the photons. It's a far cry from "current laser communications".

AFAIK the fastest IP network speed at the moment is 44.2 terabits/sec for a single light source (https://www.sciencedaily.com/releases/2020/05/200522095504.h...).


I'm talking about the laser communication satellites that NASA et al have launched into orbit, which are supposedly around 3-4 bits per photon.

These are hitting upwards of 10Gbit speeds (not quite 44.2 Terabits) but still impressive nonetheless.

https://en.wikipedia.org/wiki/Laser_communication_in_space https://en.wikipedia.org/wiki/Optical_PAyload_for_Lasercomm_...


> the only limit is Shannon's limit, but that's only concerned with data rate.

But that limit, the capacity, is also the maximum possible limit of information transfer on the channel. Modulation schemes don't increase it or change it, they just push the achievable information rate of the system closer to the capacity.


Modulation is definitely the wrong word in this context. I was thinking about channel coding and compression algorithms that increase the efficiency of the system without requiring additional capacity.


A lot of confusion in the replies below. Here is an illustration of the point: suppose you've saturated your communication bandwidth between two points, sending bits through the entire 500 THz spectrum. If you have a wired connection, you can lay another set of cables between these points and send more bits. With wireless, there is nothing more you can do. No amount of beamforming helps you.


Intel was pushing WiMAX so hard at one point. I really irritated someone there when I wrote something critiquing their efforts. We are seeing wireless technologies (including satellite) starting to handle some use cases where it's hard to wire. But I do expect denser areas to remain mostly wired.


Yup. I’ve always questioned people who think wireless can replace a wire at scale. It just can’t.

That being said I can browse most sites and do 90% of my internet just fine either on my phone or tethered to my phone using my data plan. So maybe I’m wrong!


While wireless won't ever replace wired for every use case, I don't see why we can't get to a point where 90% of households are provided internet service via cellular technologies, either through a MiFi-like device or with cellular built into the computer. If my iPhone, iPad, and Apple Watch have cellular built in, why not my Mac too?


But why would you want to have that? Simply put, with wireless you are trading the convenience of having no cables for essentially everything: wired has better throughput, lower latency, and lower bit-error rates resulting in fewer retransmissions (better goodput).

I'll admit there is one aspect where wireless might be interesting: comparatively lower cost of initial set-up, especially when infrastructure is lacking. In other words, if you're trying to provide Internet access to a developing country (little existing cable/telephone infrastructure), wireless might be an interesting intermediate option. But if you're looking at an industrial nation where (rough guess) more than 90% of households already have some kind of cable/telephone/fibre infrastructure, wireless would be a downgrade.


This sounds like a very "faster horses" kind of comment.

Simply put, with SSDs you are trading the convenience of having faster boot times for essentially everything: HDDs have (a lot) more space, lower prices, longer lifespans.

Simply put, with electric cars you are trading the convenience of charging at home for essentially everything: Gas has higher range, maintenance is easier to find, filling your tank takes 2 minutes instead of hours and that tank's capacity doesn't diminish over time.

Simply put, with screens you are trading the convenience of not having to deal with paper for essentially everything: Printers have better resolution, paper is easier on your eyes, it's more portable.

I guess people just like convenience.


Wireless is a worse product than wired, but you trade the worse product for the convenience of not having the wire. None of the examples you give match that pattern.

SSDs are a better product than HDDs; they trade capacity for speed, but if you need capacity it's likely you don't really need the extra speed, and you're likely running into other bottlenecks anyway (an HDD can still saturate 1G Ethernet).

Electric cars are a "worse" product, but for most use cases they're functionally equivalent. Most people don't drive farther than the range in a day, maintenance is at the same place as ICE, slow recharge times are easily negated by just plugging in every night and charging while you sleep.

Paper is a worse product than a screen because a screen can be updated. It's about as easy to read, the resolution is about the same, they're equally portable (see a cellphone), but you can put just about anything on a screen and update it many times a second. You can't do that with paper, which is why screens have mostly replaced paper.


Wireless makes sense in the countryside, because it's expensive to pull optical fiber for just a few houses.

In cities however, where most of the population is:

- Bringing optical fiber is cost effective because of the population density

- Wireless gets clogged very quickly (also because of the population density). Additionally 5G is even worse than 4G at going through walls.


You connect literally all those things to a wifi hub in your house most of the time, and the likelihood of big data consumption on all of them is quite low.

It's a huge difference to think you're going to be servicing all, rather than some, user workstations' data requirements with just today's cell towers - which isn't to say you couldn't make it feel similar, but at some point you're cramming a 5G tower into every apartment, and those still need fiber backhaul.


The radio is the most trivial part; in fact, computers with cellular radios have been a thing for a very long time.

The little microcell arrangement exists to support devices that expect to communicate with a cell tower in places where cell service is poor. You normally plug them into your wired router, so they are really for areas where relying on wireless would be the worst possible experience.

Fiber is able to provide Gbps to, for example, all homes in a square mile, where they each get Gbps and can indeed use it fairly heavily. Cellular internet isn't wireless end to end either: you run fiber to the towers and then serve everyone in that square mile over the air.

Urban areas, where 80% of people live, have high density; in New York City, for example, that 1 sq mile contains 27,000 people.


Does energy cost come into play here?

Quality of service is definitely one of the metrics ISPs track; wired is a more reliable way to guarantee service than wireless in a fixed setting, up to the house.


Directional antennas completely break those bandwidth limitations. It's not currently practical for hand-held devices to make significant use of them, but ultimately everything is point to point.


As someone working on 5G, I disagree strongly. 5G supports up to 1 million devices per square kilometer (the densest city in the world, Manila, has about 50k people per square kilometer). My understanding is that you can have about 100k devices per square kilometer before speeds drop below ideal (20 gigabit). So even if everyone in downtown Manila wanted to stream seven movies at once, they could all do it. As other commenters have said, that's because of clever techniques like beamforming and OFDMA.

But cities aren't really the area where 5G home internet provides an advantage. Sure, the protocol is more efficient than WiFi - the tower allocates bandwidth and time blocks to coordinate transmissions instead of having everyone's router blast away on the same hardcoded WiFi channels. But it still achieves those high speeds and great connectivity through having a bunch of small cell towers - hardly much different than wired internet.

The real advantage is suburbs, small towns, and rural areas. You solve the last mile problem for cables, eliminate outages, and you do it all with a couple large cell towers.

Yeah, people (or more likely businesses) with high throughput needs will still use wired connections. But it will be because the electricity costs of transmitting from a 5G tower are high enough that there’s always a base cost per GB, not because there isn’t enough bandwidth to go around.


> eliminate outages

Why do you expect 5G to eliminate outages when we can't eliminate them with existing LTE towers, with home wifi, or even with direct ethernet connections to modems?

As of now, my least reliable network is always my cellular network.

> you can have about 100k devices per square kilometer before speeds drop below ideal (20 gigabit)

When, though? We're far, far from this at the moment. What is going to fix it and when?


I think you’re imagining some centralised MMDS type architecture but with cellular architecture this is a reality in many places. Even in modern settings if you have WiFi you need never even be aware that there is a wire there feeding it. Indeed to many of my younger contemporaries it can take a moment to explain the difference ...


I understand there's a horizon for cells (vhf and up isn't going to be reflecting off the ionosphere) but a whole lot of people can exist within a single cell. My argument is about the informational capacity per cell.


Well cells are getting smaller all the time and currently there is enough capacity available in an optimally configured network to provide all the services one could possibly need. Of course you’ve got physical cable tying it all together but the “experience” is wireless.


Each cable node shares ~1 GHz of bandwidth among all the separate runs to individual houses. Relative to that, 5G has similar local bandwidth, given the poor propagation of mmWave.

It's not going to completely replace hardwired internet in suburbs/cities, but it might just kill last-mile fiber rollout short of new construction.


Not trying to prove you wrong, just asking. Can a service like Starlink not, in some hypothetical configuration, replace cabled internet?


Can it exist and provide a usable service to some? Yes, probably.

Can it replace cabled internet, period? Can it replace terrestrial cellular? Unlikely:

Mobile internet is constrained by the capacity and coverage of the cell, "capacity" here being the practical sort: the maximum achievable information rate given the available receivers and transmitters. Satellite links are a bit complicated, but essentially the Shannon-Hartley theorem holds and you either have to increase bandwidth or increase signal-to-noise ratio to provide better data rates within a cell. Bigger cell, more users, less capacity per user, worse data rate per user.

I don't know what cell sizes Starlink will use, but I've read numbers around 200 km^2. My hometown of 10000 homes covers 21 km^2. If they were to provide 100 Mb/s just to my hometown, they would need 1 Tb/s of capacity in an area a fraction of the size of a single cell. Of course, not everybody will use 100 Mb/s all the time, but it illustrates the problem.
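
The back-of-envelope behind that figure, using the same illustrative numbers from the paragraph above:

    # Worst case: every home in the example town pulling its full rate at once.
    homes = 10_000              # homes in the town
    per_home_bps = 100e6        # 100 Mb/s each
    town_area_km2 = 21
    cell_area_km2 = 200         # rough figure quoted for a Starlink cell

    print(homes * per_home_bps / 1e12)    # 1.0 Tb/s of aggregate demand...
    print(town_area_km2 / cell_area_km2)  # ...in only ~10% of one cell's area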

This is why SpaceX is mainly focusing on low-density areas in places where the infrastructure is less developed or monopolized. The capacity / coverage problem works much more to their advantage there.

