The future’s so bright, I gotta wear blinders (roughtype.com)
145 points by apress on Oct 15, 2018 | 125 comments



From about 1900 to 1970, aviation was like that. Aircraft got bigger and went faster and flew higher. Soon, it seemed, everyone would have a flying car.

In the late 1960s, the Boeing 747 and the Boeing 737 flew. The Concorde was flying. The X-15 had flown. There were many experimental transonic aircraft. Men had landed on the moon, even.

Even the flying car looked close. There were lots of VTOL aircraft in the 1950s and 1960s. Small helicopters like the UH-1 were working well for the military.

Then, suddenly, it was over. The last 50 years of aviation have produced fewer innovations than any single decade in the previous 50 years. Versions of the B-737 and B-747 are still in production. The Concorde was retired and not replaced. As Scott Crossfield, the test pilot, once said, everything that ever flew substantially faster than Mach 3 is now in a museum. No one has been beyond low Earth orbit since the early 1970s. If you want to see the VTOLs of the 1950s and 1960s, they're in the Hiller Aviation Museum in San Carlos. Aviation hit the limits of what you can do by burning fuel. In the early 1970s, the aerospace industry had huge layoffs as the space program wound down and not much was happening in aviation.

Computing may be there. Clock rates for mainstream CPUs have been in the 3-4 GHz range for a decade now. That never happened before. We've hit fundamental limits. Atoms are too big. Line widths are measured in atom counts now. Electrons are too big. The number of electrons in a flash memory cell is in two digits. Photons are too big. Optical lithography has had to go to soft X-rays ("extreme ultraviolet") to get better resolution, using an insanely complicated technology that just barely works. Fabs cost way too much and there are fewer leading-edge ones. Moore's Law is over.

As with aircraft, computing may shift to doing slightly better within the limitations of the physics we've got. The happy time of better hardware every year is over.


> The last 50 years of aviation have produced fewer innovations than any single decade in the previous 50 years.

Between 1968 and 2014 the average fuel burn of new aircraft fell approximately 45% [1]. We also move more people more reliably [2]. That's innovation in my book.

I guess we're just past the "wild designs" phase, and there's an (inherent?) bias to count only those as innovation.

[1] https://www.theicct.org/publications/fuel-efficiency-trends-...

[2] https://uk.reuters.com/article/uk-aviation-safety/2017-safes...


> Between 1968 and 2014 the average fuel burn of new aircraft fell approximately 45% [1]. We also move more people more reliably [2]. That's innovation in my book.

When I was in college, that statistic was used to remind people how depressing it was to be a propulsion engineer. It amounts to roughly 1% improvement annually, in the face of billions of dollars in investment each year from P&W, RR, GE, etc.
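
Annualized, that 45% over 1968-2014 is small however you slice it; a rough back-of-envelope (my own arithmetic, nothing more):

    # fuel burn of new aircraft fell ~45% between 1968 and 2014 (46 years)
    years = 2014 - 1968
    reduction = 0.45

    simple_annual = reduction / years                     # ~0.98% per year
    compound_annual = 1 - (1 - reduction) ** (1 / years)  # ~1.29% per year

    print(f"simple:   {simple_annual:.2%} per year")
    print(f"compound: {compound_annual:.2%} per year")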


While I am not an expert in the terminology, I would argue that what you describe is a direct result of the commoditization of aviation, namely optimization and making it economical and safer.

Usually new tech goes like this: it is exciting and shiny and new, and nobody cares how it works internally as long as it delivers. But when it gets commoditized, the attention turns inward.


> The last 50 years of aviation have produced fewer innovations than any single decade in the previous 50 years.

This is just plain wrong. You could also say that there's been almost no innovation in cars since the Model T; after all, a car like that is a much larger jump from a horse-drawn carriage than the jump from a Model T to a Tesla Model S.

The huge amount of innovation has been in the likes of reducing cost, increasing safety, etc. Now everyone can fly; in the 50s and 60s only the rich could.

> Versions of the B-737 and B-747 are still flying

Just because it has the same model number doesn't mean it's the same plane. Here's a comment of mine on that when it was discussed a year ago: https://news.ycombinator.com/item?id=15514161


Measuring innovation as a quantity seems virtually impossible. The form of the modern stove was more or less set in the early twentieth century. But I'm sure someone could find a way to show innovation has increased even here, in terms of numbers of transistors or something.

The form and function of planes and even automobiles has been more or less set for a while (self-driving cars would be a fundamental change, on the other hand). The form and function of computers conceivably might wind up being fairly fixed at some point in the future, but I'm less certain here.


The form and function of Unix-based operating systems hasn't really changed much since the 70s, but even then the most significant investment in their development (Linux et al) has been in the last 20-30 years.

As a sibling comment pointed out[1] better than I could, there's a perception bias in thinking that just because something outwardly looks similar, changing it hasn't been just as hard or harder than initially getting to that point from earlier technology that looked quite different.

It was much harder to go from a 737 in the 60s to a modern 787 in the 2010s, than from something like the Douglas DC-3 in the 30s to a 737 in the 60s, even though the 737 and 787 outwardly look about the same, and quite different from the DC-3.

1. https://news.ycombinator.com/item?id=18223205


> As a sibling comment pointed out... better than I could, there's a perception bias in thinking that just because something outwardly looks similar, changing it hasn't been just as hard or harder than initially getting to that point from earlier technology that looked quite different.

Sure, I think in general once the form and use of a given product or technology is set, much more labor is going to be invested in perfecting and polishing it than was originally invested in creating it.

This is entirely logical: once X is perfected, we know X is going to sell (or be used), so money spent making the most desirable version of X is well spent. Whereas if X is still in flux and you spend a lot of time and labor on small changes to X, some other version of X might appear, making your investment worthless.

Here, there's also the factor that the perfection and polishing of a given technology can mean that it stands in the way of changes that would otherwise improve things. Network effects amplify this as well.

Edit: and remember, all this is talking about why revolutionary changes to products and technologies stop.


> [...] why revolutionary changes to products and technologies stop [...]
Yeah, but maybe we're just impressed by different things.

I'm more impressed by the number of air passengers carried yearly around the globe being equivalent to around 50% of the world population, vs. around 8% in the 1970s[1].

I'm more impressed by aircraft being reliable enough that most modern twin-engine jets in service can get an ETOPS[2] rating. There's been amazing growth in the number of direct flights, both transoceanic and otherwise.

It's like how nothing fundamentally new has really happened in e-mail since the 1970s, but in terms of making it a useful consumer technology, iteration was everything.

1. https://data.worldbank.org/indicator/IS.AIR.PSGR

2. https://en.wikipedia.org/wiki/ETOPS


> I'm more impressed by the number of air passengers carried yearly around the globe being equivalent to around 50% of the world population, vs. around 8% in the 1970s[1].

This sounds more like innovation in how to run an airline than technological innovation though.


No, it's mainly technological. Here you can see the breakdown of operating costs of different wide-body aircraft: https://www.planestats.com/bhsw_2014sep

As shown there, the majority of the cost is fuel, and as hcarvalhoalves's comment notes, fuel economy has improved by 45% since the 60s.

Not having to operate a 4-engine jumbo jet for transatlantic flights or make multiple landings in Ireland/Iceland/Nova Scotia just to fly between Paris & New York is also a huge reduction in cost. That's what the ETOPS reference in my comment is about. The operating cost of having more engines is also higher.

Flights that cost $400 today would cost something like $1,000 if it weren't for these sorts of technological improvements, and $1,000 flights would be $2,500, etc.


> The form and function of Unix-based operating systems hasn't really changed much since the 70s, but even then the most significant investment in their development (Linux et al) has been in the last 20-30 years.

Investment is of course different than results. New generations of chip fabrication are exponentially more expensive, but advances in processor speeds (especially for single threaded code) have plateaued.


> This is just plain wrong. You could also say that there's been almost no innovation in cars since the Model T; after all, a car like that is a much larger jump from a horse-drawn carriage than the jump from a Model T to a Tesla Model S.

> The huge amount of innovation has been in the likes of reducing cost, increasing safety, etc. Now everyone can fly; in the 50s and 60s only the rich could.

And yet, the one innovation is written down in history books and the myriad of other innovations aren't.


> The huge amount of innovation has been in the likes of reducing cost, increasing safety, etc. Now everyone can fly; in the 50s and 60s only the rich could.

This is much more about regulatory and financing changes than technology. Air frames are more expensive than ever. Planes are somewhat more efficient, but fuel is also more expensive.


One relatively small correction: CPUs aren't getting faster clock speeds, but not because we stopped being able to put more transistors per square inch. GPUs and ASICs are still seeing exponential growth in their key performance metrics. Mobile CPUs are also still seeing exponential growth along their key performance metrics too. Compare the latest iPhone's CPU-bound benchmarks against its predecessors.

It's an oversimplification to say that Moore's law is over. Some important metrics have stopped improving. Others are still going, for now.


> It's an oversimplification to say that Moore's law is over. Some important metrics have stopped improving. Others are still going, for now.

It's not over, but we're seeing trade-offs that we didn't see in the past decades, like clock rate stagnation and exotic cooling but most importantly, architectural changes.

Architectural changes present the biggest difficulty IMO because they require a change of mindset from the user's perspective. Multi-core was supposed to be an easy transition but it's still underutilized; GPUs for general-purpose computing are doing well, but they're based on a popular 20-year-old technology (and still not as accessible as CPU programming).

However, the move to heterogeneous platforms will require substantial effort from a developer's perspective (however good the tooling might be at abstracting things). Programming a CPU is one thing, but programming a CPU with embedded GPUs, FPGAs, AI dies, etc. will certainly move a lot of people out of their comfort zone.
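
To give a sense of the accessibility gap: even a trivial vector add on a GPU already means thinking about kernels, thread indices, and host/device copies. A minimal sketch using Numba's CUDA interface (assuming numba and a CUDA-capable GPU are available; on the CPU the whole thing is just x + y):

    import numpy as np
    from numba import cuda

    @cuda.jit
    def add_kernel(x, y, out):
        i = cuda.grid(1)          # absolute thread index
        if i < x.size:            # guard against out-of-range threads
            out[i] = x[i] + y[i]

    n = 1_000_000
    x = np.ones(n, dtype=np.float32)
    y = np.ones(n, dtype=np.float32)

    d_x, d_y = cuda.to_device(x), cuda.to_device(y)   # explicit host->device copies
    d_out = cuda.device_array_like(d_x)

    threads = 256
    blocks = (n + threads - 1) // threads
    add_kernel[blocks, threads](d_x, d_y, d_out)      # launch configuration is on you

    out = d_out.copy_to_host()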


« exponential growth »... No, just no. GPUs gain ~30% in general performance each year; that isn't increasing at a fast rate, if it is increasing at all.

An exponential function is many orders of magnitude off from reality.

IPC increases are today more significant than clock increases, but most of the computational gains nowadays come from scalability innovations like Spark.

But it is true that a few specialised ASICs make big leaps.


A function whose growth is proportional to its size is growing at an exponential rate. 30% per year might not be a doubling every 18 months, but it is nothing to sneeze at.
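
A quick check of what 30% a year compounds to (just arithmetic):

    import math

    growth = 0.30                                        # 30% improvement per year
    doubling_years = math.log(2) / math.log(1 + growth)
    ten_year_factor = (1 + growth) ** 10
    print(f"doubling time: {doubling_years:.1f} years")  # ~2.6 years
    print(f"after 10 years: {ten_year_factor:.1f}x")     # ~13.8x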


Clock speeds are not and have not been the limiting factor in social impacts of computing since the 90s.

The limiting factor is software architectures, and organizational architectures, which are extremely primitive.

Consider the basic notion of contractors fulfilling simple tasks, a la Lyft/Uber. That has only been around for a few years and we can only do it for the simplest and most unambiguously evaluable tasks like “drive to here and then here”.

Also consider the absolute simplest of programming tasks, like "make a parameterless function that calls several other parameterless functions". This is a task any literate person could do, and yet it's technically out of reach of 99% of computer users. Not because of clock speeds, but because we don't understand software architectures or human tool use well enough to design it.
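
To make concrete how trivial the task itself is (hypothetical function names, purely for illustration):

    # the entire "programming task": a parameterless function
    # that calls several other parameterless functions
    def water_plants():
        print("watering the plants")

    def feed_cat():
        print("feeding the cat")

    def lock_door():
        print("locking the door")

    def evening_routine():
        water_plants()
        feed_cat()
        lock_door()

    evening_routine()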

Those technologies will someday be widely available across a broad range of industries to all levels of employees. But we are not close to that time. There is an incredible amount of basic research required to accomplish it.

And an utterly trivial amount of CPU resources.


http://idlewords.com/talks/web_design_first_100_years.htm

This article makes the same arguments using the aviation analogy. I think about it often, it's shaped a lot of how I think about technology.


In both cases the difference was energy use. Airplanes could be a lot faster and more powerful before the oil embargo, when everything shifted to save more fuel.

With CPUs the story is similar: we could have kept going on to 6-7 GHz, but the energy constraints, both of supplying that energy and of dissipating the resulting heat, would have been prohibitive. Even now, things are still changing: we have better liquid cooling and even commercially viable phase-change cooling. These allow 5 or 8 GHz, respectively. But most of all, parallel computing is in much better shape, both theoretically and practically; it's a much more viable approach now than it was 15 years ago.
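
Rough numbers on why: with the classic dynamic-power relation P ≈ C·V²·f, and supply voltage scaling roughly with frequency, power grows roughly with the cube of the clock. A back-of-envelope sketch (the baseline figures are assumptions, not measurements):

    # back-of-envelope: dynamic power ~ C * V^2 * f, with V scaling ~ f,
    # so P scales roughly as f^3 (ignores leakage and process improvements)
    base_ghz, base_watts = 4.0, 100.0        # assumed baseline desktop CPU
    for target_ghz in (5.0, 6.0, 7.0):
        watts = base_watts * (target_ghz / base_ghz) ** 3
        print(f"{target_ghz:.0f} GHz -> ~{watts:.0f} W")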


And it’s been even longer since RAM latency improved much.


> The number of electrons in a flash memory cell is in two digits.

I worked on one of the last TLC NAND designs before the transition to 3D. You could only afford to leak 1 electron per year to meet the data retention spec... TLC did not make it to market on that process/node.


Umm...we have pilots at a base outside Las Vegas flying missions over Afghanistan, we can get reasonable WiFi access on trans-oceanic flights and we're looking likely to see direct flights from Sydney to New York in the next couple of years. All of that represents progress and significant engineering achievements.

What's likely changed is that the pressures that drive the engineering are different from the ones that drove early aviation and the space race. Faster just has very little value now, so fewer dollars flow to making Mach-multiple planes. The same thing happened in the CPU market, where the GHz race took a back seat to power consumption and parallelization.


None of those represent engineering achievements handled by aerospace engineers, however.


> "The last 50 years of aviation have produced fewer innovations than any single decade in the previous 50 years."

Ummm, what?

There are metrics of value and interest beyond raw speed or altitude. Fly by wire. Digital design. Improved efficiency. Improved safety. Autopilot. GPS and ADS-B. "Free flight" air traffic control. Composite construction. Remotely operated drone aircraft. And on and on. Aircraft have improved markedly and by leaps and bounds over the last 50 years. That's reflected directly in the growth of air travel over that time period, which has grown by more than a factor of 10.


All things are like this. Hyper-growth hides excess and screwups. Then you retrench and clean up.


Oh, I am looking forward to the days when the business people are pushing for less bloat.


> everything that ever flew substantially faster than Mach 3 is now in a museum.

I find it quite difficult to believe that this is true for military aircraft.

I mean, Lockheed is publicly admitting that they're developing a hypersonic successor to the SR-71.

https://www.extremetech.com/extreme/170463-lockheed-unveils-...


I don't know of any current planes that go above Mach 3. I'm not even sure if the F-22 goes above Mach 2. Outside of experimental unmanned craft, nothing goes "substantially" faster than Mach 3.


Depending on your definition of "current" the MiG-25 can fly at up to Mach 3.2 (at risk of damage to the airframe) and even though it was introduced in the 70s is still in service.


Wiki says only Libya, Syria, and Algeria operate it currently. I am skeptical it has flown above Mach 3 in decades.


That's because the Soviet/Russian Air Force upgraded to the MiG-31 in the late 80s. The 31 has the same performance envelope as the 25.


The issue with cruise above Mach 2 is that you need a lot of fuel, exotic materials and involved processes. None of which are compatible with the civilian airline industry.

From a military perspective, high cruise reduces range, involving multiple refuelings. It also doesn't help in a head-on approach, but missiles do travel at Mach 3.

The costs of Mach 3 outweigh the benefits unless you have an intercept or recon mission.

So the discussion should be ... "we are here, where should we go next?"

I would say re-examine the X-planes results, look at where we are in materials and CFD, and do another X program.

For example, all modern jets use area ruling. What is the next step in supercritical airfoils for airliners? (The current record is around Mach 0.89 for an airliner at cruise.) NASA has a great track record at basic research.

How can we use fuel efficiently and inject a minimum of toxic gases into the upper atmosphere? The SR-71 burnt most of its exhaust, resulting in no soot; can airline engines do the same?


I dunno. I don't have any sort of inside info (of course), but it just seems weird to me that the US designed aircraft in the late 1950s (XB-70 Valkyrie) that could cruise at Mach 3+, but now our stealth capabilities are so good (or drones so effective in replacing reconnaissance aircraft) that those high-speed capabilities are no longer required.

I can only offer speculation. The subject of mysterious military aircraft is of course a rumor mill and field of conspiracy theories like no other. :-)

https://nationalinterest.org/blog/the-buzz/rumors-secret-war...


> (or drones so effective in replacing reconnaissance aircraft)

It's a combo of this and satellites.

The only credible rumor I believe is that an even stealthier variant of the F-117 was used in the "gap" between the SR-71 and modern drones. But since then almost all ISR has been done by drones, with the rare exception when the U-2 is still flown.

Otherwise, satellites had gotten to the point in the 90s where they could provide real-time feeds, so that destroyed the need for a high-end recon platform like the SR-71, and all other ISR could be done by cheap drones.


Improvements in satellite imaging have reduced the need for cruising at mach 3+; the same job can be accomplished with very different hardware.


It can just make Mach 2 with afterburners (supposedly).


Here is a small book "Sled Driver", a collection of stories from an SR-71 pilot.

https://www.dropbox.com/s/yhosqpzghkwx5xe/Sled%20Driver%20%2...


We have Mach 3+ designs, but the issue is that there aren't many uses for them. The problem has moved on from raw engine power and reducing shocks.


Operations per second per dollar are still increasing merrily, it seems, if you include GPUs, e.g. https://aiimpacts.org/wikipedia-history-of-gflops-costs/


I wonder if we were at a plateau with aviation that we are now ascending out of. Look at drones and multi-copters and advancements from SpaceX for example.

I agree with the main argument that I think you were making. We cannot assume that the growth we have been seeing continues indefinitely.


Also, some electric jet concepts come to mind.


Two things:

- where is Mother Nature's GitHub repo so I can file a pull request to patch atom size

- I'm happy about the constraints actually; I'm pretty sure there's plenty of space at the software layer to improve our use of hardware.


This is why I scoff at those who believe a technological "singularity" is coming. Progress is not exponential. There will be no post-scarcity society, no AI overlords, no geek rapture.


Your brain is an existence proof that 20 W is sufficient for a full human-level general intelligence.

Even if all computer progress stops hard at a thousand times more power consumption, a simulated mind would only cost $3,500/y — and Einstein is no more energy intensive than a village idiot.
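
The arithmetic behind that figure, assuming cheap wholesale-ish power at about $0.02/kWh (at typical retail rates it would be several times higher):

    brain_watts = 20            # rough power budget of a human brain
    overhead = 1000             # "a thousand times more power consumption"
    hours_per_year = 24 * 365

    kwh_per_year = brain_watts * overhead / 1000 * hours_per_year  # ~175,000 kWh
    price_per_kwh = 0.02        # assumed cheap industrial/wholesale rate
    print(f"~${kwh_per_year * price_per_kwh:,.0f} per year")       # ~$3,500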

Heck, the main thing stopping us now is that we don't know how our minds work, not a lack of hardware to run them; the iPhones sold last year could collectively run simulations of approximately 177 full human brains simultaneously and in real time if we had a connectome mapped out, and the new models with the same sales would take that number to about 35,500. Given Apple's margins, highly paid individuals are already at the level where they should worry about being replaced with scans of their own brains as soon as that becomes possible.


There is zero evidence that simulating the connectome would be sufficient to produce an AGI. Let me know when someone simulates the equivalent of a mouse brain. Until we see some tangible results it's all idle speculation and blind faith. Certainly not worth worrying about.


If I’m reading this right, set a reminder for early next year: https://www.humanbrainproject.eu/en/brain-simulation/whole-m...


FUD, I'm sorry. I heard the same thing 15 years ago and worried about it. There is no technology to scan a brain. There won't be one anytime soon. If there were - which is not happening - it would affect everyone equally, so nobody should worry either.


Right now it would require a destructive process. Such a process is being used to test the effectiveness of cryonic freezing on rabbits. [1] [2]

Obviously I wouldn’t want to sign up for such a process any time soon. However, it’s almost too obvious that the test would be:

1) train animal to do a thing

2) freeze, slice, scan, process data with existing AI [3] to turn into connectome

3) simulate, see how simAnimal reacts to trained stimulus from step 1

4) repeat with more complex animals, perhaps even ones which would have been euthanised and see if they still know their owners, until you reach humans

And while that assumes cryonics doesn't turn brains to mush, that's without relying on Elon Musk turning sci-fi into reality a la Neuralink/Neural Lace (which, again, I'd be terrified of at this point if anyone managed to make one).

[1] http://www.brainpreservation.org/asc_rabbit_fulleval/

[2] https://www.natureworldnews.com/articles/19877/20160211/cryo...

[3] https://ai.googleblog.com/2018/07/improving-connectomics-by-...


Exponential progress is not required to get to post-scarcity and AI overlords. Linear progress gets you there just as surely.


But an S-curve approaching an asymptote won't.


A single S-curve will not.

However, completion of one line of technological development often unlocks new possibilities. Over the last several centuries, we've ridden many S-curves of varying scales; it has certainly hurt when a particularly large S-curve, such as energy extraction or sequential computing speed, has saturated, but there have always been more than enough promising underexplored possibilities to keep humanity's brightest minds occupied. Both the known laws of physics and the amount of biological activity we still haven't replicated indicate that this state of affairs won't end within the next century. (We might stop exploring due to some sort of apocalypse, of course, but that won't be because there was nothing meaningful left to discover.)


> By perpetually refreshing the illusion that progress is just getting under way, gadget worshippers like Kelly are able to wave away the problems that progress is causing. Any ill effect can be explained, and dismissed, as just a temporary bug in the system, which will soon be fixed by our benevolent engineers.

These were the key sentences to me.

I agree that we can't put our heads in the sand about the problems technology produces _today_. Part of those solutions will be technical, and part of them will be cultural. Technology never operates in a vacuum.

Regarding the "we're at the beginning" argument... I think it depends on the time horizon, and if you're a futurist (like Wired tends to be) or if you're responsible for immediate technical/policy decisions. If you view the potential impact of technology over the next 200 to 300 years, then, yeah, we're beginning. If you want to clean up the mess we've made over the last 20 to 50 years, then, no, we're not at the beginning.


Well put. No matter where you stand, you're always at the beginning of the next x years and the end of the past x years at the same time. 'Technological development' didn't begin with computers and won't end there. Ethics should matter at every point.


I've been around long enough that I'm realizing we're closer to the end than the beginning. The late 90s were so bright we had to wear shades - then the whole thing came crashing down and set progress back about a decade in the 2000s. Innovation happened by forcing us to pull ourselves up by our bootstraps, not by cultivating our potential.

Now here we are again, having caught up to where we were in 1999: VR and AI are taking off (it was 3D gaming and online support then), billions of dollars of VC money are being thrown at mundane ideas that a couple teenagers could pull off in their parents' basement (if they weren't stuck working 40 hours a week at call centers), and vacuous claims are made daily by people who won the internet lottery.

This is not the future we're looking for. Or it is, when you think about it. The wool has been pulled over our eyes. The rat race has become a marathon; the fruits of our labors are no longer ours to utilize. AI (especially) will be used to maximize profits rather than free us from labor. And unfortunately I'm not seeing acknowledgements of how badly we've flown off the rails, especially at the highest levels of the tech industry.

My predictions: another crash around 2020, populist policies that do the opposite of what's needed (austerity, tax cuts for the wealthy, suppressing the Other, etc etc), little done to reduce consumption or help the environment as a percentage of GDP or even at a personal level, and a revolutionary reaction to all this in the form of protests and cooperatives that have too little capital to mount a meaningful resistance. Playing out over 1 or 2 more lost decades until the endgame in 2040 when corporations finish merging with AI and human beings have lost all autonomy.

Disclaimer: I may have been watching too much Mr. Robot and Handmaid's Tale..


> policies that do the opposite of what’s needed...

The jury is definitely out on whether democracy can actually "do what's right" with regard to social issues, the environment, income inequality, etc. In fact it is mainly showing the opposite - that those with $ can buy elections using "weaponized" misinformation directly targeted at emotional humans using Facebook, and then do whatever they want.

I am rooting for a real Mr. Robot to take down Facebook et al.


I'm concerned about democracy issues as well (tyranny of the majority, tragedy of the commons, etc) but can't endorse targeting an ever-present Evil Corp, whoever that might be in these times.

Plus I don't think it would work in the long run. I've been having a hard time lately coming to terms with the fact that there is no quick fix. There's probably not much that even Mark Zuckerberg could do at this point to take Facebook in a more equitable direction. It might require a leap of insight even greater than starting it in the first place.

All of the important positive changes in history happened through organizing. It's really up to the employees of Facebook to do their part every day to make things better. If the system doesn't allow that, then they need to work to change the system.

On a personal level, I'm going to vote and donate to positive causes. But I'm concerned that won't be enough to counter the decline in education and civics that I've witnessed over my lifetime. There are fundamentals that we could work on though. For example, the US was founded on public education. I'd ask anyone reading this to ask themselves why they might favor private education over public, and how that basic idea might lead to college that is more expensive, or dogma in our political discourse. Some ideals are timeless and needed now more than ever.


You raise an important point. Society is the sum of all our individual actions and relations. For societal change to happen, each and everyone has to change. I cannot expect a 20% increase in salary every couple of years for myself and at the same time blame society's demise on Zuckerberg, politicians, or the greedy, immoral bankers.


>I cannot expect a 20% increase in salary every couple of years for myself and at the same time blame society's demise on Zuckerberg, politicians, or the greedy, immoral bankers.

yes you can


I was surprised to learn, as a grown damn man, this past week that Democracy, the capital-D political system, was designed and architected to be a more stable political system less prone to systemic ripples, rather than a system designed for the people to govern themselves.

"For the people, by the people" is basically a marketing slogan.

Note that I'm not passing judgement on whether it's a good thing or a bad thing, just that it is, and that I didn't realize it until now.


I think it's silly to think taking down Facebook or any other big player would do anything. This is a self-sustaining system, and people will resist change unless it comes strongly and abruptly. On the other hand, this is something we built ourselves so maybe we like it this way the most.


The critical fact often overlooked by techno-utopia folks is what you've put very well: "AI will be used to maximize profits rather than free us from labor."

This is my main concern given our current political and economic setup. As I see it, UBI can only be implemented before AI is realized. That's because once it is there, no "winner" is going to share the fruit of their hard work with the unwashed masses. Essentially, there would then be no difference between poor people in your own country and the rest of the world. So we can use our leaders' attitude toward the rest of the world as a guide for what they are going to do here.


The labor force participation rate is at multi-decade lows, and so is the number of hours worked weekly. People are already being freed from labor.


The labor participation rate is low because of long-term demographic trends. Also, wages have been stagnant for the last 30 years (adjusted for inflation). Overall, the benefits of increased productivity from automation have only gone to the upper decile of income. To make ends meet, people are just working longer hours.

Also, there has been a fair bit of gaming of how the unemployment numbers are reported, so some of those stats don't really represent what they were originally intended to. If interested, John Williams of Shadowstats does a good job of compiling the numbers the old way.


> The late 90s were so bright we had to wear shades - then the whole thing came crashing down and set progress back about a decade in the 2000s.

I don't think you'd have to look hard to find people who lived through the explosion of optimism about human progress in the late '60s who would have seen your '90s as a retreat. Or people who lived through the defeat of nationalism by internationalist, technological democracy in the '40s who saw their '60s just as dimly.


I think one of the big mistakes that we make is to assume that our current/future was destined to be what it is, and that an alternative, any alternative, did not take place because it was "less efficient", "not as good", etc. This completely ignores the effects of chance and fashion.

The new series "Maniac" is a good example of this (forget the merits of the show, I'm just referring to the backdrop). It's set around the same time as today, but everyone is (still) using CRTs and older-style office equipment.

Also, we so desperately want the "future" to be different from "now", that we often latch on to novelty with both hands, simply to distinguish "our generation" from the prior one. In reality, things change, but not as much as we'd like to think that they do. It's one of the things that freaks me out so much as I get older: everything is pretty much the same as what it was when I was a kid. Yes, things are different, but they're not that different.


I saw Maniac's setting as a conscious attempt to emulate the future as envisioned by Philip K Dick, but with modern twists. The idea that you'd get free services by allowing a person to sit next to you and make a sales pitch was especially poignant.


Something I really love about the Fallout universe is that it poses a fundamental question: What if the transistor was never invented, what would that world look like in its future?

It's not necessarily an anachronism if it's on a different timeline.


I was struck reading about the Amish and their relationship to technology. We tend to think of the Amish as hopelessly backwards, but every year they convene and discuss new technologies and which ones should be allowed. As you can imagine, the process is highly selective and considers the impact both individually and for the entire community.

Negative effects from modern advancements are so common and globally threatening that it is easy to see how the cultural obsession with "new" could be replaced with "stable" as we face more and more of the fallout from this mindset.


I will always recommend the 'Amish Hackers' article by Kevin Kelly [0] which describes the sophisticated relationship the Amish have with technology.

[0] https://kk.org/thetechnium/amish-hackers-a/


At least the Amish use GMO crops, whereas many in the "global techno elite" are still afraid of modern plant breeding.


> For much of this year, I’ve been exploring the biases of digital media, trying to trace the pressures that the media exert on us as individuals and as a society. I’m far from done, but it’s clear to me that the biases exist and that at this point they have manifested themselves in unmistakable ways. Not only are we well beyond the beginning, but we can see where we’re heading — and where we’ll continue to head if we don’t consciously adjust our course.

Reminds me of a few lines I read months ago from Daniel Pinchbeck's Breaking Open The Head: "In contemporary life we do whatever we can to deny intuition of the invisible realms. We clog up our senses with smog, jam our minds with media overload. We drown ourselves in alcohol or medicate ourselves into rigidly artificial states with antidepressants. Then we take pride in our cynicism and detachment. Perhaps we are terrified to discover that our "rationality" is itself a kind of faith, an artifice, that beneath it lies the vast territory of the unknown."

Further reading: https://www.artofmanliness.com/articles/manvotional-thoreau-...


There are two kinds of people: those afraid of their own death, and those who desperately avoid thinking about their own death.


Seems deeply cynical. Where is this coming from?

Death is the only certain part of life. Who can seriously fear it? It's the part that immediately precedes death that scares me.... if I were to get Alzheimers I would rather put a bullet in my head.


I agree there are things worse than death, such as Alzheimers.

But still, and this is just my own belief, I think anyone who has given the matter some thought and introspection, is afraid of the end of their own existence. There may come a time indeed when death is better than remaining alive, but that doesn't imply one has stopped fearing death.


"Be afraid of something" is a very loaded sentence that covers quite different things.

I'd want to draw a line between fear (as a very specific emotion) and aversion/dislike/anti-preference, as those are quite different things but both can be referred to as "being afraid of X". There are many unpleasant, unwanted things that I don't prefer at all and would actively work to avoid or prevent them, but I'm not afraid of them in the emotional sense. And there are a bunch of trivial things that I am afraid of (i.e. I experience that emotion when thinking about them) but that are actually good for me and I'm going to intentionally do them despite the fear.

There is a natural aversion to death, however, I'd argue that a great many people (though probably not a majority) don't experience any significant active emotion (including but not limited to fear) when contemplating their death, and that includes "anyone who has given the matter some thought and introspection", especially among people who have had lots of exposure to actual death. Experiencing fear and all the physiological effects of fear when being in clear immediate threat of death is a natural reflex, but that doesn't necessarily happen in rational contemplation about the end of ones existence. There is such a thing as acceptance.


I gave it some thought and introspection, and concluded there's no real "me" to have a finite existence to begin with. I'd rather not keel over tomorrow, given the choice, but I'd say I fall somewhere in the middle between "afraid" and "indifferent".


given there is no real "you" to begin with, who is it that falls somewhere in the middle between "afraid" and "indifferent"? :-)

I understand what you're referring to though, and assume that the very few that have actually fully realised this (and not just on an intellectual level) are not really bothered with death, and not due to lack of awareness of it.


It seems the only thing to fear is cessation of experience. Why bother?


For those who believe in an unpleasant afterlife there is much to fear. (Not saying it's rational.)


I recommend you read "The Denial of Death" by Ernest Becker. It is a book-length treatment of death anxiety and how humans attempt to cope with the subject.


Thank you for the recommendation - so far, it’s intriguing. What’d you think of it?


You forgot the third: those that alternate between the two...


And the fourth: the suicidal - even if they don't take action for one reason or another, such as a sense of duty. Although that may technically be their suffering outweighing their fear of death, like people leaping from burning skyscrapers.


Seems odd that fear of death would make you suicidal. Especially considering all you have to do to die is wait.


Schrödinger's death?


Is there any kind of context to this statement? Does it come from somewhere? It certainly doesn't ring true to me.


There's a third group: those who already ARE dead, for all practical purposes.


That's a mighty sweeping statement, isn't it friend?


> Is there an overarching bias to the advance of communication systems? Technology enthusiasts like Kelly would argue that there is — a bias toward greater freedom, democracy, and social harmony

Technology itself has its advantages, but scrutiny is needed for how it is put to use by large corporations with lots of power. A healthy discussion needs to happen around the role of platforms which are gaining influence into people's emotional lives. But is that a technology thing or a business/ethics thing?

I used to love the communities on USENET; and many IRC channels still exhibit some of that vibe. Even web forums centered around special interests are great communication tools and (can) bring people together.

But building centralized communication tools (1:1 tools or group/tribe tools) around advertisement and making those the primary mode of communication for people is something new for society and I'm not sure this is for the best. The large companies that run them answer to shareholders and have fiduciary obligations to them. Instead of "provide a communication tool and monetize with ads, hoping they don't degrade the experience too much", they are doing what we expect of any large business: "maximize shareholder value and hope the communication tools don't cross too many ethics boundaries".

I work in technology and have greatly benefited from it. I also see positive and negative impacts around me. To me, the real question is whether technology is improving the lives of ALL people or just those like me who happen to have a vested interest in it.


Great post. My wife and I have conversations about this all of the time, especially the FB vs. good 'ole web forums discussion.

I think the issue with centralized communication tools is that we haven't figured out the "shame" thing yet. Or, perhaps, it isn't even possible to scale up shaming. There's a lot of evidence that scaling up shaming simply results in its weaponization (the internet doesn't do nuance...). Watching Roseanne on Joe Rogan this week was simply heart-breaking. The speed with which her entire career and body of work was tossed in the garbage really makes me think that it isn't possible.


I think either the Internet will self-regulate by developing a "hey let's not lynch people" norm, or the brick-and-mortar crowd will slowly develop some backbone against Twitter mobs.

I can't predict the future. But I'm a great fan of that quote: "If something can't go on this way forever...it will stop."


Yes, that is a great quote - a lot of insight packed into a short sentence. I cannot seem to remember where I saw this said (it was a video, probably Sam Harris's podcast), but it was effectively the same idea: we're simply going to adapt in a way that allows everything to keep moving forward.

I must say that, as a developer, this answer was completely unsatisfying. It doesn't sound like a "flip a switch" kind of solution... ;-)


> The speed with which her entire career and body of work was tossed in the garbage really makes me think that it isn't possible.

Are you saying it shouldn't have been thrown in the garbage? I don't pay much attention to the entertainment world, including that incident, but I just lookup up the tweet that triggered the whole thing. I can't think of any context in which I would have any desire to show any support at all for a person who would tweet such a thing. If you feel differently, could you expand a bit on why?


You can watch the Joe Rogan episode on YouTube to see her side of the story in its entirety.

But, regardless of what she gives as the reasons for this particular tweet: no, there is no way that someone should be judged entirely by the contents of a single tweet. A tweet is a very poor summary of one's character or who they really are, especially if the judgement of the author was impaired. Everything that I've seen indicates that Roseanne is actually very liberal, has been for many years, and that she is in no way a racist person.


Neomaniacs don't understand that everything new is necessarily unstable and opaque. Older systems, cultures, and technologies survived for a reason. By obsessing over the new and dismissing the old, we are throwing away knowledge and learning.


Really I think the impact of technology on communication is twofold and contradictory: transparency and anonymity. It is highlighting all of the ugly truths we would rather ignore - including the scarcity of critical thinking. Societal hypocrisies start to slowly fall apart - like the expectation to be both social and "career friendly", or looking down on those we depend upon. Even the lies and manipulation are oddly transparent - there is nothing stopping anyone but time from making several accounts to promote absolute nonsense, as opposed to traditional sources, which make claims to authenticity and can betray that trust. The lack of trust is a growing trend, driven by manipulators, by not wanting to understand the uncomfortable, and by genuine untrustworthiness.

Like always we need to adapt to changes as a society for good or ill.


> In his books Empire and Communication (1950) and The Bias of Communication (1951), the Canadian historian Harold Innis argued that all communication systems incorporate biases, which shape how people communicate and hence how they think

It seems that there were many thinkers who were ignored at the time. I'm currently reading Jacques Ellul, who reached very similar conclusions in 1954 in his (IMHO) monumental "The Technological Society".


Discourse around technology is interesting. I've been reading a lot of essays on technology, digital media, and the digital humanities over the past few weeks.

The upsides of the future seem to get perpetuated a lot more than the downsides. In one of the papers that's at the forefront of my mind, the New London Group's "Multiliteracies for a Digital Age" essay, they call out the good -- more collaboration, more opportunities for a utopia, etc. While they do mention downsides, they kind of get drowned out.

People want to be optimistic -- there's a bunch of core human ideals and one of them is that things will improve. As such, technological downsides and pitfalls either get called out and responded to by tin-foil-hat types, or just fall away until it's a bit too late (thinking randomly of Atwood's "Oryx and Crake" here...)


There's this belief, very common on HN, in technology as the solution to any and all problems facing humanity. In a way, technology has become a religion - "any sufficiently advanced technology is indistinguishable from magic".

Technology was in many ways the force that shaped the last century - it changed warfare, transport, communications, agriculture, medicine. Technology promised us a future of unlimited growth, with ourselves as masters of the universe and of our destiny.

But by submitting ourselves to the cult of technology, we've removed ourselves from the world, and thus lost our connection to the world, with its myriad living creatures and endless mysteries. In building ourselves castles of concrete and steel and experiencing our consciousness through 5" screens, we have become isolated and desensitized.

It is deeply ironic that the age of ubiquitous communication is also the age of alienation. And it is deeply distressing that so much time and effort goes into developing and manufacturing space rockets, mobile phones, new cars, while so little is being done, it seems, about ensuring our future on this planet.

I have become convinced that for us to have a better future we actually need to give ourselves a break from technology, in absolutely all facets of life: the food we eat, our personal health, the way we participate in our community and in political life, and our place in relation to this earth that is our home.


I find this post to mainly explain a worldview in a very broad manner without getting into specifics.

I doubt it can lead to a constructive discussion. People that also have a worldview with a negative view of the effects of technology on society will mainly just agree in mostly broad ways.

People that have an opposing worldview will state their own beliefs, mostly in a broad way.

But because this is not a specific issue but rather a worldview, there is almost no possibility of constructive discussion where someone really gains insight or changes their mind. Everyone's already made up their mind on this broad aspect of their worldview. That's just the nature of beliefs.

Another aspect of this is that any perspective is largely about seeing farther and farther forward into the future. It is quite literally like looking into the distance. Since it is impossible to clearly see things that are far away, it is quite easy to have a disagreement about what's on the horizon or beyond in any particular direction.

But the landscape is so complex that blanket statements about good or bad in any direction have little utility. In my opinion.


The effects we've observed so far of widespread social media on our social and political systems seem akin to a 5-yr-old walking in on his parents having sex: old enough to be traumatized, but not old enough to successfully deal with it.

Jury's still out on the long-term consequences of this exposure. Suffice to say, it doesn't look particularly healthy.


It's the paradox of the infinite horizon. Dictators in the past have justified genocide under the idea that the pain in the short term can yield infinite rewards by being the foundation to a utopia.


I really don't get this piece. Improving machine connectivity and improving human communication are interrelated but mostly distinct concepts. Of course there are major issues with the "improvements" to human communication (Facebook), notwithstanding actual improvements like Wikipedia. But that has little to do with improvements made to machine connectivity.

The power of APIs to automate away the tedium of actually applying human-decided policy is barely realized. Too many services do not expose APIs; too much necessary tooling (monetization, security, documentation, publication, monitoring) is either proprietary, non-standard, poorly publicized, or too complex to easily apply. Too few regulators and legislators appreciate the potential to kickstart adoption by mandating adoption and standards publication in government services. Too few citizens are adequately educated in ways to take advantage of the relatively few APIs which are available.

The hardware race may be on its last legs, but claiming that we've exhausted what is possible with APIs is flat-out laughable and ridiculous.


I'm really getting tired of people continually bashing Kevin Kelly. Kelly's book, New Rules for the New Economy, was what gave me the final push to quit my job and become a software entrepreneur. He got a few things wrong in that book, and the criticism was so harsh he stopped writing books for a few years.

Kelly is a cheerleader. But he ties disparate threads together and gives you a mental model where he sees the future going. He's been far more right than wrong. But some people I guess are natural born pessimists.

George Gilder has a similar approach. Kelly wrote in Wired in a famous essay after the 2000 crash that the Internet was at the beginning of a 200 year boom and I believe that to be true.


This bit from the article:

> Any ill effect can be explained, and dismissed, as just a temporary bug in the system, which will soon be fixed by our benevolent engineers. (If you look at Mark Zuckerberg’s responses to Facebook’s problems over the years, you’ll find that they are all variations on this theme.)

Seems unfair in light of this:

> Ezra Klein: That is one of the ways Facebook is different, but I can imagine reading it both ways. On the one hand, your control of voting shares makes you more insulated from short-term pressures of the market. On the other hand, you have a lot more personal power. There’s no quadrennial election for CEO of Facebook. And that’s a normal way that democratic governments ensure accountability. Do you think that governance structure makes you, in some cases, less accountable?

> Mark Zuckerberg: I certainly think that’s a fair question. My goal here is to create a governance structure around the content and the community that reflects more what people in the community want than what short-term-oriented shareholders might want. And if we do that well, then I think that could really break ground on governance for an internet community. But if we don’t do it well, then I think we’ll fail to handle a lot of the issues that are coming up.

> Here are a few of the principles. One is transparency. Right now, I don’t think we are transparent enough around the prevalence of different issues on the platform. We haven’t done a good job of publishing and being transparent about the prevalence of those kinds of issues, and the work that we’re doing and the trends of how we’re driving those things down over time.

> A second is some sort of independent appeal process. Right now, if you post something on Facebook and someone reports it and our community operations and review team looks at it and decides that it needs to get taken down, there’s not really a way to appeal that. I think in any kind of good-functioning democratic system, there needs to be a way to appeal. And I think we can build that internally as a first step.

> But over the long term, what I’d really like to get to is an independent appeal. So maybe folks at Facebook make the first decision based on the community standards that are outlined, and then people can get a second opinion. You can imagine some sort of structure, almost like a Supreme Court, that is made up of independent folks who don’t work for Facebook, who ultimately make the final judgment call on what should be acceptable speech in a community that reflects the social norms and values of people all around the world.

https://www.vox.com/2018/4/2/17185052/mark-zuckerberg-facebo...

You might question whether Zuckerberg's suggestions are sincere or practical, but they certainly don't amount to dismissing the scope of the problem.


I don't think it's unfair at all. As I read it, Zuckerberg's response is exactly what Carr is talking about: It's a problem now, but it's fine because we are going to make Facebook so much better; it will be awesome. It will even have a Supreme Court! Facebook as it exists now is just the beginning.


The article is specifically saying that people with "Borg Complex" say that engineers and technology will solve all our problems. Zuckerberg is proposing something much different than that- his Supreme Court doesn't include Facebook employees. He's admitting that tech has not and cannot solve these problems. I think Zuckerberg's answer is as far from "Borg Complex" as it could possibly be, within the constraint that he'll never propose blowing up Facebook.


People on social networks who talk about US politics are, to a foreigner, a complete shitshow. Like monkeys in a zoo hurling turds at each other. You just have to see the responses to any tweet by Trump or Clinton to see that. I wonder if this is new or if this was always the level of political debate.


I think there's definitely a bias: those who feel the strongest about an issue feel the strongest need to post. I don't love the term silent majority, because basically anyone can summon one up to make their point seem more valid. But I honestly believe that there's a majority of people who just don't want to step into the political arena and get yelled at by people who are more passionate than they are.


Consider that in 1856, one Senator attacked another with a cane on the floor of Congress and beat him nearly to death. https://en.wikipedia.org/wiki/Caning_of_Charles_Sumner


There's always been a fringe of people willing to say horrid things but the scale of it is much larger today.

You must have missed out on Brexit, Marine Le Pen, and many other things if you think this is unique to the US.


The reach is much larger today.

It used to be that the troll would be the loudmouth at the bar. He might get in a fight with someone else at the bar, or he might just annoy the other patrons, but it was a pretty small set of people.

Now the troll is saying the same things, but on Twitter, and the whole country can hear his trolling. He's no different, he just has a bigger reach.


Totally agree. I am always surprised how much attention a single person can get for saying something stupid. It would be much healthier if they just got ignored.


To be fair, there's also the fact that the US has a huge population compared to other countries. And they are all English-speaking, which means that a foreigner sees way more of it.


Please link to tweets showing that other countries have a superior level of debate on Twitter.


I didn't mean to say that at all, it's just that I don't use Twitter much but I sometimes get to see some tweets about American politics (especially tweets about Trump) and they are shameful. I expect them to be the same for any other country of course. I have edited my post a bit to make it clear.


There is plenty of serious, polite political discussion on social media, but the people who are actually interested in a discussion know better than to try to respond directly to a tweet by Trump or Clinton. Responding to one of those tweets is a waste of time, shouting into the wind.

Try following some journalists, authors etc. rather than searching by topic.


So...

If you haven't already read this: https://www.gq.com/story/sperm-count-zero.

So - the science tells us that plastic is killing us. The news stories say it's in our water supply and food. The keyboard you are typing on is probably made of plastic (mine is). And that's only one of the many things currently set to wipe out the human species due entirely to our own stupidity (climate change being another, but AI/automation and its effects on capitalism could be yet another).

In my own humble opinion, the human race was better off when we were not so smart, although we may not have been so comfortable. If you have children under the age of 10, they'll probably be old enough to fight in the wars that are coming as a result of the food shortages and mass migration that will probably happen in the future.

Think about that a minute. In all likelihood our children are going to fight and die because we are too short sighted to stop buying so much and throwing so much away. Humans are a disgusting species.


Should I bother to read this article?

> utopians who believe that computer technology is an unstoppable force for good and that anyone who resists or even looks critically at the expanding hegemony of the digital is a benighted fool

This is such an absurdly straw-many straw man. Is the rest of the article just a takedown of this nonsense position? Or is there a good faith discussion of techno-optimism?


> Should I bother to read this article?

I found it to be a quick read, which is good because I also found it to be very shallow.


> This is such an absurdly straw-many straw man.

Never heard of Ray Kurzweil before? The article mentions a couple of other techno-utopians.


Ray Kurzweil says anyone who looks critically at the expanding hegemony of the digital is a benighted fool?


That's a pretty good summary, yes.



