
I never found his argument convincing.

The main tension in his manifesto rests on an underlying assumption that isn’t accurate. He sees technology as a distinct force in opposition to humanity rather than an extension of humanity, and that is the source of his fears. There are a lot of reasons to believe that technology is not an opposing force, like the ideas of embedded cognition and embedded action.

I think technology is better understood as something connected to human intention, and we can evaluate technology by how well it maps onto our unique quirks and needs. For example, when you look at a beehive, do you see a diabolical prison made by bees for bees, or a complicated construction connected to a set of intentions that fulfill the needs of bees?




Hmm, I’m not sure I buy your contention. You can point to an individual instance of applying a specific technology and say “it is an extension of a particular human intention”, sure. But that doesn’t hold symmetrically: Person A’s application of Technology X to Situation 1 is an “extension of human intention”, while to Person B, who is also in Situation 1, Technology X is a “counter force in opposition to (Person B’s) humanity”. See, for instance, social media recommendation algorithms. I know a girl who can’t use TikTok because she loves pets and can’t look away from families grieving their pets - even starting from a blank slate, the algorithm quickly detects that grief videos engage her, and so inevitably fills up her feed with dead dogs. She absolutely experiences this particular technology as a distinct, and distinctly anti-human, counter force.

Even if every instance of technology has its origins as an extension of human intention, it is still valid to look at the unintended consequences that are anti-humanity, and it is valid to filter for just these negative experiences and say “technology is a counter force”.

That’s not disagreeing with your contention that it is an extension, by the way - it’s saying “also”.

It’s been a long time since I read Kaczynski’s manifesto and I only skimmed it at the time, so I can’t recall whether he makes this claim, or whether I had this thought myself as an obvious patch to his claims. But I think he did say this.


The issue with that argument is that it is way too general.

E.g., suppose one lion has sharper teeth. It uses those teeth to catch antelope more efficiently. The other lions see those sharp teeth as inherently anti-lion, because those teeth deprive the rest of them of delicious antelope.


This is exactly how it plays out with humans too, except 1) we can develop ways to sharpen our teeth, and 2) we can organize opposition to individuals who, according to the majority, behave unfairly. The resulting dynamics are obviously much more complicated than with lions, but the phenomenon is there.

I mean, imagine that the lion went on to catch more antelope than it needs to eat, and started to sell the surplus in exchange for cleaning its beard or some other services. The other lions then figure out this is not a bad strategy, and start doing their best to also capture a surplus to trade - perhaps going as far as inventing tricks and tools that let them hunt even more efficiently than the original lion with sharper teeth. This continues for a while, until the antelope population crashes and most of the lions starve.

Survivors, should they be smart enough, pass on the story of those events - aptly calling it the Tragedy of the Commons. Perhaps a couple of generations later, when a lion with sharper teeth starts setting up the antelope carcass trade again, they wisely band together to form a lion government, and dispense hunting quotas.

Thus is the story of civilization and technology, in a nutshell.
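
If you want that dynamic in miniature: here's a toy Python sketch - the numbers are completely made up, it just contrasts free-for-all hunting with an agreed quota on a shared, regrowing herd:

    # Toy commons: the herd regrows each year (up to what the range supports),
    # then each hunter takes what it wants - or at most the agreed quota.
    def simulate(hunters, quota=None, years=30, herd=1000, growth=0.3, cap=2000):
        history = []
        for _ in range(years):
            herd = min(int(herd * (1 + growth)), cap)     # regrowth
            per_hunter = 60 if quota is None else quota   # greedy take vs. quota
            for _ in range(hunters):
                take = min(per_hunter, herd)
                herd -= take
            history.append(herd)
            if herd == 0:
                break                                     # the commons has collapsed
        return history

    print("free-for-all:", simulate(hunters=10))            # crashes to zero within a few years
    print("with quotas: ", simulate(hunters=10, quota=25))  # settles at a stable herd

Unrestricted harvesting crashes the herd in a handful of iterations; the quota run stabilizes. Same story as above, minus the sharp teeth.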


Tech can start out as an extension, but what people fail to understand is that it can slowly become autonomous over time, kinda like how a party gets out of control and people start putting holes in walls.

I think technologists rarely, if ever, talk about this, because a lot of what enables our blind pursuit of technology is the claim that technology must be an extension of humans and so can never become autonomous. When people talk about AGI and paperclip maximizers, I think it's a way of pushing the problem far out on the horizon and ignoring that the boundary is fluid and already shifting.

Imagine if all the people at Google and Facebook stopped thinking that they're making the world a better place?


I think you really nailed the ultimate question: Can technology have agency?

If not, then it's just another tool for humans. We are excellent tool users, and leverage everything we can to expand our senses and abilities. We already successfully wield tools of unimaginable power.

If technology itself can have agency, then it truly is a paradigm shift for the millennia. There has never been an entity that is better at tool-use than us humans. All bets are off.


I think this is all a red herring. At least until we crack AGI, at which point paperclip maximizers and other lethal agents of pure technology come knocking.

Point being: technology, so far, has never been autonomous. But technology also doesn't grow on trees, nor does it stick out from the ground like a valuable rock. Technology is actively invented, and requires costly reproduction and maintenance. It only sticks around if enough people deem it worth the resources needed to create and propagate it.

In other words: there is always someone commissioning the technology, someone with a use for it. When considering the gains and ills of progress, it is IMO wrong to focus on the technology itself. Especially when talking about the ills, doing so is a good way to let the actual cause of suffering remain hidden. Every technology that ever harmed anyone was commissioned and deployed by somebody - perhaps with ill intent from the start, or perhaps only repurposed for evil later. But it's not technology that does the damage, it's people - and, these days, organizations, meaning both government branches and businesses.

Going back to agency and autonomy - technology doesn't have agency, but people do, and importantly, large organizations seem to have separate agency of their own. Absent AGI, no tech will turn on all humans on its own - but a corporation might, and corporations wield the most powerful of technologies.


I think Nassim Taleb used the analogy of studying an ant or bee colony: it is not sufficient to study the ant or the bee in isolation, as it is the interactions between them and their respective colonies that shape the behaviour. Shifting the level of analysis up makes counterintuitive behaviours at the individual level (e.g. bees sacrificially stinging attackers) make sense.


A corporation is just a group of humans. There's also clear governance. The CEO makes the decisions and the board of directors has oversight. The shareholders elect the board members.

It's ultimately still a group of humans making the decisions, and they are almost always rational decisions; they may just not look that way from the outside, with only a partial view.


> A corporation is just a group of humans. There's also clear governance. The CEO makes the decisions and the board of directors has oversight. The shareholders elect the board members.

This is true in the same sense that a human is just a group of cells. There, too, is clear governance. The brain cells together make the decisions and the endocrine system provides oversight. Or something.

A corporation is a dynamic system. There are roles with various responsibilities, but no one - not even the CEO - is truly free to make decisions. Everyone is dependent on someone else; there are feedback loops both internal and those connecting the corporation to the rest of the economy. Then there's the information flow within, and the capability of various corporate organs to act in a coordinated fashion. All of that is mediated by a system called "bureaucracy", which, if you look at it, is nothing but a runtime executing software on top of human beings[0]. There are some good articles postulating that corporations are, in fact, the first AI agents humanity created. They just don't feel like it, because they think at the speed of bureaucracy, which isn't very fast. But it is clear that corporations can and do make decisions that seem to benefit the organization itself more than any specific individual within it[1].

--

[0] - You send a letter to a corporation, it is received, turned into a bunch of e-mails or memos traveling back and forth, culminating in the corporation updating some records about you, and you getting a letter in response. That looks very much like a regular RPC, except running on humans instead of silicon.

With that in mind, it shouldn't be surprising that the history of software is deeply tied to corporations, enterprise systems, office suites, databases, forms - all kinds of bureaucracy that existed before, but was done on paper. Software slots into these processes extremely well, because it's mostly just porting existing code so that it runs on silicon instead of humans - and computers are both faster and cheaper than people.
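
To put [0] in literal code - a toy sketch, nothing real behind it, all names made up:

    # The letter-to-letter round trip from [0], written as the RPC it resembles.
    # The runtime is normally clerks and memos; here it's a dict and a function.
    records = {}

    def handle_letter(sender, request):
        memo = {"from": sender, "ask": request}         # mailroom turns the letter into a memo
        records.setdefault(sender, []).append(memo)     # some department updates your file
        return f"Dear {sender}, regarding '{request}': noted."  # reply letter goes out

    print(handle_letter("T. Customer", "please close my account"))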

[1] - Compare a common observation about non-profit orgs, where lack of profit motive makes it clearer that, at some point, the whole organization seems to focus on perpetuating itself - even if it means exacerbating the problems it was trying to solve. C-suites and workers both come and go, leaving to be replaced by new hires - yet the organization itself prevails.


Humans are tool users. That's technology.

But the Luddites of the 19th century didn't like being devalued by tech. Like the Hulk, Luddites smash!

San Francisco's Great Fire of 1906 would not have happened if the city hadn't been plumbed for gas - new tech at the time that apparently did not have human safety in mind.

In the early 20th century there was an uneasy relationship with electricity, easily demonstrated by Thomas Edison's gruesome staged electrocutions of animals and also by the more whimsical cartoons of Rube Goldberg.

These stresses have been ongoing and increasing, the Unabomber just being one more example.

I think it has to do with people not being able to handle this much new complexity all at once. And that seems like a wonderful design problem.



