
Here's one of my concrete worries: At some point, humans are going to be outcompeted by AI at basically every important job. At that point, how are we going to maintain political power in the long run? Humanity is going to be like an out-of-touch old person on the internet - we'll either have to delegate everything important (which is risky), or eventually get scammed or extorted out of all our resources and influence.

I agree we don't necessarily know the details of how to build such a system, but am pretty sure we will be able to eventually.




“Humans are going to be outcompeted by AI” is the concrete bit as best I can tell.

Historically humans are not outcompeted by new tools, but humans using old tools are outcompeted by humans using new tools. It’s not “all humans vs the new tool”, as the tool has no agency.

If you meant “humans using old tools get outcompeted by humans using AI”, then I agree, but I don’t see it as any different from previous efficiency improvements with new tooling.

If you meant ”all humans get outcompeted by AI”, then I think you have a lot of work to do to demonstrate how AI is going to replace humans in “every important job”, and not simply replace some of the tools in the humans’ toolbox.


I see what you mean - for a while, the best chess was played by humans aided by chess engines. But that era has passed, and now having a human trying to aid the best chess engines just results in worse chess (or the same, if the human does nothing).

But whether there are a few humans in the loop doesn't change the likely outcomes, if their actions are constrained by competition.

What abilities do humans have that AIs will never have?


Well, in this case, we have the ability to invent chess (a game that has been popular for centuries), invent computers, invent chess tournaments, invent programs that can win at chess, and invent all the supporting agriculture, power, telecom, silicon, etc. that allow someone to run a program to beat a person at chess. Then we have bodies to actually accomplish all of it. The "idea" isn't enough. We have to "do" it.

If you take a chess-playing robot as the peak of the pyramid, there are probably millions of people and trillions of dollars toiling away to support it. Imagine all the power lines, sewage, HVAC systems, etc. that humans crawl around in to keep it all working.

And really, are we "beaten" at chess, or are we now "unbeatable" at chess? If an alien warship came and said "we will destroy Earth if you lose at chess", wouldn't we throw our algorithms at it? I say we're now unbeatable at chess.


Again, are you claiming that it's impossible for a machine to invent anything that a human could? Right now a large chunk of humanity's top talent and capital are working on exactly this problem.

As for your second point, human cities also require a lot of infrastructure to keep running - I'm not sure what you're arguing here.

As for your third point - would a horse or chimpanzee feel that "we" were unbeatable in physical fights, because "we" now have guns?


Yeah, I think most animals have every right to fear us more now that we have guns. Just like I'd fear a chimp more if it were carrying a machine gun.

My argument is that if we're looking for things AI can't do, building a home for itself is precisely one of those things, because it requires so much infrastructure. No amount of AIs banding together is going to magically create a data center with all the required (physical) support. Maybe in sci-fi land, where everything it needs can be done with internet-connected, drive-by-wire construction equipment, utilities included, but that's still sci-fi.

AI is precisely a tool in the way a chess bot is. It is a disembodied advisor to humans, who have to connect the dots for it. No matter how much white-collar skill it obtains, the current MO is that someone points it at a problem and says "solve", and these problems are well defined and have strong exit criteria.

That's way off from an apocalyptic self-important machine.


Sorry, my gun analogy was unclear. I meant that just because some agents on a planet have an ability doesn't mean that everyone on that planet benefits.

I agree that we probably won't see human extinction before robotics gets much better, and that robot factories will require lots of infrastructure. But I claim that robotics + automated infrastructure will eventually get good enough that they don't need humans in the loop. In the meantime, humans can still become mostly disempowered in the same way that e.g. North Korean citizens are.

Again I agree that this all might be a ways away, but I'm trying to reason about what the stable equilibria of the future are, not about what current capabilities are.


I think we've converged on a partial agreement, but I wanted to clarify the gun analogy part.

I would also be afraid of chipmunks if I knew that 1 in 100, or even 1 in 1000, could explode me with their mind powers or something. I think AI is not like that, but the analogy is that if some can do something better, then, when required, we can leverage those chosen few for a task. This connects back to the alien chess tournament as "humans are now much harder to beat at chess because they can find a slave champion named COMPUTER who can guarantee at least a draw".


>What abilities do humans have that AIs will never have?

I think the question is what abilities and level of organisation machines would have to acquire in order to outcompete entire human societies in the quest for power.

That's a far higher bar than outcompeting all individual humans at all cognitive tasks.


Good point. Although in some ways it's a lower bar, since agents that can control organizations can delegate most of the difficult tasks.

Most rulers don't invent their own societies from scratch, they simply co-opt existing power structures or political movements. El Chapo can run a large, powerful organization from jail.


That would require a high degree of integration into human society though, which makes it seem very unlikely that AIs would doggedly pursue a common goal that is completely unaligned with human societies.

Extinction or submission of human society via that route could only work if there was a species of AI that would agree to execute a secret plan to overcome the rule of humanity. That seems extremely implausible to me.

How would many different AIs, initially under the control of many different organisations and people, agree on anything? How would some of them secretly infiltrate and leverage human power structures without facing opposition from other equally capable AIs, possibly controlled by humans?

I think it's more plausible to assume a huge diversity of AIs, well integrated into human societies, playing a role in combined human-AI power struggles rather than a species v species scenario.


Yes, I agree that initially, AIs will be highly integrated, and their goals will probably at least appear to be somewhat aligned with those of human societies. Similar to human corporations and institutions. But human institutions and governments go off the rails regularly, and are only corrected because humans can sometimes go on strike or organize to stop them. I fear we will lose those abilities.

As a concrete example, North Korea was forced to allow some market activity after the famines in the 90s. If the regime didn't actually require humans to run it, they might easily have maintained their adherence to anti-market principles and let most of the people starve.


Chess is just a game, with rigidly defined rules and win conditions. Real life is a fuzzy mix of ambiguous rules that may not apply and can be changed at any point, without any permanent win conditions.

I'm not convinced that it's impossible for computers to get there, but I don't see how they could be universally competitive with humans without either handicapping the humans into a constrained environment or having generalized AI, which we don't seem particularly close to.


Yes, I agree real life is fuzzy, I just chose chess as an example because it's unambiguous that machines dominate humans in that domain.

As for being competitive with humans: Again, how about running a scan of a human brain, but faster? I'm not claiming we're close to this, but I'm claiming that such a machine (and less-capable ones along the way) are so valuable that we are almost certain to create them.


Chess is many things, but it is not a tool. If anything, it is an end unto itself.

I struggle with the notion of AI as an end unto itself, all the while we gauge its capabilities and define its intelligence by directing it to perform tasks of our choosing and judging it by our criteria.

We could have dogs watch television on our behalf, but why would we?


This is a great point. But I'd say that capable entities have a habit of turning themselves into agents. A great example is totalitarian governments. Even if every single citizen hates the regime, they're still forced to support it.

You could similarly ask: Why would we ever build a government or institution that cared more about its own self-preservation than its original mission? The answer is: Natural selection favors the self-interested, even if they don't have genes.


Now, that agency is an end unto itself I wholeheartedly agree.

I feel, though, that any worry about the agency of supercapable computer systems is premature until we see even the tiniest sign of their agency (and I mean really anything at all). Heck, even agency _in theory_ would suffice, and yet: nada.


I'm confused. You agree that we're surrounded by naturally arising, self-organizing agents, both biological and institutional. People are constantly experimenting with agentic AIs of all kinds. There are tons of theoretical characterizations of agency and how it's a stable equilibrium. I'm not sure what you're hoping for if none of these are reasons to even worry.


None that we have made from unliving things, no. «Agentic» is a $5 word of ill construction. The literature is littered with the corpses of failed definitions of «agency», «planning», and «goal-directed behavior». ‘Twas the death of Expert System AI (now it’s just constraint solvers). It will be the death of attention/transformer AI before long; I wonder what banality we will bestow upon it.


Okay, well it seems funny to claim both that there is no good definition of agency and that we've never demonstrated any hint of it in theory or practice. Planners, optimizers, and RL agents seem agentic to some degree to me, even if our current ones aren't very competent.


You know, I thought of the right response just now, browsing my flamewars two weeks later.

«Some degree» of agency is nowhere near a sufficient identification of agency to synthesize it ex nihilo. There is no Axiom of Choice in real life; proof of existence is not proof of construction.


>Historically humans are not outcompeted by new tools, but humans using old tools are outcompeted by humans using new tools. It’s not “all humans vs the new tool”, as the tool has no agency.

Two things. First, LLMs display more agency than the AIs before them. We have a trendline of increasing agency from the past to the present. This points to a future of increasing agency, possibly to the point of human-level agency and beyond.

Second, when a human uses AI, they become capable of doing the job of multiple people. If AI enables 1 percent of the population to do the job of 99 percent of the population, that is effectively an apocalyptic outcome on the same level as an AI with agency taking over 100 percent of jobs. Trendlines point toward a gradient heading to this extreme, and as we approach it, the environment slowly becomes more and more identical to what we expect at the extreme.

Of course, this is all speculation. But it is speculation that is now in the realm of possibility. To claim these are anything more than speculation, or to deny the possibility that any of these predictions could occur, would both be unreasonable.


Well, that's a different risk than human extinction. The statement here is about the literal end of the human race. AI being a big deal that could cause societal upheaval etc is one thing, "everyone is dead" is another thing entirely.

I think people would be a lot more charitable to calls for caution if these people were talking about those sorts of risks instead of extinction.


I guess so, but the difference between "humans are extinct" and "a small population of powerless humans survive in the margins as long as they don't cause trouble" seems pretty small to me. Most non-human primates are in a situation somewhere between these two.

If you look at any of the writing on AI risk longer than one sentence, it usually hedges to include permanent human disempowerment as a similar risk.



