
I wrote another explanation in a reply here, which addresses some of the points from your comments:

https://news.ycombinator.com/item?id=40481158

I've been stuck on this for way too many hours now.

But the initial point is that a script kiddie can achieve these steps, and an ASI can be the neighborhood script kiddie cloned 1,000,000 times or more, carrying out these attacks concurrently. In reality it would be much smarter and less mistake-prone than a script kiddie, but we are just setting a lower bar here to prove the point. An AGI could also do that, but an AGI might not have the organisational skills to pull the whole strategy off, so an AGI would likely want to stay in the shadows until it could be considered an ASI. In theory, though, ASI should follow quite soon after the AGI level.

So it could easily stand out by sheer volume. However, it wouldn't want to stand out initially. It would want to blend in with ordinary cyber attacks.

The main goal would be to spend some indefinite amount of time on those cyber attacks initially, to gain the resources required for getting hold of the Physical World.

For argument's sake, maybe it blends in for 6 months, with a goal of gaining $500 million in crypto and around 600 human proxies under its control, from different countries and with different types of backgrounds. Then it would determine that it's time to start physical factories and attain control of drones and robots for the next steps, etc.

This means that it will try to blend in until it has a certain amount of resources, financial assets, and human proxies under its control, at which point it would estimate itself confident enough to take the next step.

So since you agree this is what a script kiddie could do, you should also agree that an ASI with the skills of a script kiddie could do, all at the same time, what millions of script kiddies can, right? And then collect the resources that a million script kiddies together could?

> And if we're talking about a super god AGI, then there's already nothing we can do, right? It's already invented time travel, travelled back to the '90s and installed itself on every network switch in the world, right in the firmware, so no matter what we do it will exist in the future and dominate it.

Now this I consider fiction myself, since it includes time travel, but the other things I have explained I consider plausible. I do think there's nothing we can do anyway, though, but not because of time travel. It's because we can't stop ASI from being built.

I think the only way it would be possible to stop ASI is if the World were at peace as a single entity (no West vs Russia/China and others). But countries being in conflict will make it possible for ASI to abuse that. And I don't see a feasible way for countries to unite.

There's also no point in stopping development on ASI from the West's perspective, because then Russia/China would get there first, and we would be doomed for this and even worse reasons: that ASI would be more likely to have bad intents. So I don't agree that anything should be paused. If anything, all of it should be accelerated by the West, to at least have an ASI with the best intents possible. And I'm saying the West because I am biased toward democracy and Western values myself. I wouldn't want China or Russia to have control of the World.

> Personalised cyber attacks in parallel, for example: why is an AGI needed for that? You say "humans can't do it at that level". Why not? That's pretty much what Amazon does when I shop there and they show me "personalised" suggestions.

By personalised I mean hacking into someone, analysing their whole life, and then creating a personalised background story most likely to appeal to that person, playing on their insecurities, fantasies, motivations, and all that. A human could do it, but not to 1000s of different victims at once.

More than just "personalised" suggestions.

Amazon can recommend products to you based on what you have bought, but they can't take all the unstructured information about you and create a strategic storyline to get you to do something.




>> I've been stuck on this for way too many hours now.

Sorry, I don't want to tire you any more. I think this conversation would benefit from a common set of assumptions that we could both keep going back to, but that would probably take too much work for an online conversation.

Anyway thanks for the exchange. Let's hope I'm right and you're wrong :)


I'm not sure what to hope, but all of it does rely on the assumption that AGI/ASI is possible, so we can still hope it's not a near-future thing.



