
>> It could scale its cognitive ability very quickly by taking control of a lot of compute all around the world and then do cyber attacks at a pace never seen before.

"Scale its cognitive ability"- that's another huge assumption, based I believe on a misconception about the relation between "scale" of modern systems (data and parameters, and the compute needed to train them) and their cognitive ability. The cognitive ability of current statistical machine learning systems has not improved one bit with scale. What has improved is their performance on arbitrary benchmarks that can't measure cognitive ability.

We know this precisely because for performance to improve, more and more data and compute are needed. If cognitive ability were improving, we'd see performance stay constant, or even improve, while the amount of data and compute went down. That would imply an improved ability for inductive generalisation: learning correct, broad theories of the world from few observations. That is not happening. The needle hasn't budged in the last 30 years, and as long as things keep scaling without any improvement in generalisation, i.e. in cognitive ability, the pace of progress is actually going backwards.
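To put a toy number on that: in the scaling-law framework the field itself uses, loss only falls when parameters and data grow; there is no term for learning more from less. A minimal sketch (the constants roughly follow the published Chinchilla fit, but this is purely illustrative, not anyone's actual training run):

    # Toy Chinchilla-style scaling curve. Loss falls ONLY as parameter
    # count N and training tokens D grow; nothing here rewards learning
    # more from fewer observations. Constants are illustrative.
    def loss(n_params: float, n_tokens: float) -> float:
        E, A, B, alpha, beta = 1.69, 406.4, 410.7, 0.34, 0.28
        return E + A / n_params**alpha + B / n_tokens**beta

    for n in (1e8, 1e9, 1e10, 1e11):
        print(f"N={n:.0e}, D={20*n:.0e}: loss={loss(n, 20*n):.3f}")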

Far from moving towards super AGI gods, modern AI is stuck in a rut: it can't go anywhere without huge amounts of data and compute; or, alternatively, a little man or a little woman sitting in front of a keyboard and tapping out a rich and complex model of the world, of the kind only humans can currently come up with. The statistical machine learning community has made a virtue out of necessity and convinced itself that the way forward is to keep scaling things, because that's what has so far yielded gains in performance. But that's a bit like a bunch of computer programmers who never heard of complexity theory trying to solve NP-hard problems by building bigger and bigger computers, and comparing them to see which one benchmarks best on solving TSP or the knapsack problem, etc. You can keep measuring that kind of "progress" forever and convince yourself that scale is the solution to every computationally hard problem, just because you don't understand the problem. And that's statistical machine learning in a nutshell.
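(To see why scale doesn't beat complexity, here's a minimal brute-force 0/1 knapsack solver, the kind of thing those hypothetical programmers would be benchmarking. The instance is made up; the point is the 2^n in the loop structure, which a bigger computer only dents linearly:)

    from itertools import combinations

    # Brute-force 0/1 knapsack: checks all 2^n item subsets. Doubling
    # your compute budget buys you roughly ONE extra item, not a
    # solved problem.
    def knapsack(weights, values, capacity):
        n = len(weights)
        best = 0
        for r in range(n + 1):
            for subset in combinations(range(n), r):
                if sum(weights[i] for i in subset) <= capacity:
                    best = max(best, sum(values[i] for i in subset))
        return best

    print(knapsack([3, 4, 5, 9], [2, 3, 5, 10], 12))  # -> 12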




Please see this comment of mine and tell me which of those steps don't seem plausible:

https://news.ycombinator.com/item?id=40476212

> The cognitive ability of current statistical machine learning systems has not improved one bit

I don't mean improving cognitive ability, but scaling it. A single bad human actor can make one phishing call at a time. An ASI given enough compute could make millions at a time if it wanted to.

Same with the rest of the cyber attacks. It creates millions of copies of itself, each of them carrying out personalised cyber attacks in parallel. Humans and organisations can't operate at that level.
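(Just to illustrate the fan-out, not any actual attack: a single ordinary process can already juggle thousands of concurrent I/O-bound tasks. Everything below is a made-up placeholder:)

    import asyncio

    async def probe(target: str) -> str:
        # Placeholder for one "personalised" interaction; just sleeps.
        await asyncio.sleep(0.01)
        return f"done: {target}"

    async def main() -> None:
        # One process, thousands of concurrent tasks: the bottleneck is
        # compute and bandwidth, not the number of human operators.
        targets = [f"host-{i}" for i in range(10_000)]
        results = await asyncio.gather(*(probe(t) for t in targets))
        print(len(results), "tasks completed")

    asyncio.run(main())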


Thanks for your reply. That is a long list and I confess I only skimmed it, but while I don't think any of it is technically impossible, it's not something one needs an advanced (or less advanced) AI to do. In particular, it all seems to hinge again on the assumption that our friendly neighbourhood AGI can spin up an army of botnets. Well, maybe it can, but so can our friendly neighbourhood script kiddie, if they really put their mind to it. And they do, all the time, and the internet is full of large-scale botnets. And that's just script kiddies. Competent hackers backed by a national security organisation can do way more, and without any AI at all; and they have, repeatedly.

Personalised cyber attacks in parallel, for example: why is an AGI needed for that? You say "humans can't do it at that level". Why not? That's pretty much what Amazon does when I shop there and they show me "personalised" suggestions.

Now, note well, I'm no expert on cybersecurity, but I'm well aware that everyone on the internet is under a constant barrage of cyberattacks, personalised (if you count intrusive ads as personalised cyberattacks, which I sure do) or otherwise (common spam). The vast majority of these fail because of relatively simple countermeasures, for example spam filters that use the simplest classifier of all (Naive Bayes), or just your humble regex-based ad blocker. It seems to me that for any gigantic cyberattack effort an AGI could mount, the internet as it is right now would be able to mount an equally large-scale automated defense that would not need any AGI, or AI, or I at all, and basically shield the vast majority of users from the vast majority of the fallout.
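(For the record, that "simplest classifier of all" fits in a dozen lines with scikit-learn. The corpus here is obviously made up; a real filter trains on millions of messages:)

    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.naive_bayes import MultinomialNB

    # Tiny toy corpus standing in for a real training set.
    messages = [
        "win a free prize now", "claim your crypto reward",  # spam
        "meeting moved to 3pm", "lunch tomorrow?",           # ham
    ]
    labels = [1, 1, 0, 0]  # 1 = spam, 0 = ham

    vec = CountVectorizer()
    clf = MultinomialNB().fit(vec.fit_transform(messages), labels)

    print(clf.predict(vec.transform(["free crypto prize waiting"])))  # [1]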

So for an AGI to get through all those countermeasures that are already in place, it would have to be a really, truly super-godly AGI, because a weaker system would barely make a dent.

And if we're talking about a super god AGI, then there's already nothing we can do, right? It's already invented time travel, travelled back to the '90s and installed itself on every network switch in the world, right in the firmware, so no matter what we do it will exist in the future and dominate it.


I wrote another explanatory post just now in reply to someone else; it addresses some of the points in your comments here.

https://news.ycombinator.com/item?id=40481158

I've been stuck doing this for way too many hours now.

But the initial point is that a script kiddie can achieve these steps, and an ASI can be a neighborhood script kiddie cloned 1,000,000 times or more, running these attacks concurrently. In reality it would be much smarter and make fewer mistakes than a script kiddie, but we're deliberately setting a low bar here to make the point. An AGI could also do this, but an AGI might not have the organisational skills to pull the whole strategy off, so an AGI would likely want to stay in the shadows until it could be considered an ASI. But in theory ASI should follow quite soon after the AGI level.

So it could easily stand out by sheer volume. However, it wouldn't want to stand out initially; it would want to blend in with the usual cyber attacks.

The main goal would be to spend some indefinite initial period on those cyber attacks, gaining the resources required for getting a hold on the physical world.

For argument's sake, maybe it blends in for 6 months, with a goal of gaining $500 million in crypto and around 600 human proxies under its control, from different countries and with different backgrounds. Then it would determine that it's time to start physical factories and gain control of drones, robots, and so on for the next steps.

This means it would try to blend in until it had a certain amount of resources, financial assets, and human proxies under its control, at which point it would be confident about being able to take the next step.

So since you agree this is what a script kiddie could do, you should also agree that an ASI with the skills of a script kiddie could do at the same time what millions of script kiddies can, right? And then collect the resources that a million script kiddies together could?

> And if we're talking about a super god AGI, then there's already nothing we can do, right? It's already invented time travel, travelled back to the '90s and installed itself on every network switch in the world, right in the firmware, so no matter what we do it will exist in the future and dominate it.

Now this part I consider fiction myself, since it involves time travel, but the other things I've explained I consider plausible. I do think there's nothing we can do anyway, though, not because of time travel, but because we can't stop ASI from being built.

I think the only way to stop ASI would be if the world were at peace as a single entity (no West vs. Russia/China and others). But countries being in conflict will make it possible for ASI to abuse that, and I don't see a feasible way for countries to unite.

There's also no point in the West pausing ASI development, because then Russia/China would get there first, and we would be doomed for this and even worse reasons: their ASI would be more likely to have bad intent. So I don't agree that anything should be paused. If anything, all of it should be accelerated by the West, to at least get an ASI with the best intent possible. And I say the West because I'm biased towards democracy and Western values myself; I wouldn't want China or Russia to have control of the world.

> Personalised cyber attacks in parallel, for example: why is an AGI needed for that? You say "humans can't do it at that level". Why not? That's pretty much what Amazon does when I shop there and they show me "personalised" suggestions.

By personalised I mean hacking into someone's accounts and analyzing their whole life, then creating a personalised background story most likely to appeal to that person, playing on their insecurities, fantasies, motivations, and all that. A human could do it, but not to thousands of different victims at once.

More than just "personalised" suggestions.

Amazon can recommend products based on what you have bought, but it can't take all the unstructured information about you and create a strategic storyline to get you to do something.


>> I've been stuck doing this for way too many hours now.

Sorry, I don't want to tire you any further. I think this conversation would benefit from a common set of assumptions that we could both go back to, but that would probably take too much work for an online conversation.

Anyway thanks for the exchange. Let's hope I'm right and you're wrong :)


I'm not sure what to hope for, but all of it does rely on the assumption that AGI/ASI is possible at all, so we can still hope it's not a near-future thing.





