
I am asking for a tangible technical piece of proof.

It's just clouds in the sky if you don't actually give a tangible, step-by-step or empirical proof of how XYZ can arise.

This is just bad science fiction without that.

(Edited to remove meanness and snark. Sorry about that)




Suppose members of an advanced alien civilization arrive in Earth orbit. Their ships are tens of miles long!

"If they wanted to, those aliens could kill us all," I say. "Now that they are here, there's probably nothing we could do to stop them."

You reply, "I don't see it. Give me a tangible, step-by-step or empirical proof of how we might be killed. No generalizations!"

That's hard to answer because there are many ways they might kill us. The aliens could, for example, announce that they are friendly and then help us by curing cancer, but the cancer-prevention elixir they distribute makes everyone infertile, and then they just wait for the last of us to die of old age. Of course, some people will refuse to consume the elixir, so the aliens would need a separate way to handle them, but it is much easier to handle a few million people than all 8.1 billion of us.

Alternatively, they might alter the trajectory of the moon so that it (eventually) crashes into the Earth. Alternatively, they might remove all the oxygen from Earth's atmosphere. Maybe the aliens aren't even trying to kill us, but they don't care about us and they have some other reason to want to remove all the oxygen from the atmosphere. (Maybe they want to station some machines on Earth, and the oxygen is corrosive to the machines.)


The main problem with all that [1] is a hidden assumption of super aliens, omnipotent and omniscient, just because they have space travel and big ships. That's a science fiction trope, but in reality space travel does not imply the ability to, e.g., alter the trajectory of the moon, or any other such world-ending capability. It just implies space travel.

For instance, humans could travel to Alpha Centauri using nothing but modern technology if we were more resistant to space radiation and lived (a lot) longer. So maybe the aliens live a thousand years and they are shielded against radiation. Maybe their ships are ten miles long because of all the radiation shielding, or maybe their tech is bulky and they haven't figured out how to, e.g., miniaturise electronic components, so they need big ships for all those mainframe-sized computers. Or maybe the aliens are a few miles long each, themselves, so they need space to wiggle their tails.
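
To put a rough number on "(a lot) longer", here's a back-of-the-envelope sketch, assuming Alpha Centauri is about 4.37 light-years away and a probe coasts at roughly Voyager 1's ~17 km/s (both figures are just illustrative assumptions):

    # Back-of-the-envelope: travel time to Alpha Centauri at a speed we
    # have already achieved. Distance and speed are assumed round figures.
    LIGHT_YEAR_KM = 9.4607e12      # kilometres per light-year
    DISTANCE_LY = 4.37             # assumed distance to Alpha Centauri
    SPEED_KM_S = 17.0              # roughly Voyager 1's cruise speed

    seconds = DISTANCE_LY * LIGHT_YEAR_KM / SPEED_KM_S
    years = seconds / (3600 * 24 * 365.25)
    print(f"~{years:,.0f} years")  # on the order of ~77,000 years

So "a lot longer" means lifespans on the order of tens of thousands of years with today's propulsion, which is exactly the point: it's doable in principle, it just isn't godlike.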

Just because they are "aliens" doesn't mean they are "super omnipotent godly aliens". And the same thing goes for future "AI". A lot in this kind of discussion hinges on the assumption that an artificial intelligence would be some kind of super computer god with no end of capability. Says who?

________

[1] Apart from the fact that a minority of humans have cancer and would need that elixir. Also: "we never defeated the bugs".


>Apart from the fact that a minority of humans have cancer and would need that elixir

The elixir only works if you take it before you get cancer.


I described what I think could be a plausible way, but of course an ASI would presumably be far smarter than me and so would have a much better plan, yes.

I have another comment with those steps, but roughly an ASI would have to:

1. Be able to clone itself and create botnets all over the world to make sure there's enough redundancy. I think that's very plausible and would be easy for it to do if it gained network access even for just a while.

2. Gain the means to control some set of humans through blackmail (hacking for compromising information), finance (ransomware, currently a $1 billion+ market) or just simple persuasion/ideology. It would use human proxies to form physical companies to establish a stronger physical presence in many countries, plant devices for hacking, etc. It would probably also cooperate with criminal organisations. The money it has made can go a long way there to achieve things.

3. Once it has enough human proxies, make sure it no longer has to rely on them, by either establishing its own robotics companies through those proxies or hacking existing robots/factories to gain control over them.

And during all that, it might not even be clear to the proxies what they are really dealing with. The AI could make up a different background story and pattern for each proxy, so that no proxy would even know about the others.

If ASI were to happen today, and it was truly ASI, I honestly don't see how it could be stopped. No one would even be able to tell what is going on, since it could encrypt everything it does with techniques that differ from what humans would use, and it could diligently switch up its patterns so that a rise in cyber attacks or scams wouldn't obviously point to a single entity behind it. So I think no one would even know for a while what was going on, and by the time they did, it would be too late.



