From: Is building intelligent machines a good idea?
> If one accepts that the impact of truly intelligent machines is likely to be profound, and that there is at least a small probability of this happening in the foreseeable future, it is only prudent to try to prepare for this in advance. If we wait until it seems very likely that intelligent machines will soon appear, it will be too late to thoroughly discuss and contemplate the issues involved.
If we're not too late, then surely we're waiting until the last possible moment. There's a Fermi Paradox hanging over our heads, and all we hear from the LLM crowd is "you're being silly, there's nothing to worry about here".
I don’t get how AI can be a possible solution to the Fermi Paradox.
If an AI is intelligent enough and capable enough to displace a biological species, then the paradox remains. The question just becomes, why hasn’t the galaxy already been colonized by robots instead of biological organisms?
Maybe the species that create advanced AI use it to wipe themselves out before the AI is fully autonomous and self-sustaining? Presumably "help an ~average-intelligence but crazy person make a bioweapon" comes well before an AI capable of sustaining itself and colonizing the galaxy?
The scenario you described is just suicide. An AI that acts on behalf of a controller and has no ability to make autonomous decisions is just a tool. To me that's conceptually no different from a species destroying itself with nuclear weapons, just with the nukes swapped for some sort of drone or automated weapon. It wouldn't be AGI.
Absolutely wrong. Heroin is just a tool to get high, so obviously a drug addict can choose whether to use it, right?
The problem is that technology like AI provides short-term economic advantages to those who use it, regardless of the long-term consequences. It is a prisoner's dilemma, however: if everyone uses it, everyone ends up worse off.
It is through a series of these prisoner's-dilemma-type situations that technology advances. Yes, individuals can choose whether to use it, but in practice we do, because each step is an improvement for the individual even though as a society we get worse off. Thus, as a society, we cannot choose not to use it.
The problem is that individual choice for individual gain does not always add up to an emergent societal choice for societal longevity.
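To make the prisoner's-dilemma framing concrete, here's a purely illustrative payoff matrix (numbers made up, higher is better for that player):

                 B adopts    B abstains
    A adopts     (2, 2)      (4, 1)
    A abstains   (1, 4)      (3, 3)

Whatever B does, A scores higher by adopting (2 > 1 and 4 > 3), and the same holds for B, so both adopt and land on (2, 2) even though mutual abstention at (3, 3) would leave both better off.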
Basically "if they did already, we wouldn't be here, so we exist before the universe gets colonized." (Also they colonize fast, ~0.3c, so you don't see it coming.)
This is an example of a non-falsifiable theory. Since it will not produce evidence, it deserves the same credibility as the infinitely many other such theories, like the idea that a god strikes down civilizations that communicate beyond their star.
I think that's a little harsh; at least the grabby-aliens theory rests on assumptions and logic that can be falsified and argued against. It's better than Russell's teapot.
This is one of the questions of AGI: if we get AGI with autonomy, what might its motivations and goals be? Which obvious avenues for advancement might there be that we haven't cared about or even noticed? If the mortality of biological "goo" is not a concern, then what changes?
A "galaxy colonized by robots" is only one possible direction.
Maybe some other galaxy has already been colonized and died off, or its light just hasn't reached us yet. Or we can't detect it with our toys. The time we've spent looking for real things in the universe is literally nothing compared to billions of years.
It is because AI causes humanity to self-destruct before the AI can be self-sufficient, so the AI self-destructs along with us. And this is rather plausible: AI further cements us on the path of technological development, which inevitably relies on unsustainable resource extraction. It helps us keep running the unsustainable economy that the development of AI itself requires.
In other words, we simply cannot stop now because we are too addicted to technology. AI will get better, but at some point we will suffer a massive catastrophe because instead of focusing on how to use less energy and avert climate disaster, we focus on developing more AI.
It's not exactly that AI is the root cause; the root cause is our ever-increasing technological development, of which AI is the apex of destructive power.
Intelligence by its nature is self-destructive unless it is tempered with wisdom that is almost impossible to develop in a society that was built with intelligence not tempered by wisdom.
Nuclear saltwater rockets seem pretty feasible to me. There won't be any Star Trekking going on, but hitting the next stars 4-5 ly out doesn't seem completely out of the realm of possibility. Our biology's a little screwed for it, but even on Earth there are organisms whose lifespan and fertility would let them colonize whatever habitable worlds they found.
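Rough numbers, just to put 4-5 ly in perspective (the cruise speed here is my assumption, not an established NSWR figure): travel time = distance / speed, so at 1% of c a 4.4 ly trip takes about 440 years, and at 3% of c about 150 years. Either way it's a multi-generation (or very-long-lived-organism) proposition, which is where the lifespan/fertility point comes in.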
> LLMs are pathetic.
Perhaps. But is there anyone here who believes that if we do eventually come up with an artificial mind, LLMs won't be at the very least a component of such an achievement? Insufficient on their own, but likely necessary.