What is the probability that this will result in a scenario where we are playing God? And if that probability is even moderately nonzero, like >= 5%, can we really afford to take that chance?
There is no royal we here. There never is. So the right question is: can you (an individual, country, company, etc.) afford to have your rivals, competitors, and enemies gain the upper hand here? What these have in common is that they make decisions independently from you. Basically, no matter how hard upset AI skeptics stamp their feet in California (or wherever), insisting on whatever convoluted doomsday scenarios, somebody is going to ignore them and go right ahead anyway. Given that, AGI is basically going to happen, possibly sooner rather than later. The only real question is who gets there first, and where and how.
AI safety is a bit like nuclear arms safety. You don't get any safer by not having any nuclear weapons; you just put yourself at the mercy of others who do have them. The reason nuclear warfare hasn't happened so far is mutually assured destruction. That's why lots of countries seek nuclear weapons. Non-proliferation treaties have slowed that down, but there are still roughly ten nuclear-armed countries at this point.
With AI, it's a lot less clear cut. It's basically going to be about dominance and outsmarting the other side. The downsides are more hypothetical / ethical / moral. And when it comes to ethics and morals, there definitely is no such thing as a royal we. Countries like China and Russia are likely to choose their own path, and plenty of other countries will want to get ahead here too.
I agree with your analogy to nuclear weapons to some extent. Preventing AGI is going to be hard. The incentives are too massive.
The difference between having nuclear weapons and building ASI is that you can choose not to use the nukes as long as nobody else does, while ASI by its nature needs to be deployed to some extent to exist at all.
Not only that: in the kind of competitive environment you describe, the ASIs will not only be deployed, they will most likely be in competition with each other, which creates the preconditions for Darwinian evolution to start happening.
My personal belief is that while it may be possible to develop safe ASI as long as it is NOT subject to evolutionary forces, it probably isn't possible once it is.
As far as I can tell, this is by far the greatest existential threat we're facing at the moment. Finding a way to deal with it, while working around the game-theoretic challenges you identify, may be orders of magnitude more urgent than, for instance, stopping global warming.
If we do manage to steer AI development toward a good (for humanity) outcome, I'm pretty sure ASIs will be able to find technological "solutions" to the problems caused by global warming (and maybe stop the warming itself). On the other hand, if self-replicating ASIs run wild, they would be likely to exterminate not only humanity but also most other advanced life forms on Earth.
The outsmarting will look a lot different once it is achieved, and it will then continue to evolve. The probability of peace, tranquility and humans living forever is 0%, though.
The probability of humanity continuing to exist forever may be 0. But I hope we last more than a few more decades or centuries. And when we are gone, it would be great if we could make sure that our descendants carry forward what we (when it happens) consider the most valuable aspects of humanity.
Whether that's the ability to create and appreciate art and beauty, the ability to experience emotions such as love and purpose, or maybe some abstraction, generalization or expansion of these that we may not comprehend.
Second this! We started playing with fire long ago. One culture can never own AGI, and AGI will then lead to superintelligence. We had better hope our super-AGI children are far more emotionally intelligent than we are, and that they tolerate us and even teach us.
Maybe for safety we need more than one AI, so they balance each other like with nuclear weapons... It's just a quick, naive guess at what you're saying. Even so, it looks very doubtful.
Might work until the AI starts self-improving. When it does, it's a race, and most likely the first one will win.
In general, nukes aren't a good analogy for AGI/ASI. The x-risks of AI are such that you can't use it in MAD fashion; AI is more like an increasingly potent engineered pathogen - you can't do MAD with bioweapons, and you're one lab accident away from ruining the day for everyone on the planet.
It's really hard to know in advance what will happen when AI starts self-improving. It may be that it will cause a hard singularity. But it is also possible that access to compute resources, minerals, energy or something similar will constrain the rate of advance enough that latecomers can catch up.
Also, if a single entity gets too far ahead, it may be constrained politically or even militarily.
Let's say, for instance, that OpenAI created an AI able to suddenly design GPUs that outperform Nvidia's. That would be a sign of a quite hard takeoff, as it would quickly lead to MS/OpenAI gaining full control of both the software and hardware markets.
This would be a huge incentive for the US government to either nationalize this AI, make its IP open source, or use some kind of antitrust measures to slow them down.
> Also, if a single entity gets too far ahead, it may be constrained politically or even militarily.
The problem is, the time between when you notice and when it's too late to act, might be days, or hours, or seconds, or even negative.
> Let's say, for instance, that OpenAI created an AI able to suddenly design GPUs that outperform Nvidia's. That would be a sign of a quite hard takeoff, as it would quickly lead to MS/OpenAI gaining full control of both the software and hardware markets.
That's solving the wrong problem. If OpenAI creates an AI that can design GPUs better than Nvidia's and is general enough, the risk is that it'll arrange more compute for itself and train an even more capable version of itself. And you might not notice until a couple of iterations of this have happened.
> This would be a huge incentive for the US government to either nationalize this AI, make its IP open source, or use some kind of antitrust measures to slow them down.
When that happens, the US government better be sending men with guns to forcefully shut the businesses involved down, and getting ready to lob some cruise missiles at some data centers for good measure.
> The problem is, the time between when you notice and when it's too late to act, might be days, or hours, or seconds, or even negative.
I agree. Especially if there is a hard take-off where AGI scales to 100x ASI in hours, days or weeks. Or the AGI could be really hard to detect, even if it took longer to develop. There is certainly a significant existential risk associated with AGI/ASI.
However, this kind of scenario would be much more likely if AGI were constrained ONLY by algorithms and code, and NOT at all by compute. Lately, though, increases in AI strength have come more from increases in compute than from improvements in algorithms.
If an AI explosion requires hardware improvements to take off (even if the AI can design the hardware itself), which we may hope is the case, then we can HOPE that such an explosion would be easier to detect.
> When that happens, the US government better be sending men with guns to forcefully shut the businesses involved down,
All the actions I described are ultimately backed by the government's guns.
> and getting ready to lob some cruise missiles at some data centers for good measure.
This could be an option if the data center was outside the US. Inside the US, the government can shut off the power, if needed.
We've been playing God since we invented language and decided to make a religion that starts the universe with "In the beginning was the Word".
(That this new origin story was added to the religion somewhere between two centuries and a few millennia (and at least one radical theological split) after the oldest part was written down, doesn't change much).
As for "will understanding how minds work endanger everyone?" sure, in million ways to Sunday both by our own hands and the hands we make, and yet at the same time it might also solve all our problems and create a synthetic mortal heaven that lasts for 10^32 subjective years before the fusion fuel from the disassembled galaxies finally runs out.
The only way to even guess at the odds of these outcomes is to learn more.
The problem is that we can "get good" at things but we have hard limits that are pathetic in comparison to the complexity of the universe. We are slightly advanced monkeys. We poke around at things and claim to understand them but time and again there are unintended consequences. This is hilarious to watch in the context of AI. Monkey invents machine based on high school mathematics. Monkey scales up the machine and pokes around at it but can't predict what it will do or even explain how it did it in retrospect.
So what? 'AI' is just computer science finally growing up to match the complexity of innumerable other human endeavors such as biology. The normal state of human technology development is poking at things, just with progressively more calibrated sticks.
That's not the only way advancement has to work. The Amish, for instance, have accepted certain developments, but with caution and with a centralized system for governing technology so that it doesn't get out of hand. Or consider the Mayans, who tempered their advancement with their limitations in terms of resources.
We shouldn't do it. We should stop, be happy with what we have, and learn how to live better with what we have, instead of inventing more trash like AI.
The Amish attitude towards technology isn't about technology per se, but about the desire to keep their style and size of community working - something new technology tends to disrupt.
The Mayans didn't "temper their advancement with their limitations in terms of resources" - they were limited by the available resources and time, and therefore did not advance as much or as fast as others.
Advancement is driven by the basic need to make things better, whether for yourself or for others. And it stacks.
I'm not sure a cult like the Amish is a prime example, because if we all lived like that, child mortality would still be enormous and we'd fall short on a myriad of things that make our lives easier and better, because there'd be no scientific discovery.
I don't believe scientific discovery is making life much better. Some of it does, of course, but the majority of inventions - smartphones, social media, AI, 5G, fast transportation, fossil fuel extraction, the global meat industry - are making it worse.
Technology does make certain TASKS in life easier, but I'd argue that it's NOT a logical consequence that technology makes life BETTER.
One could argue that the most important benefit we get from technology is that it allows more of us to survive, and for longer.
Eugenicists may argue that this is not really a good thing, since it leads to a larger and larger fraction of the population having poor health, some of it hereditary.
Agreed. People of the past lived happier, more peaceful and more fulfilling lives than modern people, despite worse material conditions, violence, disease and child mortality. Technology has only been harmful to humanity, and this is by principle, not a side effect. The human body and mind are literally designed to live in a world without organization-dependent[1] technology, whether you believe in creation or evolution. We are literally not designed to live in a world of convenience.
1: Kaczynski’s term. Read his manifesto or even better the book Anti-Tech Revolution.
People of the past are not well recorded in history.
We don't know much about miserable medieval peasants, because they as a whole didn't know how to write, and the little that the rare educated ones wrote is unlikely to have been preserved. We mostly know what the rich, or at least the decently well off, of the times have left behind, a lot of it biased or self-aggrandizing. Some of them wrote about things they themselves didn't experience.
When you imagine happy peasants, you should question who wrote about them and why, and whether in those times there was any likelihood of anybody caring about the plight of suffering people, being allowed to write about it, and it being preserved to this day.
But besides that, even with the luck of a comfortable existence and work you enjoyed, you could still one day die from something like an accident with a farm animal, a bad tooth, or childbirth.
> People of the past lived happier, more in peace and more fulfilling years of life than modern people
That's a pretty broad claim. How far back is "the past" that was so much better? Which people did you have in mind? The Irish peasants starving because of potato blight? European villagers during the Black Death? The slaves building pyramids in ancient Egypt?
Won't we have to do so anyway at some point in order to stop suffering from stupid diseases like cancer and solve other hard problems? Can we not mess it up? If yes, fine. If not, that's too bad but we have to try anyway IMO.
We should not attempt to solve all diseases. Many people live a long life without much disease. Yes, it's sad that some people get diseases and of course I've had friends with diseases -- but I am not naive enough to think that it will be a good thing overall to solve all diseases because the end result of a very long life for everyone will be massive overpopulation.
Death is a natural part of life, and we should not attempt to avoid it too much because it makes us more machine-like and less human. How many people have questioned the 9-5 lifestyle because they realize they could die at any time?
The ultimate logical end of advanced science and AI may be an immortal life, which we are not prepared to handle.
It's fine for you to hold that view, and maybe there are others like you who can volunteer to die. As for me, I am not so interested in disease or death, so I would like them fixed.
If I could give you a million upvotes, I would. Heck, if I could somehow give you a billion upvotes, I would, even if it cost me my account.
Comments like yours restore my hope in HN whenever I start feeling like the community is being taken over by people whose thought processes are teetering on the edge of lunacy.
Imagine someone saying some diseases shouldn't be cured. I'm sure the millions of people all over the world who right now are suffering heart-rending pain from those "shouldn't be cured" diseases would be just thrilled to read the parent comment.
I never said we should NOT attempt to solve any diseases. My fear is that if we approach the logical conclusion of solving all diseases THROUGH SCIENCE, we get too close to solving death and encouraging overpopulation.
You are obviously having a knee-jerk reaction, because I am questioning the very core of society which is the very thing making you comfortable. I am just trying to have a reasonable discussion.
Now, there are PLENTY of ways of solving diseases without science. Many diseases are caused by our modern western diet, being sedentary, living in large cities, and being lumps all day whose only purpose is to further more technology.
Perhaps it would be a better idea to solve diseases by dismantling this system and thereby making us more healthy so that we don't NEED science as much?
The problem is, science has convinced you that disease is just natural and that you need it to solve everything.
Again, I am advocating for a healthier lifestyle AWAY from technology, which is certainly possible, instead of relying on technology to solve the very health problems that are created by technology.
If the capability exists, someone will pursue it. Maybe the G20 will pass a resolution to put safety measures in place, but whoever decides to flout those safety measures stands to gain an asymmetric power advantage.
That sets up a competitive dynamic where players are incentivized to try to get there first, no matter the risks, because if they don't, someone else will.
Similar dynamics played out with the nuclear arms race.
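To make that incentive structure concrete, here's a minimal Python sketch of the race as a two-player game. The payoff numbers are invented purely for illustration, not a claim about real-world values:

    # Each side chooses to "restrain" or "race"; payoffs are (row player, column player).
    # The numbers are arbitrary illustrative assumptions.
    payoffs = {
        ("restrain", "restrain"): (3, 3),   # both hold back: safe, shared benefit
        ("restrain", "race"):     (0, 5),   # the restrained side falls behind
        ("race",     "restrain"): (5, 0),   # the racer gains the asymmetric advantage
        ("race",     "race"):     (1, 1),   # both race: risky, little relative gain
    }

    def best_response(opponent_choice: str) -> str:
        """Return the row player's best move given the opponent's fixed choice."""
        return max(("restrain", "race"),
                   key=lambda mine: payoffs[(mine, opponent_choice)][0])

    for other in ("restrain", "race"):
        print(f"If the other side plays {other!r}, the best response is {best_response(other)!r}")

Whatever the other side does, "race" yields the higher payoff in this toy model, which is why agreements to hold back tend to be unstable without an enforcement mechanism.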
"We" refers to human civilization, and is not just one CEO but the sum of all society that acts as an organism through the emergent behaviour of our social existence.
If "we" are doing something wrong, then "we" had better fix it. And that could mean governments or even counter-actions by individual people, or even actions by hacker groups....
Yes I understand that, but the problem is there's no structure to enforce decisions at the level of human civilization. So even if something is agreed and enforced by 99% of jurisdictions, that doesn't mean it won't happen
And it also doesn't mean individuals can't revolt against it either. It could happen, and 1% might try to make it happen, but we can ALSO try to stop that 1%, with a mob if necessary.
It's 0. Long before an evil AGI can escape and do any damage, it will be confined to big datacenters, where it will be observed for a long time. Mobile, capable terminators will come much later. We don't have capable robotic bodies, only short-running ones, and we don't have a small, capable AI brain. So there is plenty of time, and there are options to stop and correct. Dumb robots, which we do have, can be a danger, but they are far from AGI and can't run for long. Correction: with human support they can do a lot of damage. That's something to worry about.
Slightly off topic question, but does anyone know if there is any school of thought that holds both sides as being necessary? It seems the traditional Christians hold the physical Yahweh god as being the "true" god whereas the gnostics take the ideal non-material god as being the "true" god, but is there some school of thought that deems both necessary for existence in a co-dependent style relationship?
How does this glib remark contribute anything? There are much more useful ways of defining playing God than being so inclusive as to have all of human existence in it. (See my definition below.)
Playing God: creating technology or inventions that are far beyond our control, or that will have unintended consequences with the potential to cause significant, irreversible material or psychological damage to a large proportion of humanity. Such technology is the kind that, despite attempts to control it, proliferates - and would proliferate under a variety of economic and political models - so that there is hardly any way to stop it except complete physical destruction.
The solution is simple: take it step by step, so that we deploy technologies/inventions that are only ever so slightly beyond our control. Learning means making mistakes; we can control their magnitude, but ultimately, the only way to not make mistakes is to never do anything at all.
> We have never applied this type of deployment so far. For example, fossil fuels -- we still can't control it.
Let's not forget that pretty much everything you consider nice and beneficial about modern existence - from advanced healthcare to opportunities to entertainment, even things as trivial as toilet paper, tissues, packaging or paint - is built on top of petrochemical engineering. Sure, we're dealing with some serious second-order consequences, but if we overcome them, we'll end up better off than when we had a "simple life more connected to nature".
> I'd argue that it's much better to live a simple life more connected to nature, even if that means more diseases and more manual labor.
Hard disagree.
> this life is at the expense of the DEATH of millions of nonhuman organisms
It's not like nature cares either way. Sure, we may be accelerating the disappearance of whole species, but even then we're more humane at the killing. It's really hard to do worse than nature, where every multicellular organism is constantly fighting against starvation, disease, or being assaulted and eaten alive.
I have the feeling that concepts like money and property fit your description, with the additional detail that much more than humanity has been impacted. I also feel that AI has a pretty good chance of being the only way to reverse the significant material/psychological damage those have already caused. It seems short-sighted to not weigh the whole of our "playing God" so far, and to dismiss the idea that we might be better off deferring to, or getting help from, a different intelligence at this point.