For me, in the real world, a singularity means that you are wrong. More precisely, it means that your model is incomplete, and this applies to physics (e.g. black holes) as well as technology.
For black holes, most scientists seem to agree that black hole singularities are not real things but artefacts of our incomplete understanding of gravity at small scales, a problem that quantum gravity intends to solve.
Same thing for technology: I don't believe in "the singularity" as a real thing. If, as the article says, the singularity will happen in the next 20 years, it simply means that the model they used will break down within the next 20 years, and if we want to look further into the future, we need a better model. That's all.
There may be a massive advance in AI, and we may see AIs designing AIs, but saying that this will quickly result in god-like beings is naïve at best. The most likely result is that even if we manage to create a superhuman intelligence, it will hit a roadblock and end up only slightly more intelligent than we are; clearing that roadblock will require time and effort, and will only uncover another roadblock. I believe we will make progress, but it will be a step-by-step process, not a singularity.
And if you think about it, we have already created a superhuman intelligence in the form of the computer-assisted human: computer-assisted humans can solve problems that neither could solve alone. AGIs, if they ever become a thing, will not be better than humans at everything anytime soon. I'd think of them as a really smart but nerdy and awkward coworker, the 100x programmer maybe, but one who needs a boss to put his skills to good use. And here I am already speculating a lot; further than that, I simply don't know, or I can say that there will be a singularity, which means the same thing.
Theoretically, at some point in that step-by-step process, the computers will be able to clear their own roadblocks by themselves, and faster than we can. And who knows what happens then. Maybe "singularity" isn't a great word; dunno if there's a better one...
I agree with you: I think the model has been continuously breaking down from the beginning, and this is just a case of extreme goalpost moving.
20+ years ago, Kurzweil was very confident in Moore's law + Dennard scaling working until ~2020 and giving us 1 Teraflop/USD [0], but even with GPUs at nominal prices I think we are about 40x worse, with the gap growing.
That's kind of wrong re Kurzweil.
- He's always predicted the singularity for 2045 - no goalpost moving there.
- He didn't say Moore's law - that graph you link starts from 1900, long before Moore and microchips
- The graph says 10^10 flops for $1000 for 2020 approx, = 10 GFLOPS. An NVIDIA GeForce RTX 3080 costs <$1000 and does 29.77 TFLOPS = 29,770 GFLOPS, so a good bit ahead of the prediction
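For what it's worth, here's that comparison as a tiny Python sketch; all figures are the ones quoted in this thread (the <$1000 price and the FP32 throughput are the comment's assumptions, not verified data):

```python
# Compare the graph's ~2020 prediction to an RTX 3080,
# using only the figures quoted above (assumptions, not verified data).
predicted_flops_per_1000usd = 1e10   # graph: 10^10 flops per $1000 = 10 GFLOPS
gpu_flops = 29.77e12                 # RTX 3080: 29.77 TFLOPS (FP32), per the comment
gpu_price_usd = 1000                 # "<$1000", per the comment

actual_flops_per_1000usd = gpu_flops * (1000 / gpu_price_usd)
print(actual_flops_per_1000usd / predicted_flops_per_1000usd)  # ~2977x ahead
```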
> That's kind of wrong re Kurzweil.
>
> - He's always predicted the singularity for 2045 - no goalpost moving there.
Fair enough. I've really only read "The Age of Spiritual Machines" for an English class in college, and we went over it in pretty good depth. It was fascinating initially, but after realizing that (IMO) it was mostly BS, I have not read any of his stuff about the singularity after that book. So if he is sticking with his date, good for him, but it seems pretty crazy to believe it's still going to happen if all the technologies that are meant to get us there are suffering setbacks.
> - He didn't say Moore's law - that graph you link starts from 1900, long before Moore and microchips
He postulates a generalized law of accelerating returns that's universal. There were earlier computational technologies that reached their limits and got overtaken by newer technologies that kept the overall exponential trend going. Moore's law is just the latest of these computational technologies, ready to be overtaken once it runs out of steam. That's why that specific graph spans times before and after Moore's law.
From page 81, he was pretty sure regular progress in semiconductors was going to get us very close to human processing power (20 Pflops in the book) in a personal computer by 2020:
"So, how will the Law of Accelerating Returns as applied to computation roll out in the decades beyond the demise of Moore's Law on Integrated Circuits by the year 2020? For the immediate future, Moore's Law will continue with ever smaller component geometries packing greater numbers of yet faster transistors on each chip. But as circuit dimensions reach near atomic sizes, undesirable quantum effects such as unwanted electron tunneling will produce unreliable results. Nonetheless, Moore's standard methodology will get very close to human processing power in a personal computer and beyond that in a supercomputer."
> - The graph says 10^10 flops for $1000 for 2020 approx, = 10 GFLOPS. An NVIDIA GeForce RTX 3080 costs <$1000 and does 29.77 TFLOPS = 29,770 GFLOPS, so a good bit ahead of the prediction
From page 146 about 2019:
"The computational capacity of a $4,000 computing device (in 1999 dollars) is approximately equal to the computational capability of the human brain (20 million billion calculations per second). [2] Of the total computing capacity of the human species (that is, all human brains) combined with the computing technology the species has created, more than 10 percent is nonhuman. [3]"
I get $4000 in 1999 to be ~$6850 in 2022, for 20 Pflops, so the prediction works out to ~2.9 Tflops/$. That prediction was about 100x off (and with 3 extra years it should be >50 Pflops). Not sure if the graphs got adjusted later, but fwiw the book's own figures work out closer to 10^15 than 10^10 flops per $1000.
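For what it's worth, here is that arithmetic as a small Python sketch; the inflation figure and hardware numbers are the ones quoted in this thread, so treat them as rough assumptions:

```python
# Back-of-envelope check of the page-146 prediction against 2022 GPU pricing,
# using only the figures quoted in this thread (rough assumptions, not verified data).
price_2022_usd = 6850        # ~$4000 in 1999 dollars inflated to 2022, per the comment
predicted_flops = 20e15      # "20 million billion calculations per second"

predicted_tflops_per_usd = predicted_flops / 1e12 / price_2022_usd
print(predicted_tflops_per_usd)  # ~2.9 TFLOPS/$

# Actual: the RTX 3080 figures from earlier in the thread (~29.77 TFLOPS for ~$1000)
actual_tflops_per_usd = 29.77 / 1000
print(predicted_tflops_per_usd / actual_tflops_per_usd)  # ~100x gap
```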
Well, maybe. I still think Kurzweil is roughly on track on the exponential improvement in computing per dollar, which was really an observation by him and other people about what happens, rather than a grand theory. Some of his other predictions seem a bit wacky.
I think the most interesting one coming up is the Turing test for 2029, which is not so far off now, and it's not so obvious how that'll go.
The singularity assumes that a human-level intelligence can build a better-than-human-level intelligence. That fundamental assumption is the difference between breathlessly talking about how fast the singularity is coming and dismissing AGI altogether. If we can build a computer that is smarter than we are, then I think we can assume that intelligence can compound. But until that happens, we won't even know whether it's possible.
This is exactly how I feel about it too. "The singularity" has always sounded like a meaningless phrase to me. AI has seen a lot of interesting progress lately, but nothing fundamental enough that it would lead to any kind of superhuman AI. In fact, I think the 20 years before that saw more fundamental progress than the past 20 years. Most AI today is mostly advanced statistics. I don't think we're any closer to any sort of independent reasoning in a computer.
Back when I studied AI in the 1990s, I felt that Strong AI was a red herring, and that the real value of AI is not in replacing humans but in assisting them.
Sometimes we have a major breakthrough which is not just another small step-by-step improvement. I would say that a singularity in AI means a major breakthrough, e.g. Artificial General Intelligence (AGI). GPT-3 is a big step forward, but it's not an AGI and therefore not a singularity (major breakthrough). But of course many small step-by-step improvements are required for a major breakthrough; it doesn't drop out of thin air.
I think it's similar to the apocalypse. Society will never entirely break down because there'd be no one left. It will just become a different society with new rules that we struggle to understand just now.