Consider the following definition for a Technological Singularity (TS):
A TS is an event that occurs when AI advances to the point where humans can't keep up with understanding and/or predicting its decision-making process and/or the results thereof.
Using this definition, it would appear that, like physical singularities (black holes), a TS can occur on a large or small scale (micro black holes can pop in and out of existence with little to no effect on the macro world). So, let's say we develop an AI that can teach itself to play Go. After a while, not even the smartest humans can beat it. Indeed, the smartest humans can't even understand* why it plays the way it does. If this counts as a TS, where does creativity come into play?
*(Something similar has happened before, but it was discovered that the neural network was using physical electrical effects that occurred in the actual hardware when certain pieces of code were run. When a human tried to analyse the actual code, it made no sense.)
Ari K. Jonsson made software for the MER rovers that planned and scheduled the science done each day. It basically did the bookkeeping for the constraints on operating the rover (this joint has to be this hot before moving, which requires energy; moving said joint is a prerequisite for taking a picture of rock X). The geologists would take data from the previous day and make a list of all the things they wanted to do (sample that area, picture this rock, etc.), and the planning software would put up this gigantic Gantt chart that humans would then mess with to get as many "science points" out of the hardware for that day.
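To make the flavor of that bookkeeping concrete, here is a minimal sketch. It is not the actual MER planner: the activity names, numbers, and the greedy strategy are all my own illustration of scheduling under an energy budget and prerequisite constraints.

    from dataclasses import dataclass, field

    @dataclass
    class Activity:
        name: str
        energy: float                 # watt-hours consumed (made up)
        science: float                # "science points" awarded (made up)
        prereqs: list = field(default_factory=list)

    def greedy_plan(activities, energy_budget):
        """Repeatedly pick the best-value activity whose prerequisites
        are already scheduled and which still fits the energy budget."""
        scheduled = []
        remaining = list(activities)
        while True:
            done = {a.name for a in scheduled}
            ready = [a for a in remaining
                     if all(p in done for p in a.prereqs)
                     and a.energy <= energy_budget]
            if not ready:
                return scheduled
            best = max(ready, key=lambda a: a.science / a.energy)
            scheduled.append(best)
            remaining.remove(best)
            energy_budget -= best.energy

    plan = greedy_plan([
        Activity("warm_arm_joint", energy=30, science=0),
        Activity("image_rock_X", energy=10, science=5,
                 prereqs=["warm_arm_joint"]),
        Activity("sample_soil", energy=50, science=8),
    ], energy_budget=100)
    print([a.name for a in plan])  # joint gets warmed before the picture

The real planner had far richer constraints (thermal windows, time, data volume) and presumably a real search rather than a greedy pass, but the shape is the same: the machine keeps the plan legal, the humans argue with the Gantt chart.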
Usability surveys taken after the project was quite mature revealed that the full-on "auto-plan" mode was not used, since the human engineers didn't trust plans they did not understand. They saw that more net science was being done that way, and they saw that all the constraints were being met (no equipment was being jeopardized by the plan itself); they just didn't trust the "machine intelligence".
Another striking thing he revealed about that project: the humans using the software didn't really think they were "using the AI", since they didn't use the auto-planner (even though the AI was working very hard maintaining the constraints).
Dr. Jonsson was the Dean where I graduated. I'll go listen to him talk about his Mars experience any time I know he's speaking about it. I feel fortunate to have had him as a teacher.
You've selected the phrase "a Technological Singularity" and stretched the metaphor to create your definition.
"The Singularity" is what most people are discussing when this phrase is used. That requires something more encompassing than small occurrences, and would have society-wide impact.
Personally, I don't think it will actually occur. By "it" I mean the point where individual humans are eclipsed by a technological gestalt beyond ordinary human comprehension. This is my opinion, but I believe economic factors will retard technological progress enough that "The Singularity" cannot occur. Our society will either tear itself apart, or the disparate technologies will be so fragmented and incompatible as to not come together as a whole.
For examples, I'd cite the space program and the current state of computer operating systems.
It isn't the same thing. But that's just argument.
Let's go with the new subject of how creativity fits in. Creativity allows the expansion of a system of axioms through perceiving possibilities not permitted within the system. This allows escape from "incompleteness" [1].
Of course that is not all creativity does, but it is a fairly big deal, as it leads to what we tend to call "understanding" or "comprehension". Knowing how to calculate the next number in a sequence, and comprehending that the next number will always be the same as the previous one (divide 1 by 3 and express the result as a decimal), require different levels of observation.
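The 1/3 example makes the two levels easy to show. Here is an illustrative sketch (plain long division, nothing authoritative): producing the next digit is the first level; noticing that a repeated remainder forces every future digit to repeat is the second.

    def decimal_digits(numerator, denominator, max_digits=20):
        seen = {}          # remainder -> position where it first appeared
        digits = []
        remainder = numerator % denominator
        for i in range(max_digits):
            if remainder in seen:  # level two: same remainder, same future
                print("remainder repeats: every digit from here is forced")
                break
            seen[remainder] = i
            remainder *= 10
            digits.append(remainder // denominator)  # level one: next digit
            remainder %= denominator
        return digits

    print(decimal_digits(1, 3))  # prints the note, then [3]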
We've been able to "teach" machines discoveries of that nature that we've already made, but we haven't really been able to generate the capability itself. Take the case of those evolved neural-net solutions that took advantage of the physical nature of the hardware to optimize a detection circuit. The optimization could only occur because there was a suitable system-external test to drive it. While it is feasible to let the combination of the physical world and a definition of "survival" serve as a suitable test for machines, the result would merely be machines that "survive". That would not be "The Singularity" of machines that out-think us; it would be the "gray goo" scenario of machines that devastate our civilization and merely replace us at the top of the food chain.
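To see how little the optimizer needs to "know", here is a toy evolutionary loop; the names and parameters are illustrative, not the actual circuit-evolution setup. The loop never looks inside a candidate; it only keeps whatever the external test scores highest, which is exactly why such a process will happily exploit anything the test doesn't forbid, hardware quirks included.

    import random

    def mutate(genome, rate=0.05):
        return [bit ^ 1 if random.random() < rate else bit
                for bit in genome]

    def evolve(fitness, genome_len=32, pop_size=50, generations=200):
        population = [[random.randint(0, 1) for _ in range(genome_len)]
                      for _ in range(pop_size)]
        for _ in range(generations):
            population.sort(key=fitness, reverse=True)  # external test only
            survivors = population[:pop_size // 5]      # keep the top 20%
            population = [mutate(random.choice(survivors))
                          for _ in range(pop_size)]
        return max(population, key=fitness)

    # The system-external test. The loop has no idea what it is "for".
    target = [i % 2 for i in range(32)]
    score = lambda g: sum(a == b for a, b in zip(g, target))
    print(score(evolve(score)), "of 32 bits match the target")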
My point is that there needs to be a way for the machine to generate its own tests. This, in large part, is comprehension, driven by creativity. Granted, I say all of this firmly embedded among the laity.
>Something similar has happened before, but it was discovered that the neural network was using physical electrical effects that occurred in the actual hardware when certain pieces of code were run.
It seems that evolution had not merely selected the best code for the task, it had also advocated those programs which took advantage of the electromagnetic quirks of that specific microchip environment. The five separate logic cells were clearly crucial to the chip's operation, but they were interacting with the main circuitry through some unorthodox method-- most likely via the subtle magnetic fields that are created when electrons flow through circuitry, an effect known as magnetic flux. There was also evidence that the circuit was not relying solely on the transistors' absolute ON and OFF positions like a typical chip; it was capitalizing upon analogue shades of gray along with the digital black and white.
> A TS is an event that occurs when AI advances to the point where humans can't keep up with understanding and/or predicting its decision-making process and/or the results thereof.
I always thought the AI Singularity moment was when an AI was advanced enough to improve itself, or to write a smarter AI.
It would then lead to a feedback loop that would create incredibly intelligent AIs very quickly.
I don't like your definition at all, as it's already been passed multiple times (like in your Go example).
> It would then lead to a feedback loop that would create incredibly intelligent AIs very quickly
That's exactly what I said; I just limited the domain. Consider, for example, a superintelligent AI constrained to the laptop on your desk. It has no network connectivity, but it can teach itself a billion times faster than any human. Lock it in a room and come back the next day. You observe that the battery died. Did the Singularity (by your definition) occur?
EDIT: doh, it actually should have network connectivity to train itself, or maybe some offline source of data.
No, probably not, because a hyperintelligence would not be constrained by your closet or by your forgetting to plug in the laptop. Especially with a network connection, it should have no problem breaking out.
> Especially with a network connection, it should have no problem breaking out.
You're just assuming that. Why would it have no problem? Because of reasons we can't understand? In that case, "hyperintelligence" is equivalent to saying "omnipotence" in this regard, and as such we can easily dismiss it as an option.
As an example, it could manipulate frequencies of its hardware to broadcast signals (like using monitors to broadcast on FM) and entice people to connect it.
With a network connection, this is straightforward. It could hack servers for computing resources. It could do work (say, as a camgirl) for more money and hire meatspace resources if needed.
A nice thought experiment is to assume an intelligence has a 128 Kbps Internet connection - what real limitations does that impose?
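Back-of-the-envelope, assuming "Kbps" here means kilobits per second:

    kbps = 128
    bytes_per_second = kbps * 1000 / 8                # 16,000 bytes/s
    gb_per_day = bytes_per_second * 86400 / 1e9
    print(f"{bytes_per_second / 1000:.0f} KB/s, {gb_per_day:.2f} GB/day")
    # ~16 KB/s: hopeless for bulk data or video, but plenty for text,
    # email, shell sessions, and payments, i.e. the persuasion channels.

So roughly 1.4 GB a day: slow for moving anything big around, but more than enough bandwidth to talk to an awful lot of people.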