Robots Can’t Dance: Why the singularity is greatly exaggerated (nautil.us)
39 points by joosters on Jan 26, 2015 | 104 comments



Misleading title, robots can actually be great dancers[1]. The problem with AI is that it's a moving target: as soon as there's an advance we tend to go "well that's still pretty mechanical", forgetting that we're also just mechanical systems (probably). Granted, AI and AGI aren't really the same thing, but it seems like we always make the assumption that an AGI has to work like a human internally. To me that seems like a good source of inspiration, but it's probably not a requirement. In the end, I think we'll figure out that for all our "creative genius", we're still just monkeys who see and do, and sometimes we do something new.

[1] = http://youtu.be/ww9ClmCWBr0


You touched the core of the discussion, imo: there are two schools of thought involved.

One sees the universe and everything else as a machine -> determinism

The other sees the universe as a discrete continuum -> indeterminism

That is also my fundamental criticism of the current approach: it does not even consider the possibility that the whole approach could be wrong.

I think intelligent machines are possible, but not without much more fundamental understanding. Hyping and cheering deep learning and all this stuff is just irrational and illogical.


Surely the more relevant dichotomy here is monism (materialism) vs. dualism?

For those who find determinism unsatisfying, I doubt indeterminism feels like much of an improvement.


Dualism is a part of it, but dualism is more an ethical question.

Indeterminism is more about emergence, quantum fields and such things. How the universe works.

My impression is that a lot of the science applied to intelligent machines is based on a deterministic physical model going back to Newton and Laplace. The mathematics behind Bayesian networks was pioneered 200 years ago by Laplace, for example. But how is Einstein's relativity theory applied? Or quantum fields?

There are two different models of the universe involved: the "old" deterministic model and the "new" indeterministic model (see Karl Popper -> Open Universe -> http://www.goodreads.com/book/show/288137.The_Open_Universe).

Maybe it makes sense to bring some newer approaches into the game, instead of reapplying the same approach again and again.


More on the topic of dancing robots: HRP-4C from AIST & Kawada

https://www.youtube.com/watch?v=xcZJqiUrbnI


Yes, they dance pretty well. But this is like saying that a tape recorder sings very well.


So where's the target now? That they should be able to improvise? I'm sure that can be added too with some work. As a child, Kurzweil famously wrote a program to produce music based on the music of classical composers; I'm sure it could be done for dancing as well, especially since computer vision is making leaps, which would aid analysis. But then again, that would also just be "mechanical", since we would be able to explain the model.


They can't improvise. Improvisation is an innate quality which they lack. Likewise for the music-creating program. There were such programs 30 years ago, so Kurzweil's is nothing new. The music program can't judge the quality of the resulting piece.


Why is it that we give humans a lot of time for trial and error and acknowledge them as masters of some art, but when a machine process fails to deliver on its first baby steps we declare the endeavor impossible?

Creativity is experimentation coupled with the insight of previous work. Computers can do the experimentation, they just need longer horizons of insight to be able to moderate.

On the other hand, though: AGI is a long way off. I'm not expecting anything HAL-like within my lifetime. Unless my lifespan is artificially prolonged.


I absolutely agree.

Take this assertion from the article: "But it would be looking for patterns that are already existent. It wouldn’t be able to find that new thing that was totally out of left field. That’s what I think of as creativity—somebody comes up with something really new and clever." -- I find this disingenuous. Creativity is just the outcome of our particular experiences; it's a way to parse the hundreds of thousands of hours of experience we accumulate even at an early age into a new combination. There's no small algorithm that, without a mountain of data ("experience"), will generate clever insights from simple requests.


To say nothing of the fact that billions of years of evolution imparts a large set of choices made for us. It didn't take 16 years to get a teenage virtuoso violinist, it took 3 billion years of evolution.


It's absolutely fantastic that < 100 years of purposeful hardware research and < 1 year of training can compete with 3 billion years of meandering hardware building and 16 years of training, to the point that the latter feels the need to insist that the former will never be as good.

Given our current trajectory, it wouldn't be shocking if at 150-200 years of hardware research and with <5 years of training, the intentionally created thing surpassed the evolved one.

Ed: I rolled software in with hardware.


> It's absolutely fantastic that < 100 years of purposeful hardware research and < 1 year of training can compete with 3 billion years of meandering hardware building and 16 years of training, to the point that the latter feels the need to insist that the former will never be as good.

The 3 billion years of evolution are still at the foundation of computer research. It's not like computers are spontaneously appearing out of thin vacuum via quantum fluctuations.


But computer-like structures do pop up in incredibly simple situations; seashells, for instance.


There's another aspect to consider:

When people express themselves in a creative way, they do so so that other people (and themselves!) can appreciate it. We create art for other people - we don't create for animals or plants or robots. Other people can appreciate our art because it may express an emotional and/or psychological state that resonates with them.

Now if we set aside the question of whether an AI can be considered creative if it does not express itself voluntarily, how can we judge the "quality" of its art? If it creates because doing so evokes a favourable internal state whenever it observes the result, could that not be considered art?

(When you listen to a piece of music and it makes you feel good, would you not consider that a good piece of music?)


I consider music that makes me feel anguish to be a good piece of music too (given the right circumstances).


So the question is: how do we make a computer selfish?


Ignoring the singularity talk because I think it's become meaningless: Jürgen Schmidhuber would disagree with the conclusion that machines can't be creative, or at least that creativity is beyond algorithms.

He has a formal theory of creativity [0] which claims to explain, among other things, music, humour, beauty [1] and fun. It centres around compression and Kolmogorov complexity.

There's a great video in the first link.

These are hard problems, but it's shortsighted to consider it impossible for us to build machines with approximate behaviour. Often with this class of criticism you'll have arguments along the lines of "sure the submarine moves through the water, but is it swimming?" Apologies to Dijkstra.

[0] http://people.idsia.ch/~juergen/creativity.html [1] http://people.idsia.ch/~juergen/beauty.html
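
Roughly, in toy form (my own crude proxy using an off-the-shelf compressor; the real theory rewards the compressor's improvement over time, which this one-shot version only gestures at): an observation is interesting when it is neither already predicted nor pure noise, i.e. when it contains a learnable regularity.

    import os
    import zlib

    def compressed_size(data: bytes) -> int:
        return len(zlib.compress(data, 9))

    def novelty(history: bytes, observation: bytes) -> int:
        # Extra bytes needed to encode the observation given the history.
        return compressed_size(history + observation) - compressed_size(history)

    history = bytes(100 * [1, 2, 3, 5, 8])
    print(novelty(history, bytes(20 * [1, 2, 3, 5, 8])))  # roughly 0: fully predicted, boring
    print(novelty(history, os.urandom(100)))              # roughly 100: incompressible, also "boring"
    print(novelty(history, os.urandom(10) * 10))          # in between: new, but a learnable regularity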


"But generating is fairly easy and testing pretty hard."

This is the reason he thinks computers won't be creative. But testing theories in science or math can be automated (in most cases). So technological creativity is possible, and that's what the singularity talk is about (although the testing can be quite lengthy and expensive, which might really slow the singularity, maybe to the point of no singularity at all).
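
As a toy illustration of that generate/test split (purely illustrative; the "secret law" and the candidate forms are made up), the generator is trivial and all the work is in the test:

    import itertools

    # Toy generate-and-test: "discover" a law relating x to y.
    data = [(x, x * x + 1) for x in range(1, 10)]  # secret law: y = x^2 + 1

    candidates = [
        ("y = a*x + b",   lambda x, a, b: a * x + b),
        ("y = a*x^2 + b", lambda x, a, b: a * x * x + b),
        ("y = a*x^3 + b", lambda x, a, b: a * x ** 3 + b),
    ]

    for name, f in candidates:                     # the "generating" step
        for a, b in itertools.product(range(-3, 4), repeat=2):
            if all(f(x, a, b) == y for x, y in data):  # the "testing" step
                print(f"fits: {name} with a={a}, b={b}")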

And as for artistic creativity - that depends on whether we can build a model of how humans evaluate art in general. Who knows, maybe we can build a good model of that. We've certainly moved on from, say, 100-200 years ago, when most artists were considered geniuses, to today, when a large percentage of commercial art is generated, or at least guided, by knowledge of how to create good stories, etc.


> But testing theories in science or math can be automated (in most cases). [...] (although the testing can be quite lengthy and expensive, which might really slow the singularity, maybe to the point of no singularity at all).

So we would be back to the problem being raw computing power.

> And as for artistic creativity - that depends on whether we can build a model of how humans evaluate art in general.

I'm pretty sure it's culture and social experience in general that allow us to evaluate art. I'm not sure robots will be able to do that soon, as it seems to be a product of our intricate biological structure.

> [...] to today, when a large percentage of commercial art is generated, or at least guided, by knowledge of how to create good stories, etc.

So maybe robots are not as much becoming humans as we're becoming robots.


>> I'm pretty sure it's culture and social experience in general that allow us to evaluate art. I'm not sure robots will be able to do that soon, as it seems to be a product of our intricate biological structure.

That doesn't tell us much about the limits of models trying to emulate that to some level of accuracy. It just tells us it's complex. But we can create complex models.


"seems to be a product of our intricate biological structure"

How do you know?


"seems". I guess that's the most resonable assumption. What else ?


Robots don't actually need to be creative; robots only need to approximate creativity well enough that humans can't tell. After that point robots will always look creative even if they're not. The weight of a little randomness and a lot of brute force will do the rest.


> Robots don't actually need to be creative; robots only need to approximate creativity well enough that humans can't tell.

It's at least up for debate whether these are actually two different things, and therefore whether what you're describing is actually any easier.


We're kind of on the edge of this with algorithmically-optimised clickbait headlines. There's still a human in the loop at the moment, but the more metrics-driven the process is the more likely parts of it are to be automated.


That's actually a nice idea for a satire (and what can be more satirical than real life?). Do you know of any artificial headline generators? If not, it's high time we built one. I bet we could make cartloads of money with it.


You could replace the word "creative" with the word "intelligence" in that statement.


Humans only need to approximate creativity well enough that other humans can't tell.


Just our good luck that some of us are intelligent enough to pull that off :)


Quit bragging. Hmph.


I beg to differ. The hallmark of real intelligence / creativity will always be that they are eventually unforeseeable and uncontrollable. If you build something like that, you better have some “ethical subroutines” in store.


The hallmark of real intelligence / creativity will always be that they are eventually unforeseeable and uncontrollable.

Imagine three different computers. Each one is an identical black box connected to a printer, and every 5 minutes it prints out an entirely new work of art that, subjectively speaking, is quite nice to look at. You are told the following:

1. One computer is genuinely intelligent in exactly the same way as a human is. The clever engineers have replicated human creativity perfectly and given it the personality of a competent artist. It 'thinks up' just one print and prints it each time.

2. The next computer generates 100 different pieces and then pushes each of them to Amazon Turk for evaluation by 100 different people. It prints the one with the highest score.

3. The last computer has a set of reasonable rules based on the sum of knowledge from Harvard University's Art Society. It uses those rules to pick the best picture out of 100,000,000,000 different pieces it generates completely at random for each print.

How can you tell which is which?

The point being that until you see the result, there is no way to know. You have no control over the 3 boxes and you can't see what's happening inside. As far as you're concerned, each computer could be any of the different types. They would all be creating a new work of art each time; only one would be "creative" in the human sense, but all three are creative in the sense that they're creating, appraising, and choosing a new solution.


You're imposing a set of rules and just assume an intelligent entity would heed them. My point is, if you're dealing with true intelligence / creativity, you can't be sure a priori which rules it would play by. After all, creativity is the antithesis to predictability.


Consider the following definition for a Technological Singularity (TS):

A TS is an event that occurs when AI advances to the point where humans can't keep up with understanding and/or predicting its decision-making process and/or the results thereof.

Using this definition, it would appear that, like physical singularities (black holes), a TS can occur on a large or small scale (micro black holes can pop in and out of existence, with little to no effect on the macro world). So, let's say we develop an AI that can teach itself to play Go. After a while, not even the smartest humans can beat it. Indeed, the smartest humans can't even understand* why it plays the way it does. If this counts as a TS, where does creativity come into play?

*(Something similar has happened before, but it was discovered that the neural network was using physical electrical effects that occurred in the actual hardware when certain pieces of code were run. When a human tried to analyse the actual code, it made no sense.)


Anecdote on that same line of thinking:

Ari K. Jonsson made software for the MER rovers that planned and scheduled the science done each day. It basically bookkept the constraints on operations of the rover (this joint has to be this hot before moving, which requires energy; moving said joint is a prerequisite for taking a picture of rock X). The geologists would take data from the previous day and make a list of all the things they wanted to do (sample that area, picture this rock, etc.) and the planning software would put up a gigantic Gantt chart that humans would then mess with to get as many "science points" out of the hardware for that day.

Usability surveys after the project was quite mature revealed that the full-on "auto-plan" was not used, since the human engineers didn't trust plans they did not understand. They saw that more net science was being done that way, they saw that all the constraints were being met (no equipment was being jeopardized by the plan itself); they just didn't trust the "machine intelligence".

Another striking thing he revealed about that project: the humans using the software didn't really think they were "using the AI", since they didn't use the auto-planner (even though the AI was working very hard maintaining constraints).

Dr. Jonsson was the Dean where I graduated. I'll go listen to his Mars experience any time I know he's speaking about it. I feel fortunate to have had him as a teacher.


You've selected the phrase "a Technological Singularity" and stretched the metaphor to create your definition.

"The Singularity" is what most people are discussing when this phrase is used. That requires something more encompassing than small occurrences, and would have society-wide impact.

Personally, I don't think it will actually occur. By "it" I mean the point where individual humans are eclipsed by a technological gestalt beyond ordinary human comprehension. This is my opinion, but I believe economic factors will retard technological progress enough that "The Singularity" cannot occur. Our society will either tear itself apart, or the disparate technologies will be so fragmented and incompatible as to not come together as a whole.

For examples, I'd cite the space program and the current state of computer operating systems.


> You've selected the phrase "a Technological Singularity" and stretched the metaphor to create your definition.

If you consider the example I gave and expand it to other fields, it's the same thing. As I said, it's analogous to black holes.

My point is, how does creativity fit into all of this?


It isn't the same thing. But that's just argument.

Let's go with the new subject of how creativity fits in. Creativity allows the expansion of a system of axioms through perceiving possibilities not permitted within the system. This allows escape from "incompleteness" [1].

Of course that is not all creativity does, but it is fairly big deal as it leads to what we tend to call "understanding" or "comprehension". Knowing how to calculate the next number in a sequence, and comprehending that the next number will always be the same as the previous (divide 1 by 3 and express as a decimal number) requires multiple levels of observation.

We've been able "teach" machines discoveries of that nature that we've already made, but we haven't really been able to generate the capability. Take the case of those evolved neural net solutions that took advantage of the physical nature of the hardware to optimize a detection circuit. The optimization could only occur because there was a suitable system-external test to drive optimization. While it is feasible to allow the combination of the physical world and a definition of "survival" to serve as a suitable test for machines, the result would merely be machines that "survive". This would not be "The Singularity" of machines that out-think us, this would be the "gray goo" scenario of machines that devastate our civilization and merely replace us on the top of the food chain.

My point is that there needs to be a way for the machine to generate its own tests. This, in large part, is comprehension, driven by creativity. Granted, I say all of this firmly embedded among the laity.

[1] http://en.wikipedia.org/wiki/G%C3%B6del%27s_incompleteness_t...


>Something similar has happened before, but it was discovered that the neural network was using physical electrical effects that occurred in the actual hardware when certain pieces of code was run.

Link?


It seems that evolution had not merely selected the best code for the task, it had also advocated those programs which took advantage of the electromagnetic quirks of that specific microchip environment. The five separate logic cells were clearly crucial to the chip's operation, but they were interacting with the main circuitry through some unorthodox method-- most likely via the subtle magnetic fields that are created when electrons flow through circuitry, an effect known as magnetic flux. There was also evidence that the circuit was not relying solely on the transistors' absolute ON and OFF positions like a typical chip; it was capitalizing upon analogue shades of gray along with the digital black and white.

http://www.damninteresting.com/on-the-origin-of-circuits/
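
(The technique behind this is a plain genetic algorithm. A minimal generic sketch; this is not Thompson's actual setup, since his fitness function was measured on a live FPGA, whereas here it is just similarity to a target bit pattern:)

    import random

    TARGET = [1, 0, 1, 1, 0, 0, 1, 0]   # stand-in goal (e.g. "distinguish two tones")

    def fitness(genome):
        # Thompson's real fitness was measured on live hardware; this
        # stand-in just counts matching bits.
        return sum(g == t for g, t in zip(genome, TARGET))

    def evolve(pop_size=50, generations=200, mutation_rate=0.05):
        pop = [[random.randint(0, 1) for _ in TARGET] for _ in range(pop_size)]
        for _ in range(generations):
            pop.sort(key=fitness, reverse=True)
            parents = pop[: pop_size // 2]            # selection: keep the fitter half
            children = []
            while len(children) < pop_size:
                a, b = random.sample(parents, 2)
                cut = random.randrange(1, len(TARGET))
                child = a[:cut] + b[cut:]             # single-point crossover
                child = [g ^ (random.random() < mutation_rate) for g in child]  # mutation
                children.append(child)
            pop = children
        return max(pop, key=fitness)

    print(evolve())   # almost always converges to TARGET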


This sounds fascinating!

Have these results been reproduced? And if so by whom? I'm looking at the Wikipedia article (https://en.wikipedia.org/wiki/Evolvable_hardware), and it contains no citations.


According to this presentation, several people have discovered similar results, albeit in other areas:

http://www.cartesiangp.co.uk/papers/gecco2004-tutorial-mille...

• Gordon Pask - Ferrous sulphate
• Adrian Thompson - silicon
• Adrian Stoica, Didier Keymeulen, Riccardo Zebulum - silicon
• Huelsbergen, Rietman and Slous - silicon
• Derek Linden - reed switch array
• Paul Layzell and Jon Bird - silicon
• Simon Harding and Julian Miller - Liquid Crystal


> A TS is an event that occurs when AI advances to the point where humans can't keep up with understanding and/or predicting its decision-making process and/or the results thereof.

I always thought the AI Singularity moment was when an AI was advanced enough to improve itself, or to write a smarter AI.

It would then lead to a feedback loop that would create incredibly intelligent AIs very quickly.

I don't like your definition at all, as it's already been passed multiple times (like in your Go example).


> It would then lead to a feedback loop that would create incredibly intelligent AIs very quickly

That's exactly what I said; I just limited the domain. Consider for example a superintelligent AI constrained to the laptop on your desk. It has no network connectivity, but it can teach itself a billion times faster than any human. Lock it in a room and come back the next day. You observe that the battery died. Did the Singularity (by your definition) occur?

EDIT: doh, it actually should have network connectivity to train itself, or maybe some offline source of data.


No, probably not, because a hyperintelligence would not be constrained by your closet or your forgetting to plug in the laptop. Especially with a network connection, it should have no problem breaking out.


> Especially with a network connection, it should have no problem breaking out.

You're just assuming that. Why would it have no problem? Because of reasons we can't understand? In that case, "hyperintelligence" is equivalent to saying "omnipotence" in this regard, and as such we can easily dismiss it as an option.


As an example, it could manipulate frequencies of its hardware to broadcast signals (like using monitors to broadcast on FM) and entice people to connect it.

With a network connection, this is straightforward. It could hack servers for computing resources. It could do work (say, as a camgirl) for more money and hire meatspace resources if needed.

A nice thought experiment is to assume an intelligence has a 128Kbps Internet connection - what real limitations does that impose?
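
(For scale: 128 kbps is 16 kB/s, which works out to roughly 1.4 GB of data per day.)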


Excuse me, this is headdesk-worthy clickbait material.

wham Cognition! wham Is! wham Lawful!

If you can scientifically characterize human creativity, then you can program an algorithm to behave creatively. If you believe you cannot program an algorithm to behave creatively, then this is because you don't understand creativity as a cognitive function.

Why is it that as soon as someone says "AI" everyone turns off their normal scientific/naturalist worldview and starts going yippity-skip-de-doo in fairyland!?


>Why is it that as soon as someone says "AI" everyone turns off their normal scientific/naturalist worldview and starts going yippity-skip-de-doo in fairyland!?

Despite claiming to be rational, people are still uncomfortable with the idea that they're meat machines.


Well what else do they expect to be? If non-physical souls existed, they would have to work on some principle too. Reality always bottoms out somewhere.


> We can build a classifier that would look at lots of pairs of successful movies and do some kind of inference on it so that it could learn what would be successful again. But it would be looking for patterns that are already existent. It wouldn’t be able to find that new thing that was totally out of left field.

Just a baseless assertion without any evidence.

There is no reason to assume you couldn't build a system that emulates whatever the human brain is doing there.


I can think of one: you can't simulate an atom.

If you read too much into supercomputing and think that they can simulate atoms, then go tell CERN to stop searching for sub-particles and simulate them instead.

Otherwise, I think you know what I mean. Besides, a real system depends on initial conditions. You can't simulate those.


You are confused about the order in which these things happen:

1. We form a hypothesis about how something works.

2. We do experiments to try to falsify said hypothesis.

3. If a certain amount of experimentation fails to falsify the hypothesis, we conclude tentatively that the hypothesis is a correct model of reality, namely, we promote it to a theory.

4. We use that theory to simulate the real thing computationally.

(4) is the whole point of doing steps (1) to (3) - and all VMG is saying is that there is no reason to assume that (1) to (3) couldn't lead to (4) with regards to the brain, just as there is no reason to do so with regards to atoms, which in turn is why we do operate CERN instead of just asserting that atoms cannot be understood.


Yes, you are correct in 1-4, but I think your hypothesis is wrong wrt simulation:

You don't know the initial conditions, which are a _huge_ part of determining the outcome of the simulation. Maybe you will limit your precision to the Planck scale. Can you measure with that precision?


I don't know the initial conditions of what?


Now this question clarifies why you read too much into simulating reality:

http://en.wikipedia.org/wiki/Initial_condition

look for nonlinearity

EDIT: You can't simulate a brain without considering the quantum effects at atomic levels. You can't simulate quantum effects. QED.


> You can't simulate a brain without considering the quantum effects at atomic levels.

That's wild speculation. At the very finest level of detail, we can simulate a brain in this way. At the very coarsest level of detail, we can simulate it as a thermodynamic heat bath. The correct level of detail for the emergence of intelligent behaviour is likely somewhere in-between.

> You can't simulate quantum effects.

What? Of course you can! Classical computers take an exponential amount of time, but it's still a finite problem. The whole field of Computational Chemistry is based on simulating quantum effects!


"The correct level of detail for the emergence of intelligent behaviour is likely somewhere in-between."

I think this is your speculation. :)

"Classical computers take an exponential amount of time"

It is not a matter of the time it takes to simulate, but of the details. You don't know which details matter and which don't. You don't know if you need infinite precision or whether you can stop at the Planck level. Thus, since you don't know a lot of things, you can't be sure that what you simulated was indeed the real thing or something that you imagined/theorized to be the real thing.


"Thus, since you don't know a lot of things, you can't be sure that what you simulated was indeed the real thing or something that you imagined/theorized to be the real thing."

That's a useless distinction, as there is nothing where you could "know the real thing"; you _always_ work with theories, without exception. When you take your first step out of bed in the morning, you don't know the _real_ behaviour of the floor (it's made of atoms, after all); you "only" use a relatively high-level theory of solid materials to predict that it will support you before you step onto it. There is no guarantee that that will work out, but there is fundamentally no way to do better than that. All of science works that way: _everything_ we know about the world is "a theory", even atomic theory, even quantum theory. It's all about modeling as best as we can. None of it is "proven to be the real thing", and no scientist ever even tries to prove something to be "the ultimate reality"; all that matters to science is to make models more precise, to figure out what those models seem to be sufficiently precise for, and then use them, which we do very successfully indeed.


> I think this is your speculation. :)

True, but I'm hedging my bets ;)

> You don't know if you need infinite precision or whether you can stop at the Planck level.

Objectively: we know that infinite precision computation is no obstacle, since we can just do everything symbolically in proof-space rather than numerically in value-space. Whether we can call that "simulation" is a matter of semantics, but it's "only" an exponential slowdown.
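
(A two-line illustration of value-space vs. exact computation, in Python:)

    from fractions import Fraction

    print(0.1 + 0.2 == 0.3)   # False: finite-precision floats accumulate round-off
    print(Fraction(1, 10) + Fraction(2, 10) == Fraction(3, 10))   # True: exact rational arithmetic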

Empirically: we know that the Planck-level interactions have no observable effects on macro-scale phenomena. If they did, we wouldn't need to build particle accelerators! Brains do not exploit particle physics for their abilities; there is no point simulating at any scale lower than up/down quarks + electrons + photons. They don't even exploit nuclear physics; there's no point simulating at a scale below the atomic. Brains do exploit chemistry, so an atomically-precise simulation may be justified.

Of course, that's only if we want a faithful simulation. There's nothing to say that protein-level simulations won't be perfectly fine, or even cell-level, if the relevant lower-level effects can be approximated. Some current approaches operate at the level of mini-columns, and work well on standard AI tasks like computer vision.

These coarser-grained simulations are less precise than the finer-grained ones, but all that matters is that they exhibit intelligent behaviour (however we're defining it). Even if some atomic-level quirk were being exploited by biological brains, we may still be better off without it. Again, it's all a matter of resource usage (intelligence per CPU-second/GB/dollar/etc.). A coarse-grained simulation might not be as intelligent as a fine-grained simulation, but it will run much faster; instead of spending more resources to run the fine-grained simulation, why not spend those same resources running the coarse-grained simulation at faster than real-time, with a larger virtual brain?


"You can't simulate a brain without considering the quantum effects at atomic levels. You can't simulate quantum effects. QED."

One of the neat side effects of using QM to counter determinism is that you go directly from "everything you do is predetermined" to "everything you do is essentially random". Cf. Bell's theorem.

https://en.wikipedia.org/wiki/Bell%27s_theorem


The conclusion is that you need the quantum "layer" to get an intelligent machine.

Not the simulation of quanta. That could be much easier than trying to simulate everything. Google tried to use D-Wave quantum computers for pattern matching in Google Glass.

But there is debate over whether D-Wave computers are really quantum computers; some say they are not.


You might not need the quantum layer; then again, it might be indispensable. The thing is: you don't know. And without considering all the effects you have at atomic and subatomic levels, you can't say that you simulated a brain. All you can say is that you simulated something that looks like a brain. Which is completely different.


I wasn't asking for a random example of a system that I don't know the initial conditions of, but what specifically you meant when you tried to construct an argument for the impossibility of emulating a brain.


Simulating subatomic particles is a huge part of the work done at CERN. That's how they predict what signals to expect from their experiments.

They can't run simulations instead of their experiments, since they're looking for new fundamental Physics; in other words, they're trying to find out which of their simulations describes the world best.

AI research isn't looking for fundamental results; we have a good theory of computation, and a good-enough theory of Physics to describe our brains. We know which simulation to run. The remaining problems are 'only' to do with resource usage: which computations/arrangements-of-atoms can give us good-enough results in short-enough time?

If we could efficiently run O(2^n) algorithms, AI would be unnecessary; we could just brute-force everything (including brain emulation).


Exactly. Because "they can't run simulations instead of their experiments", you can't simulate an atom, and thus not a brain.


You can simulate atoms and you can simulate the brain. The outcome of CERN's experiments has nothing to do with brain emulation (other than the fact it's pushing hardware innovation forward).

The reason CERN has built a 27km, 14TeV, super-cooled, superconducting magnetic particle collider is because we know so much about fundamental Physics that it takes something really crazy, like the LHC, running billions of collisions, in order to find something which we might find surprising. So far, it's not surprised us; although it has confirmed the ideas Peter Higgs developed 50 years ago.

Of course, there is still debate and surprise to be had in other, non-fundamental areas; yet these are studying the consequences of the fundamental laws (they're debating the output of a simulation, not its code).


Is there anything physical we can simulate that does involve atoms?


Is that a trick question? The answer is: it depends on what you consider a satisfying result.

You can stop higher than atoms anytime you think the result satisfies your experimental measurements.


No, I am simply trying to understand your argument.

It seemed to me that your argument was "There are details of how an atom works that we currently don't know how to simulate. A brain's behaviour is a result of the behaviour of the atoms it consists of. Therefore, it's impossible to simulate a brain to any degree of accuracy at all."

Using the same logic, and given that most everything we interact with is made of atoms, one could deduce that it's impossible to simulate anything at all, except for electron beams, maybe. That seems to me to be obviously in contradiction to how well we can navigate and manipulate the world in our day-to-day lives (let alone in engineering), which depends on us simulating the world all the time, at least to some degree.

Now, I'd be interested in understanding how you reconcile that.


Yes, this is my argument. But understanding day-to-day things doesn't need atomic-level details. For most things your measurement precision matches a model at a much higher level. You can stop at Newton's or Einstein's equations whenever that fits your view of reality.

People use that to extrapolate in the other direction and assume that because Newton's laws can be simulated, you can simulate an atom, and thus everything. We don't have a view of reality at subatomic levels that says "this is it and there's nothing beyond it". You don't know if your simulation of a brain will show emergent intelligence when simulated with 'float', 'double' or 'infinite'.


OK, and could you now explain how you start from not knowing what it takes to emulate a brain and deduce from that that it's impossible to emulate a brain? Your original argument was a supposed reason why emulation is in fact impossible, not why it might later turn out to be impossible - which is why I (and others) disagree, and which you still haven't established, as far as I can see.


You humans can't appreciate the dance of robots.


I think at least one side of creativity can be summed up as "producing new combinations of things we already know". In this context art would be more than creativity: a means of suggesting new, unexpected combinations of ideas in the minds of others. There is a social sharing side to this equation. This, in my opinion, is what AI won't get soon, as it requires embodiment, and more specifically human embodiment. It is already difficult to communicate with other animal species that share many biological structures with us, and thus ways of experiencing the world. How could it be easy to make a machine that produces meaning, as in combinations of ideas that make sense in the context of human experience?

What will save us is the building of machines that will collect, store, process and repurpose meaning in a meaningful way (no pun intended). Like linking pieces of data to emotional states. Yet they won't get it.


On a related topic - I can recommend the new movie Ex Machina:

http://en.wikipedia.org/wiki/Ex_Machina_%28film%29

About the only thing I feel safe commenting on without fear of spoilers is where the outdoor scenes were filmed - Norway - which looked stunning.


Computers have already passed a domain-specific Turing test by composing music that is as good as human-composed music [1]. The phrase "robots can't [do X]" should always be suffixed with a "yet". There was a paper a few years back demonstrating a system that could compose an image based on text by finding images of the desired objects and compositing them together. It's just a step away from a system that can generate paintings in the style of great artists based purely on a text description.

[1] http://www.psmag.com/books-and-culture/triumph-of-the-cyborg...


I've studied with Dr. David Cope, the guy you referenced as having proved that "computers have already... [composed] music that is as good as human composed music," and I've looked at the code in question. He's achieved something remarkable, sure, but nothing which even remotely validates your claim.

Emily Howell (the software in question) mostly just mixes and matches fragments of existing scores, normalized by key signature. There's a bit more to it than that, but nowhere near enough to justify a claim like "computers out-compose human musicians." It doesn't do much more than Beethoven's own experiments in aleatoric music in the early 1800s.

Edit: ok, that last sentence re Beethoven's a little unfair, but imagine Beethoven with modern processing power and you're not far off.
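
To give a flavour of what "mixes and matches fragments, normalized by key signature" means, here's a cartoon sketch (mine, and nothing like the real code):

    import random

    # Fragments are (pitch sequence in MIDI note numbers, tonic of their key).
    fragments = [
        ([62, 64, 66, 67], 62),   # fragment in D
        ([67, 69, 71, 72], 67),   # fragment in G
        ([60, 64, 67, 64], 60),   # fragment in C
    ]

    def normalize(notes, tonic, target_tonic=60):
        # "Normalized by key signature": transpose every fragment into C.
        shift = target_tonic - tonic
        return [n + shift for n in notes]

    pool = [normalize(notes, tonic) for notes, tonic in fragments]

    # "Compose" by stitching randomly chosen normalized fragments end to end.
    piece = [note for _ in range(4) for note in random.choice(pool)]
    print(piece)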


Have blind tests been performed, where musicians couldn't distinguish between music composed by computers and music composed by humans?


Yes, and the software did quite well, but see my post above for caveats.


"It would instantly generate all possible combinations of movies and there will be some good ones. But recognizing them, that’s the hard part."

By his definition, the vast majority of people are not intelligent.


>> “No, you’re missing that a fundamental aspect of intelligence is experience and that requires embodiment.” He knew that to understand the world you needed to be inside the world, you needed to experience its behaviors and responses to you. Well, he was right. We may be making progress in being able to do things like recognize a cat in a photograph. But there’s a huge gulf between that and doing something creative.

I'm not sure if I agree with this, but it is a compelling argument. Could experience be simulated the first time, hence bootstrapping the AIs?


There are tons of people working on giving robots the ability to collect experience through embodiment.

It may be a compelling argument for why the various timelines thrown around for the singularity will be off, but it's a speed bump, not a road block.


So the argument has gone since 1960.


By the standards of 1960 we already have AI. After all, computers can beat the best humans at chess.


A chess board is a closed system with fixed rules; as far as I know, you only need a lot of computing power to apply the minimax algorithm to solve any chess game.

In a chess game there are no probabilities. AI is about independently recognizing patterns in noise and developing assumptions out of them. The trick human brains apply here is called intuition.
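
(For reference, the bare minimax recursion is only a few lines; a generic sketch, with the game-specific parts left as callbacks:)

    def minimax(state, depth, maximizing, moves, apply_move, evaluate):
        # Generic minimax over any two-player, perfect-information game.
        # `moves`, `apply_move` and `evaluate` are game-specific callbacks.
        legal = moves(state)
        if depth == 0 or not legal:
            return evaluate(state)
        if maximizing:
            return max(minimax(apply_move(state, m), depth - 1, False,
                               moves, apply_move, evaluate) for m in legal)
        return min(minimax(apply_move(state, m), depth - 1, True,
                           moves, apply_move, evaluate) for m in legal)

Everything else in a real chess engine (evaluation functions, alpha-beta pruning, opening books) exists to tame the combinatorial explosion of this recursion.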


That wasn't what people said in 1960 - as soon as AIs solve a particular problem we redefine intelligence to mean something different. Modern AIs (e.g. hypothesis generation toolkits) can be better at recognizing patterns in noise than humans. I bet in 20 years' time we'll see that - and we'll be having this same conversation about how pattern recognition isn't really intelligence.


by some definition of AI


The singularity is a linear projection of computing power, and in this projection anything that questions the singularity is proactively ignored. What puzzles me most is why it is ignored. For me it does not make sense to hope that just the amount of simulated neuronal complexity will be enough, that suddenly out of the complexity something intelligent emerges. The whole approach is flawed. Something very essential is missing: a proven model of how brains work and why they work, down to the last quantum state.


Is it really necessary? As far as I know, the theories about generating lift were still being heavily debated long after the Wright brothers flew. People often simply try things until they work, then go back and try to understand why they work.

Having a decent theory helps, but I'm skeptical that a complete understanding of the issue is required to make it work.


The difference is that the plane did fly without the understanding, after little research, but we have been trying to make machines intelligent for decades.

Something is missing, imo.


Little research? We had been trying to make heavier-than-air machines for centuries, if not millennia. Even the specific concept of the modern airplane as a fixed-wing flying machine with separate systems for lift, propulsion, and control was put forward in 1799, more than a century before the Wright brothers' flight. If we talk about flying machines in general, the bamboo-copter[1] is 2400 years old.

[1] http://en.wikipedia.org/wiki/Bamboo-copter


It was my intent to ignore all attempts to fly before the Wright brothers.

It is the same here: we have the wish to make intelligent machines, but we may lack an engine to do it. Besides the airplane design, it also took the availability of powerful engines and other advances to take to the skies.

(I did read the Wikipedia article.) In that sense, I'm afraid we are more at the stage of Da Vinci's concepts than of an aeroplane.


The whole point of the Singularity is that the technology will progress to a point where AI techniques will advance to produce something that is creative and intelligent enough to advance itself.

Obviously, we haven't hit that point yet - so we don't know what those advances are.

Before penicillin was discovered, doctors couldn't conceive of curing many serious and deadly infections.


Quite. I have no time for exaggerated singularitarian nonsense about strong AI being inevitable in a few decades. We don't even have the primitive fundamental concepts required to even begin to outline the actual design of such a thing. Therefore any attempt to estimate a timeline for its development is a blind guess.

On the other hand, just because we can't design or build one now doesn't mean we never will. Lord Kelvin was clearly wrong that heavier-than-air craft were impossible, because birds exist: they are physical, mechanical systems that are heavier than air and yet fly. Therefore such systems are self-evidently possible. So it is with strong AI. Physical systems that exhibit human intelligence exist - us. Therefore physical systems like us are possible.

Here again the example of birds is instructive. Birds fly, but constructing a machine that flies in the same way as birds is incredibly hard. Far, far harder than building rockets, propeller-driven planes and even jets. There's no law of the universe that says our first strong AI will be designed along the same architectural lines as the human brain, or that its performance envelope will be similar to ours. At this stage, as the OP says, we don't know.


Joke: maybe you need a quantum singularity to get the intelligent-machine singularity...

No, I do not think we have black holes in our brains, and no, we are not connected by wormholes... but it is a nice idea. Like-minded people connected by tiny wormholes.


Nothing about the singularity specifically requires creative AI; indeed, a large part of it is enhancing the existing potential of humans, not necessarily replacing them outright. As it is, a title about AI and creativity without the gratuitous "singularity won't happen" would be more accurate, but so much less sensational and clickbait-y that it might not even justify its own publication, I guess.

Also, had to laugh at gratuitous misuse of "supercomputers".

Yet another layman grasping at concepts they don't understand... yawn. Such an interesting coincidence that someone who says "AI can't be creative" also happens to be an artist... Reminds me of Roger Ebert dismissing the potential of games vs. movies because he felt threatened by them. As it is, that is one of the grossest and most ignorant misunderstandings of the singularity: that people will somehow be marginalised or not valued, rather than the main point being the next logical step of tools that (post-)humans use for their own benefit.

Edit: Oh, look, that whole "...the greatest scientists are also artists." canard again. Why am I not surprised? There have been maybe two or three who were exceptional or even widely appreciated in both, which is indicative of a "renaissance man" proficient in many fields, not of an overall tendency/requirement. While many are skilled in nonscientific fields too, I would hardly call Hawking, Feynman or Dawkins an artist just because they were good at speeches, lectures or books, for example.

I would also remind everyone that predictions of the future are so often pessimistic...

"This 'telephone' has too many shortcomings to be seriously considered as a means of communication. The device is inherently of no value to us." -- Western Union internal memo, 1876.

"Heavier-than-air flying machines are impossible." -- Lord Kelvin, president, Royal Society, 1895.

"Airplanes are interesting toys but of no military value." -- Marechal Ferdinand Foch, Professor of Strategy, Ecole Superieure de Guerre.

(Apocryphal; thanks for pointing this out to me as I was not aware; left in for completeness' sake) "Everything that can be invented has been invented." -- Charles H. Duell, Commissioner, U.S. Office of Patents, 1899.

"No flying machine will ever fly from New York to Paris." -- Orville Wright.

"Professor Goddard does not know the relation between action and reaction and the need to have something better than a vacuum against which to react. He seems to lack the basic knowledge ladled out daily in high schools." -- 1921 New York Times editorial about Robert Goddard's revolutionary rocket work.


"Everything that can be invented has been invented." -- Charles H. Duell, Commissioner, U.S. Office of Patents, 1899.

Poor man, constantly libeled due to lazy book writers: http://en.wikipedia.org/wiki/Charles_Holland_Duell


Thanks; have added a disclaimer on that quote.


Where is your argument? Throwing together something that looks like a neural net and boosting it with statistical tricks is just not a brain.

If you just define something as "intelligent" without even trying to benchmark it against the original, well, then you can just as well call it a statistical experiment done with a lot of CPUs. I am sure Intel would be happy with that.


My singularity is certainly not exaggerated, but I, being a 100% non-robotic entity, can assure all of you that I can't dance at all.


Well, to address the title of the article, robot motion usually doesn't look very good for a known reason. The motion control systems used are usually positional. Most robotic control systems have a processor or PLC for each joint, and that processor usually accepts position goals, not force goals. Then there's some central coordinator issuing positional commands. This is simple to code, and many robotics frameworks have that hierarchical approach more or less nailed into them.

That hierarchical approach is not very good for dynamic motion. For that, you need force control and coordination in force space. I used to work on this; here's the first anti-slip control for legged robots, from 1995: (https://www.youtube.com/watch?v=kc5n0iTw-NU). That was picked up by a grad student at McGill, who put it into their running quadruped Scout II. Then his professor, Martin Buehler, left McGill for Boston Dynamics and became the head engineer on BigDog.

All the actuators on BigDog are run by one CPU, which is a Pentium 4 class machine running QNX. The balance servoloop runs at 100 Hz, and the hydraulic valve control loop runs at 1 kHz. This allows for coordinated force control across all actuators, which is why BigDog is so agile. The Atlas robot version 1 is basically a modified BigDog, although version 2 seems to have been redesigned above the hips, with onboard power.
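
(The structure described is roughly two nested rate loops; a toy sketch, where all the hardware I/O callbacks are hypothetical stand-ins, not BigDog's actual code:)

    import time

    def run_control(read_forces, set_valves, balance_step, seconds=1.0):
        # Two nested servo loops, as described above. The ~1 kHz inner
        # loop drives the hydraulic valves toward force targets; the
        # ~100 Hz outer loop recomputes those targets to keep balance.
        force_targets = None
        t0 = time.monotonic()
        tick = 0
        while time.monotonic() - t0 < seconds:
            if tick % 10 == 0:                 # 100 Hz balance servo
                force_targets = balance_step(read_forces())
            errors = [t - m for t, m in zip(force_targets, read_forces())]
            set_valves(errors)                 # 1 kHz valve servo (P-control on force error)
            tick += 1
            time.sleep(0.001)

The key point is that every actuator's command is recomputed from whole-body force state each tick, rather than each joint chasing its own position setpoint in isolation.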

The motion in the DARPA Humanoid Challenge looked so bad last time because most of the participants using the Atlas robot were using a Windows DLL provided by Boston Dynamics. That DLL was just intended to provide some basic functionality to get participants started. Functions provided included "walk slowly" and "stand stably while arms do something". They didn't have the running, balance recovery, or slip control capabilities Boston Dynamics put into Big Dog. Expect much better performance in round 2 next winter.

Until recently, most robotics simulators were hopeless about force accuracy or friction. Most of them used physics engines borrowed from video game technology, where nobody cares about force accuracy or friction as long as things blow up prettily. This was recognized as a problem by DARPA, and they funded Dr. Mike Sherman at Stanford to put a serious dynamics simulator into Gazebo. Sherman previously had a commercial company, Symbolic Dynamics, building dynamics simulators for industry, and did know how to get the dynamics right. So now you can simulate force-controlled robots in Gazebo.

(Unfortunately, it took two decades to get this right, so I've moved on to other things, after a detour through physics engines for animation.)

Anyway, that's why robots can't dance very well yet. That problem is being fixed.


Would human creativity pass a Turing test?


Forget dancing. Even simple human tasks like tying shoelaces are borderline impossible for machines.



