Ancient dreams of intelligent machines: 3,000 years of robots (nature.com)
38 points by Hooke on July 30, 2018 | 24 comments



The article’s final paragraph:

> "Every robot rebel has its benevolent counterpart, such as C-3PO in the Star Wars franchise or the android child David in Steven Spielberg’s 2001 film A.I. Artificial Intelligence. Both kinds of stories, the hopeful and the fearful, reveal to us our complex emotional responses to AI. Understanding these and their deep history is crucial to making the most of life with intelligent machines."

The authors use robots, automata, and AI (meaning AGI, I'm guessing) interchangeably, which makes for a confusing read. If they mean AGI, then nothing about how humans have dealt with robots is likely to prepare us for dealing with an exponentially fast-learning, super-smart intelligence. I would argue that learning from history would anchor us to the wrong ideas (ideas of incrementalism). To prepare for AGI, we need to think quite differently and radically, in some sense.


Indeed, I recommend Superintelligence: Paths, Dangers, Strategies by Nick Bostrom, which explores this topic in great depth, including the control problem: https://en.wikipedia.org/wiki/AI_control_problem


Thanks, I will read this. In general, the reasoning I see in this area is very poor. Since people have a hard time understanding their own intelligence, let alone sub- or super-intelligence, they just project all kinds of human foibles onto it.


The oldest actual automaton mentioned in the article is explored (including its mechanics and a demonstration) in this YouTube video: https://www.youtube.com/watch?v=5LBlusUD3Kg

Note that Hero is credited, with the date given as 10-70 AD, but Hero's contribution was to posit the mechanism by which the theatre worked; it was first described by Philon of Byzantium in the late third century BC! [Source: my thesis, submission deadline 28 September 2018 ;)]


3,000 years of imagination and it falls on this generation and our kids to define how it will all play out.


Born too late to explore the world

Born too early to explore space

But born just in time to explore the consequences of strong AI!


If you believe strong AI will arrive during our lifetimes, that should increase your expected likelihood of exploring space.


>> But born just in time to explore the consequences of strong AI!

There's nothing to suggest we know how to create strong AI, or that we will have any idea any time soon. It will most probably take several human generations before we can even get close to anything resembling human intelligence implemented in software or hardware.


What signals do you expect will be in place in the world five years before the creation of human-level general AI?

Dog-level general AIs barking around?


>> What signals do you expect will be in place in the world five years before the creation of human-level general AI?

That is a question lifted straight from the playbook of the Singularity Institute (MIRI, nowadays) and the like - "we will not know strong AI until it is upon us! We must act now!".

I can tell you what I'd consider a "signal" that strong AI is, well, not imminent but forthcoming: research into actually creating strong AI.

Currently, there is no such research to speak of, not at a practical level. For example, the majority of machine learning scientists are competing to see who can build the best-tuned classifier for this or that benchmark dataset. They are not even trying to, say, combine different classifiers into a system that can perform multiple classification tasks, let alone a system that can do that and also reason about the objects it's able to classify. All modern AI is as far from general intelligence as it can possibly be.
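
To make concrete how shallow "combining classifiers" would be even at its most trivial: here is a minimal, hypothetical sketch (scikit-learn, invented task names) that glues two independently trained classifiers together with a dispatcher. It "performs multiple classification tasks" only in the weakest sense - no shared representation, no reasoning about the outputs:

    # Hypothetical sketch: two narrow classifiers glued together by a dispatcher.
    # This is the ceiling of "multi-task" you get from naive combination.
    from sklearn.datasets import load_digits, load_iris
    from sklearn.linear_model import LogisticRegression

    digits, iris = load_digits(), load_iris()
    models = {
        "digits": LogisticRegression(max_iter=1000).fit(digits.data, digits.target),
        "iris":   LogisticRegression(max_iter=1000).fit(iris.data, iris.target),
    }

    def classify(task, x):
        # Dispatch on task name; each model only knows its own narrow domain.
        return models[task].predict([x])[0]

    print(classify("digits", digits.data[0]))  # -> 0
    print(classify("iris", iris.data[0]))      # -> 0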

Without any research focused on strong AI, how is it going to come about? The Singularitarians believe (emphasis on "believe") that it's just going to happen by accident, presumably when some unsuspecting graduate student serendipitously mixes just the right amount of data and just the right set of hyperparameters into just the right kind of deep neural net architecture.

It is plain to see that this is a fantasy grounded in pipe smoke and wishful thinking and not in any way, shape or form a realistic expectation to hold.


Just so long as nobody decides to build an AI by uploading a cat mind:

https://en.wikipedia.org/wiki/Accelerando


I think there are a few of those around already. The folks at Boston Dynamics seem to have a keen interest in creating dog-level general AIs with some pretty impressive robotics HW.


The Boston Dynamics robots don’t have the computational equivalent of even a fruit fly's intelligence - they fall short of that by many orders of magnitude. Dogs use their intelligence for an awful lot more than just walking around objects.

Dogs have complex social hierarchies and social behaviour, a wide range of emotional responses, and they communicate. They participate in group hunting activities that require planning and executing coordinated attacks, they exhibit sexual behaviour, and they can learn to respond to audible and visual commands. They are also vastly more agile than the Boston Dynamics robots - able to climb three-dimensional environments, jump through gaps and swim - and they can practice and independently choose to learn these things entirely autonomously.


Not sure I implied that the Boston Dynamics dogs have dog-level intelligence, but you have to admit they are making fascinating progress on the HW and robotics side. The SW/AI side is definitely still at a basic, single-function level of intelligence and will take much longer. But the progress on both HW and SW, and the pace at which it is happening, is pretty impressive. Clearly the current AI approaches, with their focus on machine learning and neural networks, will need to evolve to become much more multifunctional/general. Who knows - maybe it will be Machine Learning 2.0, or a completely new approach to AI more closely aligned with biotech and the human brain, or maybe something else entirely. Time will tell.


Having a theoretical plan for how to architect such a thing even in principle would be a pretty useful indicator.


What's wrong with "model and simulate the human brain"?


We don't know how to do that. Really. We have no idea how much detail the simulation has to have in order to work, and we don't have any way to actually capture the complete chemical state of a functioning human brain, while it is in operation, in order to prime the simulation.

None of our current or projected scanning technologies are up to the task. We could capture the state of a dead brain, but that won't get us anywhere, for obvious reasons. Hypothetical invasive techniques would kill the patient long before sufficient data could be collected. Hand-wavey 'nano-probe' solutions are sometimes suggested, but again we have no idea how to design or build such things, and in reality we'd need to capture complete, instantaneous snapshot data from every part of every cell of a brain to do it, which is absurd. It's no more valid than Star Trek technobabble.
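
For a rough sense of scale, here's a back-of-the-envelope calculation; every number in it is an order-of-magnitude assumption (the neuron and synapse counts are commonly cited estimates, the bytes-per-synapse figure is a pure guess):

    # Back-of-envelope: raw size of one instantaneous snapshot of synaptic state.
    NEURONS = 8.6e10           # commonly cited human neuron count (estimate)
    SYNAPSES_PER_NEURON = 1e4  # order-of-magnitude estimate
    BYTES_PER_SYNAPSE = 32     # pure guess: weight, timing, chemical state

    snapshot_bytes = NEURONS * SYNAPSES_PER_NEURON * BYTES_PER_SYNAPSE
    print(f"~{snapshot_bytes / 1e15:.0f} PB per snapshot")  # ~28 PB

    # Storing tens of PB is feasible in aggregate; *reading that state out of
    # living tissue in one instant* is the part nobody has any idea how to do.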


We are also very close to solving world hunger by "Feeding People".


In my defense, the bar to clear was "a theoretical plan for how to architect such a thing even in principle".

If that bar was hyperbole, then someone please give me the non-hyperbole version. I honestly don't know what level of detail such a plan is expected to reach as common knowledge shortly before its first success.


While I suspect frockington was being flippant, we are in fact close to getting rid of world hunger by feeding people, as technology provides more food and money.

>Today, more people die from obesity than from starvation (http://www.ynharari.com/book/homo-deus/)

Also, copying mechanisms from the human brain is being done with some success by DeepMind. We are not quite up to running a decent brain simulation yet, though.


True, but you could argue that this generation will define what the path to general AI looks like: ethics, laws, regulation, warfare... certainly in this generation we will see a huge impact on society from AI that doesn't require human-level intelligence.


>There's nothing to suggest we know how to create strong AI, or that we will have any idea any time soon.

There's the advance of hardware to the point where it may become possible to run a simulation of a human brain, for example. Also, as hardware reaches human levels, things like self-driving cars start working, which leads lots of bright researchers to pile into AI, which in turn leads to advances in the algorithms. That stuff is happening now, not in several human generations.


The hardware for a whole-brain human simulation is not the limiting factor - if we had the hardware right now, we wouldn't know how to program that simulation.
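
As a toy illustration of how little "having the hardware" buys you: below is a single leaky integrate-and-fire neuron with made-up constants. Even for this crudest possible model, nobody knows whether this level of detail is remotely sufficient, or whether you'd need ion channels, neurotransmitter chemistry, or something finer still:

    # Toy leaky integrate-and-fire neuron (illustrative constants, not biology).
    def lif_step(v, current, dt=1e-3, tau=0.02, v_rest=-65.0,
                 v_thresh=-50.0, v_reset=-70.0, r=10.0):
        # One Euler step of dv/dt = (v_rest - v + r*current) / tau.
        v += dt / tau * (v_rest - v + r * current)
        if v >= v_thresh:
            return v_reset, True   # spike, then reset
        return v, False

    v, spikes = -65.0, 0
    for _ in range(1000):          # one simulated second at 1 ms steps
        v, spiked = lif_step(v, current=2.0)
        spikes += spiked
    print(spikes, "spikes")        # ~30 spikes at this constant drive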

Of course, the Agile Manifesto says you can develop systems without having a requirements specification or detailed design beforehand, but it also says that questions about when it will be done are off-topic.


Spot on. Although by retirement age we might at least get to explore the Moon or Mars. But I'm not too happy about not being born a couple of centuries later to see what warp 9 feels like.



