Stanford to host 100-year study on artificial intelligence (stanford.edu)
127 points by bra-ket on Jan 9, 2015 | 38 comments



Headline from 2115: Stanford AI100 study results suggest a strong AI breakthrough a mere 15 years away


Indeed. On the flip side, "AI" is a moving target: people used to think mathematical proofs, chess, and Jeopardy required "intelligence", but we now know we just need software. I've heard people say "we've made no progress towards AI", but I think that's as hyperbolic as saying "it's right around the corner". Progress (in this case, using machine learning to build cool things) isn't an on/off thing. There are worse fields of study to enter than machine learning; you'll learn things that are broadly applicable. And if "real AI" does happen someday, it would be on par with the invention of flight or the moon landing, both of which sounded batshit before someone (well, a bunch of people) did it.


Reminds me of a German legend I recently read to my child, about Till Eulenspiegel. He traveled around in the Middle Ages and pranked people. Once a couple of professors challenged him to teach a horse to read. He immediately said "of course I can do it, but since horses are not very smart, it will probably take 20 years". His thinking was that within 20 years the horse might die, or the professors who challenged him might die, so he would be fine.


Somewhat cynical question, but is the 100 year aspect of this study anything beyond a marketing gimmick?


Variables affect the world in long time frames. The influential Harvard study on human happiness[1] could only have been conducted over the course of a lifetime. Check out The Long Now[2] for a group of people evaluating variables at the 10,000 year time scale.

[1]: http://www.huffingtonpost.com/2013/08/11/how-this-harvard-ps... [2]: http://longnow.org/


Even if the study doesn't live as long as its name suggests, a clear mission is useful because it guides people's decision making. A researcher in this study should be optimizing her efforts for the 100-year timescale, not for the ~5-year one.


But I'm not sure how calling it a 100-year study really changes that. Sure, it might be better than a 5-year study, but is it better than one that is simply ongoing? Or how about opening a specialized department focused on this type of thing? 100 seems like an arbitrary number chosen only to make this seem important. "Stanford to host ongoing study on artificial intelligence" is just a less interesting headline.


I don't see how that changes the individual researcher's incentive to make achieving short term, publishable discoveries a priority.


One concrete example would be data acquisition. Long term, large sample panel data sets take decades to produce, but are incredibly valuable if well designed / executed.

If you are applying for funding as part of a 100 year study, you won't get continued funding unless you put in the effort up front to design the data acquisition correctly.


Another is what a project can offer its personnel. A four-year project will offer almost exclusively temporary jobs.

A hundred-year project can offer more permanent positions. That creates less 'publish or perish' pressure. It may also attract scientists with different character traits, who wouldn't be able to compete in the rat race to tenure.


What about Stanford isn't a marketing gimmick? I recall signs along 101 saying something meant to be condescending, like "you too could get a degree from Stanford", if you paid to do an n-year master's in whatever. Sounded vacuous.


The study's final authors may not be human. As such, they will clearly be biased.


Well, humans aren't unbiased either.


Reminds me of a symposium I attended at Stanford in 2000 titled "Will Spiritual Robots Replace Humanity by 2100?"[1][2].

(At the time, Bill Joy had written a provocative cover story for Wired headlined "Why the Future Doesn't Need Us: Our most powerful 21st-century technologies - robotics, genetic engineering, and nanotech - are threatening to make humans an endangered species"[3] and this panel was assembled in response.)

Glad to see that the backers of this new initiative agree with the more nuanced folks on that panel, like John Holland--that these will continue to be meaty questions to consider, both in computational and philosophical terms, well past 2100.

[1] http://news.stanford.edu/news/2000/march29/robots-329.html [2] https://www.youtube.com/playlist?list=PLvW5zob1PPbbFUZK_LdzU... [3] http://archive.wired.com/wired/archive/8.04/joy.html



The headline in 2040: 100-year Stanford AI study to add first Android professor.


Someone behind this study clearly has little faith in the imminence of AGI.


Well, it's been imminent since the summer of 1956 ( http://en.wikipedia.org/wiki/Dartmouth_Conferences )


Or, in other words, those people are expecting no AI for about half that time.

Never mind Moore's Law putting computers more capable than our brains less than 30 years away.


Moore's Law is about transistor density, not computing power, so no, it won't.

We could probably already build a computer more powerful than the human brain, it would just be huge (millions of cores or something). But that wouldn't help, because the real issue is that the Von Neumann architecture fundamentally prevents scaling the kinds of computations we want to do for neural networks. We need something more neuromorphic, although probably still discrete. (I'm guessing basically just a giant DSP integrated into memory on the same core.)
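
To put very rough numbers on the memory-traffic side of that argument, here is a back-of-envelope sketch in Python. All figures are my own order-of-magnitude assumptions (synapse count, update rate, per-channel bandwidth), not numbers from the parent comment:

    # Back-of-envelope sketch of the Von Neumann bottleneck for a brain-scale
    # network: every synaptic weight has to be streamed from DRAM on every
    # update. All numbers are assumed orders of magnitude, not measured facts.

    SYNAPSES = 1e14            # commonly cited rough synapse count for a human brain
    UPDATES_PER_SECOND = 10    # assumed average update/firing rate, in Hz
    BYTES_PER_WEIGHT = 4       # one 32-bit weight per synapse

    traffic = SYNAPSES * UPDATES_PER_SECOND * BYTES_PER_WEIGHT   # bytes per second
    print("weight traffic: %.0f TB/s" % (traffic / 1e12))        # ~4,000 TB/s

    DRAM_CHANNEL_BW = 25e9     # ~25 GB/s per commodity DRAM channel (2015-ish)
    print("DRAM channels needed: %.0f" % (traffic / DRAM_CHANNEL_BW))  # ~160,000

Under those assumptions the weights alone would need thousands of terabytes per second of memory bandwidth, which is why the compute arguably has to sit next to (or inside) the memory rather than behind a conventional bus.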



Nobody thinks Moore's law will last another 30 years.


Nobody thought it would last this long, either.


See this graph:

http://www.extremetech.com/wp-content/uploads/2013/08/CPU-Sc...

The green line was the only one still going, and it plateaued about a year ago (you'd see that in a newer graph).


The green line is the only one that Moore's law is about. Power consumption plateauing is a good thing, clock speed was never going to be the primary driver of performance increases long-term, and instruction-level parallelism (green line) is not a measure of performance-per-clock-cycle.


Well, I think it's already been dead for a good 3 or 4 years. It just hasn't started stinking yet.

But that's no reason to think that it'll take more than 100 years.


There's plenty of room at the bottom.


Once upon a time, men said that AGI was imminent despite the fact that they had no computational model whatsoever of human cognition, and no limitative results either.

Now, we have increasingly good computational models of human cognition, with abundant limitative results on both human and mechanical reasoning, but only madmen believe AGI will arrive in their lifetimes.

Irony!


It seems to me it's more about the effects of AI on people and society than about AI itself.


Related to the impact of AI on society, especially with respect to rising unemployment: I was reading the comments on this MIT Tech Review article (http://www.technologyreview.com/news/533686/2014-in-computin...) and came across an opinion that gave me the shivers. Two points:

1) AI taking manual, workforce-based jobs. I can't help but see how beneficial the industrialisation of processes has been for humanity. Instead of relying on inaccurate human judgement for manufacturing jobs, we let machines produce perfectly uniform goods far better than we can. This has increased the reliability of the output and lowered prices, which has made products affordable to many more people. Jobs get more specialised, just like the tools human beings have developed throughout history. Once more, survival entails adaptation. And this is again a matter of supply and demand. In Spain, where the economic crisis is still hitting the markets and unemployment remains high, having a proper specialised education no longer guarantees landing a job (and it's not because evil robots are doing the tasks of the scientists who are leaving).

2) AI taking over engineers, lawyers, etc. AI is difficult per se. Nobody comes up with a human replica made of metal by chance. Things take their time, and improvements are gradual. That's a matter of fact. At present, AI (plus Machine Learning, Pattern Recognition...) delivers a set of tools that allow us to see farther, from the shoulders of giants. We have never before been able to digest the amount of data we can today. Isn't this progress? We haven't yet created a creative machine, and I don't see one coming any time soon.

I am so firmly convinced that AI has so much good to do that I just created a blog (http://ai-maker.com/) solely dedicated to AI and its applications, and I'm going to dedicate my spare time over the coming years to growing this side project into something awesome, because that's where AI is leading us.


It might interest you to join the LessWrong community, or read up on the work of MIRI (the Machine Intelligence Research Institute, https://intelligence.org/).

You can start by reading a few posts at the site: www.lesswrong.com.


The singularity or humanity will destroy everything within 50 years. If we make it another 100 years, I'll be pretty surprised. Old and surprised, that is.


Damn it. I am, like, NEVER going to finish this dissertation.


100 is a nice, round, even number. It's also a marketing ploy. The truth is, none of these guys can make an accurate prediction about what AI will be able to do in 20 years.


I don't know the researchers in question, but Turing's paper [0] was surprisingly predictive in 1950, decades before any of this came to light. It's not out of the question that they could set the direction of research for a decade or two into the future.

I agree that 100 as a number is a marketing ploy. But I like the idea of a sustained long-term mission, rather than the funding rat race and trend chasing you tend to see.

[0] http://www.abelard.org/turpap/turpap.php


> I believe that in about fifty years time it will be possible to programme computers with a storage capacity of about 10^9 to make them play the imitation game so well that an average interrogator will not have more than 70 per cent chance of making the right identification after five minutes of questioning.

He was mostly right, off by only some 25 years, or about 50% more than his estimate. A really good prediction for something that changed so fast. But the very next sentence:

> Nevertheless I believe that at the end of the century the use of words and general educated opinion will have altered so much that one will be able to speak of machines thinking without expecting to be contradicted.

That was quite off the mark.

Anyway, Turing's prediction was for 50 years in the future; that's orders of magnitude easier than 100 years out. And nearly all of the predictions from that era were completely wrong, so what makes you think these people are the ones of our time who'll get their predictions right?


Playing chess against Stockfish, there's a setting for "thinking time".
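
(For anyone curious, that knob is just the engine's move-time limit. A minimal sketch of setting it via the python-chess library, assuming a Stockfish binary named "stockfish" is on your PATH:)

    import chess
    import chess.engine

    # Minimal sketch: ask Stockfish for one move with a fixed "thinking time".
    board = chess.Board()
    engine = chess.engine.SimpleEngine.popen_uci("stockfish")

    result = engine.play(board, chess.engine.Limit(time=2.0))  # think for ~2 seconds
    print("Stockfish plays:", result.move)

    engine.quit()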


Wintermute



