Peter Thiel’s CS183: Startup - Class 17 - Deep Thought (blakemasters.tumblr.com)
90 points by r4vik on June 5, 2012 | 12 comments



Funniest part from the notes!

"And then there’s the probably apocryphal story about Columbus on the voyage to the New World. Everybody thought that the world was much smaller than it actually was and that they were going to China. When they were sailing for what seemed like too long without hitting China, the crew wanted to turn back. Columbus convinced them to postpone mutiny for 3 more days, and then they finally landed on the new continent.

Eric Jonas: Which pretty much makes North America the biggest pivot ever"


I got a good chuckle out of this one as well: "We may end up creating a supercomputer in the cloud that calls itself Zeus and throws down lightning bolts at people." Thiel is hilarious.


I liked this part. What a great class; I wish I could attend.

"PayPal, at Luke Nosek’s urging, became the first company in the history of the world that had cryogenics as part of the employee benefits package. There was a Tupperware-style party where the cryogenics company representatives made the rounds trying to get people to sign up at $50k for neuro or $120k for full body. Things were going well until they couldn’t print out the policies because they couldn’t get their dot matrix printer to work. So maybe the way get biotech to work well is actually to push harder on the AI front."


The class is pretty easy to crash: you just show up and sit in the back; there is plenty of room. Parking is pretty easy too.

04/02/2012 - 06/06/2012 Mon, Wed 2:15 PM - 3:45 PM at Annaud


The initial discussion about the hugeness and strangeness of AI reminded me of the godlike AIs in Dan Simmons' Hyperion Cantos, which are so incredibly advanced as to be pretty much beyond understanding in their native forms. The friendly ones end up speaking in zen koans.

http://www.seanparnell.com/Hyperion%20Cantos/Web%20Pages/Umm...


We need higher resolution in our vocabulary for talking about technological risk. Someone can be in favour of most forms of technology, yet even when the technology in question carries extinction risk, the only word on offer to describe the opposition is 'Luddite'. Which is not to say that I am opposed to AI research.


"The audience was split 50-50 on that. If it can accelerate—if it can more than double every 18th months going forward—it would seem like you’d get something like AI in just a few years. Yet most people thought AI was much further away than biotech 2.0. "

Interesting leap - is a large number of transistors on a die enough to enable an acceptable AI? I would have thought more is necessary, and that when it will happen can't be predicted from a model like Moore's Law.
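
For concreteness, a back-of-the-envelope sketch of what a steady 18-month doubling implies (the starting transistor count is made up):

    # Moore's Law extrapolation: transistor count doubling every 18 months.
    # The starting figure is illustrative, not a real die spec.
    def transistors(start, years, doubling_years=1.5):
        return start * 2 ** (years / doubling_years)

    for years in (3, 6, 9):
        print(f"{years} years: {transistors(1e9, years):.1e}")
    # 3 years -> 4x, 6 years -> 16x, 9 years -> 64x. The raw count grows
    # fast, but nothing in the arithmetic says capability grows with it.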


At least McCarthy thought that machines from the '70s should have enough power to do it, i.e. it is an algorithmic/representational problem, not one of computational power.

If you look at what happens in nature, relatively low-level organisms are capable of traits that we wish the most powerful computers had (insect vision, for example). Also, if you look at a well-defined problem like speech recognition, the relationship between computational power and, say, accuracy is closer to a logarithmic curve than an exponential one. Diminishing returns have been the rule most of the time.
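
A toy illustration of that contrast (the constants are invented; only the shape of the curve matters):

    import math

    # Diminishing returns: each doubling of compute buys the same fixed
    # accuracy bump, so gains per unit of hardware keep shrinking.
    # Constants are illustrative, not fitted to any real benchmark.
    def accuracy(compute):
        return min(100.0, 60.0 + 5.0 * math.log2(compute))

    for c in (1, 2, 4, 8, 16, 32):
        print(f"{c:3d}x compute -> {accuracy(c):5.1f}% accuracy")
    # 32x the hardware moves accuracy from 60% to 85%; an exponential
    # relationship would have blown past 100% long before that.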


That's a longstanding debate, but yes, a common view is that there's more of an algorithm/formalization bottleneck than a hardware-speed bottleneck. There are some things we in principle know how to do in AI but just can't practically do fast enough; those will be fixed by speed increases. But there are many things where we don't even know how, in principle, to write down an algorithm that approaches the problem satisfactorily. Hence the notion of "AI-completeness", rather than "NP-completeness", as a colloquial expression for what makes an AI problem hard.


Singularitards tend to forget "size of brain" isn't related to "usefulness of brain." We don't have superintelligent elephants and whales running around.

Brains are slow, dumb things. They just have freaky connectivity.

On a related note, it's hilarious modern GPUs aren't classified as weapons.


How do you weaponize a GPU?


You could use it for cryptography or cryptanalysis, for example.
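
That workload is embarrassingly parallel, which is exactly what GPUs are built for; tools like hashcat do this for real. A minimal single-threaded toy in Python, just to show the shape of the problem (the "leaked" hash here is contrived):

    import hashlib
    from itertools import product

    # Toy brute-force: recover a short lowercase password from its SHA-256
    # hash. Every guess is independent of every other, so the search
    # parallelizes trivially across thousands of GPU cores.
    ALPHABET = "abcdefghijklmnopqrstuvwxyz"
    target = hashlib.sha256(b"gpu").hexdigest()  # pretend this leaked

    def crack(target_hex, max_len=4):
        for length in range(1, max_len + 1):
            for combo in product(ALPHABET, repeat=length):
                guess = "".join(combo)
                if hashlib.sha256(guess.encode()).hexdigest() == target_hex:
                    return guess
        return None

    print(crack(target))  # -> gpu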



