Hacker News
Short Story on AI: A Cognitive Discontinuity (karpathy.github.io)
69 points by gcr on Nov 15, 2015 | 13 comments



This is really great, far better than the average totally anthropomorphized AI story.

I think what non-practitioners don't realize is that most AI isn't really "build a Terminator, set it loose" so much as "build a system for supplying feedback to your Terminator and hope it learns from it effectively". In a real sense the training & feedback system is more an AI than the particular model & weights.


This is awesome, I'm a big fan. I love it because (excepting the spontaneous creation of General AI) the treatment of future AI seems so plausible. Few people consider how we will handle this stage of AI development, but undoubtedly there will be lots of cool stuff like that between here and self-improving super intelligent computers.

Will there be a sequel?


Thank you! Except the story doesn't involve the creation of a General AI, only a cognitive step - one of several along the way. Future stories could center on the limitations still present, and what additional steps might mean. Things get especially funky once you try to think of architectures that might support computation that surpasses the cognitive abilities of humans - not just in terms of continuous speed/scale (that's easy) but in terms of a discrete thing - a type.

In that sense the story title is slightly misleading because the idea of a "step" implies a 1-D function that goes up suddenly, while my mental model of intelligence is more similar to a multi-dimensional space, where each axis is a type of computation. When you're putting things on faster hardware you're moving yourself upwards along all axes, but when you add a type of computation (e.g. Mystery module) you're introducing a new axis, capturing a wider volume of cognitive ability. In that sense the title is alluding to a cognitive step on the 1-D function of the total volume, not of any individual axis of it. Okay, maybe this needs vigorous hand-waving and/or a whiteboard to communicate properly, I'm sorry :)
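The axes-and-volume mental model above can be made concrete with a toy sketch (my own illustration, not from the story or the comment - the numbers and the product-as-volume choice are assumptions): each axis is a capacity for one type of computation, faster hardware scales all existing axes continuously, while a new module adds a whole new axis and changes the dimensionality of the space.

```python
# Toy model (illustrative assumption): "total cognitive ability" as the
# volume of the hyperrectangle spanned by per-axis capacities, where each
# axis is one type of computation.

def cognitive_volume(axes):
    """Product of per-axis capacities: the 'volume' of cognitive ability."""
    vol = 1.0
    for capacity in axes:
        vol *= capacity
    return vol

baseline = [2.0, 3.0]            # two existing types of computation

# Faster hardware: every existing axis scales up continuously.
faster = [c * 2 for c in baseline]

# A new module (like the story's Mystery module): a whole new axis,
# a discrete jump in the dimensionality of the space itself.
new_axis = baseline + [4.0]

print(cognitive_volume(baseline))   # 6.0
print(cognitive_volume(faster))     # 24.0  (continuous gain, same axes)
print(cognitive_volume(new_axis))   # 24.0  (discrete gain, new axis)
```

The point of the sketch is that the 1-D "step" in the title tracks the total volume, which can jump either way; only the new-axis case changes what kinds of computation are possible at all.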


Is there some way I can subscribe so I don't miss the next instalment?


There's an RSS feed http://karpathy.github.io/feed.xml, and also my Twitter. But to set expectations: I have no immediate plans to write a part 2 - this was a fun experiment that I consider mostly unsustainable in terms of time commitment :) You can expect more AI-related posts in general though!


It's a terrible day for rain.


If the author is around, I've got a project working that I just posted to HN that I think can be pretty promising as an AI research platform. If you have Chrome, you can find the project at [ https://nacl-pg.appspot.com/desk?intro=the-shaker ]. It's called "The Native Client Proving Ground". I actually have a silly little AI application in JS on the site, but I would like to leverage Chrome's Native Client SDK to get some really awesome native algorithms running in the browser, doing things like NLP and computer vision.

(You can find the post under my submissions; I think it actually got onto the front page on HN. I saw it high up on the second page.)


Some interesting Neural Network fan fiction from Andrej Karpathy!


I would love to read more about these characters and the AI that just spawned. Great story!


That's excellent stuff, thank you.


Amazing. What happens next?


I would love to see the author explore some of the ethics a little.

For example, what if a shaper completely unintentionally trains some malicious behavior into the network? Would they go to jail? How would they find a job in the future when all of these things are tracked alongside them?


He actually mentions that before changes/experiences are committed to master, they are extensively reviewed for any kind of suspicious (and suboptimal, I guess) behaviour.

But yes, I enjoy ethical dilemmas as well!



