Hacker News

In our current NLU experiments we use an off-the-shelf open source parser for the initial parse (Parsey McParseface, from Google). We then take the grammatically-tagged words and our system maps them to concepts it knows about. It theorizes about multiple possible interpretations of the sentence (pronouns in particular), judging their likelihood using real-world knowledge. A Winograd Schema can be solved because Cyc knows things about the concepts; it knows that a dog has four legs, and what it typically weighs, and that it's a mammal (and shares further traits of mammals), and knows that it couldn't interbreed with a cat because they're different species and the act of procreation requires two organisms of the same species. So far this information has been hand-encoded, but this has been going on for decades, and we have a lot now. Going past that:
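As an illustrative sketch of that pronoun-resolution step (toy Python with an invented two-entry knowledge base and made-up property names, nothing like Cyc's actual representation or API), resolving a Winograd-style pronoun amounts to checking each candidate referent against what the system knows about it:

```python
# Toy knowledge base: which reading is plausible for each candidate.
# The property names here are invented for illustration.
KB = {
    "trophy":   {"can_be_too_big_for_container": True,
                 "can_be_too_small_for_content": False},
    "suitcase": {"can_be_too_big_for_container": False,
                 "can_be_too_small_for_content": True},
}

def resolve_pronoun(candidates, required_property):
    """Pick the candidate whose known traits make the reading plausible."""
    plausible = [c for c in candidates if KB[c].get(required_property)]
    return plausible[0] if len(plausible) == 1 else None

# "The trophy doesn't fit in the suitcase because it is too big."
# "too big" only makes sense of the thing being contained:
print(resolve_pronoun(["trophy", "suitcase"], "can_be_too_big_for_container"))
# "... because it is too small" points at the container instead:
print(resolve_pronoun(["trophy", "suitcase"], "can_be_too_small_for_content"))
```

The same world knowledge disambiguates both variants of the schema, which is exactly what makes Winograd Schemas resistant to purely statistical approaches.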

1) We're able to map outside data from a DB into Cyc's knowledge format, rather than hand-encoding it. This knowledge is inherently not as rich as the rest, but it can obviously be useful anyway.
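A hypothetical sketch of what point 1 looks like in miniature: lifting flat database rows into triple-style assertions a reasoner can chain over. The row schema and predicate names ("legCount", "isa") are invented here, not Cyc's actual vocabulary or mapping machinery.

```python
# Invented example rows, standing in for data pulled from an external DB.
rows = [
    {"animal": "dog", "legs": 4, "class": "mammal"},
    {"animal": "cat", "legs": 4, "class": "mammal"},
]

def rows_to_assertions(rows):
    """Turn each flat row into (subject, predicate, object) assertions."""
    assertions = []
    for row in rows:
        subject = row["animal"]
        assertions.append((subject, "legCount", row["legs"]))
        assertions.append((subject, "isa", row["class"]))
    return assertions

for triple in rows_to_assertions(rows):
    print(triple)
```

Assertions produced this way carry only what the DB schema encodes, which is why they are "not as rich" as hand-encoded knowledge.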

2) At some point we hope to reach a critical mass of knowledge that will allow Cyc to simply "import" a Wikipedia page by parsing and understanding it. It will interpret a given sentence into its own understanding, then assert it as true and do reasoning based on it down the line.




An expert system will never be able to process or understand new things unless given explicit solutions.

So you're basically saying that it's possible to solve for everything by manually enumerating every single edge case that exists. There's a reason Cyc has spent over 30 years and only gotten this far: you're fundamentally limited by human effort. The only realistic way to achieve a general-purpose learning system is to teach a system how to learn and then let it figure things out on its own. In practice, that means some method involving reinforcement learning.

By the way, it's not like RL is some newfangled thing. Much of it started in the '80s, developing concurrently with the other purported route to intelligence: expert systems.

If you're interested, I highly recommend checking out a lecture that Demis Hassabis gives talking exactly about this issue: https://www.youtube.com/watch?v=3N9phq_yZP0


>> An expert system will never be able to process or understand new things unless given explicit solutions.

(Not the Cyc person).

Your comment is arguing for an end-to-end machine learning (specifically, reinforcement learning) approach. However, modern statistical machine learning systems have demonstrated very clearly that, while they are very good at learning specific and narrow tasks, they are pretty rubbish at multi-task learning and, of course, at reasoning. For breadth of capabilities they are no match for expert systems, which can generally both tie their shoelaces and chew gum at the same time. Couple this with the practical limitations of learning all of intelligence end-to-end from examples and it's obvious that statistical machine learning on its own is not going to get much farther than rule-based systems on their own.

Btw, reinforcement learning is rather older than the '80s. Donald Michie (my thesis advisor's thesis advisor) created MENACE, a reinforcement learning algorithm to play noughts-and-crosses, in 1961 [1]. Machine learning in general is older still: Arthur Samuel baptised the field in 1959 [2], and the artificial neuron was first described in 1943 by McCulloch and Pitts [3]. As Geoff Hinton has said, the current explosion of machine learning applications is due to large datasets and an excess of computing power, not because people suddenly realised a new idea has potential.

_______________

[1] https://rodneybrooks.com/forai-machine-learning-explained/

[2] https://en.wikipedia.org/wiki/Arthur_Samuel

[3] https://en.wikipedia.org/wiki/Artificial_neuron
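For a sense of how simple MENACE's mechanism is, here is a minimal MENACE-style learner for noughts-and-crosses in Python. Each board state the learner has seen gets a "matchbox" of beads, one colour per legal move; moves are drawn in proportion to bead counts, and beads are added after a win or removed after a loss. The bead counts, rewards, and random opponent here are simplifications for illustration, not Michie's exact setup.

```python
import random

random.seed(0)  # reproducible run

LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
         (0, 3, 6), (1, 4, 7), (2, 5, 8),
         (0, 4, 8), (2, 4, 6)]

def winner(board):
    for a, b, c in LINES:
        if board[a] != "." and board[a] == board[b] == board[c]:
            return board[a]
    return None

def legal_moves(board):
    return [i for i, cell in enumerate(board) if cell == "."]

class Menace:
    def __init__(self):
        self.boxes = {}    # board string -> {move: bead count}
        self.history = []  # (state, move) pairs for the current game

    def choose(self, board):
        state = "".join(board)
        box = self.boxes.setdefault(state, {m: 3 for m in legal_moves(board)})
        beads = [m for m, n in box.items() for _ in range(n)]
        move = random.choice(beads)  # probability proportional to bead count
        self.history.append((state, move))
        return move

    def learn(self, reward):
        # reward: +1 win, 0 draw, -1 loss; never drop a move below one bead
        for state, move in self.history:
            self.boxes[state][move] = max(1, self.boxes[state][move] + reward)
        self.history = []

def play_game(agent):
    # MENACE plays X and moves first; the opponent plays uniformly at random
    board = ["."] * 9
    player = "X"
    while legal_moves(board) and not winner(board):
        if player == "X":
            move = agent.choose(board)
        else:
            move = random.choice(legal_moves(board))
        board[move] = player
        player = "O" if player == "X" else "X"
    return winner(board)

agent = Menace()
results = []
for _ in range(3000):
    outcome = play_game(agent)
    agent.learn(1 if outcome == "X" else (-1 if outcome == "O" else 0))
    results.append(outcome)

late_win_rate = sum(1 for w in results[-1000:] if w == "X") / 1000
print(f"win rate over the last 1000 games: {late_win_rate:.2f}")
```

Michie's original used physical matchboxes and coloured beads; the entire "model" is just the bead counts, which is what makes 1961 a plausible date for it.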


The future of generalized intelligence will not be based on brittle datasets, but purely on repeated self-play, which allows the bootstrapping of an effectively unlimited amount of training data.

Rule-based systems fail catastrophically the moment they encounter something that has not been codified into their rule set. I point to chess as a prototypical example, with two engines: Stockfish and AlphaZero. Stockfish, an expert system meticulously hand-crafted over the course of decades, is handily defeated by AlphaZero, a reinforcement-learning system that trains purely through self-play.

If you look at any of the sample games between the two AIs, you can see a distinct difference in style. In colloquial terms, Stockfish plays far more "machine-like", whereas AlphaZero plays with a "human grace and beauty", according to many of the grandmasters who commented on its play. This is largely because Stockfish carries inherent biases from its codified rule set, which cause it to make sub-optimal moves in the long run, whereas AlphaZero is free from the constraints of any erroneously defined rules, allowing it to do things like sacrifice its pieces as a strategy. Because Stockfish codes the loss of a piece as negative points, it has to overcome that bias every time it might choose such a move, which pushes its search toward lines where it doesn't have to sacrifice material, since those look more optimal under its rule set.
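To make the material-bias point concrete, here is a toy Python sketch using the conventional piece values (pawn 1, knight/bishop 3, rook 5, queen 9); the "positions" are invented for illustration, and real engines use far richer evaluations than this:

```python
# Conventional chess piece values, as used in classical material counting.
PIECE_VALUE = {"P": 1, "N": 3, "B": 3, "R": 5, "Q": 9}

def material_eval(our_pieces, their_pieces):
    """Classical static evaluation: our material minus theirs."""
    return (sum(PIECE_VALUE[p] for p in our_pieces)
            - sum(PIECE_VALUE[p] for p in their_pieces))

# Before a queen sacrifice: material is level.
print(material_eval(["Q", "R", "N"], ["Q", "R", "N"]))   # 0
# Immediately after giving up the queen: the static score drops sharply.
print(material_eval(["R", "N"], ["Q", "R", "N"]))        # -9
```

A hand-coded evaluation like this scores the sacrifice as an immediate -9, so the engine only plays it when its search looks deep enough to see the compensation; a learned evaluation has no such built-in prior to overcome.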


Statistical machine learning models are just as brittle as hand-crafted rule bases when it comes to data they have not seen during training. They are incapable of generalising outside their training set.

>> The future of generalized intelligence will not be based in brittle datasets, but purely based off repeated self-play, allowing for the bootstrapping of an infinite amount of possible data.

How will general intelligence arise through self-play, a technique used to train game-playing agents? There's never been a system that jumped from the game board to the real world.



