
I don't think what you're saying contradicts the text. His point is that we should put our effort into designing and using the tools that tackle the problem space (neural nets, Monte Carlo search, etc.), rather than into reasoning about the problem space itself. That doesn't mean we just throw a for-loop at the data.



But this doesn't work either: convolutional layers in neural networks have a very specific structure, which encodes strong prior knowledge about the problem space (translation invariance). If all we had were multilayer perceptrons, we wouldn't be talking about this right now.
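To make the weight-sharing concrete, here's a minimal numpy sketch (kernel and data invented for illustration): the same three weights are applied at every position, so a shifted input produces a correspondingly shifted output.

    import numpy as np

    kernel = np.array([1.0, -2.0, 1.0])  # one shared filter, reused at every position

    def conv1d(x, k):
        # sliding dot product with shared weights: the prior is baked into the structure
        n = len(x) - len(k) + 1
        return np.array([np.dot(x[i:i + len(k)], k) for i in range(n)])

    x = np.array([0., 1., 4., 9., 16., 25., 36.])  # squares; 2nd difference is 2
    shifted = np.roll(x, 1)                        # translate the input
    print(conv1d(x, kernel)[:-1])                  # [2. 2. 2. 2.]
    print(conv1d(shifted, kernel)[1:])             # same values, just shifted
    # (one edge position trimmed on each side because np.roll wraps around)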


>convolutional layers in neural networks have a very specific structure, which encodes strong prior knowledge that we have about the problem space

Yes. The author's point is that it doesn't encode this knowledge symbolically.

Don't get hung up on the terms "brute force", "neural net", etc.

The author's main idea is that AI using brute force, simpler statistical methods, NNs, etc. wins over AI that tries to implement the kind of deeper reasoning about the problem domain that humans use (when thinking about it consciously).


Hmm, I'm not sure I see the difference. Why is it not "symbolic"? It's the symbols that construct the neural network that encode translation invariance -- not some vector of reals.


Symbolic as in "symbolic algebra systems", "symbolic AI", etc. [1] -- not as in a NN's code containing some symbols.

A NN doesn't work with the domain objects directly and abstractly (e.g. treating a face, facial features, smiles, etc. as first-class things and doing some kind of symbolic manipulation at that level).

It crunches numbers that encode patterns capturing those things, but its logic is all about numbers and links between one layer and the next -- it's not a program dealing with high-level abstract entities.

To put it another way, it's the difference between teaching, say, Prolog to identify some concept and teaching a NN to do the same.

E.g. from the link "The most successful form of symbolic AI is expert systems, which use a network of production rules. Production rules connect symbols in a relationship similar to an If-Then statement. The expert system processes the rules to make deductions and to determine what additional information it needs, i.e. what questions to ask, using human-readable symbols."

A NN does nothing like that (at least not in any immediate, first-class way, where the rules are expressed as plain statements given by the programmer, like "foo is X" or "bar has the Y property").
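For contrast, here's a toy forward-chaining rule engine in Python (the symbols and rules are made up for illustration): the knowledge lives in human-readable if-then rules, and every deduction step is inspectable.

    # facts and rules are plain, human-readable symbols
    rules = [
        ({"has_fur", "says_meow"}, "is_cat"),
        ({"is_cat"}, "is_mammal"),
    ]
    facts = {"has_fur", "says_meow"}

    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)  # a single, explainable deduction
                changed = True

    print(facts)  # {'has_fur', 'says_meow', 'is_cat', 'is_mammal'} (order may vary)

A NN classifier encodes something functionally similar only as weights; there's no individual rule you can point to.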

Here's another way to see it: compare how you'd solve a linear equation with regular algebra (the steps, the transformations, etc.) with how a NN would encode the same task.

A symbolic algebra system will let you express an equation in symbolic form (more or less as a mathematician would write it), and can even show you all the intermediate steps taken to reach the solution.

A NN trained to solve the same type of equations doesn't do that (and can't). It just tells you the answer (or an approximation thereof).
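Concretely, on the symbolic side (a minimal sketch, assuming SymPy; the equation is arbitrary):

    from sympy import symbols, Eq, solve

    x = symbols("x")
    print(solve(Eq(2*x + 3, 7), x))  # [2] -- exact, derived by rule-based rewriting

A NN trained on triples (a, b, c) -> x for a*x + b = c would instead output something like 1.998: a numeric approximation, with no derivation you can read off.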

[1] https://en.wikipedia.org/wiki/Symbolic_artificial_intelligen...



