Hacker News

No, the compiler is following rules humans implemented. The humans were reasoning. The compiler follows a well-defined process. This is also what a scientific calculator can do - but the calculator isn't an example of AI, either.

Many linguists think that every human language follows fundamental patterns (e.g. [0]). In that context, the achievement of GPT is that it indirectly derived such a model by working through ungodly amounts of data. The results sound meaningful to us - but that doesn't imply that GPT intended meaning.

Every theory of reason I know has consciousness as a hard requirement. I'm not trying to be pedantic, but the topic of this thread is exactly the kind where clear definitions of words are important.

If Prolog is reasoning, then a scientific calculator is, too. But then we just need another word for the thing that differentiates us from calculators.

[0] https://en.wikipedia.org/wiki/Principles_and_parameters
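To make the Prolog/calculator comparison concrete, here is a toy sketch (in Python, for readability; the facts and rules are invented for illustration) of the kind of purely mechanical rule application a logic engine performs. Nothing in the loop "intends" anything; it just applies if-then rules until no new facts appear:

```python
# Toy forward-chaining inference: mechanical rule application, no agency.
# Facts and rules are illustrative placeholders, not from any real system.

facts = {"human(socrates)"}
rules = [
    # (premise, conclusion): if the premise is a known fact, add the conclusion.
    ("human(socrates)", "mortal(socrates)"),
    ("mortal(socrates)", "finite(socrates)"),
]

changed = True
while changed:  # keep applying rules until nothing new can be derived
    changed = False
    for premise, conclusion in rules:
        if premise in facts and conclusion not in facts:
            facts.add(conclusion)
            changed = True

print(sorted(facts))
```

The point of the example: the derivation is a well-defined, deterministic process, exactly like a calculator evaluating an expression.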




What sort of definition of reasoning implies or requires consciousness? I haven’t seen one.


Those from Locke, Hume, Kant, Habermas, among others.


Is that then really a definition that carves reality at the joints? What is it that the AI will actually be hindered in doing, that is described by its inability to meet this definition?

If I think about a process using a certain mechanism, and the AI thinks about a process using a similar mechanism, but also I have a consciousness attached on top and the AI does not, then it seems petty to assign these processes different labels based on a component whose mechanical relevance is not shown. I'm not doubting the impact of conscious, reflective reasoning on human capability, mind! But most of the thinking I do is not that.

Also as a general rule, you should be skeptical of considerations of reason that are based largely on introspection; the process is inherently biased towards consciousness as a load-bearing element, since consciousness is so heavily involved in the examination.


These are very good points! Current theories of reason obviously assume human minds. Still, even if one wants to create a new definition that includes AGIs, there has to be some concept of agency, of wanting to achieve something, with the capability being the means to that end. The capability alone isn't what brings us closer to AGI.


Well, in the same way, a neural network follows rules humans implemented, with a bit of mathematical optimization added to actually fit the problem!
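To illustrate that point: even the "learning" step is itself a fixed, human-written procedure. A minimal sketch (the function and numbers are arbitrary, chosen only for illustration) of one-parameter gradient descent:

```python
# Minimal illustration: training is a deterministic update rule, not reasoning.
# Minimize f(w) = (w - 3)^2; all values here are arbitrary placeholders.

w = 0.0    # initial parameter
lr = 0.1   # learning rate

for _ in range(100):
    grad = 2 * (w - 3)   # derivative of (w - 3)^2, computed by a fixed formula
    w -= lr * grad       # deterministic update step, applied mechanically

print(round(w, 3))  # converges toward the minimum at w = 3
```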



