Why I doubt that AI can match the human mind (mindmatters.ai)
16 points by udev4096 8 months ago | 12 comments



> Computers are exclusively theorem generators, while humans appear to be axiom generators.

> However, they are limited by the fact that they cannot establish axioms

Maybe I'm missing subtleties, but it seems like it'd be relatively trivial to get even just current LLMs to generate a couple of original axioms?

If there's going to be further stipulation ("those axioms don't count because ..."), I'd be interested to know whether the author believes there's some testable distinguishing factor between human and AI axioms, or if it's going to get to the point of "its generations appear identical to human axioms, but it's not actually generating axioms - just simulating/predicting/etc."
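For what it's worth, here's a minimal sketch of that experiment (assuming the OpenAI Python client; the model name and prompt are just placeholders):

  from openai import OpenAI

  client = OpenAI()  # reads OPENAI_API_KEY from the environment

  # Ask for axioms of an invented formal system, so the model
  # can't simply restate ZFC or Peano.
  response = client.chat.completions.create(
      model="gpt-4",
      messages=[{
          "role": "user",
          "content": "Invent a small formal system about knots and strands. "
                     "State three independent axioms for it that do not "
                     "appear in any standard mathematical theory.",
      }],
  )
  print(response.choices[0].message.content)

Whether the output counts as "establishing axioms" is exactly the stipulation question above.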

> In essence, you cannot create a computer program that will tell if another arbitrarily chosen program will ever finish

For real physical computers you can, e.g. with Floyd's tortoise-and-hare algorithm: since they don't have infinite memory, a run either halts or eventually revisits a state, and a revisited state means it loops forever.
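A sketch of that point (with a hypothetical step() function that advances a machine's complete state by one instruction and returns None on halt):

  def halts(step, start):
      # Floyd's tortoise and hare: the hare moves two steps per
      # iteration, the tortoise one. If the run never halts, the
      # finite state space forces a cycle, and the two must meet.
      tortoise, hare = start, start
      while True:
          hare = step(hare)
          if hare is None:
              return True   # machine halted
          hare = step(hare)
          if hare is None:
              return True   # machine halted
          tortoise = step(tortoise)
          if tortoise == hare:
              return False  # state revisited: it runs forever

This uses O(1) extra memory, though it can take astronomically long on real machine state spaces; it's an in-principle decider, not a practical one.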

Neither humans nor computers are exempt from the central trick: you can construct an unanswerable question by referencing, in some way, the question's own answer ("What won't your answer to this question be?").
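That's the classic diagonal construction. In code form (halts here is any claimed halting oracle; the point is that no total one can exist):

  def make_troll(halts):
      # Build a program that does the opposite of whatever
      # the oracle predicts about it.
      def g():
          if halts(g):        # oracle says g halts...
              while True:     # ...so loop forever
                  pass
          # oracle says g runs forever, so halt immediately
      return g

Whatever halts(g) returns, it's wrong about g, so the oracle can't be correct on all inputs.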


Since the human mind is very poorly understood, that's not an unreasonable position.

And, looking at the increasingly bad results from "industry leaders" like GPT-4 (which, at least for coding tasks, now seems resigned to generating "insert boilerplate code here" comments, as hilarious as it is useless), it might very well turn out to be correct.

The more an AI approaches the human mind, the lazier, and therefore less useful, it gets?


I don't think LLMs in their current state are anything like the human mind. They need the ability to have multiple ongoing thoughts, background thoughts, planning... Right now LLMs are a little like snap responses to questions, the kind you give without thinking, like intuition, which can very easily fall outside the bounds of an acceptable answer.


I'll save reasonable people some time by highlighting the key premise: "what if materialism is not the only game in town?"


Is there a portion of the article that repudiates materialism, or do you mean something more general, like "To not believe everything Sam Altman says on Twitter about our glorious A.I. future is to not be a materialist"?


Even if we grant all of this -- that humans are good "axiom generators", that AIs are not, and that this is important -- it is totally unconvincing because it conflates current AI tech in practice with computers in principle. It's always important to remember that humans aren't made of fundamentally different stuff; we're just very strangely implemented computers, and if something is possible for us, it would be astounding if it weren't, at least in principle, possible for an AI. There's no ethereal module for second-order logic bestowed by the universe only on biological creatures. Whether that would be a hard thing to build, and why, isn't a topic given any space in the article; it's just sort of conclusory.

In fact, in the five years since this article was written, many of the specific supporting details have become questionable, or would at least now need to be stated and argued more precisely. Is AI really so obviously bad at metacognitive thinking? It doesn't seem like it: I'd bet GPT can generate fresh and interesting axioms given the right prompt, which is roughly how humans do this kind of thing too. Second-order statements are just... statements, so there's no reason to expect AI to be bad at working with them. And yet the article argues that computers are incapable of this kind of thing in principle?

Just feels like copium for dualists.


> It's always important to remember that humans aren't made of fundamentally different stuff;

How do you know this?


Because there is no evidence to the contrary?


So? Lack of evidence is not proof positive of the contrary position. And the premise that there is no evidence is itself doubtful: the fact that only humans seem to be able to think the way humans think could be considered evidence.


It's pretty strong evidence! We understand the basics of how humans (or living things in general) are constructed, and in that framework, built out of physics and chemistry, there's no space for special magic stuff; anything biology can make is made of atoms and could in principle be replicated. Even if there's some exotic whatsit we have somehow not been able to detect thus far, something that lives outside of our existing scientific theories, that would simply require updating those theories, and then figuring out how to follow the same steps biological systems do. Thus, the idea that there is some other "non-physical" thing intrinsically inaccessible to us is an extraordinary claim.

You similarly have no direct evidence that there isn't a bottle of A&W root beer on Europa, but our understanding of the history of humanity (and root beer and space travel) makes it very unlikely. It is reasonable to conclude that there is no such bottle, and wildly unreasonable to posit that there is.

Edit: added the word "direct" + minor clarifications


> Even if there's some exotic whatsit we have somehow not been able to detect thus far, something that lives outside of our existing scientific theories, that would simply require updating those theories, and then figuring out how to follow the same steps biological systems do.

Assuming the whatsit could fit into the materialistic/mechanistic framework. But that's not necessarily the case.

> You similarly have no direct evidence that there isn't a bottle of A&W root beer on Europa, but our understanding of the history of humanity (and root beer and space travel) makes it very unlikely.

I didn't make a claim, I asked you how you can be so sure of your claim.




