
> Everyone’s giving you grief when you’re clearly merely making the Kantian assertion that all experience must have both immediate and mediated elements, namely “our subjective perceptions” and “the unknowable ineffable objectively-undifferentiable Real World”. The best we can do is construct our little hallucinations and hope (and try to empirically ensure!) that they match whatever Reality might be, knowing that we’ll never have a true fundamental link to it.

Insofar as the mechanisms by which LLMs operate mirror the mechanisms by which the brain operates - which they do, to some degree - yes, the fact that LLMs hallucinate reflects the gap between perceptions, thoughts, and "reality" in human behavior, and there is indeed a deeper philosophical lesson to be drawn here. If LLMs manage to disabuse the field of software engineering of its apparently common conceptions about how the brain works and reflects reality, I'll consider it a win.

> Well, it’s either that or you were being a human elitist and portraying a difference of degree as a difference in kind. Idk if we’ve settled on terms yet but I’d probably level ones like “automatist”, “bio supremacist”, or maybe “radical humanist”…

I'm not sure how I could be considered a bio-supremacist for pointing out that a hammer is not a drill. LLMs are phenomenally complicated tools, but as currently deployed, they're tools, not beings. We may at some point create a system that one could credibly claim is an actual artificial intelligence, and that system may contain things that look or operate like LLMs, but right now what we're looking at are large statistical models that we operate by providing them inputs and reading the outputs.

A first-line criterion here, for me, would be that the system operates continuously (i.e., not just when someone puts a token in the box). The LLMs we're discussing here do not do that. Someone generates a series of tokens, provides them as inputs to the equation, runs the equation, and then decides whether to keep providing inputs until they're satisfied with the result. That's a tool, not a "being".
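
To make the "token in the box" point concrete, here's a minimal sketch of that invocation pattern. `generate` and `satisfied` are hypothetical stand-ins for an inference call and the human's judgment - not any real API - and the loop only does anything when the caller drives it:

    # Sketch of the request/response pattern described above: the model runs
    # only when a caller supplies tokens and asks for a completion. Between
    # calls there is no ongoing activity at all.
    def converse(generate, first_prompt, satisfied):
        transcript = first_prompt
        while True:
            output = generate(transcript)      # one forward pass per request
            transcript += output
            if satisfied(output):              # the human decides whether to stop
                return output
            transcript += "\n" + input("> ")   # or put another token in the box

Nothing happens unless and until someone calls `converse`; the "being", if there is one anywhere in this picture, is the person driving the loop.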



