Hacker News new | past | comments | ask | show | jobs | submit login

Have you used it outside of the training set? And have you actually used it for generalized tasks? Let's be kind to the LLM and define generalized to mean "do anything if it can be hammered into a textual interface." It fails at pretty much everything that's not on the internet.

It produces plausible language and code. It's the future of natural language agents, that's for sure, even though it has no business being the future, today (because of "prompt injection").

These failures are an aside to whether a generalized AI actually carries a substantial global catastrophic risk to us. In that case, if it were actually possible, I don't believe that it's a catastrophic risk either.

Let's define the risk to be, at least in the context of LLMs, when junior developers expose internal APIs and databases and access to email accounts to these easily socially-engineered NLP front-ends. It's a very local risk. As far as the future, I can't see 70 years into the future, so anything is possible, but is it likely? I, personally, don't believe so.




> Have you used it outside of the training set? And have you actually used it for generalized tasks? Let's be kind to the LLM and define generalized to mean "do anything if it can be hammered into a textual interface." It fails at pretty much everything that's not on the internet.

I've used it, yes, and I've seen it fail and hallucinate on me; but that does not invalidate its capabilities in my eyes. The thing is, you CAN talk with it, and it CAN extract meaning from your words and provide useful responses, unlike anything we had before.

To me, the risk in this whole enterprise is that AI is inherently "better" than humans in several ways, and that these differences might be completely game-changing:

Namely it's much easier to scale up (power/size/interconnect bandwidth) compared to a research group or somesuch, and its also cheaper, faster, has better availability and is functionally immortal.

These advantages make it very likely to me that it WILL be replacing human white collar workers shortly-- simply because that's economical.

And the more interfaces you give it to physical reality (which it'll need to do its jobs), the higher the risk.

Speculating on if/when/how it will show awareness or self-interest is pure guesswork, but it's almost indefensible to call that likelihood zero.

Regarding promp injection: I'm highly confident that this will not be a long-term obstacle, even though I'm uncertain that it can be solved; there's two reasons why:

1) If SQL injection had been an "unfixable" problem, and everyone had known about it from the start, do you believe that this would have prevented the rollout of internet-connected databases? Because I don't think so, and my view on hallucinations is analogous (but I believe that problem might be more tractable).

2) Literally every person is vulnerable to prompt injection already; every salesman knows that it is quite feasible to coax people into acting against previous instructions and even their own interests if you are allowed to talk into them for a good while.


I don't think it's there or even necessarily close to being GAI.

Within our own human brains, we have many sections dedicated to different tasks and processes that all must work together with years and decades of real world interaction to produce what we consider to be a generally intelligent human being, and even in the case of a certain percentage of us humans, even a small amount of damage to a single part of the brain can cause us to become functionally subhuman in our intelligence, barely able to move or eat on our own.

A human that has been lobotomized can still sometimes speak full sentences after all.

The current models seem to be able to imagine an image or video and show that imagination to us, and they can parrot words to us with a large vocabulary and with many references, but I find myself feeling like these are similar to the sections of our brains that can compute words and imagine pictures. Doesn't quite equal a human yet.

These systems need "statefulness", short and long-term memory that can be referenced by multiple discreet interconnected AI systems to take a worthwhile step towards GAI.

These systems need an overseer AI that manages and shepherds the LLAMAs and CHATGPTs and StableDiffusions to all work together towards some goal, one that can manage statefulness and manage a limited pool of computational resources (because any GAI would automatically assume that the entire world would provide it with every available resource, because why wouldn't you, right?)

Until there is an AI system that has multiple AI systems under its subconscious control, to the point where it can surprise itself with the products it produces, has a memory bank that can be referred back to repeatedly by all of those processes, and that can accept that even if its reasoning is perfect it has been born into an imperfect world that is run by imperfect creatures and so it cannot have or do everything that it might could do even under the best of circumstances, we will not have a GAI.




Join us for AI Startup School this June 16-17 in San Francisco!

Guidelines | FAQ | Lists | API | Security | Legal | Apply to YC | Contact

Search: