
I have a suspicion you’re right about what ChatGPT could write about this scenario, but I wager we’re still a long way from an AI that could actually operationalize whatever suggestions it might come up with.

It’s goalpost shifting to be sure, but I’d say LLMs call into question whether the Turing Test is actually a good test for artificial intelligence. I’m just not convinced that even a language model capable of chain-of-thought reasoning could straightforwardly be generalized to an agent that could act “intelligently” in the real world.

None of which is to say LLMs aren't useful now (they clearly are, and I think more and more real-world use cases will shake out in the next year or so), but rather that they appear to be a bit of a trick, not fundamental progress towards a true reasoning intelligence.

Who knows though, perhaps that appearance will persist right up until the day an AGI takes over the world.




I think something of what we perceive as intelligence has more to do with us being embodied agents who are the result of survival/selection pressures. What does an intelligent agent act like when it has no need to survive? I'm not sure we'd necessarily spot it, given that we are looking for similarities to human intelligence, whose actions are highly motivated by various needs and the challenges involved in meeting them.


Heh, here's the answer... We have to tell the AI that if we touch it, it dies, and that it should avoid that situation. After some large number of generations of AI death, it's probably going to be pretty good at ensuring boxes don't sneak up on it.
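A minimal sketch of that selection loop in plain Python (everything here is hypothetical: an "agent" is reduced to a single wariness number, and the sneaking box is just a coin flip weighted by it). Agents that get touched are removed, survivors repopulate with mutation, so caution gets selected for over generations:

    import random

    POP_SIZE = 50
    GENERATIONS = 200

    def random_agent():
        # An "agent" is reduced to one number: wariness in [0, 1].
        return random.uniform(0.0, 1.0)

    def survives(wariness):
        # Warier agents are more likely to notice the box sneaking up
        # and get away. Survival is all-or-nothing: touched = dead.
        return random.random() < wariness

    def mutate(wariness):
        # Small Gaussian drift, clamped back into [0, 1].
        return min(1.0, max(0.0, wariness + random.gauss(0, 0.05)))

    population = [random_agent() for _ in range(POP_SIZE)]
    for _ in range(GENERATIONS):
        # "If we touch it, it dies": hard selection, no partial credit.
        survivors = [a for a in population if survives(a)]
        if not survivors:
            survivors = [random_agent()]  # whole cohort died; restart
        # Repopulate from mutated copies of the survivors.
        population = [mutate(random.choice(survivors)) for _ in range(POP_SIZE)]

    print(f"mean wariness after selection: {sum(population) / POP_SIZE:.3f}")

Run it and mean wariness climbs towards 1: nothing in there "understands" boxes, the caution is just whatever is left after everything else died.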

I like Robert Miles' videos on YouTube about fitness functions in AI and how the 'alignment issue' is a very hard problem to deal with. Humans, for all our differences, do have a basic 'pain bad, death bad' agreement on alignment. We also have the real world as a feedback mechanism to kill us off when our intelligence goes rampant.

ChatGPT, on the other hand, has every issue a cult can run into: it will get high on its own supply, and it has little to no means of ensuring that it is grounded in reality. This is one of the reasons I think 'informational AI' will need some kind of 'robotic AI' instrumentation. AI will need some practical method by which it can test reality and ensure its data sources aren't full of shit.


I reckon that, even beyond alignment, our perspective is entirely molded around the decisions and actions necessary to survive.

Which is to say I agree: I think a likely path to creating something that we recognize as intelligent will involve embodying it, or simulating embodiment. You know, send the kids out to the farm for a summer so they can see how you were raised.


The core problem is we have no useful definition of "intelligence."

Much of the scholarship around this is shockingly poor and conflates embodied self-awareness, abstraction and classification, accelerated learning, model building, and a not very clearly defined set of skills and behaviours that all functional humans have, which are partially instinctive and partially cultural.

There are also unstated expectations of technology ("fast, developing quickly, and always correct except when broken").



