
The other major thing missing from ChatGPT is that it doesn't really "learn" outside of training. Yes, you can provide it some context, but it fundamentally doesn't update and evolve its understanding of the world.

Until a system can actively and continuously learn from its environment and update its beliefs, it's not really "scary" AI.

I would be much more concerned about a far stupider program that had the ability to independently interact with its environment and update its beliefs in fundamental ways.




In-context learning is already implicit finetuning: https://arxiv.org/abs/2212.10559. It's very questionable to what extent continuous training is necessary past a threshold of intelligence.

Memory Augmented Large Language Models are Computationally Universal: https://arxiv.org/abs/2301.04589


In-context learning may act like fine-tuning, but crucially it does not mutate the state of the system. The same model prompted with the same task thousands of times is no better at it the thousandth time than the first.


GPT-3 is horrible at arithmetic. Yet if you define the algorithmic steps to perform addition on 2 numbers, accuracy on addition arithmetic shoots up to 98% even on very large numbers (https://arxiv.org/abs/2211.09066). Think about what that means.

"Mutating the system" is not a crucial requirement at all. In-context learning is extremely overpowered.


> Yet if you define the algorithmic steps to perform addition on 2 numbers, accuracy on addition arithmetic shoots up to 98% even on very large numbers (https://arxiv.org/abs/2211.09066). Think about what that means.

That means that even with the giant model, you need to stuff even the most basic knowledge for dealing with problems of that class into the prompt space to get it to work, cutting into conversation depth and per-response size? The advantage of GPT-4's big context window, and the opportunity it provides for things like retrieval and deep iterative context, shrinks if I've got to stuff a domain textbook into the system prompt so it isn't just BSing me.


> Think about what that means.

It means you have natural language programming. We would need to prove that natural language programming is more powerful than traditional programming at solving logical problems; I haven't seen such a proof.


> Yet if you define the algorithmic steps to perform addition on 2 numbers

You’re limited by the prompt size, which might be fine for simple arithmetic.


> It's very questionable to what extent continuous training is necessary past a threshold of intelligence.

To absorb new information about current events; otherwise models will always be time-locked in the past until a new dev cycle completes.


The point I'm trying to make is that you don't need continuous training to absorb new information about current events.


Very interesting paper, thanks for the link!


> Until a system can actively and continuously learn from its environment and update its beliefs, it's not really "scary" AI.

On the eve of the Manhattan Project, was it irrational to be wary of nuclear weapons (for those physicists who could see it coming)? Something doesn't have to be a reality now to be concerning. When people express concern about AI, they're extrapolating 5-10 years into the future. They're not talking about now.


And yet we invented nuclear weapons, and we are all still here and fine.

I’m sure plenty of people thought the advent of nuclear weapons spelled doomsday, not too dissimilar to how people think AI spells doomsday.

History only shows me that humans are adaptable problem solvers with the perseverance to survive.

Is there a historical counterpoint?


Doomsday predictions will always be wrong in hindsight because if they were correct then you wouldn't be here to realize it. The near misses in the Cold War, where we almost accidentally got obliterated, show that the concern wasn't misplaced. If anything, the concern itself is the reason it didn't end badly.


I think this is only a matter of time, though? Like how many years away do you think this is? 1? 2?


I'm not sure; I've read that it's currently prohibitively expensive.


Prohibitively expensive before everyone + dog decided to throw a bunch of capital at it.

Now it’s just “runway”.



