> This appears to be true because you haven't defined "better".
Better intelligence can be defined quite easily: something which is better at (1) modeling the world; (2) optimizing (i.e. solving problems).
But if that is too general, we can take general reasoning capability as a good proxy for it, and "better at reasoning" is rather easy to define. Beyond general reasoning, a better AI might also have access to a wider range of specialized modeling tools, e.g. chemical, mechanical, biological modeling, etc.
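To make the two-part definition concrete, here is a minimal toy sketch (purely illustrative; `Agent`, `model_error`, and `solve_rate` are invented names and numbers, not any real benchmark):

```python
# Toy sketch: "better intelligence" framed as two measurable components --
# world-modeling accuracy and optimization (problem-solving) power.
from dataclasses import dataclass

@dataclass
class Agent:
    model_error: float   # lower = better at modeling the world
    solve_rate: float    # higher = better at solving sampled problems

def is_better(a: Agent, b: Agent) -> bool:
    """a counts as 'better intelligence' than b if it models the world
    more accurately AND solves a larger share of benchmark problems."""
    return a.model_error < b.model_error and a.solve_rate > b.solve_rate

# An agent with lower modeling error and a higher solve rate is "better"
# under this definition.
print(is_better(Agent(0.05, 0.80), Agent(0.10, 0.60)))  # True
```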
> if it is true it'll be obvious in a way that doesn't make it sound interesting anymore.
Not sure what you mean. AI which is better at reasoning is definitely interesting, but also scary.
> they just come from training bigger models on the same data.
I don't think so. OpenAI refuses to tell us how they made GPT-4, but I think a big part of it was preparing better, cleaner datasets. Google tells us it specifically improved Gemini's reasoning using specialized reasoning datasets. More specialized AIs like AlphaGeometry use synthetic datasets.
> Yes, OpenAI was literally founded by a computer worshipping religious cult.
Practice is the sole criterion for testing the truth. If their beliefs led them to better practice, then they are closer to the truth than whatever shit you believe in. Also, I see no evidence of OpenAI "worshipping" anything religion-like. Many people working there are just excited about the possibilities.
> Humans don't have a "recursive self-improvement" ability.
Human recursive self-improvement is very slow because we cannot modify our brains at will, and spawning more humans takes time. And yet humans have made a huge amount of progress in the last 3000 years or so.
Imagine that instead of making a new adult human in 20 years you could make one in 1 minute with full control over neural structures, connections to external tools via neural links, precisely controlled knowledge & skills, etc.
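The argument here is mostly about iteration speed, so here is a toy back-of-the-envelope sketch (the 1% gain per cycle is a made-up number, only the cycle times come from the point above) of how much compounding a shorter improvement cycle buys:

```python
# Toy arithmetic: same 1% improvement per cycle, only the cycle time differs.
import math

def log10_growth(cycles: float, gain_per_cycle: float = 0.01) -> float:
    """log10 of the total capability multiplier after `cycles` compounding cycles."""
    return cycles * math.log10(1 + gain_per_cycle)

MINUTES_PER_YEAR = 365 * 24 * 60

# A 20-year "generation" gives 0.05 cycles per year; a 1-minute one gives ~525,600.
print(f"20-year cycles, over 1 year:  x10^{log10_growth(1 / 20):.4f}")
print(f"1-minute cycles, over 1 year: x10^{log10_growth(MINUTES_PER_YEAR):.0f}")
```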