> AGI is not, and there is no evidence that it is even possible.
We ourselves are an existence proof that minds like ours can be made, that these minds can be made from matter, and that the design for the organisation of that matter can come from a simple process of repeatedly making imprecise copies and then picking the best at surviving long enough to make more imprecise copies.
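If it helps to see that "imprecise copies plus a survival filter" loop written down, here's a minimal toy sketch (the bit-string genomes, the fitness function and all the rates and sizes are arbitrary choices for illustration, not a claim about biology or about how any real AI is trained):

```python
import random

# Toy "imprecise copies + pick the survivors" loop.
# Everything here (bit-string genomes, the fitness function, the rates and
# sizes) is an arbitrary illustration, not a model of biology or of any
# actual AI training method.

GENOME_LEN = 32       # length of each bit-string "organism"
POP_SIZE = 50         # how many organisms exist at once
MUTATION_RATE = 0.02  # chance each bit flips when copied imprecisely
GENERATIONS = 200

def fitness(genome):
    # Stand-in for "good at surviving": here, just the count of 1-bits.
    return sum(genome)

def imprecise_copy(genome):
    # Copy with occasional random errors (mutations).
    return [bit ^ (random.random() < MUTATION_RATE) for bit in genome]

population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
              for _ in range(POP_SIZE)]

for _ in range(GENERATIONS):
    # Pick the best at "surviving"...
    population.sort(key=fitness, reverse=True)
    survivors = population[:POP_SIZE // 2]
    # ...and let them make more imprecise copies to refill the population.
    population = survivors + [imprecise_copy(random.choice(survivors))
                              for _ in range(POP_SIZE - len(survivors))]

# Fitness climbs towards GENOME_LEN with no designer in the loop.
print(max(fitness(g) for g in population))
```

The only point is that selection over noisy copying, with no designer specifying the result, is enough to push a population towards whatever counts as "surviving".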
> People in 1000 CE could (and did) safely ignore all those things
Whereas the people, and specifically the leadership, of Japan unsafely ignored one of them on the 6th of August 1945. Some of the leadership were still saying it couldn't possibly have been a real atomic bomb as late as the 7th, which is ultimately why the second bomb fell on the 9th.
>> AGI is not, and there is no evidence that it is even possible.
> We ourselves are an existence proof that minds like ours can be made, that these minds can be made from matter, and that the design for the organisation of that matter can come from a simple process of repeatedly making imprecise copies and then picking the best at surviving long enough to make more imprecise copies.
I think that "proof" rests on an as-yet circular assumption. Even if that assumption is accepted, there is no evidence that humans are capable of designing a new mind or that it's possible to make a mind that greatly exceeds ours in capability.
> I think that "proof" rests on an as-yet circular assumption. Even if that assumption is accepted, there is no evidence that humans are capable of designing a new mind or that it's possible to make a mind that greatly exceeds ours in capability.
I don't know what you mean by "as-yet circular assumption". (Though in the philosophy of knowledge, the Münchhausen trilemma says that every justification is ultimately either circular, an infinite regression, or dogmatic.)
> there is no evidence that humans are capable of designing a new mind or that it's possible to make a mind that greatly exceeds ours in capability.
Sounds like you're arguing against ASI not AGI: G = General like us; S = Super-, exceeding us.
That said, there's evidence that ASI is also possible: All the different ways in which we've made new minds that do in fact greatly exceed ours in capability.
When I was a kid, "intelligent" was the way we described people who were good at maths, skilled at chess, had good memories, had large vocabularies, knew many languages, etc. Even ignoring the arithmetical component of maths (where a Pi Zero exceeds all of humanity combined, even if each of us were operating at the standard of the current world record holder), we have had programs solving symbolic maths for a long time; Chess (and Go, Starcraft, Poker, …) have superhuman AI; and even before GPT, Google Translate already knew more languages than I can remember the names of (even if you filter the list to only those where it was of a higher standard than my second language), a few of them even with augmented-reality image-to-image translations.
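(As a small concrete illustration of the symbolic-maths point, here's the freely available sympy library doing exact calculus and equation-solving in a few lines; sympy is just my choice of example, not any specific system referred to above:)

```python
# sympy, a free Python library, doing exact symbolic maths in a few lines.
from sympy import symbols, integrate, solve, sin, Eq

x = symbols('x')

print(integrate(sin(x)**2, x))          # x/2 - sin(x)*cos(x)/2
print(solve(Eq(x**2 - 5*x + 6, 0), x))  # [2, 3]
```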
And of course, for all the flaws the current LLMs have in peak skill, most of them absolutely have superhuman breadth of knowledge: I can beat GPT-3.5 as a software engineer, at maths and logic puzzles, or when writing stories, but that's basically it.
What we have not made is anything that's both human (or superhuman) in skill level while also being human-level in generality. But saying that demonstrating the two parts separately isn't evidence the combination can be done is analogous to looking at 1 gram of enriched uranium and a video of a 50 kg sphere of natural uranium being forced to implode spherically, and saying "there is no evidence that humans are capable of designing an atom bomb or that it's possible to make an atom bomb that greatly exceeds chemical bombs in yield."
You won't get a proof until the deed is done. But that's the same with nuclear armageddon - you can't be sure it'll happen until after the planet's already glassed. Until then, evidence for the probability of the event is all you have.
> there is no evidence that humans are capable of designing a new mind or that it's possible to make a mind that greatly exceeds ours in capability
There's plenty of good reasons to assume it's possible, all while there's no evidence suggesting it's not.
"good reasons" sounds like another way of saying "no actual evidence, but a lot of hope". There is no actual evidence that it's possible, certainly not anytime soon. People pushing this narrative that AGI is anywhere close are never people working in the space, it's just the tech equivalent of the ancient aliens guys.
> People pushing this narrative that AGI is anywhere close are never people working in the space
Apart from the most famous AI developer group, who have been pushing exactly that narrative since near the beginning of this year, on the back of releasing an AI that's upset a lot of teachers and interview-question writers because it can pass so many of their existing quizzes without the student/candidate needing to understand anything.
I suppose you could argue that they are only saying "AGI could happen soon or far in the future" rather than "it will definitely be soon"…
Yes, the people selling the hammer want you to believe it's a sonic screwdriver. What else is new? You sort of prove my point when your evidence of who is making those claims is the people with a vested interest, not the actual scientists and non-equity developers who do the actual coding.
"But a company said the tech in their space might be ground-breaking earth-shattering life-changing stuff any minute now! What, you think people would just go on the internet and lie!?"
I haven't set up a No True Scotsman proposition; I made a very clear and straightforward assertion, one that I've challenged others to disprove.
Show me one scientific paper on Machine Learning that suggests it's similar in mechanism to the human brain's method of learning.
It's not a lack of logical or rhetorical means of disproof that's stopping you (i.e. I'm not moving any goalposts); it's that the evidence doesn't exist. That's not a No True Scotsman fallacy, it's just the thing legitimately not existing.
This is a myth; Japan was not in denial that the US had atomic bombs. It had its own atomic bomb program (though a very rudimentary one), and was aware of Germany's program as well. It just didn't care.
What caused Japan to surrender was not the a-bombs; it was the USSR declaring war on them.
That aside, it still supports my point, which is that they should not ignore things that exist, while they can ignore things that don't. Like AGI.
I could've phrased it better; it sounds like you're criticising something other than what I meant.
One single plane flies over Hiroshima, ignored because "that can't possibly be a threat". The air raid warning had been cleared at 07:31, and many people were outside, going about their activities.
> it had its own atomic bomb program
Two programs; it was because they were not good enough that they thought the US couldn't have had the weapons:
--
The Japanese Army and Navy had their own independent atomic-bomb programs and therefore the Japanese understood enough to know how very difficult building it would be. Therefore, many Japanese and in particular the military members of the government refused to believe the United States had built an atomic bomb, and the Japanese military ordered their own independent tests to determine the cause of Hiroshima's destruction.[0] Admiral Soemu Toyoda, the Chief of the Naval General Staff, argued that even if the United States had made one, they could not have many more. American strategists, having anticipated a reaction like Toyoda's, planned to drop a second bomb shortly after the first, to convince the Japanese that the U.S. had a large supply.[1]
[0] Frank, Richard B. (1999). Downfall: the End of the Imperial Japanese Empire. New York: Penguin. ISBN 978-0-14-100146-3
[1] Hasegawa, Tsuyoshi (2005). Racing the Enemy: Stalin, Truman, and the Surrender of Japan. Cambridge, Massachusetts: Harvard University Press. ISBN 978-0-674-01693-4
--
> AGI
You personally are a General Intelligence; we have Artificial Intelligence. Is GPT-4 a "general" "intelligence"? That depends on the standards for the words "general" and "intelligence". (Someone is probably arguing that anything trained by an evolutionary algorithm isn't necessarily "artificial"; not that I know how it was trained, nor do I much care, given I don't use that standard.)