I don’t know who said it, but a quote I love is: “they call it AI until it starts working, see autocomplete”
I love this because when a company tells me (a software engineer) that they do AI, they tacitly say that they have little to no idea of where they want to go or what services they will be offering with that AI.
As someone who works in the field and with LLMs on the daily - I feel like there are two camps at play. The field is bimodally distributed:
- AI as understandable tools that power concrete products. There's already tons of this on the market - autocorrect, car crash detection, heart arrhythmia identification, driving a car, searching inside photos, etc. This crowd tends to be much quieter and occupies little of the public imagination.
- AI as religion. These are the Singularity folks, the Roko's Basilisk folks. This camp regards the current/imminent practical applications of AI as almost a distraction from the true goal: the birth of a Machine-God. Opinions are mixed on whether the Machine-God will be Good or Bad, but all share the belief that its birth is imminent.
I'm being a bit uncharitable here since, as someone who firmly belongs in the first camp, I have so little patience for people in the second camp. Especially because half of the second camp was hawking monkey JPEGs 18 months ago.
> AI as understandable tools that power concrete products.
This is why I'm wary.
Contemporary AI stands upon mechanical turks.
In contrast, spellcheckers, checkers engines, and A* were built solely by people with employer-provided health insurance.
In the old days, hard work for professional pay was the justified means.
Today, taking advantage of the economically desperate is the justified means.
There’s no career path from Mechanical Turk to Amazon management because Mechanical Turk is not an Amazon position. It’s not even employment. No minimum wage. No benefits. No due process.
There's a blur between the two camps once you get to the so-called "AGI" thing.
People think creating super-human intelligence is a technological challenge, but given that we aren't able to consistently rank human-level intelligence, the *recognition* that some AI has attained "super-human" levels is going to be a religious undertaking rather than a technological one.
And we're kind of close to the edges of that already. That's why discussions feel a bit more religious-y than in the past.
tl;dr: until there's a religion that worships some AI as a "god", there won't be any "AGI".
> tl;dr: until there's a religion that worships some AI as a "god", there won't be any "AGI".
I fear you may be correct. Though now I'm thinking of how AIs have been gods in fiction, and hoping that this will be more of a Culture (or Bob) scenario than an I Have No Mouth scenario.
(And if the AI learns what humans are like and how to behave around them from reading All The Fiction, which may well be the case… hmm. Depends what role the AI chooses for itself: I hear romance is the biggest genre, so we may well be fine…)
There's a good breakdown and cliche-by-cliche comparison in there, but I find the penultimate paragraph both memorable and quotable:
> It’s also interesting to think about what would happen if we applied “Rapture of the Nerds” reasoning more widely. Can we ignore nuclear warfare because it’s the Armageddon of the Nerds? Can we ignore climate change because it’s the Tribulation of the Nerds? Can we ignore modern medicine because it’s the Jesus healing miracle of the Nerds? It’s been very common throughout history for technology to give us capabilities that were once dreamt of only in wishful religious ideologies: consider flight or artificial limbs. Why couldn’t it happen for increased intelligence and all the many things that would flow from it?
We cannot ignore those other things you list, because they are here already.
AGI is not, and there is no evidence that it is even possible. So we can safely ignore it for now. Once some evidence exists that it may actually be achievable, we'll need to pay attention.
People in 1000 CE could (and did) safely ignore all those things, for this exact reason.
> AGI is not, and there is no evidence that it is even possible.
We ourselves are an existence proof that minds like ours can be made, that these minds can be made from matter, and that the design for the organisation of that matter can come from a simple process of repeatedly making imprecise copies and then picking the best at surviving long enough to make more imprecise copies.
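To make "imprecise copies then picking the best" concrete, here's a minimal toy sketch of that loop as a program (everything in it - the names, the numbers, the target - is illustrative, nothing more):

```python
import random

# Toy illustration of "imprecise copies + selection".
# The target and all parameters are arbitrary; the point is only
# that design can emerge from copy-with-errors plus survival.
TARGET = [1] * 32
POP_SIZE, GENERATIONS, MUTATION_RATE = 50, 200, 0.02

def fitness(genome):
    # How many bits match the target (a stand-in for "good at surviving").
    return sum(g == t for g, t in zip(genome, TARGET))

def imprecise_copy(genome):
    # Copying with occasional errors - the only source of novelty.
    return [1 - g if random.random() < MUTATION_RATE else g for g in genome]

population = [[random.randint(0, 1) for _ in range(32)] for _ in range(POP_SIZE)]
for _ in range(GENERATIONS):
    # The best at "surviving" get to make more imprecise copies.
    survivors = sorted(population, key=fitness, reverse=True)[: POP_SIZE // 2]
    population = [imprecise_copy(random.choice(survivors)) for _ in range(POP_SIZE)]

best = max(population, key=fitness)
print(f"{fitness(best)} / 32 bits correct")  # reliably climbs to ~32
```

No step in that loop contains a design for the answer, yet it reliably produces one; evolution ran the same trick, just with vastly more compute.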
> People in 1000 CE could (and did) safely ignore all those things
Whereas the people, and specifically leadership, of Japan unsafely ignored one of them on the 6th August 1945. Some of the leadership were still saying it couldn't possibly have been a real atomic bomb as late as the 7th, which is ultimately why the second bomb fell on the 9th.
>> AGI is not, and there is no evidence that it is even possible.
> We ourselves are an existence proof that minds like ours can be made, that these minds can be made from matter, and that the design for the organisation of that matter can come from a simple process of repeatedly making imprecise copies and then picking the best at surviving long enough to make more imprecise copies.
I think that "proof" rests on an as-yet circular assumption. Even if that assumption is accepted, there no evidence that humans are capable of designing a new mind or that it's possible to make a mind that greatly exceeds ours in capability.
> I think that "proof" rests on an as-yet circular assumption. Even if that assumption is accepted, there no evidence that humans are capable of designing a new mind or that it's possible to make a mind that greatly exceeds ours in capability.
I don't know what you mean by "as-yet circular assumption". (Though in the philosophy of knowledge, the Münchhausen trilemma says that every justification is ultimately either circular, an infinite regress, or dogmatic).
> there's no evidence that humans are capable of designing a new mind or that it's possible to make a mind that greatly exceeds ours in capability.
Sounds like you're arguing against ASI not AGI: G = General like us; S = Super-, exceeding us.
That said, there's evidence that ASI is also possible: All the different ways in which we've made new minds that do in fact greatly exceed ours in capability.
When I was a kid, "intelligent" was how we described people who were good at maths, skilled at chess, had good memories, had large vocabularies, knew many languages, etc. Even ignoring the arithmetical component of maths (where a Pi Zero exceeds all of humanity combined, even if each of us were operating at the standard of the current world record holder), we have had programs solving symbolic maths for a long time; Chess (and Go, Starcraft, Poker, …) have superhuman AI; and even before GPT, Google Translate already knew more languages than I can remember the names of (even if you filter the list to only those where it was of a higher standard than my second language), a few of them even with augmented-reality image-to-image translation.
And of course, for all the flaws the current LLMs have in peak skill, most absolutely have superhuman breadth of knowledge: I can beat GPT-3.5 at software engineering, at maths and logic puzzles, or at writing stories, but that's basically it.
What we have not made is anything that is both human (or superhuman) in skill level and human-level in generality. But saying that having the two parts separately isn't evidence it can be done together is analogous to looking at 1 gram of enriched uranium and a video of a 50 kg sphere of natural uranium being forced to implode spherically, and saying "there's no evidence that humans are capable of designing an atom bomb or that it's possible to make an atom bomb that greatly exceeds chemical bombs in yield."
You won't get a proof until the deed is done. But that's the same with nuclear armageddon - you can't be sure it'll happen until after the planet's already glassed. Until then, evidence for probability of the event is all you have.
> there's no evidence that humans are capable of designing a new mind or that it's possible to make a mind that greatly exceeds ours in capability
There are plenty of good reasons to assume it's possible, and no evidence suggesting it's not.
"good reasons" sounds like another way of saying "no actual evidence, but a lot of hope". There is no actual evidence that it's possible, certainly not anytime soon. People pushing this narrative that AGI is anywhere close are never people working in the space, it's just the tech equivalent of the ancient aliens guys.
> People pushing this narrative that AGI is anywhere close are never people working in the space
Apart from the most famous AI developer group since near the beginning of this year, on the back of releasing an AI that's upset a lot of teachers and interview-question writers because it can pass so many of their existing quizzes without the student/candidate needing to understand anything.
I suppose you could argue that they are only saying "AGI could happen soon or far in the future" rather than "it will definitely be soon"…
Yes, the people selling the hammer want you to believe it's a sonic screwdriver. What else is new? You sort of prove my point when your evidence of who is making those claims are the people with a vested interest, not the actual scientists and non-equity developers who do the actual coding.
"But a company said the tech in their space might be ground-breaking earth-shattering life-changing stuff any minute now! What, you think people would just go on the internet and lie!?"
I haven't set up a No True Scotsman proposition, I made a very clear and straightforward assertion, that I've challenged others to disprove.
Show me one scientific paper on Machine Learning that suggests it's similar in mechanism to the human brain's method of learning.
It's not a lack of logical or rhetorical means to disprove that's stopping you (i.e. I'm not moving any goalposts), it's the lack of evidence existing, and that's not a No True Scotsman fallacy, it's just the thing legitimately not existing.
This is a myth; Japan was not in denial that the US had atomic bombs: it had its own atomic bomb program (though a far less advanced one) and was aware of Germany's program as well. It just didn't care.
What caused Japan to surrender was not the A-bombs; it was the USSR declaring war on them.
That aside, that still supports my point, which is that they should not ignore things that exist, while they can ignore things that don't. Like AGI.
I could've phrased it better, it sounds like you're criticising something other than what I meant.
One single plane flies over Hiroshima, ignored because "that can't possibly be a threat". The air raid warning had been cleared at 07:31, and many people were outside, going about their activities.
> it had its own atomic bomb program
Two programs; it was because they were not good enough that they thought the US couldn't have had the weapons:
--
The Japanese Army and Navy had their own independent atomic-bomb programs and therefore the Japanese understood enough to know how very difficult building it would be. Therefore, many Japanese and in particular the military members of the government refused to believe the United States had built an atomic bomb, and the Japanese military ordered their own independent tests to determine the cause of Hiroshima's destruction.[0] Admiral Soemu Toyoda, the Chief of the Naval General Staff, argued that even if the United States had made one, they could not have many more. American strategists, having anticipated a reaction like Toyoda's, planned to drop a second bomb shortly after the first, to convince the Japanese that the U.S. had a large supply.[1]
[0] Frank, Richard B. (1999). Downfall: the End of the Imperial Japanese Empire. New York: Penguin. ISBN 978-0-14-100146-3
[1] Hasegawa, Tsuyoshi (2005). Racing the Enemy: Stalin, Truman, and the Surrender of Japan. Cambridge, Massachusetts: Harvard University Press. ISBN 978-0-674-01693-4
--
> AGI
You personally are a General Intelligence; we have Artificial Intelligence. Is GPT-4 a "general" "intelligence"? That depends on the standards for the words "general" and "intelligence". (Someone is probably arguing that anything trained by an evolutionary algorithm isn't necessarily "artificial"; not that I know how it was trained, nor even care, given I don't use that standard.)
My college textbook on AI from 20 years back considered a large enough set of if-else statements (i.e. an expert system) as rudimentary AI. Now we'd call it a bunch of hard-coded if-else statements, but 40 years ago it was state-of-the-art AI and 20 years ago it was worth including in a textbook.
Russell and Norvig's textbook (one of the current go-to AI books) calls the big "if-else AI" a "simple reflex agent". It observes the environment in a rudimentary way and then goes through the if-then chain. One of the first things students (should) learn is how inefficient this is for more challenging problems.
My students were just given an assignment where they build AI to play Connect 4. Some will try to make a simple reflex solution because they want to avoid recursion, then come to office hours asking how to make it work better. It... can't. There really is an observable upper-bound on if-then performance.
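For concreteness, here's roughly the shape of those reflex submissions (a hypothetical sketch, not actual course code; the board representation is my own assumption):

```python
# Minimal sketch of a "simple reflex agent" for Connect 4 - illustrative,
# not actual course code. Board: 6x7 grid of '.', 'X', 'O'; row 0 is the top.
ROWS, COLS = 6, 7

def drop_row(board, col):
    """Lowest empty row in col, or None if the column is full."""
    for row in range(ROWS - 1, -1, -1):
        if board[row][col] == '.':
            return row
    return None

def wins(board, row, col, piece):
    """Would placing piece at (row, col) complete four in a row?"""
    for dr, dc in ((0, 1), (1, 0), (1, 1), (1, -1)):
        count = 1
        for sign in (1, -1):
            r, c = row + sign * dr, col + sign * dc
            while 0 <= r < ROWS and 0 <= c < COLS and board[r][c] == piece:
                count += 1
                r, c = r + sign * dr, c + sign * dc
        if count >= 4:
            return True
    return False

def reflex_move(board, me, opponent):
    """A fixed if-then chain over the current board - no lookahead at all."""
    open_cols = [c for c in range(COLS) if drop_row(board, c) is not None]
    for col in open_cols:                 # rule 1: take an immediate win
        if wins(board, drop_row(board, col), col, me):
            return col
    for col in open_cols:                 # rule 2: block an immediate loss
        if wins(board, drop_row(board, col), col, opponent):
            return col
    return min(open_cols, key=lambda c: abs(c - 3))  # rule 3: play centrally
```

However many rules get bolted onto that chain, it only ever reacts to the current board; without searching future positions (minimax - exactly the recursion they were trying to avoid), it can't see a two-move trap coming.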
> There really is an observable upper-bound on if-then performance.
Only on if-else chains you can code by hand.
There are a lot of machine learning methods that can be seen as using data to generate large networks of if-else decision points. There are methods that perform (a discrete simulation of) continuous generalization of if-else chains. And fundamentally, if-else chains with a loop thrown in the mix form a Turing-complete system, so they can do anything.
The problem here is that if-else chains are a really inefficient way for humans to model reality with. We can do much better with different approaches and abstractions, but since they all are equivalent to a finite sequence of if-else branches, it's not the if-else where the problem is - it's our own cognitive capacity.
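To make the first point concrete, a sketch (my choice of scikit-learn and its toy iris dataset, not anything from the comment above): a trained decision tree is a machine-generated if-else chain, and you can print it as one:

```python
# A learned decision tree is a machine-generated network of if-else
# decisions. Sketch using scikit-learn's toy iris dataset.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

iris = load_iris()
tree = DecisionTreeClassifier(max_depth=3).fit(iris.data, iris.target)

# Print the trained model as the if-else chain it really is.
print(export_text(tree, feature_names=list(iris.feature_names)))
```

The printed output is nothing but nested threshold comparisons, i.e. an if-else chain nobody wrote by hand.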
It's a quality zinger, but ironically the product may have been a subset of the feature: I'd argue the product is the fact that Dropbox doesn't belong to a platform vendor and therefore can't be leveraged for anticompetitive purposes / lock-in.
Dropbox laughing in that $10B market cap and $2B+ ARR. Jobs was right about the concept (insert meme about Apple ecosystem devs realizing their product was killed by an Apple feature release), but wrong in that specific instance.
If I gave you $250,000,000 to grow a company, and then next year I saw you had $250,050,000 in the bank, then $250,102,000 next year, and so on, I'd be pretty annoyed that I backed you. You have so much money you could be spending on hiring, development, and marketing, and you're instead just slowly chugging along, padding the corporate bank account? What am I paying you for?! Give me my money back.
VC-backed companies that spend more than they earn aren't duds. It's the nature of VC-backed corporations.
They spent it all on storage and on their new spammy-looking marketing emails that pester free users to upgrade, I guess. I don't recall anything really new from Dropbox since they were established.
I am paying Dropbox for storage and will pay them until I die. Rock solid sync and object durability, API access to my storage for my apps, no complaints whatsoever. I don't want new, I want storage I don't have to think about.
Until they die, relatively soon. 100 years from now Dropbox will be a distant memory, but locally mounted FTP directories under version control will be alive and well.
> but locally mounted FTP directories under version control will be alive and well.
This might matter to you, but it does not matter to me. In the meantime, my life will have been better and my time saved between now and death (certainly less than 100 years from now). That's what the money is for. Time is non-renewable. If you have more time and ideology than money, I admit your solution is a better fit for your life and use case(s). Self-host if you want; I have better things to do personally than cobble together technology that I can buy polished for the cost of two coffees a month. There's a product lesson in this subthread. No business lasts forever; the benefit is the value delivered during its lifecycle. Provide value, and I will happily let you ding my credit card monthly or annually (please support annual plans B2C!) forever. "Build something people want" or something like that.
Self-hosted and versioned FTP drives represent true power, like the stone buildings that stand for centuries. A Dropbox subscription is the shitty McMansion that falls apart after 10 years.
Unfortunately, longevity doesn't matter to an economy whose participants surf the cash flow. The shitty McMansion may fall apart after 10 years, but if it lets you earn more than it costs to replace, it's good enough. Sad as it is, a lot of the economy relies on the churn.
So was almost every unicorn startup. They purposefully aim for growth until it's unsustainable, then switch over to exploiting their market position. We may not like it, but the business model is far from novel or unexpected.
This quote always struck me as a weird anti competitive flex, in the "we can crush you anytime" way.
And Apple later released the whole iCloud suite that made Dropbox a second-class citizen in the OS, even though, to this day, Dropbox works better than iCloud in many ways. We hear the "services revenue" drum more and more at every Apple earnings call, so Jobs was not wrong either.
I've had to deal with this kind of company in FOMO mode. They start from a solution (AI) in search of a problem to solve, while the ideal approach would be the inverse.
Pretty much a guarantee that a lot of money will be wasted on panicked iteration through pointless approaches. I figure this happens every time a new fundamental technology comes out; the dot-com bubble probably saw many such companies.
Starting from a problem and making a solution requires that you understand both your problem domain and the domain of the solution very well. Much harder than taking a hammer and hitting everything that kind of looks like a nail.
More money is being printed than ever before, and some people literally have to find something to do with it; the waste in AI marketing is one result.
AI in the digital age is uniquely disruptive, however, since it connects directly to the way we communicate, so there is some reason to be wound up by this, whatever role you are in.
> “they call it AI until it starts working, see autocomplete”
No disrespect, but this is a pretty bad quote. Are we really at the point where a lazy not-so-hot-take tweet deserves to be stored in the annals of history?