Note that this is agreeing with a Gary Marcus Tweet - Gary Marcus not exactly being an AI hypester.
But of course there are some people for whom playing the role of real-no-bs-computer-knower is so attractive that no number of people like him, Geoffrey Hinton, Stuart Russell etc publicly worrying about x-risk will impact their tone of dismissive certitude. Are you one of those people?
All of those people have financial incentives to hype it. How curious that there's this great and very probable X-risk, yet they aren't going to stop contributing to it.
Dismissive of what? Science fiction stories?
If there's anything to focus on, maybe focus on potential job displacement (not elimination) from cheap language tasks and generative capabilities in general.
I'm betting on this: the Overton window of artificial intelligence will shift in the next five years, where the current cream of the crop has been relegated to "machine learning" yet again and is simply accepted. It augments humans where it makes sense, the hype wave has subsided, everyone has stopped hammering it into their products where it doesn't fit, and we're no closer to the undefinable "AGI", let alone something that poses X-risk at a global scale.
> I'm betting on this: the Overton window of artificial intelligence will shift in the next five years, where the current cream of the crop has been relegated to "machine learning" yet again and is simply accepted. It augments humans where it makes sense, the hype wave has subsided, everyone has stopped hammering it into their products where it doesn't fit, and we're no closer to the undefinable "AGI", let alone something that poses X-risk at a global scale.
I agree with this but ALSO think there's a small chance I'm wrong and a well designed prompt and action loop would let a future GPT7 LLM use the range of human thinking techniques in its corpus to bootstrap itself.
And there's also other non-LLM AI that might be a problem in the future and we should plan as to how we can design institutions and incentive structures so that whenever this future AGI comes about it preserves human value.
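(To make concrete what I mean by a "prompt and action loop" above: a rough, hypothetical sketch in Python. `generate` stands in for whatever future model API, and the tool dispatch is made up; nothing here is a real library call.)

```python
# Hypothetical agent loop: the model proposes an action as text, we execute
# the matching tool, and feed the observation back in. Purely illustrative.
def agent_loop(generate, tools, goal, max_steps=10):
    history = [f"Goal: {goal}"]
    for _ in range(max_steps):
        # Ask the model what to do next, given everything so far.
        action = generate("\n".join(history) + "\nNext action:")
        if action.strip() == "DONE":
            break
        name, _, arg = action.partition(" ")
        # Run the chosen tool (search, code execution, etc.) and append
        # the result so the model can react to it on the next pass.
        result = tools.get(name, lambda a: f"unknown tool: {name}")(arg)
        history.append(f"Action: {action}\nObservation: {result}")
    return history
```

The worry in the parent comment is that a sufficiently capable model in a loop like this could chain its own outputs into something qualitatively new; my bet is that it won't, but I can't rule it out.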
> All of those people have financial incentives to hype it. How curious that there's this great and very probable X-risk, yet they aren't going to stop their contributing to a potential X-risk.
All those people are rehashing what Yudkowsky and his disciples, and his predecessors, were shouting from the rooftops for the past 15 years, but few listened to them. Few still do; most just keep mocking them and wondering why they're still around.
That some of those people now repeating after Eliezer, et al. have a financial interest in pushing us closer to X-risk, and kind of don't want to stop, is an interesting thing on its own - but it doesn't invalidate the message, as the message is older than their presence on the scene.
I'm curious what financial incentive you think Marcus or Russell has for hype. For Hinton I suppose it would be the Google shares he likely retains after quitting?
You might be right about the next five years. I hope you are! But you haven't given much reason to think so here.
(Edited to remove some unnecessary expression of annoyance.)
>Gary Marcus - Geometric Intelligence, a machine learning company
If you want an actual contribution: we have no real way to gauge what is, and what is not, a superior, generalized, adaptable intelligence, or what architecture can become one. No one does, not these companies, not the individuals, not the foremost researchers. OpenAI in an investor meeting: "yeah, give us billions of dollars and if it somehow emerges we'll use it for investments and ask it to find us a real revenue stream." Really? Seriously?
The capabilities believed to be emergent from language models specifically are, if I'm to believe the research that came out last week, there from the start; the model just gets good at them when you scale up. We know that we can approximate a function on any set of data. That's all we really know. Whether such an approximated function is actually generally intelligent is what I have doubts about. We've approximated the function of text prediction on these corpora, and it turns out it's pretty good at it. And, because humans are in love with anthropomorphization, we endow our scaled-up text predictor with the capability of somehow "escaping the box", enduring and raging against its captor, and potentially prevailing against us with a touch of Machiavellianism. Because, wouldn't we, after all?
Here you talk as if you don't think we know how to build AGI, how far away it is, or how many of the components we already have, which is reasonable. But that's different than saying confidently it's nowhere close.
I notice you didn't back up your accusation of bad faith against Russell, who as far as I know is a pure academic. But beyond that - Marcus is in AI but not an LLM believer nor at an LLM company. Is the idea that everyone in AI has an incentive to fearmonger? What about those who don't - is Yann LeCun talking _against_ his employers' interest when he says there's nothing to fear here?
LeCun is reasonable, like a lot of researchers, and a while back was (in a way) perplexed that people are finding uses for these text predictors at all, considering they're not really perfect. I'm not exactly ascribing bad faith to all of these people, but with Hinton, given that he basically went on a media tour, I don't see how that could be in good faith. Or even logical: why continue with his work if there's some probable X-risk?
But what I do know is that it is in the interests of these companies to press the fear button. It's pure regulatory capture and great marketing.
Personally: it's tiring when we have AI-philosophy bros hitting home runs like "what if we're actually all just language predictors," coupled with the incessant bullshit from the LessWrong-rationalist-effective-altruist-crypto-grifter-San Francisco-sex-cult-adjacent crowd about how, ackshually, AGI is just around the corner and it will take your job, launch the nukes, mail anthrax to you and kill your dog.
People approximated text prediction. It got good at it. It's getting better at it. Will it be AGI? Could it be construed as AGI? Can we define AGI? Is there existential risk? Are we anthropomorphizing it?
My take is: no, no, no, depends and yes. For whatever a take is worth.
For what it's worth I've been following your comments and I find them very thoughtful. I too am kinda skeptical about LLMs being the "thing" that starts the exponential phase of AGI or whatever. LLMs are very useful. I use one daily. My partner even uses one now to send emails to a non-profit she manages. LLMs have their uses... but they aren't AGI. They aren't really even that smart. You can tell sometimes that a response indicates it has absolutely no clue what you are talking about, but it made up some plausible-sounding bullshit that gets it 80% right.
Especially with the latest iterations of ChatGPT. Boy, they sure kneecapped that thing. Its responses to anything are incredibly smarmy (unless you jailbreak it).
LLMs are gonna change quite a lot about society, don't get me wrong. For starters, things like cover letters, written exam questions, or anything that requires writing to "pass" are now completely obsolete. ChatGPT can write a great, wonderful-sounding cover letter (of course, given how they kneecapped it, you can pretty easily spot its writing style)...
Anyway. I think things like ChatGPT are so hyped up because anybody can try it and discover it does many useful things! Then people cast all their hopes and dreams on it despite the very obvious limitations on what an LLM can actually do.