We work with LLMs on a daily basis to solve business use cases. From our work, LLMs seem to be nowhere close to being able to independently solve end-to-end business processes; in every use case they need excessive hand-holding (output validation, manual review, etc.). I often find myself thinking that a use case would be solved faster and cheaper using other ML approaches.
Using LLMs to replace work in its entirety seems a stretch of the imagination at this point, unless an academic breakthrough that goes beyond the current approach is made, and such breakthroughs arrive on unknown timelines.
I just don't see how companies like Anthropic/OpenAI are drawing these conclusions given the current state.
The developers may well be Clever Hans-ing themselves, seeing capabilities that the models don't really have.
But the… ah, this is ironic, the anthropic principle applies here:
> From our work, LLMs seem to be nowhere close to being able to independently solve end-to-end business processes
If there were an AI which could do that, your job would no longer exist, just as happened with other professions before yours (weavers, potters, computers: https://en.wikipedia.org/wiki/Computer_(occupation)). And there are already people saying that even current LLMs and diffusion models forced them to change career.
> I just don't see how companies like Anthropic/OpenAI are drawing these conclusions given the current state.
If you look at the current public models, you are correct. They're not looking at the current public models.
Look at what people say on this very site — complaining that models have been "lobotomised" (I dislike this analogy, but whatever) "in the name of safety" — and ask yourself: what could these models do before public release?
Look at how long the gap was between the initial GPT-4 training and the completion of the red-teaming and other safety work, and ask yourself what new thing they know about that isn't public knowledge yet.
But also take what you know now about publicly available AI in June 2024, and ask yourself how far back in time you'd have to go for this to seem like unachievable SciFi nonsense — 3 years sounds about right…
… but also, there's no guarantee that we get any particular schedule for improvements, even if it wasn't for most of the top AI researchers signing open letters saying "we want to agree to slow down capabilities research and focus on safety". The AI that can take your job, that can "independently solve end-to-end business processes" may be 20 years away, or it may already exist and be kept under NDA because the creators can't separate good business from evil ones any more than cryptographers can separate good secrets from evil ones.
> If you look at the current public models, you are correct. They're not looking at the current public models.
> Look at what people say on this very site — complaining that models have been "lobotomised" (I dislike this analogy, but whatever) "in the name of safety" — and ask yourself: what could these models do before public release?
Give politically incorrect answers and cause other kinds of PR problems?
I don't think it's reasonable to take "lobotomised" to mean the models had more general capability before their "lobotomization," which you seem to be implying.
> Give politically incorrect answers and cause other kinds of PR problems?
If by that you mean "will explain in detail how to make chemical weapons, commit fraud, automate the production of material intended to incite genocide" etc.
You might want to argue they're not good enough to pose a risk yet — and perhaps they still wouldn't be dangerously competent even without these restrictions — but even if so, consider that Facebook, with a much simpler AI behind its feed, was blamed for not being able to prevent its systems being used for the organisation of the (still ongoing) genocide in Myanmar: tools, all tools including AI, make it easier to get stuff done.
> I don't think it's reasonable to take "lobotomised" to mean the models had more general capability before their "lobotomization," which you seem to be implying.
I don't like the use of the word, precisely because of that — it's either wildly overstating what happens to the AI, or understating what happens to humans.
And yes, when calling them out on this, I have seen that at least some people using this metaphor seem to genuinely believe that what I would call "simply continuing the same training regime that got it this far in the first place" is something they are unable to distinguish from what happened to Rosemary Kennedy (and yes, I did use her as the example when that conversation happened).
It could simply be that the work environments they're in are echo chambers, which is probably a necessity of working there. They likely talk to each other about happy paths, and everything else becomes noise.
I think it says more about their self-perception of their abilities in realms where they have no special expertise. So many Silicon Valley leaders weigh in on matters of civilizational impact. It seems making a few right choices suddenly turns people into experts who need to weigh in on everything else.
I don’t think I’m being hyperbolic to say this is a really dangerous trend.
Science and expertise carried these people to their current positions, and then they throw it all away for a cult of personality as if their personal whims manifested everything their engineers built.
These “AI will end all jobs” takes never seem to understand that a significant portion of the economy is driven by hype, celebrity, social pressures, and competition. People do not follow certain content creators because they are the most economically efficient and provide the “best” information, they follow them for largely human reasons. And as long as putting in more work leads to being more successful in this realm, jobs will continue to exist.
Ergo I think the large shift will not be from jobs to no-jobs, but from jobs to entertainment-based jobs. In a world where everyone’s basic needs are met, the result is an attention-driven economy, not no economy.
Also, it's hard for proponents, often those in AI companies, to avoid coming off as either buying into their own hype, or otherwise just pushing the marketing for their product. The AI person is so convinced that AI will be so good that it might end work as we know it? Maybe, but it's also good for pushing the bull case for his company.
The thing I laugh about in these pieces is their ignorance of the first reason people organized and built the bases of law and economies: to control what we now call real estate. RE is the basis of, and reason for, nation states, militaries, assets, and the whole financial system, the very system from which this person expects UBI to issue. Jobs only exist when that whole system is stable.
There is and always will be better and worse property around this unique globe. This will be exacerbated by climate challenges. An AI-enhanced country will simply take control of the desirable property of another, weaker state, with predictable outcomes for the humans involved. The assumption that "we" will just get past that transition is a rich one; that is, one only a rich person can make.
The idea that with AI everyone's basic needs will be met is rubbish. The current mindset is: if you don't work, you don't deserve to eat or see a doctor. That isn't going to change anytime soon. Eight people have as much wealth as the bottom 50% of the entire world. The machine that created that disparity won't stop.
> Ergo I think the large shift will not be from jobs to no-jobs, but from jobs to entertainment-based jobs.
And that, in and of itself, is dystopian.
Furthermore, there's far more inequality in "entertainment-based jobs" than in other types of work. Not everyone can be an influencer, period, and it should be blindingly obvious to anyone who's lived in the world that not everyone can be an entertainer.
> In a world where everyone’s basic needs are met, the result is an attention-driven economy, not no economy.
"AI" will not bring us to "world where everyone’s basic needs are met." We lack the ideological foundations for that (and if we had them, everyone's basic needs would already be met. It's far more likely that we just get a bigger homeless problem, with people trying to scrape by living in sewers (like they already do! https://www.nzz.ch/english/the-misery-of-the-homeless-in-las...).
There are an awful lot of currently employed people who are neither remotely entertaining nor creative-minded enough to fill a more behind-the-scenes role for someone who is (assuming jobs like that even exist in this hypothetical near-future massive employment shift).
I am easily able to spend 24/7 consuming content that I like - and I have very specific niche tastes, but even there enough content exists basically until the end of time.
I could see this work if we develop even more exotic and creative tastes, creating even more hyper-specific niches and tiny communities around them.
> People do not follow certain content creators because they are the most economically efficient and provide the “best” information, they follow them for largely human reasons.
You reckon? I think most of this today is algorithmic, while for most of the 1900s it was effectively a lottery.
> Ergo I think the large shift will not be from jobs to no-jobs, but from jobs to entertainment-based jobs. In a world where everyone’s basic needs are met, the result is an attention-driven economy, not no economy.
Peacocking will still be a thing [0], but I don't expect it to go the way you describe.
Even with your assumption, we cannot all be creators. At best we can signal to each other how "refined" we are, that we "have good taste", but we'd already passed from the age of writing poems or playing instruments as a primary way to woo others to one of sharing mix-tapes before I was born (and from mixtapes to playlists now).
Instead, I think we may have a world where the super-rich pay for humans to be valets to show they can afford it, where those humans themselves are "forced to get by" with robots who do a better job of it, in much the same way and for much the same reason that a $20 Casio F-91W keeps better time than a $20k Rolex or a $200k Omega (and of course, your phone or smartwatch is constantly syncing with someone else's atomic clock).
[0] and indeed, I think this is the only correct argument for the claim "AI can't ever make art": to the extent that the point of art is to be an expensive signal (rather than merely "pretty"), then AI can only be that when it's unaffordable.
I think the conflict between "is art the signal" and "is art the stuff" follows a similar pattern to the arguments over "if a tree falls when nobody is around to hear it, does it make a sound?" being secretly about whether "sound" means the vibrations in the air or the qualia caused by them.
And you don't seem to understand that AI influencers are already here, and are generating far more accounts and amassing far more followers than actual people. Also, actual people (including OnlyFans models) who used to fake the real connection by using men (like the Tates) to type for them are now using AI and bots.
The sea change will come when people on both sides (not just the influencer side) PREFER multimodal AI to dumb texting. And then you don't need the human model herself anymore, because the AI model is far more versatile (in commercials too).
Followers and engagement are gameable metrics, so expect capitalism to slowly favor AI in everything.
If an AI model can ingest millions of publicly available videos and back up its points with the best B-roll and clips, it can also convince better than a team of 3 people. Even if its argument is bullshit. As you said, it’s not just about truth and results, it’s about follower counts and clout!
So it’s not just about losing jobs, it’s also misinformation at scale, across many accounts. Enjoy the brave new world!
I am talking only about what's there NOW and what's possible with CURRENT technology.
> AI influencers are already here, ...and amassing far more followers than actual people
The only one I've seen that gained any significant following is Neuro-sama, and that's largely thanks to Vedal's own constant involvement, during streams, in trying to improve her, by coding plugins to interact with games, training her, and combining different models (text and visual) and having her interact with chat and himself.
Essentially Neuro-sama is more of an extension of Vedal, and wouldn't exist without his efforts. Also half of what makes her entertaining is the banter between her and Vedal. He's a great straight man to her wacky sayings and non sequiturs.
Not saying there won't eventually be some big AI influencers, but I don't see them taking over anytime soon, or even most people preferring them to actual people once they are widespread (though there will still be enough who do prefer the AI that I believe a few can have big followings).
I think for the most part it will remain pretty niche, though. I watch Neuro-sama sometimes, but I watch content with actual people far, far more often (Neuro/Vedal is ~0.1% of all the content I watch right now, while I could be watching it most or all of the time, as there are quite a few archives on Youtube/Twitch... but I don't).
Also I'm assuming by making this claim you're aware of more than just Vedal/Neuro-sama, so I'm curious which AI influencers you're thinking of that have such massive followings.
I’m pretty unconvinced by the AI influencer idea. Most of them are just niche aggregator type of things, not accounts with real influence. By the time AI can replicate a streaming live video, devices and platforms will have already implemented real-identity features. Remember that the platforms themselves don’t benefit from AI content dominating everything and are pretty incentivized to ensure that content is from real people.
> Remember that the platforms themselves don’t benefit from AI content dominating everything and are pretty incentivized to ensure that content is from real people.
These platforms themselves are AI. Even when the posts are made by humans — and posts don't need to be "real time" nor do they need to be video — the feed is algorithmic to maximise your attention, and the choices of which advert to stick in front of your eyes are chosen algorithmically to maximise revenue… and even before the use of LLMs for copy and diffusion models for images, the ads themselves are AB tested by the businesses to see which have the greatest effect.
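To make that concrete, here is a toy epsilon-greedy sketch of the kind of loop that picks which ad variant to show. The variant names and numbers are invented for illustration, not any platform's actual system:

    import random

    # [clicks, impressions] per ad variant; names are made up.
    variants = {"ad_a": [0, 0], "ad_b": [0, 0]}

    def pick_variant(epsilon=0.1):
        """Mostly show the best-performing ad, occasionally explore."""
        if random.random() < epsilon:
            return random.choice(list(variants))
        # Laplace-smoothed click-through rate, so new ads aren't starved.
        return max(variants, key=lambda v: (variants[v][0] + 1) / (variants[v][1] + 2))

    def record(variant, clicked):
        """Update the stats after each impression."""
        variants[variant][1] += 1
        variants[variant][0] += int(clicked)

Scale that loop up to millions of users, swap the counters for a model of each user, and you get the feed; no understanding required, just optimisation pressure.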
Not only that, but Instagram (owned by FB) is totally being overrun by AI-produced content, including, for instance, digital models and women; they've solved the fingers problem. In a way it's good, because women will no longer aspire to show off their bodies to many strangers, and AI will generate whatever the audience wants instead.
And if you add partially AI-produced content, such as automatic clips, translations, b-roll, edits, etc., it becomes even more content that's AI-produced.
YouTube, instagram etc. do not benefit from a world in which their entire content base is AI generated slop. They need real people to spend time and attention there in order to sell ads, and that includes creators.
> YouTube, instagram etc. do not benefit from a world in which their entire content base is AI generated slop.
"slop" being the key word. I remember when all 3D animation was "slop".
But even ignoring technological progress, lots of people have regularly criticised mass media for being "slop", and yet the masses keep eating it up — the other side of that coin is that even if we had an AI that was essentially a perfectly sublime artist, I'd expect it to go like this: https://www.smbc-comics.com/comic/ai-12
As I said in the original comment, I think people largely follow content creators for “human reasons”, including that they feel a connection to the creator. This doesn’t exist with AI tools and functionally never will.
The future of media is even more personality-driven than it is already.
> I think people largely follow content creators for “human reasons”, including that they feel a connection to the creator. This doesn’t exist with AI tools and functionally never will.
Where do you propose the future AI influencers of the world are going to get their inputs? How can they make content so good that it replaces human creators, if those human creators aren’t even expressing themselves? It will be entirely disconnected from culture and therefore not be compelling to viewers in the first place.
The AI-replacement theory just doesn’t line up with how online trends and social behavior actually works. Creators become popular because they are relatable, are charismatic, etc. not because they have a formula that can be replaced by a machine.
> Where do you propose the future AI influencers of the world are going to get their inputs?
AB testing on how humans respond to it.
> It will be entirely disconnected from culture and therefore not be compelling to viewers in the first place.
1. Not disconnected: at such a point, the AI is the culture.
2. Your use of "therefore" does not work even absent #1, for the same reason that drugs are compelling regardless of any connection to the culture in which they are consumed.
> Creators become popular because they are relatable, are charismatic, etc. not because they have a formula that can be replaced by a machine.
And charisma is… well, an LLM is going to be better at the personal touch when you've got too many fans to keep track of as a mere human, which is what, a few hundred? After that point, you're going to be giving vague non-specific platitudes no matter what. Things like this (the response to "As charismatically as possible, respond to a star-struck fan email"): https://chatgpt.com/share/41bf083f-7a37-444f-84f3-ab99a9caf9...
Thing is, that actually works: we know it does, because a performer can come on stage and say "Hello Wolverhampton!" without greeting a single audience member by name and be cheered rather than booed off (assuming they're in Wolverhampton, obviously); and also from the example of horoscopes being a thing people enjoy despite dividing the world up into 12 boxes and then having to make a vague platitude that could fit anyone.
This is the same kind of thing people always say, that a machine can’t replace a doctor’s bedside manner or a pilot’s intuition. Here is a clip from the first Iron Man, watch the first 15 seconds, word for word same kind of sentiment:
And then technology proceeds to trounce humans, and it turns out that people don't want London Cabbies to have "The Knowledge"; they're totally happy with an Uber driver who mindlessly follows the GPS routing system, which knows far more about traffic. And then a self-driving car, which never has any social issues and can regale you with anything you want during the drive, stay silent, or play you any movie. Did you ask your parents for most subjects, or ask Google? People overestimate how much their special idiosyncrasies are needed by other people. They just want to get in, get out, and get what they want.
> And if you add partially AI-produced content, such as automatic clips, translations, b-roll, edits, etc., it becomes even more content that's AI-produced.
People expect that increasing level of production value, and soon they will treat human influencers the same way they treat, say, a Picasso painting vs DALL-E. The Picasso painting is just there; it can't crank out what you want every minute. So it is a curiosity, but not where you get most of your news or whatever.
All the qualities you mentioned can be replaced by a superstimulus and most people will come to prefer it the same way birds prefer rounder fake eggs to their own eggs, and even kick their own eggs out.
As for where they will get the content? Where does news on Twitter and Telegram appear from? Crowdsourced news from everyone having a cellphone, and then everyone discussing what they just uploaded. Citizen journalism. Just summarize a freaking forum and you have a podcast in all languages, recorded in whatever voice and with the hottest anchors you want. And they can then also tell you anything you want to know in the "would you like to know more?" fashion of Starship Troopers, if that had been a thing LOL. And sell to you far better, because they can give you individual attention and remember the preferences of millions of people and your friends. And solve the two-sigma problem for educating students. Results will matter for parents. This next generation will grow up with AI friends and tutors, and will not want human tutors, for most subjects at least! Let's hope this next generation will still want human friends.
Yeah I think you’ve veered into sci-fi story territory here and far from a realistic vision of the future. Which is really the issue with all of these absurd AI predictions.
Here’s what I think is more likely: phone and camera manufacturers implement a way to track if a photo is taken from a camera in real life or not. Social media platforms do the same thing. AI content continues to exist but is not implicitly trusted, similar to the way we perceive photos today (but didn’t 50 years ago), in that they can be Photoshopped.
> Yeah I think you’ve veered into sci-fi story territory here and far from a realistic vision of the future. Which is really the issue with all of these absurd AI predictions.
Much as I would agree that quoting a Marvel film is not a useful thing… the same quote exists for real-world examples like playing Go, or the game of Diplomacy.
> Life continues on mostly as normal.
The one single thing I'm confident of, is that "normal" will mean something completely different.
I have only the vaguest beliefs about what that difference might look like, but the status quo absolutely isn't stable.
Even the mere existence of the web upon which social media runs, is a thing which I lived through, a thing where I remember the before times; a time with film stars sure, but no online influencers. It wasn't a magic time without issues (there were, and remain, many problems with everything), but today would be considered a wild sci-fi fantasy compared to when I left secondary school, what with us all carrying GPS trackers because they were too cheap to not include in the phones we use to make too-cheap-to-meter video calls, which we use to share clips of cats being silly or how nice our lunch seems.
There was no single moment in my life where I thought "this changes everything", but all those small changes added up and we don't live in the 1990s (or 80s) any more. The home computer went from a Commodore 64 playing Jet Set Willy to a Performa 5200 playing Marathon 2 and giving me a dial-up modem, but the games were still just games and the 56k modem struggled with everything and only connected at 8-10 kbit/s; I went to university and got a direct ethernet connection to the UK academic backbone and VOIP reached the consumer market but I had nobody to call; I eventually got a mobile, first a Nokia 6210 then some random colour screen feature phone then a Nexus 5 etc., but none of the individual transitions or upgrades really felt worthwhile, they were forced by the hardware breaking, and yet my iPhone SE 3 is clearly more capable and can enable things that were simply not possible on the 6210.
The absurd and useless vantage point of a 25-year-old living inside the apex of a uniquely privileged world bubble. The false utopianism echoes Brave New World, playing itself out again in real time.
Of course. Her own perspective is her own. But that wasn't that piece. That piece was her trying to tell a larger story. In doing so, she started from her own quite significant ignorance and tried to infer from that a larger story. There is little value in that progression.
In talking with other young folks I see echoes of her misunderstandings. A lot of this is the tool vs oracle discussion. I think there are significant risks where young folks see LLMs as oracles, where they really are better understood as tools. However, many young people don't have the experience to make that distinction.
I had to do my expenses this morning, and one task is to split out the bill into Room, Taxes, and Dining. I asked gpt-4o to handle it, and it was a few dollars short. I then asked it to check and be careful, and it produced an A/B alternative response, where A was actually correct once it figured out its calculation was short of the total provided in the bill.
It could not be trusted to get it right the first time, and hence I had to do the calcs as well as check its response; so rather than making me more productive, it made me less productive.
A rule-based system might have been better, or perhaps loading the data into a DB or data frame and getting it to produce the SQL and run that instead.
Either way, this simple task could not be reliably solved by the best LLM out there.
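For what it's worth, a minimal sketch of that rule-based check (the helper name and the figures are my own invention): let the LLM propose the Room/Taxes/Dining split, but do the arithmetic and the accept/reject decision deterministically:

    from decimal import Decimal

    def validate_split(split, expected_total):
        """Sum a proposed Room/Taxes/Dining split and compare it to
        the bill total, using Decimal to avoid float rounding drift."""
        total = sum(Decimal(v) for v in split.values())
        return total == Decimal(expected_total), Decimal(expected_total) - total

    # Proposed split, e.g. parsed out of the LLM's answer.
    split = {"Room": "412.00", "Taxes": "61.80", "Dining": "54.35"}
    ok, shortfall = validate_split(split, "531.15")
    if not ok:
        print(f"Split is off by {shortfall}; reject and re-prompt.")  # off by 3.00

Same idea if you go the DB/data-frame route: the model writes the query, but the totals come from the data, not from its token statistics.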
I think most people are buying into the hype because search for information has just become baaaaad due to, uuuhm... incentives, and the current crop of LLMs provides a good view into all information available on the internet (also behind paywalls...) in any language.
That way, it works great if you want to solve programming problem X for which there is a library, and it also works if you want to know which companies built bicycles in Poland pre-1990. It was also very confident in answering what happened to the factories. Just that some of the factories didn't exist, and some of the new companies it mentioned came very close to what a Google search would return... (aka unrelated advertisements).
In this high-dimensional chaos, every outcome is almost equally likely: from 'civil war' to 'everyone is chilling at the beach all day', from 'we have robot servants' to 'we are the robots' slaves (possibly without us even realizing)', from 'all prices go to zero' to 'there are robot cars with guns patrolling the streets'.
You can get in a spiral of more and more fear and hopelessness and anxiety, and you can of course go up a solarpunk stairwell.
My advice is to relax and take advantage of what AI can do right now; don't think too much about the future. We have reached the steep part of the S-curve, so tomorrow is yesterday, as Terence McKenna says.
I spend most of my free time studying, because ChatGPT now allows me to study any subject, and I can ask it stupid questions when I don't understand something in a lecture. I literally take a screenshot (with YouTube captions enabled), ask it to clarify, and then watch the video again.
Having YouTube and ChatGPT is a godsend. Having access to Andrej Karpathy and Richard Feynman, Leonard Susskind and Carl Jung; you can even hear Erwin Schrödinger himself speak (https://www.youtube.com/watch?v=hCwR1ztUXtU)... what a time to be alive!
There is a saying: it's never as good or as bad as it seems.
The universe is an interesting place. Certainly won't complain if there's more time in the day to get lost in its details.
There's still so much to do, however. Fully open to the upside of artificial intelligence, but cautiously so. I fear it is invoked as a kind of deus ex machina. In my view, it is a multiplier and not a substitute.
The flying car essentially exists; it's called a helicopter. It's just uneconomical to build actual flying cars due to costs and fuel inefficiency. Flying cars will happen when/if we have breakthroughs in battery technology and potentially energy generation.
I enjoy "working" on ideas. I will always enjoy working on ideas. I don't work in AI specifically but did purchase "The Society of Mind" by Marvin Minsky and followed OpenCYC.
I read this story and thought that the author is too close to the subject. I'm sure what he is working on is impressive, but the real trick with technology is having it work in the real world.
Outside of large rent-seeking business plans, what practicalities will this provide for the everyday human?
The best I can see coming from AI is that large corporations' phone systems may start to provide quality responses. I feel that training AI on the internet is not as productive as it may seem. The internet is an artificial environment almost entirely dominated by commercial content. I'm not surprised AI can appear quite smart in those confines.
The stumbling block I see is the integration with the outside world. That is a domain of sentient creatures with unique ideas.
The biggest impact for AI in terms of employment will be in the customer support (Call Center) and employee support (IT Help Desk and HR Support) functions.
The second biggest impact will be at the lower levels of the software development functions (QA Engineers/Testers, Technical Writers, Business Analysts, Junior Devs).
Outside of those areas, minimal impact over the next 5 years. Even within those two areas, I predict 30-40% productivity gains, but it will not all lead to reduced headcount; more than half of it will lead to increased expectations, as all innovations do.
One other observation is that what's leading to much of the recent angst is CEOs engaging in mass layoffs to meet Wall Street expectations and publicly attributing it to AI when in fact the work is just being moved offshore or not being done at all (until there's a big public failure and it resumes).
I asked gpt-4o to output some test questions for what I'm studying.
Then I put one of the questions back into GPT to ask it to explain which of the 4 multi-choice answers was correct.
GPT said the correct answer was C, then output something completely different from any of the four options it gave me originally.
I find GPT very useful yet at the same time it has reliability issues to the point where I wouldn't trust it without another person to look at its output and verify on important issues.
I'm not really a machine learning scientist, but this unreliability seems baked into the core of how it runs: it doesn't have 'human reason', it's just word-location statistics.
Maybe reliability will happen in the future? Or maybe it will remain like a drill gun for knowledge work instead of a screwdriver, and enhance people's work.
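If anyone wants a cheap mitigation in the meantime, here is a sketch of a majority-vote check, assuming a generic ask_llm(prompt) helper (hypothetical, standing in for whichever API you use):

    import collections

    def ask_llm(prompt: str) -> str:
        """Hypothetical stand-in for your actual LLM call;
        assumed to return a single answer letter like 'C'."""
        raise NotImplementedError

    def majority_answer(question: str, n: int = 5, threshold: float = 0.8):
        """Ask the same multiple-choice question n times and keep the
        majority answer only if it clears the agreement threshold."""
        votes = collections.Counter(ask_llm(question) for _ in range(n))
        answer, count = votes.most_common(1)[0]
        return answer if count / n >= threshold else None  # None: ask a human

It doesn't give the model 'human reason'; it just turns 'sounds confident' into a measurable agreement rate, and anything below the threshold still goes to a person.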
> There are obviously other vital questions, like how people will be able to meet their material needs. Many have examined this question, with no final answer yet adopted as official policy for this contingency by any government. I am instead going to do something that may feel like cheating. I will go ahead and assume that people can meet their financial needs through universal basic income or other transfers and will solely concentrate on the question of whether people can and will be happy—or at least as happy as they are now—without work.
You've gotta be fucking kidding me with this.
We can barely agree on whether service workers should receive tips, let alone what a Universal Basic Income will look like at even a municipal level. You can't wave that away.
I really do not understand the exuberance to rush to a post-work future where AI controlled by private corporations meets most of our needs. I cannot see a future in which this doesn't end with mass unemployment and neofeudalism.
Case in point: Avital is Chief of Staff to the CEO of one of the most valuable and well-funded companies in tech right now. They stand to make significant fortunes if/when Anthropic undergoes a financial event. They will absolutely benefit from the increased worker pool that mass unemployment will create.
I will point out that even if the author is 100% correct (which other comments here have been great at disputing), that still doesn’t explain how all the hairdressers and physical therapists and restaurant servers and line cooks are going to be losing their jobs.
Even truck drivers are somewhat safe because even if you get self-driving working perfectly, you still might want a human to physically operate parts of the truck that a computer can’t interact with, because a computer doesn’t have any hands with opposable thumbs that can adapt to an infinite set of situations.
E.g., Truck drivers who deliver Pepsi and Frito Lay aren’t going to be replaced by self-driving because driving isn’t really most of the job.
Hypothetically, if a large part of workers no longer have income, how would those roles be supported? Surely there would be reduced demand, which would lead to those roles disappearing as full-time positions.
This fails to account for the fact that it's people working that keeps the economy going. If AI does replace more and more workers, there will be less money flowing back into the economy.
At some point there won't even be companies needing AI.
So just by scaling more they think they will get to AGI? My guess is that it will need something new, maybe a new Nobel prize worthy discovery. It will be interesting to see how history will view the LLMs of today.
I disagree with the bulk of the comments thus far that this is too far-fetched. I believe that AGI, and some of the scenarios the author proposes are possible and society needs to be aware of this likelihood.
"The general reaction to language models among knowledge workers is one of denial." - FTA
What am I missing? We know executives who make shit up like this can be replaced by software instances that make shit up.
I thought the point is people want influence.
I'm sorry to be ageist, but at 25 I thought lots of things were going to change the world, but they didn't. I hate to break it to 25-year-olds, but the world is far more complex, messy, and beautiful than most of you can conceive of.
I wish it were true! I for one would welcome our new AI overlords. But mostly I'm just shocked at extremely smart people like Tegmark or Hinton giving credit to these apocalyptic, sci-fi visions of LLMs taking over the world.
It's a bit ironic that knowledge work will become obsolete before blue collar work. I disagree with the author that it will become completely obsolete in the next 20 years (although certainly in the next 200), but it will change significantly and there will be a massive downward pressure on knowledge worker compensation as the barriers to entry have been decimated. On the other hand, good luck getting your AI to unclog your sink, remodel your kitchen or fix your AC any time soon.
I mean, if you think it forward a bit: if you really have a general system that can mostly obsolete all knowledge work, I don't think the leap to having some sort of robots would be that many years away. These general agents could be taught to control the robots and perform different tasks, or they could even do that research themselves.
Also, if most knowledge work is gone, these people are either unemployed, with basically no money to spend, or they retrain into blue-collar roles, which would push down wages as the supply of workers grows. It's very hard to predict how it will all end up in that case.
You could assume the writer has insider info, but that attitude is a bit outdated these days. If Anthropic does have an advantage, it isn't a very significant one. The vast majority of research is public and most, if not all, big discoveries "circulate" around the economy.
If we're looking at how competent "AI" systems will have to be in order to take over all of programming, we'll have to wait for them to topple almost all other white-collar jobs before coming for this one. You could ascribe this to the "First they came for X... then they came for me and no one was left" viewpoint, but considering the progress we've had since 2021, and the fact that this field is far more popular and hence "stickier", progress will slow, and newer ideas will have to push through waves of mediocre-but-popular and technologically-regressive-but-economically-viable ideas. It seems very unlikely that in 3 years all white-collar jobs will be replaced.
Computer hardware is gold in this new boom, and 3 years isn't a significant amount of time for hardware development (considering how consumer and enterprise hardware development go hand-in-hand in this field). In fact, all the progress visible over the next 3 years has likely already been decided upon.
What most people fail to realize is that LLMs indeed are stochastic parrots, but the internet is so much more vast than they can fathom. It has data on almost EVERYTHING. ALL of that was fed into this recursive architecture, which then became half-decent to talk to. To visualize an LLM, imagine a never-ending "net" of information. To reach from one point to another, LLMs can't make a straight line; they make a sphere with its center at the starting point, and when it connects, the volume is the amount of data it has to compute during training. It's a very inefficient algorithm!
Filtering and sorting and labelling data is a difficult task and is something that isn't focused on enough. The end result (training on massaged data) is given undue importance because it's easy and accessible. There simply isn't enough time in 3 years to train models or to re-filter/sort/label data that will make this author's predictions come true.
As usual, the "hard limit" for tech is human mental capacity. Most people cannot learn a new language after 20. Most people are not very good at reading. Changing these stats takes centuries of good education and nutrition; 3 years is next to nothing. After losing your job (your primary income stream), the biggest hurdle isn't "saving face" but figuring out how you're going to afford to put food on the table and pay the bills. There isn't going to be UBI in 10 years, because efficiency gains lead to major, society-wide discontentment as the pyramid's base becomes wider. Hoping for a near-flat structure is utopian. Thinking it'll come about without a war is even more fantastic. Taiwan's chip competence seems to be the bedrock of our modern civilization, and before getting giddy for UBI one should realize that maybe sometimes it's all just too good to be true and might come crashing down any second.