
“Learn to code.”

That’s what folks were crassly saying to truck drivers, baristas, and cashiers.

The writing was/is on the wall: AI is only going to get more powerful and applicable to increasingly complex tasks.

The thought was that “unskilled” labor would be the lowest-hanging fruit, that automated AI - with some (but minimal) human oversight - would replace serious chunks of the workforce in various minimum-wage and “blue collar” sectors.

Machines don’t necessarily need to sleep, and they don’t have labor unions or laws that require healthcare or overtime pay. They don’t get upset, take things personally, or seek revenge or reciprocity the way a person might.

Sounds like that could be a threat to many kinds of jobs, many of them “bullshit jobs” (in the words of Graeber), but others as well.

It seems (to me, at least) that the more imminent threat is text-based AI - with some human oversight - replacing large swaths of the tech workforce (many of whom were leading the narrative about the truck drivers).

The incentives for companies like Microsoft, Amazon, Google, and others to go this route are obvious, beyond the stated reasons why “low-skilled” labor is at risk. They already have enormous investments, acquisitions, projects, established platforms, and infrastructure related to AI.

I expect to see more partnerships like the one between Microsoft and OpenAI, from all of the major tech companies.

I also believe the connection between the acquisitions/partnerships and the mass layoffs will become more and more obvious as these acquisitions/partnerships continue to happen.

I can’t be the only person that is noticing this…




Here's the weird thing though. Bullshit Jobs could already be eliminated, yet they are not. Why not? Why would companies wait until AI to eliminate what we already know are Bullshit Jobs?


As tempting as it may be for "management" to imagine a future where requirement docs get translated into code, the actual reality of the operation will always require a few humans to oversee it. But I do expect businesses that claim to do exactly that to appear. They will provide very convincing pitches and make fortunes. However, they will still need to retain humans working quietly in the background. I don't expect "management" to ever realise the added overhead.


It will take 10 years for folks to realize the layoffs/unemployment are driven by AI, and 20-30 years for a coherent political movement to emerge that lays out a post-AI-labour society.

In the meantime, rocky road.


A lot of very smart people, both within NLP research, and here, are in complete denial about what the proliferation of high quality LLMs means for their jobs and earning potential.

The only thing which makes me less sad is that I'm pretty sure Moravec's paradox is not actually all that real, but is more due to the relative lack of engineering interest put into solving continuous control problems. Apparently reinforcement learning on transformers works now (RLHF in ChatGPT). This implies that we should see highly effective continuous control models very soon. Robots are coming for physical labor, it'll just take a bit longer.
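To make the claim concrete: RLHF belongs to the policy-gradient family of RL methods, and the same gradient works directly on continuous actions. Here's a minimal toy sketch (my own illustration, nothing to do with ChatGPT's actual pipeline or any robot API): a Gaussian policy learns, via the REINFORCE gradient, to emit a continuous action near an unknown target.

```python
import numpy as np

# Hypothetical toy: a one-step continuous "bandit". The policy is a Gaussian
# N(w, sigma^2) over a scalar action; reward penalizes distance to a target
# the policy never sees directly. REINFORCE pushes w toward the target.
rng = np.random.default_rng(0)

target = 2.0                      # defines the reward; unknown to the policy
w, sigma, lr = 0.0, 0.5, 0.05     # policy mean, fixed stddev, learning rate

ws = []
for _ in range(2000):
    a = w + sigma * rng.standard_normal()   # sample action ~ N(w, sigma^2)
    r = -(a - target) ** 2                  # closer to target = higher reward
    # REINFORCE: grad_w log N(a; w, sigma^2) = (a - w) / sigma^2, scaled by r.
    g = (a - w) / sigma ** 2 * r
    w += lr * np.clip(g, -20.0, 20.0)       # clip to tame REINFORCE's variance
    ws.append(w)

print(round(float(np.mean(ws[-200:])), 2))  # mean of w over the last 200 steps
```

The point of the toy is just that the action space is continuous and the update is the same "gradient of log-probability times reward" used in RLHF; swap the scalar for joint torques and the idea carries over, at least in principle.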

Shit man, when I was last in South Korea, I felt like I was living in the future. They had many "24/7" drink cafés where it's literally just a robot arm that makes the drink for you for a few dollars.

It is painfully ironic to knowledge workers that they are destroying their own earning potential, but physical labor is not safe. Nothing is safe.


Stop this drama and get a gun :) I give 1-2 years for basic programming, up to 5 for physical labour.

IMO this will have positive consequences if AI doesn’t enslave us.



