AI Crap (drewdevault.com)
279 points by Jiejeing on Aug 29, 2023 | 254 comments



As with much of this author's content this is a strong opinion that lacks nuance, but I basically agree with the fundamental assertion: that the lasting impact of this AI bubble will be to further centralise power, taking it away from workers.

My hope is that a desire for authenticity prevents this from happening – whether that's a strong bias towards human content creators, towards speaking to a human on the phone for customer support (already something companies try to win customers on), or even winning customers on well-paid humans cooking their food for them (something that seems to be increasing).

Unfortunately, I suspect we will get a two-tiered system, where the "middle class" (whether that's disappearing is another question) can afford human content/human support/etc, and the working class are forced to endure poor experiences with AI generated content and so on. This may even get worse over time if, say, AI hits education and provides a worse quality education, but that's probably no different to what we already have with public school funding issues in the US/UK and many other countries.


> Unfortunately, I suspect we will get a two-tiered system, where the "middle class" (whether that's disappearing is another question) can afford human content/human support/etc, and the working class are forced to endure poor experiences with AI generated content and so on

I’m imagining a world where companies like Netflix and Spotify introduce dirt-cheap subscription tiers that are populated with AI-generated content, while they raise the prices on their existing offerings that have stuff made by humans.

If you’re poor you watch shitty, AI-generated movies on Netflix for $1.50/mo.


«The literature that the Ministry of Truth produced for the proles was of "a lower level," and consisted of "rubbishy newspapers containing almost nothing except sport, crime and astrology, sensational five-cent novelettes [and] films oozing with sex."» [George Orwell, "1984"]


In fairness, I don't think we've needed any AI or even state help to achieve this, but it stands to convert some recently loss-making outlets into profitable entities again.


> If you’re poor you watch shitty, AI-generated movies on Netflix for $1.50/mo.

I suspect an even worse scenario. If you're poor you watch shitty, AI-generated content for the current price. If you're persuaded to believe that you're middle class you cough up twice as much to watch shitty, AI-generated content called premium, because it will be "artisanally curated by humans (tm)". If you're really rich you will go to the theatre or opera.


> If you're really rich you will go to the theatre or opera.

Mm, Shakespeare.

'tis easy mimic'd in style and verse; and the many devices used, familiar. When anachronism strikes, few yet notice, and fewer yet call out, for the Bard is older to us than he to Chaucer.

What folly, that the rich dress and peacock themselves so, sitting quiet and polite; gathered at great expense to what was, in days of yore, the entertainment of yeoman and serf who cheered and jeered as saints and villains pranced before?

Hark, though; I say not that Avon brought no talent, rather that the talent of The Globe was to his time as the talent of The Disk to ours, as the easy-read and prolific prose of Pratchett is the closer cousin, despite the esteem of the powerful going to those hardest to follow in modern vernacular — Ulysses, War and Peace, Crime and Punishment, and yea, also the Bard and King James.

'tis almost as if the difficulty is the point.

(Exits, pursued by a peacock)


Well let's hope that's the worst effect.

I'm seeing a world where kids are put in a room with an AI that "educates" them, setting the lowest possible bar for personal development for your average kid and as quickly as possible expelling kids who are deemed to be a problem.

Having real teachers will be a luxury.


Perhaps not even the lowest possible, but the highest allowed: https://en.wikipedia.org/wiki/Examination_Day


Why would the poor not be able to afford human-generated content? Copying human-generated content costs basically nothing, so the marginal cost of letting the poor view it is basically nothing.


> the marginal cost of letting the poor view it is basically nothing.

That’s why all kinds of products today with nearly zero marginal cost are free?


Yup.

I'm old enough to remember when operating systems cost money[0].

And compilers. And encyclopaedias. And maps.

I've got too much good zero-cost audio and video content to get through, even at double speed.

[0] MacOS 8, I don't remember what I spent on it (UK), but wikipedia says it cost $99 in the US when it came out in 1997. Inflation adjusted, $188.56 today.


> Why would the poor not be able to afford human generated content?

Because, why not? It's called segmentation.

> Copying human generated context costs basically nothing, so the marginal cost of letting the poor view it is basically nothing.

How do you explain why more recent movies are more expensive to buy or rent on streaming services, then?


> How do you explain why more recent movies are more expensive to buy or rent on streaming services, then?

Artificial price inflation and recouping costs combined. Fueled by FOMO, people tend to pay more to be able to access it and be up-to-date (TM) in their social circles.

They'll probably recoup the costs without inflated prices, but if they can exploit that title for more money, they'll do it.


Because you have to pay those humans their royalties when their content is consumed.


Creating human-generated content will cost a lot more, so it will be paywalled. This pattern already exists, limitless machine crap will just make the differential greater.


Kinda interesting that Facebook is switching from "real people" generated content (i.e. someone you more or less know) to "random crap" content (not even the TikTok algo). It is like shooting themselves in the foot.


I'm reminded of the whole story behind

* https://coconuts.co/bangkok/features/primitive-technology-an...

* https://coconuts.co/bangkok/features/primitive-technology-yo...

People just make content that gets views and optimize for what gets eyeballs. Take people out of the equation and add gradient descent and who knows what insanity will be unleashed.


That’s already how Netflix works - shitty mass-produced content.

All good movies/shows are elsewhere and usually “for rent”. Anything IMDB top 100 is for rent, not included in any subscription.


Who’s to say that the AI-generated content will be worse than human-generated? Just because that’s the case today doesn’t mean it’ll be so forever.


Because its upper limit is the best human-created content. It doesn't create anything out of thin air. Just "emitting" things mixed from its training set.

Also, a machine cannot create something equally complex or more complex than itself, so AI is always capped at human capacity, at most, asymptotically.

It can be faster. It can batch process, but it can't process "like a human".


For now.


Well, nature's laws are pretty resilient as far as I can see.

The laws of thermodynamics and entropy still hold. Also, as far as I can see, no living creature, including nature itself, has succeeded in creating something more complex and sophisticated than itself.

So, I'll be sticking to my arguments, for now.


Diamond Age ractors, then.


>> Unfortunately, I suspect we will get a two-tiered system, where the "middle class" (whether that's disappearing is another question) can afford human content/human support/etc, and the working class are forced to endure poor experiences with AI generated content and so on.

Due to the economics of information, this is unlikely for content. The cost to watch Avatar is often quite close to a local film with <1% of its budget. The most interesting clips made by content creators (TED Talks, Veritasium, MrBeast, etc.) are accessible to everyone with internet access. There will be some price differences based on access points (theater vs TV vs phone) or resolution but not necessarily the content itself.

I also suspect the best content in the future will be co-created by humans and AI.


This is a fair point, a different way of looking at things.

The way I was thinking about it was, for example, newspapers going online behind paywalls, YouTubers/Podcasters creating paid content on Patreon in order to fund their activities.

On the other side you get Buzzfeed creating AI generated quizzes (rather than news), and YouTube/TikTok content farms with AI generated scripts. Both of these are ad supported, so free to the end consumer, and therefore more accessible than a Patreon/NYT/etc subscription.


Articles from The Economist, The Wall Street Journal and The New York Times are better and also more expensive to produce than those from Buzzfeed, which justifies their subscription fees. What if editors of such caliber could leverage AI-assisted tools to scale their efforts and create high-quality content at a lower cost?

While it's true that, even with advancements in AI, top-tier content uniquely shaped by the individuals or teams behind it will still be more costly to produce, these are likely to get cheaper over time as AI improves. As a result, high-quality content will probably become more accessible.


AI is the only way for small countries to compete with Hollywood. I'm sick and tired of all those glorified stage plays that my country produces simply because we have no access to Hollywood money.

Meanwhile Hollywood has so much money that they don't know what to do with it - they give it to directors like Michael Bay to create all those pointless cgi-fests.


When you say "to compete with Hollywood" you either mean one of two things: on quality or on box office.

The latter has nothing to do with CGI or production budget: it's about marketing budget, in which AI has less to offer.

The former, in my view, will not be aided by AI. The "glorified stage plays" are (not always but on aggregate) of significantly higher quality than quite a lot of Hollywood output. AI won't change that in any positive way.


Say you want to create a historical drama that doesn't look like crap. You have two options. Either create massive physical sets or (as is increasingly popular in Hollywood) CGI ones. Both cost an enormous amount of money. It isn't just tedious popcorn movies about superheroes or aliens that require massive budgets for effects.


Tbh I would like to see a movie about superheroes or aliens from a non-Hollywood director as well. They could put their own unique spin to it.


This sounds a bit like District 9 (albeit having Peter Jackson as producer might exclude it) - the director then "graduated" to Hollywood, resulting in a decline in critical reception (though I've heard Gran Turismo is pretty good by video game movie standards).


If it comes to pass that AI dramatically reduces the cost of making a movie, that works for Hollywood as well as for small countries, and probably doesn't result in the small countries being able to compete as a result.

In the same sense that the revolution that occurred in the digital distribution of games and general improvements in the accessibility of tooling meant the market for games expanded into more niches rather than producing more AAA games from smaller developers.


If a local director could make a movie of comparable quality to Hollywood movies, I would watch it. Hollywood only dominates because their movies have far higher production quality.

There is only so much you can do with shoestring budgets that our directors have.


My question to you is: why do people spend money on those glorified stage plays and not just spend all their money on Hollywood movies? It's likely that Hollywood movies just don't align perfectly with the local culture due to limits/costs on localization. Now imagine if Hollywood movies had perfect dubbing (including changing scenes and lip movements), local references and jokes, removed cultural mismatches, etc. Would that make people less or more likely to spend their money on them?


We watch mostly Hollywood movies. Local output has a pretty low market share. Both due to low budgets and the fact that Hollywood can make a lot more movies per year.


But do you really want your country to compete with Hollywood and make superhero movies that are indistinguishable from each other?


There is plenty of middle ground between capeshit and a stage play.


> Unfortunately, I suspect we will get a two-tiered system, where the "middle class" (whether that's disappearing is another question) can afford human content/human support/etc, and the working class are forced to endure poor experiences with AI generated content and so on

Have you dealt with human support agents recently? It's just an exercise in gaslighting. I can't wait for AI to take over.


No. That is precisely the wrong attitude. It is the 'corporate' drive to push service to be cheaper (for them) that led to outsourced, IP-poor user services, which AI will take to a whole new level of bad.

What should be desired is a return to quality and respecting the user (or at least taking the time to understand them), which unfortunately seems unnecessary now with a global reach and unimpeded manipulation and influence that can ignore demands for improvement.


> which AI will take to a whole new level of bad.

Why do you assume that?

> What should be desired, is a return to quality and respecting the user (or at least taking the time to understand them), which unfortunately seems unnecessary now with a global reach and unimpeded manipulation and influence that can ignore demands for improvement.

I know the assumption is always that Big Greedy Co™ is hiring the worst possible employees to save money, but I think it's far more likely that the scale of work required far exceeds the labour required to provide it.


Customer support being a bad experience has always had to do with company policies or a general lack of training.

For company policies, an AI would be rigidly trained and limited to always minimize losses to the company, since you can game the AI if it's too lax by saying the exact prompts or keywords.

For companies that do not even provide basic training except for a FAQ sheet, I do not think replacing the human with an AI is going to improve customer experience, because a human (IC or manager) might be driven by the motivation of compensation or job security to learn more than what is provided, to do their job well.


What does AI do if it doesn't have a good answer? How do you know it gave you a good answer? There are no qualities like 'that person sounded a bit useless' or 'sounds like they're reading from a script' or any of those other wonderful interactions we have with poor customer service that we can judge with sentiment/intuition. You just get 'the AI answer.'

Big greed co will hire the cheapest employees, and will do exactly the minimum amount of customer service they can get away with, so long as their profits are sustained.


I submitted a bug yesterday and got an automated response asking if I'd tried turning it off and on again... now that's not AI but I thought it was funny.

I asked it to forward my email to their devs, human or not ;)


AI that is better at gaslighting?


I just switched my banking from the largest national institution in Australia to a local credit union and another bank that has always had an online/phone only model and is good at it. I get the best combination of human customer service and good value accounts and leave behind the abysmal contempt for customer service demonstrated by the major, with all its infuriating new AI mediocrity. I can't be the only one?


I even go as far as not doing the self-checkout at the supermarkets. But in the end I don’t think it will make a difference.


Not everyone has the grit to hold up to the customer retention tactics that companies use once they learn you’re leaving. Offering a short term discount or a last-ditch attempt to look into the customer’s otherwise long neglected issue costs much less and retains most customers out there.


You of course already have this for the most part, it's just "premier support" vs "FAQ that doesn't answer your question" for most people. It's just the "premier support" providers will be even more thinned out than they already were.


I'm not entirely sure about the centralization part. We already have models that can run on consumer hardware and are freely available for anyone to use (e.g. Code Llama 34B is actually a viable GPT-3.5 replacement, if not slightly better).

Training these is still out of reach, but fine tuning is getting close (LoRA) and running them is almost easy at this point.
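
For anyone wondering what "fine tuning is getting close (LoRA)" looks like in practice, here's a rough sketch using the Hugging Face transformers and peft libraries; the model id, the 8-bit flag and the LoRA hyperparameters are illustrative assumptions, not a recipe:

    # Hedged sketch: attach a LoRA adapter to a locally loaded causal LM.
    # Model name and hyperparameters are placeholders.
    from transformers import AutoModelForCausalLM
    from peft import LoraConfig, get_peft_model

    base = AutoModelForCausalLM.from_pretrained(
        "codellama/CodeLlama-34b-hf",   # assumed model id
        load_in_8bit=True,              # quantised so it fits on consumer-ish hardware
        device_map="auto",
    )
    lora = LoraConfig(r=16, lora_alpha=32, lora_dropout=0.05,
                      target_modules=["q_proj", "v_proj"])
    model = get_peft_model(base, lora)
    model.print_trainable_parameters()  # only a small fraction of weights get gradients

Only the adapter weights are trained, which is why this fits on hardware where full fine tuning doesn't.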

The products we've built so far are power-centralized, but augmentative. Where we go from there is up to people, not the nature of the technology. My hope is decentralized and augmentative, but the worst case scenario is indeed centralized and substitutive.


By centralisation I don't necessarily mean of the models. It might be the training: training your own model requires expensive hardware (even just a high-end graphics card is out of reach for most of the world).

But also, a model running on your phone generating AI content is likely to be cheaper and not as good as human curated content in whatever form that is.

I think compute can be decentralised, while the power is still centralised, or at least those at the low end lose out on quality.


I think you need a caveat... Right now.

There was a time where computers took up the space of whole rooms and had much much MUCH less processing power than the phone I am currently writing this comment on...

The same thing will happen with AI.


Oh, believe me, regulations will be lobbied for. E.g. to run an AI model as part of some service, you would need to get certification (a lot of money). Even now you can see a couple of talks at tech conferences from "non-profit" organisations. Also, web services are highly centralised, so there is every chance we get highly centralised AI services too.


>bias towards speaking to a human on the phone

Hatred of phone calls aside, I heard an interesting take from Alex Hormozi (paraphrased): "You're on the phone and you say, are you human? No? Oh thank God, because you know the AI has a thousand times more experience than all the humans combined."


Is that supposed to be a serious opinion about contemporary AI..?

Because that's crazy to me. Humans can be reasoned with. AI can't. My experience with the likes of ChatGPT tells me that if the AI is wrong about something (which it very very often is), there's no point trying to explain to it that it's wrong or how it's wrong, it will say something like "You are right, sorry for the confusion." and follow up with the same or a similar error again.

AI might eventually become an alright first line, but losing the option to speak with a human -- an intelligent entity which can actually be reasoned with -- seems dystopic.


Having worked in customer service before, the only time customers tried to reason with me was when they were wrong. For at least a large portion of customer service calls, the ability to reason is a negative. Customer calls to say the product is faulty, and with a simple question they admit they didn’t connect the ground wire. They then try to reason with me that they deserve a warranty replacement because their mum just got cancer - fuck reason on cs calls. (Reason might still have a place on complex situations [which yours never is], customer retainment, sales etc)

Even as the top tier tech support for a complex product, 99% of my calls could have been dealt with well by ChatGPT in its current form. And my customers would have a more concise outcome, without the variability of my mood, hold times, judgement of their tone etc to affect my advice.

I can’t wait for more AI CS so I can stop talking to a fuckwit, and talk to an AI that’s almost certainly far more equipped to handle my query.


I'm not talking about a customer arguing that they deserve warranty. I mean if the human CS agent misunderstands the problem, the customer can explain the problem better and try to make the human CS agent understand. If the "AI" CS agent misunderstands the problem, there is no recourse.


Not to mention when you have a problem the AI hasn't been trained for.

With humans you have a chance to escalate. With a chatbot, you might as well take your business somewhere else.


These things already exist (primitive form) and most of them already have the option to escalate.

It's usually well hidden though, to save money.


> Is that supposed to be a serious opinion about contemporary AI..?

No. We're talking about the future. ChatGPT doesn't have millions of hours of customer service experience. It has zero!

(Well, unless you count the RLHF stuff, but my point is it isn't actually learning from human feedback, although we don't actually know what they're doing with those thumbs up / down buttons...)

> losing the option to speak with a human

Already the case at many companies, sadly. (In those cases, if not elsewhere, it's going to be a significant improvement.)


> an intelligent entity which can actually be reasoned with

We're just too impatient to wait for GPT-7 aren't we?


I think that's a point a lot of people are missing. If AI can already talk about certain topics better than most humans, imagine in the future. There may be a time when talking to humans may become really underwhelming compared to all-knowing AIs.


Why have all-knowing customer-support AIs waste their time and resources talking with underwhelming human customers such as myself? It would be much more efficient for customer support calls to be made by all-knowing personal assistant AIs, which can surely explain the nature of the problem much better than me, as well as being better able to infallibly put in practice whatever the all-knowing customer-support AI suggested.


Even better, they may be able to prompt-inject the support AI to actually escalate the support case.


The AI will have more experience composing bullshit put-offs, but even less access to the systems that control the services exhibiting the problems customers currently experience. We'll see the problem of layered customer service (where the first two layers have no power, other than to maybe turn on some pre-configured bonus/offset/coupon) made worse by further automation.


Maybe in future. Right now it's infuriating and miserable, to the point I will absolutely switch any service provider the moment they subject me to this torture. Old systems with "Press 1 to talk to a person" were bad enough, the AI-based ones are so much worse.


Couple this with the expectation of everyone carrying a phone at all times:

1. In Italy, booking a train ride without providing your phone number, email and tessera sanitaria proved a challenge last month.

2. Covid required us to carry our green pass everywhere (I'm not anti-vax).

3. Getting stopped in some countries (like the US) at a border and being unable to present a phone for search might get you denied entry.

The whole affair was predicted 50-70 years ago already. Jacques Ellul's La Technique is a great comprehensive read on this idiocy.


Not sure about creativity. But.

Previously, I had to deal with "Junior python" or "Junior bash" crap at $work.

Finding the dangerous bugs was measured in seconds. Helping the person be better in the future used to work.

Now, the whole company is requesting code from ChatGPT. Hey, please, use my script.

I have to deal with "looks fine at first/quick view" code that needs deep analysis to understand what it's trying to do, why, and where the (100% sure) hidden "break production" kind of failures are.

It's more like "where is Waldo"... you know that there will be at least one or two things really really wrong, always, always one or two (or more) catastrophic details, but that they are hidden below something "apparently nice".

And what is worse, all effort to point out and fix issues is lost. Or repeated again and again.

I apologize, but as a senior sysadmin/oncall model, I cannot run your chatgpt code, until you understand how things work.


While I sympathize with your point, I found this passage funny:

> code, that needs deep analysis to understand what it's trying to do

Because if there's one thing AI can already do really well, it's explaining what some code is doing.


> Because if there's one thing AI can already do really well, it's explaining what some code is doing.

But it just makes up a series of words which sound good. It's accidentally correct 75% of the time and just wrong the other 25%. (and IMO, just wrong in some way almost every time I use it).


I am another AI pessimist. Can I please ask the optimists to list the good things LLMs can do for humanity as a whole?

I don't mean banal stuff like Copilot, which is a double-edged sword that might be used against junior developers. I mean world-changing benefits, one step closer to the techno-utopia.

Because on paper, the net benefits vs net negatives, for me and other AI pessimists like the author, are not worth the amount of spam, customer service bollocks and lost jobs LLMs will cause, basically to make mega-corporations richer.

So please tell me, what will LLMs ever do for us?


Accelerate education at unprecedented levels (if we can figure out how to integrate LLMs into the education system).

To this day I find myself doing a double take every time I'm about to ask ChatGPT to produce a ludicrous amount of output, because it would simply not be reasonable to ask that of some random person. But with LLMs you can do that, over and over again and it will comply every time.

How great is it that if I want to learn about, say, hexagonal architecture pattern, I can ask this thing to produce 500 LOC in a language I happen to be familiar with, and then interrogate it to no end until it clicks for me?


On "accelerate education" - it's often a bad teacher especially on higher level subjects . I've asked it to summarise research for me using the Bing GPT4 model and it would frequently come to conclusions that couldn't be corroborated in the source material, that were even contradictory across different chat sessions and even generate citation links that were totally irrelevant and incorrect, and then try to tell me that it was of a totally different subject to what it had actually pointed to.

Regular chatGPT is even more dangerous because you have no idea what it's referencing most of the time. Yet it lowers the bar to this poor information to such a degree that people will be incentivised to use it regardless.


That has not been my experience with ChatGPT 4. If I were to ask it about some specialized info about quantum field theory or superconductor tech, I'm sure it would send back nonsense fairly frequently. But if I ask it to explain the difference between median, mean and average, or ask for examples on an architectural pattern, it's seldom incorrect.


If you prompt directly about the subject it can be fine but it veers into BS territory more often if you try to get it to synthesise information meaning it's less good with applied examples that are given to it by the user. Not always but the error potential is definitely higher.

The problem then I think is that this means that students will have to prompt "on rails" to get reliable answers. The most curious students who want to stretch their knowledge are likely to cause it to generate falsehoods which are presented confidently and convincingly.


I'm not sure the error potential is much higher than with average teachers, but I guess it depends on the domain. For coding the error rate is lower than a typical teacher right now.

I suspect we'll soon figure out an effective way for LLMs to look-up reliable references to confirm their answers, which should improve the situation drastically. As they are now most LLMs are very barebones. An educational LLM could e.g. connect to the university textbook library and use that to verify its answers.


I think the main issue today is its inability to just say "I don't know" when appropriate. A teacher can do that and sometimes that's a better answer than a fabrication. A teacher can also use less certain language when their confidence level is low as appropriate.

I think this is probably a technically solvable problem but doing so has potential 'optics'/marketing issues in terms of reducing the level of confidence it projects which a lot of people associate with competence.


>Accelerate education at unprecedented levels (if we can figure out how to integrate LLMs into the education system).

More likely we'll replace poor people's teachers with ChatGPT, and whoever has more cash affords actual teachers. This is so real that we're experiencing it >today<, at a different scale, with private schools and distance learning in countries such as Brazil, where there are projects to reduce schools and/or move some of them to distance learning.


For most people today, learning / teachers can be far, far worse than even what ChatGPT 3.5 can provide.


I don’t know, but anyone who has the motivation to “interrogate it to no end until it clicks for [them]” already didn’t have trouble to become well-educated before. AI can speed things up for those people, but I can imagine much less impact on general education.


The speed difference is out of this world. I have very long sessions with ChatGPT on the Rust ownership and borrowing system. It's sometimes wrong (I would say about 1 in 30 messages), but when it is, it becomes clear quite quickly and then I search the internet armed with the new knowledge of terminology and relations within that terminology.


I agree on your first statement, and raise a challenge on the second.

We're smart, let's figure out how to create the same impact on general education.

Could we wire up ChatGPT so it asks us questions and leads the conversation instead of us? Could we also make it present the questions in ways that make educational youtube channels like Kurzgesagt so easy to engage with?

We could start with private tutoring. LLMs tailored to educate kids of rich parents. Easy market fit as they're already looking for private tutoring. We prove out the need then quickly move onto schools.

Imagine if we could show that engaging with an LLM for an hour produces as much learning as sitting in class for 4. Could we cut school days in half, set LLM interaction as homework for an hour (teachers would get reports from the LLM on how the child is doing)?


>already didn’t have trouble to become well-educated before

I agree that I didn't have trouble interrogating a subject until it clicked before if I was sufficiently motivated. However ChatGPT reduced the time needed a lot, and thus motivated me to take on more learning than otherwise.

Of course, I take "facts" it outputs with a pinch of salt until I double check things with my own research, but it's great at uncovering unknown unknowns and is like an infinitely patient tutor that will let me throw analogies at it to test if my understanding is correct or needs adjustment.


I just watched the latest Star Trek show and something occurred to me: the Enterprise computer acts much like an LLM. It doesn't take actions on its own, but you ask it any question about the ship, and it answers politely.

LLMs are literally science fiction come to life, something out of the movies.

It's early days still, they're a bit primitive and the compute substrate is woefully underpowered for what we would ideally like... but the promise is there.


Working with LLMs while programming is also not unlike the TNG holodeck interface. Verbal generation of the first draft, some feedback and more specific commands to get closer to the intended goal, then sometimes handmade edits to get it all the way there.


The good AI can do for humanity is increasing our productivity.

It should be the goal of every human to eliminate as many jobs as possible.

Every job that is eliminated frees up that person and the people who would have done that job in the future to do other work.

Humanity gain the productivity of that other work.

If we had not forced painful job losses on people, we would still be 97% subsistence farmers, as we were at the time of the American Revolution in the 1770's.

If you oppose progress due to the job losses, please at least be consistent and become a subsistence farmer.


> It should be the goal of every human to eliminate as many jobs as possible.

I mean this is exactly what Drew is disputing.

The capital class will remove the jobs, capture the value of whatever labor is saved, and the workers who lose their jobs will be left with fewer resources and no realistic path to replace their lost income. They won't glide towards some utopian vision where everybody ends up working one day a week on their passion projects.

In some kind of abstract way these ML techniques provide potentially useful tools, but workers will not be the ones to see the benefits of more "productivity" that these tools enable.

The US can't even agree that, despite its vast wealth, health care is something everybody should receive regardless of employment. This country lacks the imagination to handle this situation in a way that improves lives for workers.


If the corporations will not hire you, start your own business.


Not every worker is so good that, freed from the job, they would instantly turn into a mega entrepreneur god creating 12 companies in a month. Most of them will struggle to put food on the table, resorting to crime and eventually ending up in worse jobs.


Most of us have had to adapt, with nothing guaranteed. Look at the population under 50, very few in that group have ever had any kind of job security, except for the highly paid specialists. That's been the reality for decades. Thousands of people have lost their jobs because of the hackers on here making effective software solutions. Where was the solidarity then? It's only now when the hackers themselves are threatened that we suddenly have to think about the poor human.

With that said, we should do all to eliminate jobs, but not eliminate workers. Technology can also be used to produce more services for more people, not just producing the same more effectively.


Subsistence farming is more difficult physical labor, but at least it's more meaningful than being a spam artist or an adtech data engineer or a health insurance denier or any number of soul-sucking bullshit jobs.


When was the widespread unemployment in the US due to technological innovations?


The problem is not unemployment, it is raising the bar for valuable labor. Arguably the standard of living has already declined for the average human individual. A factory job used to afford a single worker with modest education a house and a family. That said, generative AI will have far less impact here than, say, a robot arm.


That seems like a different (and real, but modern) problem compared to "we never would have left subsistence farming without a massive unemployment period (painful job loss) brought about by new technology" that the OP suggested.


The bar for viable labor is rising.

It is also easier that ever to educate yourself.

These two facts balance.


If you believe that anyone can learn anything, that might sound plausible to you. The evidence does not point in this direction, however.


Widespread unemployment occurred in the 19th and early 20th centuries as Americans left farms to move to cities.

They were desperate for work.

The unemployment in farms was caused by the Industrial Revolution.


I'm not a native English speaker and ChatGPT is the best existing tool to spell check and fix grammatical errors in emails. Copilot is another very useful application, and I think your dismissal of it is invalid, even if we accept that Copilot hurts new devs (which I don't know if you have any proof of, but I'll make the concession that it is a possibility). A double-edged sword hurts and helps the same person. I and half the programmers I know who pay a subscription for Copilot already know how to program perfectly well.


As someone on the receiving end of these types of LLM enhanced emails, I'd say just make sure you actually can be confident it is indeed communicating your message properly.

I work with a guy who has been covering up his poor use of the English language and it's been quite weird to be honest.

Fluent-sounding emails full of convincing-sounding gibberish are what I've been receiving. Unfortunately the guy just isn't good at communicating his ideas using written language, and the LLM can't really fix that.


When I put in the prompt to correct spelling and grammar it rarely changes anything else. Sometimes I'll use it to reword things but I'll make sure the meaning is clear. I read a lot more English than I write so usually telling if something "sounds natural" is very easy, even if writing doesn't come as easy.
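
(For concreteness, the kind of request I mean looks roughly like this; a minimal sketch assuming the 2023-era openai Python client, with the prompt wording as an example rather than my exact one:)

    # Rough sketch: ask the model to fix spelling/grammar only, nothing else.
    import openai  # pre-1.0 client; assumes OPENAI_API_KEY is set in the environment

    def fix_grammar(text):
        resp = openai.ChatCompletion.create(
            model="gpt-3.5-turbo",
            messages=[
                {"role": "system",
                 "content": "Correct spelling and grammar only. Do not change the meaning or tone."},
                {"role": "user", "content": text},
            ],
        )
        return resp.choices[0].message.content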


Does it help you learn better spelling and grammar, or do you just stop thinking about it? Is that a good thing long term?

Same with dev. Does it actually make you better, or does it make you not think about learning so much?


> Does it help you learn better spelling and grammar...

Yes. (Of course I still have a long way to go)

> Same with dev. Does it actually make you better...

Also yes. In terms of "scripting" it helped me tremendously. There were many little tasks that I wouldn't bother to automate without ChatGPT, because the time spent on looking up those niche APIs would outweigh the time saved.

Without ChatGPT I would have learned 0 about them. With ChatGPT at least I learned a bit.


Did you though? Can you remember any of those solutions that ChatGPT provided, or why? Or are you now going to ChatGPT more and more because it's easier?


For English writing I remember a lot of words and phrases that ChatGPT taught me.

For scripting, of course I don't remember the details it provided. I don't remember the details I wrote yesterday either. But at least I roughly remember things like "There is a simple way to visualize any f(x) with about 10~20 lines of numpy and matplotlib. It's not a daunting task at all". I believe these pieces of information improve my decision making a little bit.
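
(For illustration, the kind of thing I mean is roughly this; the specific f(x) and styling are just an example:)

    # Minimal example of visualising some f(x) with numpy and matplotlib.
    import numpy as np
    import matplotlib.pyplot as plt

    def f(x):
        return np.sin(x) * np.exp(-0.1 * x)  # any function of interest

    x = np.linspace(0, 20, 500)
    plt.plot(x, f(x))
    plt.xlabel("x")
    plt.ylabel("f(x)")
    plt.title("f(x) = sin(x) * exp(-0.1x)")
    plt.show()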

I don't think ChatGPT improved my deeper programming skills, like data structures/algorithms/architecture. But I see no reason why it would make me worse in this aspect either.


Not OP, but for me the higher-level decisions stick, but not the nitty-gritty details. And I'm OK with that; I care less about the implementation detail and more about the higher-level problem solving, and I feel using these LLMs makes me better at that.


This is indeed a valid question, but I think it’s largely up to the user. For me, as a hobbyist programmer that sometimes writes code at work to automate certain tasks, I use ChatGPT to quickly create boilerplate/template type code AND to learn how to do new things. When I’m asking how to do something new, I try to actually learn what’s going on so that I won’t have to keep asking about that particular issue. But yeah, the temptation to just say “thanks, ChatGPT” and move on without learning anything is certainly there and could be quite harmful to one’s overall coding skills.


I think the interesting part of that is, is there money in it? I could see it being useful for hobbyists that don't care about programming that much, but would you pay for that?


I am not the OP you are referring to, but I am non-native and this is anecdotal, but when I grew up and used the internet I learned how to spell way better because of the spell check in Firefox. I really doubt I would have been able to spell half as many words as I can correctly spell any other way. I have some random words still that I infrequently use that I always typo, but there are so many other words that I just know how to write out from muscle memory at this point. My only problem is I don't know how to pronounce all words; I sometimes find a word and get really confused, not too often, but often enough.

I think AI could help me if ChatGPT or whatever else had proper text-to-speech with a reasonably convincing accent, even if it sounds off; as long as the accent's American I am good to go.

I am also a pessimist with AI but I still try it out and use it because I do see it being used more and more.


I'm a cautious optimist. If we manage to improve correctness further and set up some guardrails to avoid convincing it to do the work for you, AI can be an incredible augmentation for students and learners, completely transforming education to the level of everyone having something close to a personal tutor

I don't worry too much about the lack of correctness, as long as the learner is aware of the possibility - lots of teachers are sometimes wrong too, and a lot of information on the internet is incorrect.

But there indeed is a certain threshold of correctness. Below that threshold, the technology is more harmful than helpful.

The threshold is also different for different aspects of education - for example for many uses in software engineering education current LLMs are already good enough, but they are nowhere near where they would need to be for e.g. law.


I have no idea what the thought process is for downvoting a post which is answering to someone asking optimists to contribute their thoughts.


> I don't mean banal stuff like Copilot, which is a double-edged sword that might be used against junior developers. I mean world-changing benefits, one step closer to the techno-utopia.

Every (tech) sword is triple-edged, and it's the third one that people trip up on because they never expect it.

That code generation is "banal" to you, or a threat to junior developers, suggests to me that it's already huge.

Likewise the job replacements you fear: that's only even possible when the AI doesn't suck massively.

The current state of downloadable models implies any corporate advantage is temporary.


A doctor for every person, a teacher for every child, available any time and for free.


A doctor is a lot more than just a black box taking the patients' descriptions and measurements, and running regressions on them. Doctors can touch, feel, understand, comfort in ways that our sensors or tensors (hah) can't.

Same applies for a teacher too, in various other aspects. Reducing important professions into statistical models is exactly the kind of crappification that the author's talking about. The logical conclusion of perfect sensors and tensors is not here, and the lacking substitutes along the way will be profit-driven, not solution-driven.


> Doctors can touch, feel, understand...

Sure they can, but many don't either because of lack of time, or quite frankly, because many doctors are bad at their job. And even in the best case scenario we will never be able to provide doctors to 100% of the population. For many people the choice won't be AI or a (free) caring, passionate doctor who has time to understand you and answer your questions, it's AI or nothing.

Same with teaching. A lot of people simply don't have access to teachers, and if even the ones that do, might not have teachers that have the time and knowledge to actually teach what they want to learn.


This is an argument in favor of more human doctors and teachers, not replacing doctors and teachers with software.


The richest countries in the world cannot even produce enough competent doctors and teachers to fill their current needs. A world that produces enough skilled human doctors to meet everyones needs is even more science fiction than a world with skilled AI doctors.


> The richest countries in the world cannot even produce enough competent doctors and teachers to fill their current needs.

They can easily produce enough doctors, they just don't. A couple of reasons for this: schools inflate the amount of education required so they can make more money, and doctors go along with it (and a crazy amount of licensing requirements) to prop wages up by keeping the supply of doctors artificially low.

You could be an ICU nurse with 20+ years of experience. Want to make a jump to becoming a doctor? You have to start ALL the way from the beginning of med school as if you were a 23-year-old humanities major who decided to go to med school. Your 2 decades of hands-on medical experience count for exactly nothing in the eyes of medical schools and certification boards. Does anyone really believe this is a good way to run things?


Speaking from one of the formerly rich countries (UK), we treat our doctors and teachers incredibly shabbily - long hours, low pay, terrible conditions. It's frankly a miracle that anyone over the last 15 years has gone into either profession.

Fix the low pay and terrible conditions and yeah, you'll easily produce enough doctors and teachers, but late-stage capitalism isn't going to do that...


> Fix the low pay and terrible conditions and yeah

If the UK were to offer doctors the best pay and working conditions in the world, it could fix the UK doctor shortage, but only by 'stealing' doctors from other countries and making their situation even worse. To the best of my knowledge there aren't many empty slots at UK medical schools due to no one wanting to be doctors.

It's 'easy' for any one richer country to fix their problems simply by outspending and buying up resources from a 'poorer' country (in fact some people claim the UK's problems are due to other countries buying up all the UK doctors and nurses), but that doesn't solve the global problem.


> there aren't many empty slots at UK medical schools

Also underfunded and treated shabbily (like all the educational establishments in the UK.) I should have been clearer, I suppose, and said that just improving conditions for the existing doctors and teachers is a stopgap, what's actually needed is a burning out of the hideous policies of the last 12 years and a solid return to a more socialist approach to government.


The richest countries in the world choose not to produce enough competent doctors because of capitalist incentives, not because it's actually impossible.


What do you mean?


By itself it's an argument for both; the argument for "we can't have more doctors" is "we want some of those people to do other things besides doctoring".


The fallacy you are falling victim to, which is common in these debates, is comparing an LLM teacher to a human teacher as a 1-1 replacement, when really you need to be comparing an LLM teacher to what a child has today outside of access to a human teacher: static books and today's internet + search engines.

It's very easy for me to see how an "LLM teacher" developed and trained specifically for that purpose could be of HUGE value over that status quo. That doesn't mean that the child's human teacher goes away, only that they now have access to a new amazing tool at home as well.


Unfortunately most doctors don’t have the time to be that hands-on and at the end of the day are just taking your symptoms and comparing them to their flesh database of illnesses.

There is a lot of value in just having help to diagnose/triage people with illness. Certainly not a replacement, but definitely a complement to get access to healthcare to more people.


> The logical conclusion of perfect sensors and tensors is not here, and the lacking substitutes along the way will be profit-driven, not solution-driven.

Quite possibly both; governments only switched to universal education instead of having 12 year olds in factories because it was good for the economy, even if some of the lessons are supposed to be good for the (for lack of a better term) "soul".


I didn't say they would be better. Most people on earth lack any access to healthcare at all. https://shorturl.at/joA23


AIs have been shown to have better empathy than human physicians.

AIs have more patience than human teachers.


> A doctor for every person, a teacher for every child, available any time and for free.

For free meaning that it is paid by quietly slipping ads into prescriptions or lessons?


If the OpenAI pricing is indicative, reading 16k tokens of medical history and giving a 4k token response will cost about $1.50 on GPT-4 and 6.4¢ on GPT-3.5-turbo.

The lower of those two is roughly what someone at the UN abject poverty threshold will spend in 48 minutes 30 seconds on "not literally starving to death".
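
(Back-of-the-envelope version of that arithmetic, assuming the mid-2023 list prices of $0.06/$0.12 per 1k input/output tokens for 32k-context GPT-4 and $0.003/$0.004 for gpt-3.5-turbo-16k; treat the rates as assumptions:)

    # Rough cost sketch; per-1k-token prices are assumed 2023 list prices.
    def cost(in_tokens, out_tokens, in_per_1k, out_per_1k):
        return in_tokens / 1000 * in_per_1k + out_tokens / 1000 * out_per_1k

    print(cost(16_000, 4_000, 0.06, 0.12))    # ~1.44 USD on GPT-4 (32k context)
    print(cost(16_000, 4_000, 0.003, 0.004))  # ~0.064 USD on gpt-3.5-turbo-16k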


>spend in 48 minutes 30 seconds on "not literally starving to death".

I don't get that. In the poorest places people get by on the equivalent of about $1 per day and not many starve. In fact the only part of the planet where the population is booming at the moment is sub-Saharan Africa.

ChatGPT and similar will presumably have free tiers.


And if you use fine tuning and RAG you can cut the cost by an order of magnitude. Also, how much did it cost 2 years ago?


Was it[0] available at any price 2 years ago?

(Assuming you mean literally an order of magnitude: 0.64¢ is, judging by Amazon.com, less than the bulk price of a single sheet of unused printer paper, or two thirds of a paperclip).

[0] 3 or anything equivalent to it, given 4 obviously wasn't


I was making the point that this is new tech, not available to us at all a few mere years ago, so assuming a constant cost when making predictions is difficult. Assume inference prices will go down, not up.


Ah, in that case we're on the same page. I'm expecting at least a factor of 1000 to be possible given the apparent higher efficiency of the human brain vs current computers, which is of course terrifying given how good and cheap the various creative AI already are, while also seeming like a prerequisite for robotic/car AI to be all three of "good enough", "fast enough", and "within the limited power budget".


I understood the "for free" as "with very low marginal cost"-- and no matter how you socialize healthcare/education, that's not something that humans can match.


Well, if that doctor is fully programmable by the big companies it could just diagnose more diseases and write more prescriptions.


Why the "if"? Doctors are systematically bribed by gigantic medical corporations to write prescriptions for their addictive and lethal medication. Pain killer addiction (opioid addiction) kills thousands.


>A doctor for every person, a teacher for every child, available any time and for free.

I'm sorry... I'm supposed to trust my healthcare and child's education to a piece of software whose primary feature is its ability to effectively hallucinate and tell convincing lies?

And assuming AI is at all effective, which implies valuable (which implies lucrative,) you expect services built on it to remain free?

That's not how anything works in the real world.


No? It's exactly how everything worked so far.

Live performance (orchestra and operas) were for rich only. Beautiful paintings were for the noble and churches. Porcelain was something needed to be imported from another continent. Tropical fruits were so expensive that people rented them.

Now we have affordable versions of them for everyone in developed countries, and for the middle class in developing ones. Yes, often we just got inferior, machine-made or digital copies, but I personally prefer something inferior over nothing.


You're comparing the value of AI versus a human being with the knowledge and skill necessary to earn a medical degree to the value of hearing Mozart live or seeing the Mona Lisa in person to Youtube and JPEGs, as an argument in favor of AI?

>but I personally prefer something inferior than nothing.

Say that again when your AI physician prescribes you the wrong medication because it hallucinated your medical history.


> You're comparing the value of AI versus a human being with the knowledge and skill necessary to earn a medical degree to the value of hearing Mozart live or seeing the Mona Lisa in person to Youtube and JPEGs, as an argument in favor of AI?

Yes, and I think it's a pretty good analogy.

> Say that again when your AI physician prescribes you the wrong medication because it hallucinated your medical history.

I personally prefer something inferior over nothing. I just said it again.

When your human doctor prescribes the wrong medication, would you reach the conclusion that the world would be better without human doctors?

The fact is simple. Professional diagnosing is such a scarce resource that people buy over-the-counter drugs all the time. It's not AI vs doctors; it's AI vs no doctor.


When a human doctor prescribes the wrong medication, it's a mistake. One doesn't conclude the world would be better without human doctors because human beings are capable of thought, memory, perception, awareness, and when they don't make mistakes - and most don't most of the time - it's the result of training and talent.

Meanwhile, AIs don't possess anything akin to thought, memory, perception or awareness. They simply link text tokens stochastically. When an AI makes a mistake, it's doing exactly what it's designed to do, because AIs have no concept of "reality" or "truth." Tell an AI to prescribe medication, and it has no idea what "medication" is, or what a human is. When an AI doesn't make a mistake, it's entirely by coincidence. Yet humans are so hardwired with pareidolia and gaslit by years of science fiction that such a simple hat trick leads people to want to trust their entire lives to these things.

>The fact is simple. Professional diagnosing is such a scarce resource that people buy over-the-counter drugs all the time. It's not AI vs doctors; it's AI vs no doctor.

That's not a fact, it's your opinion, and I'm assuming you've got some interest in a startup along these lines or something, because I honestly cannot fathom your rationale otherwise. You're either shockingly naive or else you have a financial stake in putting poor people's lives in the hands of machines that can't even be trusted to count the number of fingers on a human hand.

I have no doubt the future you want is going to happen, and I have no doubt we're all going to regret it. At least I'm old enough that I'll probably be dead before the last real human doctor is put out to pasture.



> AI physician prescribes you the wrong medication because it hallucinated your medical history.

The big question is: will that happen more or less often than it does with a human doctor? Human doctors 'hallucinate' stuff all the time, due to lack of sleep, lack of time, lack of education and/or just not caring enough to pay proper attention to what they are doing.


>Human doctors 'hallucinate' stuff all the time, due to lack of sleep, lack of time, lack of education and/or just not caring enough to pay proper attention to what they are doing.

No, they don't. If that happened anywhere near all the time, we would never have given up alchemy and bloodletting, because there would be no reason to trust medicine at all, and yet it works overwhelmingly well most of the time for most people. Meanwhile, AIs hallucinate by design.


> what will LLMs ever do for us?

Hallucinations are an engineering problem and can be solved. Compute per dollar is still growing exponentially. Eventually this technology will be widely proliferated and cheap to operate.


> Hallucinations are an engineering problem and can be solved.

I'd like a little more background on that claim.

As far as I've been able to tell from my understanding of LLMs, everything they create is a hallucination. It's just a case of "text that could plausibly come next based on the patterns of language they were trained on". When an LLM gets stuff correct, that doesn't make it not a hallucination, it's just that enough correct stuff was in the training data that a fair amount of hallucinations will turn out to be correct. Meanwhile, the LLM has no concept of "true" or "false" or "reality" or "fiction".

There's no meta-cognition. It's just "what word probably comes next?" How is that just "an engineering problem [that] can be solved"?
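
To make that concrete, generation is roughly the following loop; a sketch using Hugging Face transformers with a small stand-in model and greedy decoding for brevity. Nothing in it consults any notion of truth:

    # Sketch of "what token probably comes next?" -- a greedy decoding loop.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tok = AutoTokenizer.from_pretrained("gpt2")            # small stand-in model
    model = AutoModelForCausalLM.from_pretrained("gpt2")

    ids = tok("The capital of France is", return_tensors="pt").input_ids
    with torch.no_grad():
        for _ in range(5):
            logits = model(ids).logits[:, -1, :]               # scores for the next token only
            next_id = logits.argmax(dim=-1, keepdim=True)      # most probable continuation
            ids = torch.cat([ids, next_id], dim=-1)            # no truth check anywhere
    print(tok.decode(ids[0]))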


I agree it's more than a simple engineering challenge, but I do so because it is not entirely clear if even humans avoid this issue, or even if we merely minimise it.

We're full of seemingly weird cognitive biases: roll a roulette wheel in front of people before asking them what percentage of African countries are in the UN, and their answers correlate with the number on the wheel.

Most of us judge the logical strength of arguments by how believable the conclusion is; by repetition; by rhyme; and worse, knowledge of cognitive biases doesn't help, as we tend to use that knowledge to dismiss conclusions we don't like rather than to test our own.


How is that bias weird? It has a straightforward explanation - the visual system has an effect on reasoning. This, as well as other human biases, can be analyzed to understand their underlying causes, and consequently mitigated. LLM output has no discernible pattern to it; you cannot tell at all whether what it's saying is true or not.


> How is that bias weird?

People can see a random number that they know is random, and yet be influenced by it when trying to recall facts.

> LLM output has not discernible pattern to it, you cannot tell at all whether what it's saying is true or not.

LLMs are the pattern. This is a separate axis to "is it true?"


Are they not an inherent problem with the LLM technology?


That's what happened with the internet, which was supposed to be the new Library of Alexandria, educating the world, liberating the masses from the grip of corporate ownership of data and government surveillance, and enabling free global communication and publishing.

It's almost entirely shit now. Instead of being educated, people are manipulated into bubbles of paranoid delusion and unreality, fed by memes and disinformation. Instead of liberation from corporate ownership, everything is infested with dark patterns, data mining, advertising, DRM and subscriptions. You will own nothing and be happy. Instead of liberation from government, the internet has become a platform for government surveillance, propaganda and psyops. Everyone used to have personal webpages and blogs; now everything is herded into algorithmically-driven social media silos, gatekeeping content unless it drives addiction, parasociality or clickbait. What little remains on the internet that's even worth anyone's time is all but impossible to find, and will succumb to the same cancer in due time.

LLMs will go the same way, because there is no other way for technology to go. Everything will be corrupted by the capitalist imperative, everything will be debased by the tragedy of the commons, and every app, service and cool new thing will claw its way down the lobster bucket of society, across our beaten and scarred backs, to find the lowest common denominator of value and suck the marrow from its bones.

But at least I'll be able to run it on a cellphone. Score for progress?


> A doctor for every person, a teacher for every child, available any time and for free.

You clearly don't have a deep understanding of what doctors and teachers do.


Doctor, therapist, teacher, coach - and with the advent of private, fine-tunable models those can be private, local and in the hand of the people.


> private, local and in the hand of the people.

We've seen time and time again that "the people" prefers centralised, paid, convenient management of complexity.

If the majority of "the people" prefers paying a subscription for watching movies or listening to music, I doubt they'll make the effort to learn how to tune and run private LLMs locally, for medical aspects or otherwise. Not when there will be major companies spending billions on marketing more convenient options.

Guillermo Rauch's essay [1] still rings true: it's hard to forego efficiency (though it works just as well for convenience).

[1] https://rauchg.com/2017/its-hard-to-forego-efficiency


But I haven't seen any good argument as to why an AI teacher that is cognitively equivalent to the real deal (and surpasses a human in everything a machine can) wouldn't itself just become an intellectual worker. I'm not saying this eliminates the need for education, but it certainly erases its major component, which is being competitive on the job market.


I don't know where this idea comes from that we can get more out of language models than what we put in. Thinking we can process any amount of data and get a competent surrogate mind out of it borders on magical thinking.


Who is we? The model creator or the user? Getting out what we put in is kind of par for the course in education, yes?


>Getting out what we put in is kind of par for the course in education, yes?

Yes, you put in a person + knowledge and get out an educated person. Is it reasonable to expect to put in GPU + text and somehow get out a competent actor, however narrowly we define competence (maybe if we define actor narrowly enough)?


I think AI has the potential to level the playing field.

Speaking from an entrepreneur's POV, AI gives me an unfair advantage with respect to the large whales. It lets us complete projects (specifically, software projects) 10 times faster and achieve things that previously would have required raising millions.

I can't tell whether the net effect is positive or negative, but it's certainly not clear-cut.


I... don't believe you.

I'm building a product as an experienced engineer with 17 years in the field, and the only place LLMs would be useful without slowing me down is writing copy, for which I'd rather pay a copywriter, and probably will.

An LLM making you 10 times faster at building a product (of which coding is like 20%) needs better sources and proof, as it's a ludicrous number, and a whale with billions has access to much better tools than ChatGPT (i.e. actual paid humans).

If by building product you mean "create a basic landing page in HTML", sure. But a landing page is not a company, nor a product you can sell.


Does AI really speed up projects by 10x? In my experience, tools like copilot help with the smaller things, but it's bigger things that really matter in a project.

For example, the right database schema or the right architecture I could easily see saving you 10x the development effort, but these are the things copilot is least able to help you with.

Am I misunderstanding something here? Or could it be that what you are finding is that your experience is helping you build faster, and this is being misattributed to AI?


When you ask someone to list all the good, world-changing benefits to counter your suggestions, it means you're expecting world-changing negativity.

That means you're envisioning a future where LLMs are so powerful that they actually cause huge societal impacts, but somehow they can't detect spam, make customer service more effective, or make the general populace smarter?

You're asking for an answer that you by definition wouldn't accept. To me LLMs are like cloud computing: cloud computing technically didn't change the world in the sense that your average person would know what a load balancer is, but there was a democratization that allowed many great things to be built at a scale that was previously not possible, and the results of that are what brought value.


Here's one positive use of AI: detection of shark species by drones to inform lifesavers as to whether a beach should be closed or not. This saves both shark and human lives, and is likely more cost effective than other methods (e.g. nets). Notably, it's the choice of data (i.e. not ingesting the labour of others without recompense), and how it's used, that make it very different to the various products that startups are producing. But it's the startups (and their owners) that will profit from misuse.


There are too many variables to predict the future. Yet, I believe that it will empower more 1-3 person businesses to compete with the big incumbents.

Imagine you could run teams of marketing/sales/support AI agents and focus on your core business. "The next 1B, 1 Person companies".

Just 15 years ago you'd need teams of 10+ people to do ML tasks that are basic today. Now you have an LLM or a SaaS service that outperforms those teams for a fraction of the cost.


I expect such gradients to be temporary. After all, there's nothing stopping the 100,000-person megacorp from doing something with an LLM either.


We will certainly see increased adoption at large corporations, but where LLMs will have a lasting equalizing effect is in areas where scaling has a logarithmic sort of benefit--very impactful going from 0-1, but less and less as scale increases.

To give a concrete example, if you're a standard 1-3 person business with a niche SaaS product targeting the enterprise, outbound sales is likely going to be a meaningful channel for you. Having a team of good SDRs is a huge step function for your org, and if an LLM can give you that functionality without you needing to hire, then it'll make an outsized impact on your business. However, the Google-sized org operating in your space likely already has a huge sales and marketing org. They've likely "saturated" the market in the sense that they're already engaging most of their prospects in one way or another. Adding the functionality of many SDRs via an LLM might give them some efficiency gains, but it won't unlock a new level of growth for them, nor will it nullify the value the LLMs provide your smaller org.
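
A toy sketch of the shape of that "logarithmic benefit" claim; the value function and numbers are made up purely to show diminishing marginal returns:

    import math

    def value(outbound_capacity: float) -> float:
        # Toy model: returns grow logarithmically with outbound sales capacity.
        return math.log1p(outbound_capacity)

    # Marginal value of adding ~3 "SDRs" worth of capacity at different scales.
    for base in (0, 3, 30, 300):
        gain = value(base + 3) - value(base)
        print(f"capacity {base:>3} -> {base + 3:>3}: marginal value {gain:.2f}")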


Empower artists to make far more complex works that they would never have been able to do alone before.

One artist might be able to produce a complete animated movie controlling all aspects.


As someone suffering from severe inattentive ADHD, I can see a great potential in the rise of really smart personal assistants which could aggregate every single data point in my life and throw the information at me when needed.

But that’s the only thing I could imagine and it must be local not to be a privacy nightmare. Because I have no doubt that Google and Amazon are already working on that.

For anything else, I just agree with you and the author. It will just make capitalism worse.


This is kind of what kills me about the current crop of assistants. They all make my ADHD life harder, not easier. The Google Assistant, for example, cannot handle the few obvious things I ask. "Send an email to Jim." Nope. "Record my weight in Google fit." Nope. It won't look at any calendar that's not a Google calendar. It puts other people's flights on my schedule. It tells me I've got good traffic an hour before rush hour, but doesn't give me an update as it gets worse. I've probably wasted more time trying to get it to work than it's ever saved me. And it's twice as good as Siri was.

I'm always getting reminders about bills I've auto-paid, that Google has dug up from the bowels of my email, and yet I still find myself scrambling on things I forgot about.

I'm highly unimpressed by anything other than the incredible confidence with which these systems will lie. Online and phone support is bad enough, AI will not make it better, not because they can't make an AI that could figure out that you need a technician to come, but because corporate won't trust the AI to not deliver a pony along with them. And who could blame them? As soon as they give AI any power it will become the tool of the stainless steel rat. Instead, it will continue to be used to fend off support calls and fudge internal metrics, just like the last round of automated support.


> The Google Assistant, for example, cannot handle the few obvious things I ask.

Oh my, I think you are not ready for Siri.


If the model of streaming services is anything to go by, the LLM would be offered as a paid subscription with ads.


And once ads are involved, the perverse incentives will mean the assistant will begin pushing you toward advertised products.


I wonder how many people would just stop using the internet (or use it in a very limited fashion) because of the explosion of LLM generated content.

On a similar note, I watch little TV because of the exact same reason: I have little control over what I want to see, and most of the interesting stuff that I may want to watch may be spread across different channels and broadcast at nearly the same time. Perhaps I should take the same step towards the internet, given that it's becoming filled with junk that I don't want.


This and Substack are the last two places I still regularly read Internet content.


I have decision paralysis and therefore I can only watch TV. Many of my friends wonder how I can watch TV when I don't even know the local language and have no control over it, but it's still less stressful for me, since I can't navigate the mazes built by YouTube & Netflix.


> I watch little TV because of the exact same reason

Maybe you do. But for most people the reason they watch little TV is because the internet is simply better.

So no, not many people would just stop using the internet... unless there is something better.


The rush to just shit on everything is so tiresome.

Why write this article other than to smugly say "told you so" in the cases where you turn out to be right? It is a zero-risk take.

Looking at the advances in AI (Chess, Go, Protein Folding, MidJourney, ChatGPT) and your takeaway being "Humans will use this in bad ways" shows a ferocious lack of imagination.

I notice a desperate, but failing, attempt to lump the advances in AI into the same pool as crypto greed, because that was the smug naysayers' nirvana.


Yeah it's like crypto created this weird crop of "debunkers" who think that being permacynical is good just because it was smart to be cynical about crypto. So now "everything I don't like is just like crypto currency"


I think it's about more than crypto. 'Enshittification' as described here has created a lot of distrust: https://www.theguardian.com/commentisfree/2023/mar/11/users-...


> The rush to just shit on everything is so tiresome.

I also get that feeling. It seems it is now fashionable to take down anything that might change things (I'm not saying that all change is good). Resistance to change has always existed and is something we're used to, but somehow this looks a bit different, more ideological.


> Mediocre programmers will use GitHub Copilot to write trivial code and boilerplate for them (trivial code is tautologically uninteresting), and ML will probably remain useful for writing cover letters for you

When I read things like this it makes me think the author hasn't used ChatGPT in their job yet.

Here is a really simple example of how I used ChatGPT this afternoon that saved me, I would estimate, about 2 hours of work:

I had 2 CSV files, with different formats but which (supposedly) had the same functional information in them.

I had a very complicated BigQuery SQL statement that worked on the first file format by importing it as a blob to a table then combining it with a bunch of other CSV files. I wanted to know how much I might need to change my query if I started using the 2nd CSV file (which takes much less time to export from the system that produces it).

The query of course has a big complicated SELECT statement, but also several common table expressions and joins, some of which use columns from the CSV file I was looking to replace.

So I gave ChatGPT the 2 header rows, and the big complicated query. I asked it to tell me the likely mapping between the 2 CSV files for similar columns, and to give me a list of columns that appeared in one but not the other. I asked it to mark with an exclamation mark those columns which appeared in the query.

It got some things wrong, but because I'm pretty familiar with the query and the files I was able to pick up on those, and it was much, much easier to browse the output and pick out the errors than it would have been to break down the query and do all that analysis from scratch.

The whole process using ChatGPT took me about 15 minutes.
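
As an aside, a rough non-LLM cross-check for that kind of column mapping is also only a few lines of Python; the column names below are made up for illustration:

    import difflib

    # Hypothetical header rows from the two CSV exports.
    old_cols = ["order_id", "customer_name", "order_total", "created_at"]
    new_cols = ["OrderID", "Customer", "Total", "CreatedDate", "Channel"]

    def norm(name: str) -> str:
        return name.lower().replace("_", "").replace(" ", "")

    new_normed = [norm(c) for c in new_cols]

    # Suggest a likely old -> new mapping by fuzzy name matching.
    for col in old_cols:
        match = difflib.get_close_matches(norm(col), new_normed, n=1, cutoff=0.5)
        mapped = new_cols[new_normed.index(match[0])] if match else None
        print(f"{col:15} -> {mapped}")

    # Columns that only appear in the new file.
    old_normed = [norm(c) for c in old_cols]
    only_new = [c for c in new_cols
                if not difflib.get_close_matches(norm(c), old_normed, n=1, cutoff=0.5)]
    print("only in new file:", only_new)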

And I have wins like that I would say about once per day. I mean it: I'm saving probably about 2 hours work per day by using ChatGPT on average, on tasks just like this.

Now multiply that by all the shit that people are doing all the time and think about all the needs that will get met as a result of this increase in productivity that are not currently being met, and you have an idea of why AI is fucking awesome, ESPECIALLY given the fact that we need a decreasing working population to support an increasing retired population.


I stopped writing tests a while ago. I define the protocol, ask ChatGPT to implement it using some tech, feed that back in and ask it to write tests for it.

The implementation always needs massaging first, but the tests are almost always great, although it likes to produce pointless ones sometimes.
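
A minimal sketch of the shape of that workflow, using a hypothetical KeyValueStore protocol and the sort of pytest-style tests a model tends to produce:

    from typing import Optional, Protocol

    class KeyValueStore(Protocol):
        # The protocol handed to the model.
        def put(self, key: str, value: str) -> None: ...
        def get(self, key: str) -> Optional[str]: ...

    class InMemoryStore:
        # The generated implementation, after some massaging.
        def __init__(self) -> None:
            self._data: dict = {}

        def put(self, key: str, value: str) -> None:
            self._data[key] = value

        def get(self, key: str) -> Optional[str]:
            return self._data.get(key)

    # The kind of tests the model then writes against the protocol.
    def test_put_then_get_returns_value() -> None:
        store: KeyValueStore = InMemoryStore()
        store.put("a", "1")
        assert store.get("a") == "1"

    def test_get_missing_key_returns_none() -> None:
        store: KeyValueStore = InMemoryStore()
        assert store.get("missing") is None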


That sounds like a five-minute job.


I think this is why most managers take the estimates provided by developers, double them and then go up one unit of time.


"That's a $short_time job" is the signature phrase leaving a trail of unmaintainable, deadline and budget blowing code behind.


It's not, but if it was, I'm sure one would be happy to make it a two-minute job with ChatGPT.


The problem is, you may be the last generation who knows enough to verify the output of said "AI".


I’m in my late forties and have witnessed only a handful of transformative technologies in my time. Nothing in the last 15 years has given me that “tingly feeling” of excitement (you know, that feeling that we’re on the cusp of something transformative) the way the recent progress in AI has.

While the new AI frontier might be led by prohibitively expensive (and closed) large language models, we’re also seeing great grass-roots progress at a smaller scale with modest models trained by the developer community. I trained a baby GPT the other day using llama2.c for my own use cases.

It’s Linux vs Sun/Unix all over again.


Late 50s here, and I think it'll be transformative like nothing in the past million years. Zooming out at history, the first 13bn years have been planets forming and life evolving, getting conscious but having to watch its conscious loved ones age and die in a tiny fraction of that time. Now we can kind of merge with AI and live on as long as we want in some probably much enhanced form. And all a lot of people can say is that it may make spam worse!

Incidentally I wrote about that stuff for my uni entrance exam aged 17, and it has always seemed kind of obvious and inevitable to me. It surprises me there are so many skeptics.


I have to disagree. Some things will get crappier, and some things will improve. Yes there's hype, but whether it's the same as crypto hype depends on whether there's actually anything useful behind the hype, and unlike crypto, I think it's pretty guaranteed there's something useful here (purely by getting value out of ChatGPT, infinitely more value than I ever got from anything crypto related).

> Finding and setting up an appointment with a therapist can be difficult for a lot of people – it’s okay for it to feel hard.

Uff, what about crappy therapists? If an AI bot tells you to kill yourself, it's a pretty crappy AI bot. But there are a lot of crappy licensed therapists too. There's also a lot of crappy articles you can find on Google. The world is full of crappy resources. AI can and most likely will be used in all sorts of medical use cases.


Regarding therapists, I absolutely see room for disruption. The majority I've been to were truly terrible, just bad at their jobs in very basic ways, and there aren't enough of them to meet demand. Human presence has value, but it's an open question how much value and whether it offsets things like giving poor advice.

I tried out one of the newer chatbots tailored for this type of interaction a couple weeks ago, and the discussion was better than what I got from probably 80% of the actual human therapists I've encountered as a patient.


Things might change, but I think all the author was saying is that right now it's not advisable to try to get help for suicidal ideation (and probably not very useful, and maybe harmful, in general). I don't really see that changing until Her-level AI that is "actually" self-aware and possesses the capacity to understand nuanced micro-expressions and so on and so forth.

Suicidal ideation shouldn’t exist but it does and you have a better chance with a therapist than without. It’s probably irresponsible right now to recommend an AI for treatment. That might change but I bet it’s further away than you’re thinking.


Like social media ruined the world despite its initial promises, and like crypto ended up with grifters of all kinds, AI will follow.

Yes, there are some narrow applications that will be GOOD. But will the good outweigh the bad? No, not at all. It never has.

We'd all be happier as a society if we went back in time.


I'm only in my mid 30s. I'm _very_ convinced that the world was a better place overall before 2008. Before smartphones, and social media.


Like good old late 80s, when the world was on the brink of a nuclear apocalypse?

https://en.wikipedia.org/wiki/Nuclear_holocaust#/media/File:...


Nuclear apocalypse risk is not a linear function of warheads. There are still more than enough warheads to cause near-total annihilation, and politically we are closer to it than at any point in the 80s.


I'm an age-of-AI optimist. Not in the sense that AI will solve all problems, but rather in the opposite. The allure in the promise of so much power, so much profit, will be irresistible to those organizations that are already too big to fail and have the means to pursue the shimmer. But as this article articulates, it'll expose even more dysfunctions than we're accustomed to. Within the gaps, though, there will be opportunities. Particularly for those who modestly want to make a difference in a given community, without necessarily the ambition to change the world™. The world doesn't get "better" or "worse", it's an ongoing and never-ending experiment. We don't discover the "right" way to do anything. We just try stuff and when it's annoying enough, we self-correct. AI has so much potential for annoyance that we should just rejoice in the resulting opportunities.


It’s pretty astonishing how rapidly everyone in the world went from ‘sky net’ to laughable garbage in a matter of a few months.


I think what people were tuning into more so than “how good are the models” was “how quickly are the models getting better?”

The jumps from ~nothing general purpose/consumer-facing to GPT3, 3.5, Copilot, GPT4 all seemed enormous and pretty much back-to-back. Extending that curve points to some pretty extreme destinations (positive or negative), but now the sentiment seems to be that curve was a bit of a mirage.

I intuitively share the view that that curve was a mirage (and a byproduct of years of R&D backlog + OpenAI’s release cadence) but that isn’t coming from any rigorous analysis.


I’ve been on laughable garbage from day 1. Drew is spot on here.


It all depends on the parameters and constraints. If you spend a lot of time defining the task, providing sufficient context, and clarifying, they work quite well. I think the reality is that people's expectations were unreasonably optimistic and now they are unjustifiably pessimistic. The general public seems to oscillate between extremes quite rapidly while not appreciating the nuances.


I think that much of what is addressed here will not entirely be from AI, but may in the future be misattributed to it because the curves appear to line up. Don't forget that wage increases have been stagnant for a long time, and that inflation is always growing - we never see deflation in the so-called 'good times' [+].

The climate change movement for example is ultimately a movement to reduce the resource usage of the working class. Reduced resource usage should in theory lead to a reduction in economic growth - but it hasn't. This is because they simply attained their growth through other means - reduced salary (adjusted for inflation), larger tax, shrink-flation (selling something smaller at the same price), etc. They will always post record profits to appease investors, and that growth will directly come from your pockets.

AI is probably the only thing slowing this down, and it likely is a bubble. When it bursts, do you think these companies will go back to employing humans for customer services? Like hell they will, you just won't get any customer support at all. You may say "fine, I'll take my money elsewhere", but you'll find yourself picking the lesser of evils [++]. Anybody who tries to offer human interaction will simply not be competitive on price, and people are relatively poorer than they were - so they have no choice. It's not as if you can go without water, gas, electric, phone, phone provider, ISP, etc.

[+] The coming economic recession will also not be enough to reset this trend, and there is no political will to address it.

[++] The government may mandate that these companies have human operators, but it won't work, they'll just maliciously comply. One human operator, a call queue of thousands of people, "our lines are unusually busy at the moment", outsource the humans to the current poorest country and give them zero power to deal with customer queries, etc. It would be exceptionally difficult to prove they are not providing a good service, and at worst they get fined, which might still work out cheaper than dealing with customer queries.


I agree with the message completely. The facade is new, but it can be very similar to the industrial revolution, and how that upset the order of the then-current times. If we want happier, more well-adjusted people, a better functioning world, we will need a better functioning system, of which AI can be a part as much as any other machine already is. No technology will bring us there however. If it's going to happen, it will happen because people will bring it to life.

Until that, things are just going to be as-is. Sometimes up, sometimes down, overall upwards hopefully, and sometimes advances will upset the status quo. And hopefully no tech will solidify the absurd rich-poor divide permanently.


I don’t fully get this meme. AI isn’t dangerous because (litany of terrible effects goes here). This is the tech equivalent of “ignore global warming; bad weather is dangerous now”. Every problem in scope now was science fiction three years ago. To say confidently that an intelligent computer virus is off the table is already presumptuous but to offer a litany of other AI dangers as your evidence is just weird.


Strong points, but I disagree with the timeline a little bit.

> A reduction in the labor force for skilled creative work

I agree and disagree with this. Stable diffusion is art, but creating the art is still within the realm of artists. Also, they'll still need copyediting, refining, etc. I think creatives will transfer or complement their skills with this stuff, like some are already doing. (Example: https://m.youtube.com/watch?v=VGa1imApfdg)

I also very highly doubt that fine art will ever be 100% AI. Uniqueness drives their value.

> The complete elimination of humans in customer-support roles

Definitely not. Human customer service is key for achieving high eNPS scores. People will always want to talk to other people, even if IVR and chat can address their needs.

> More convincing spam and phishing content, more scalable scams

Definitely, but it is well documented that the most common types of scams are made to be deliberately "off" to find easy marks more quickly.

> SEO hacking content farms dominating search results
> Book farms (both eBooks and paper) flooding the market

Both of these have been happening for many years. OpenAI will make it easier to stand up boilerplate hello-world starters though (as OP called out). I suppose Google will downrank sites like this to prevent incentivizing this.

> AI-generated content overwhelming social media
> Widespread propaganda and astroturfing, both in politics and advertising

This is the one thing I'm actually concerned about. I hope that Reddit doesn't become people talking to other people via ChatGPT assistants. That would be a cultural net loss.


I agree with the sentiment overall.

There will absolutely be some great benefits provided by LLMs and the like. Alexa type devices that are more useful than a light switch. Auto spelling correction that actually works most of the time. Maybe a microwave oven that just has one ‘Heat this up’ button.

But I think the beneficial use cases will be a fraction of the overall use cases.

These technologies are going to be far more effective at enshittifying our world. Spam, scams, replacing artists and knowledge workers with tools that can produce ‘good enough’ output… and yeah, military capabilities that’ll allow humans to kill more humans more cost-effectively.

I’m looking forward to the good stuff, it’ll be neat. But absolutely dreading the wave of horribleness that’ll inevitably come of this.


> AI companies will continue to generate waste and CO2 emissions at a huge scale as they aggressively scrape all internet content they can find, externalizing costs onto the world’s digital infrastructure, and feed their hoard into GPU farms to generate their models. They might keep humans in the loop to help with tagging content, seeking out the cheapest markets with the weakest labor laws to build human sweatshops to feed the AI data monster.

Again, I've said this same thing for months, and yet the AI bros continue to deflect with more nonsense to justify burning the planet with their snake-oil garbage.

Drew's points still stand, and the deep learning industry has no efficient methods of training, fine-tuning or inference, and continues to burn down the planet no matter how much greenwashing it projects.


What a load of snide hogwash. First of all, how are LLMs snake oil? There is actual usefulness to be had.

And I can run inference on my laptop, transferring capabilities to smaller models is a thing, quantization is a thing, optimization is a thing. And, DL has been feasible for barely a decade, LLMs for a few years.

Are you talking about crypto by any chance?


> What a load of snide hogwash. First of all, how are LLMs snake oil? There is actual usefulness to be had.

It is energy-wasting snake oil burning the planet. [0] [1] Especially when they confidently hallucinate without any transparent reasoning or explanation, doing nothing but regurgitating what they have been trained on; more accurately, they are stochastic parrots.

The issue is fundamental to LLMs and deep learning, and researchers still have no answer beyond tweaking parameters and fine-tuning / re-training with GPUs, still incinerating the planet with no viable alternative to such wasteful methods.

> And I can run inference on my laptop, transferring capabilities to smaller models is a thing, quantization is a thing, optimization is a thing.

We are talking about the worst case for inference, not 'smaller models', which still need to be trained or fine-tuned to exist in the first place and to improve. The so-called 'serious' cloud-based LLMs need to continuously serve every inference, and that requires a fleet of GPUs to serve lots of users as the parameter count of the model gets larger.

> And, DL has been feasible for barely a decade, LLMs for a few years.

Neural networks, which are fundamental to LLMs, have been around for decades and are still unexplainable black boxes, incapable of transparent reasoning beyond regurgitating responses they were trained on. Unacceptable and useless for a wave of use cases that require explainability.

> Are you talking about crypto by any chance?

Crypto already has viable alternatives to its energy wasting problem [2] available today right now. Deep Learning still does not.

[0] https://gizmodo.com/chatgpt-ai-water-185000-gallons-training...

[1] https://www.independent.co.uk/tech/chatgpt-data-centre-water...

[2] https://consensys.net/blog/press-release/ethereum-blockchain...


> Contrary to the AI doomer’s expectations, the world isn’t going to go down in flames any faster thanks to AI. Contemporary advances in machine learning aren’t really getting us any closer to AGI, and [...] What will happen to AI is boring old capitalism. Its staying power will come in the form of replacing competent, expensive humans with crappy, cheap robots.

This type of reasoning is really getting on my nerves lately.

Predicting the future is hard, yeah. But your predictions don't become systematically more accurate just by tacking "boring" and "capitalism" onto them.

A lot of technologies can change our societies in emergent, non-boring ways. Climate change is an emergent effect of fossil fuel usage that you wouldn't predict by just looking at 19th century factories and imagining how they would evolve with "boring capitalism". The internet is extremely non-boring and has had profound effects on our society. Nuclear mutually-assured destruction is an extremely non-boring existential threat.

It could be that the dangers of AI come from the military, or the police, or terrorists, or from corporations seeking to replace labor, or other conventional threats we already have a reference frame for, yes. Or it could be a completely novel form of disaster, like the equivalent of a school shooter getting AlphaFold 8 to make a novel virus that kills 70% of the population before we even realize there's a pandemic going on. Just because this isn't something we're used to doesn't mean it's fundamentally unlikely to happen.


As for generative AI: "mimicry is always sinister" (Friendship's Death (1987), Peter Wollen).


A one sided argument with sweeping generalities and something about minorities being killed in the process. Got it.


Seems we just can't resist hating on anything new, no matter what it is these days. Drew has some good ideas, but they're often lost in his reactionary rejection of anything that doesn't fit his narrowly defined vision of what is UNIXy enough. His inexplicable dislike of containers is just one example.

There can be nuance to our view of new things. It's important we stress our ability as humans to think this way. LLMs are not all or nothing good bad polarities. They can be just another tool in your toolset. And that's fine. No hatred required.


Surely you mean "Plan9y enough". https://drewdevault.com/2022/11/12/In-praise-of-Plan-9.html shows how he prefers Plan 9 over Unix.

I think he doesn't like the way Unix containers are done, preferring other ways to get them. From that URL:

> Recall that everything really is just a file on Plan 9, unlike Unix. Access to the hardware is provided through normal files, and per-process namespaces do not require special permissions to modify mountpoints. Making a container is thus trivial: just unmount all of the hardware you don’t want the sandboxed program to have access to. Done. You don’t even have to be root.

Where does he express a dislike for the concept of containers? After all, he's working on a new microkernel OS, with a permissions model that seem designed to make containers easy to implement.


Yeah, but they are not being stressed enough or considered at a greater level. The same can be said about things like global warming, where every other company knows the stuff they're doing is terrible but doesn't give a shit, and we carry on as if nothing is going on. AI >right now< is not doing anything specifically dangerous, BUT the people adding that "tool to their toolset" are not us mere mortals, but people who intend to use AI in crappier ways. Thinking about those things is not "shitting on anything new", it's thinking critically about stuff.


I think this article makes sense when analyzing the current technology of AI. But it doesn’t make sense if AI continues to improve. I honestly believe AI will lead to a singularity in the sense that the future of AI cannot be defined and is currently unknowable


I think people are just going to spend less time on devices.

All in all it’ll probably end up being a net positive, although it’s a shame that it had to happen in exactly this way. The dawn of the internet was one of hope and optimism, and the potential value it held was an ocean compared to the eventual drops it was mortgaged for.

Search is becoming useless. People are becoming inured to social media and its viral effect will slowly wane. After exposure to all this value-less capitalism, people will eventually wise up, because that’s what makes sense, and what will be left for us in terms of value will be the original oldies but goodies that we started with: Wikipedia, YouTube maybe, personal blogs, and commerce.

For many people this has already started. I don’t care about going online so much, and I’m much more interested in my community and what’s happening around me. My friends and I all use social but more as a tool and it’s increasingly becoming more local. When I meet younger people it’s even more extreme. They’re so cynical about tech that I’m convinced they’re going to usher in 3rd spaces and better urban planning and the likes when they grow up.

I’m not saying we’ll abandon tech just that we’ll only engage when there’s a legitimate value proposition. Ultimately that’s why there’s so much nonsense anyways, because it is legitimately hard to create actual value. On the long view though only value survives. I didn’t even mention “AI” but that will probably just hasten this process from a content perspective. In the future it’ll be around but we’ll just endure it, but we’ll also seek out meaningful interactions whenever we can.


I'm not sure if anyone should take the guy seriously. For a person who has "I don't want to talk about AI" in his profile, and who bans everyone who dares to have a different opinion, he sure talks about AI a lot.


> In case you need to hear it: do not (TW: suicide) seek out OpenAI’s services to help with your depression. Finding and setting up an appointment with a therapist can be difficult for a lot of people – it’s okay for it to feel hard. Talk to your friends and ask them to help you find the right care for your needs.

In the US, healthcare, including therapists, is quite expensive.

Back in the day I was an indie dev, and that was really hard on me. In my lows I thought I’d seek out a therapist, but at $500 per half-hour appointment that felt like a gut punch. I didn’t have any insurance then.

ChatGPT is instantly available and free. Yes, it’s not perfect but it’s better than nothing.

For a large part of the US, mental health therapists are not really accessible when people most need them.


> Flame bait

I'll take it.

> ChatGPT is the new techno-atheist's substitute for God

Not really, no ~~true~~ AInotKillingeveryoneIst says that ChatGPT (or GPT-like) is ASI. Please stop beating this particular strawman.


Your second thing-that-looks-like-a-quotation does not appear to be quoted from anywhere. I don't think you should do that.

(I am not sure whether your last line is your actual opinion or sarcastic, given the "no true ..." phrasing, but for what it's worth I think it's unironically correct: no one with any brain thinks ChatGPT is anything much like a superintelligence. There are people who expect AI to become godlike, for better or for worse, but the most they're saying about the likes of ChatGPT in this connection is "progress sure seems to be pretty fast these days".)


Why is this a strawman? Have you seen the Jesus AI on Twitch?

Seriously- Goes way back.

In the movie THX 1138 the population talk to an AI Jesus.


This part seems sketchy: “the long-prophesied singularity. The technology is nowhere near this level, a fact well-known by experts”

Which experts? Yann? Not sure he counts.


Just like in the Rifters trilogy by Peter Watts, the Internet will become unusable thanks to AI-driven spam, phishing, SEO, and other junk content.


Is that really because of the AI, or because of humans controlling them?

We already have incredible levels of spam, phishing, junk SEO content. Created from humans and scripts driven by humans.


Humans, of course. The AI is just a better tool for this "job", better than anything that existed before.


> SEO hacking content farms dominating search results

It seems like LLMs are destroying traditional search engines (Google) much faster than they are enabling new ones (Bing + GPT).

Are we going to enter a dark age of search where the signal is drowned out by the noise for a few years? SEO blogspam was bad enough when humans had to write it, now it's becoming impossible to avoid.


It would be nice if the search engines could put the main site for something first. They already do it with their own properties, so it's possible. But they've gotten very bad at it with everything else. For example, you want to search about a car, you'll get some blogspam, a few car magazines, a few local shops, and down the line the manufacturer's website. Now, sometimes the manufacturer doesn't answer a question, but it's amazing how much the search engines promote these SEO articles that repeat a question over and over without answering it.


This is basically MOLOCH.

We can't stop ourselves from 'crappifying' ourselves.

We are driven by local min/max in society that we can't break free from, until the system breaks.

Moloch https://slatestarcodex.com/2014/07/30/meditations-on-moloch/

Past post on 'enshittification' from Cory Doctorow https://news.ycombinator.com/item?id=36611245


>...AI companies will continue to generate waste and CO2 emissions at a huge scale...

Oh come on.


just a typical degrowther take

downvoters: is the video game industry unethical for making billions of players expend exponentially higher compute (= emissions) for something as frivolous as slightly better graphics?


> is the video game industry unethical for making billions of players expend exponentially higher compute (= emissions) for something as frivolous as slightly better graphics?

Quite literally yes. See: https://en.wikipedia.org/wiki/Boiling_frog

The reason most people don't think so is because human brains are not wired to comprehend the danger that slow buildups of a negative create.

In general, the complete waste developers have been creating by allowing themselves to build extremely inefficient systems because of Moore's law has been unethical.

Imagine if the single goal of car manufacturers became to create faster and faster consumer cars with higher and higher CO2 emissions, even though those cars sit in a parking lot 95% of the time and, when in use, are never exercised to their potential.


then I'm sure you also believe the logical consequence that all graphical user interfaces are unethical as well due to frivolously wasted compute.

out of interest, how did you make this post? with moral standards as high as yours, you certainly wouldn't run something as wasteful as a full browser, right? especially when hackernews' html is this easy to parse manually. i'd guess you used curl? and as for your hardware? what did you personally find the most ethical choice there? and what about those unethical ISPs? and how did you verify that HN's backend is efficient enough to be considered ethical?


Morality is a spectrum, not binary. Yes, all of the things you listed make me and you less moral than, say, some random dude living in the woods with no access to a computer (all other things being equal).


I mean that's an awfully convenient argument to say "morality for thee but not for me". Yes it's a spectrum, but it still matters to be consistent.


even the dude living in the woods is still having an impact on the ecosystem, which is defined as negative by this eco-extremist moral framework.

if you take it to the logical conclusion, it just means that all forms of human life are immoral, by nature, and the least immoral human is basically a feral animal, and the most immoral ones are the opposite of that.

so to be moral, just stop being human.

it's just so obviously broken and misguided.


You seem to be bothered by the idea that you may not be perfectly moral/ethical. You're a good person, therefore everything you do is by definition moral/ethical?

It seems as if you're attaching the same connotations to morality/ethics as you would to legality. If something is not perfectly moral/ethical, it doesn't mean you should never do it.

We over indulge, we waste electricity and water, we drive unnecessarily big cars because it's enjoyable. It's not moral or ethical, but it doesn't make you a bad person. When it's done en-masse, is encouraged, and corporations monetize it, is when it starts becoming a problem.


>It's not moral or ethical, but it doesn't make you a bad person. When it's done en-masse, is encouraged, and corporations monetize it, is when it starts becoming a problem.

>If something is not perfectly moral/ethical, it doesn't mean you should never do it.

i think that when emissions are brought up in an argument, it's not usually this more nuanced take (that I also take some issue with, by the way, as it's impossible to do anything not en masse at 8b population)

instead, they are brought up to imply in sort of a smug way that the other party is somehow unethical, and that oneself, having morals as pure as they are, would never do such an unethical thing. hence the other party, and whatever they advocate, is wrong and bad. and the op blog post is a good example of this.


> it's impossible to do anything not en masse at 8b population

you are missing some simple arithmetic here. yes, x times y can get arbitrarily big if x does (and y is positive), but then changes in y would just have even more of an impact. take x to be population and y to be emission per person and you should see your mistake.


and you’re missing the point. no one argued that it’s not worth making changes at the population level.

gp argued that stuff becomes problematic when done en masse. my issue with that is, if you disallow en-masse $thing, who’s going to gatekeep the tiny, exclusive, non-en-masse club allowed to do $thing?


yes.


as a musician, did you ever stop to consider the ethical implications of the emissions created by your particular frivolous hobby?


It is true. It is totally wasteful and it wastes lots of water as well [0].

Even before that, there aren't any efficient methods of training, inference and fine-tuning available today that are viable alternatives in the field of deep learning, especially after a decade of existing.

Drew's point still stands, unchallenged.

[0] https://gizmodo.com/chatgpt-ai-water-185000-gallons-training...


>AI is defined by aggressive capitalism

You could have said that for almost all tech improvements in history - electricity, medicine, radio, cars, trains, plumbing etc. Capitalism as in people selling stuff for money is just how things get done. At least to begin with.


This still feels like it's missing the point most doomer technologists do even though it calls them out for it -- The world will be changed by AI, just like the internet, and, just like the internet, it will be full of problems, but problems we mostly can't envision or see.

The 'trump card' for all the AI negativity is education. Think of the 590 million Indian kids that live in poverty, for example. If they can get access to a computer and the internet, they will have access to on demand 24/7 first class education. They can even ask questions like they could of a real teacher.

The boon to human productivity and possibility for less suffering in our Capitalist earth can't possibly be outweighed by some boogey-man negatives which will probably never materialize anyway.


Maybe what we need is a hybrid communist/capitalist system. Governments should nationalize public stock markets and all large companies with a market cap above a certain amount.


This way the companies will be sure not to ever reach a certain market cap.

If they get close, the companies will split, so that they're never big enough.


Seems like a good thing.


> What will happen to AI is boring old capitalism.

I see a lot of people want to blame capitalism, but look at any other system, and ultimately they all fail due to human greed. The only way to make capitalism work correctly is with regulation, because once monopolies and collusions are reached, the natural incentives disappear (i.e. the lowest cost service that delivers what the consumer values).

> Its staying power will come in the form of replacing competent, expensive humans with crappy, cheap robots.

Agreed. You will earn less money (relative to the cost of living), taxes will increase, yet people will still pretend your quality of life has increased - but it hasn't. For many services you now can't reach a human at all. Emails have disappeared, phone lines have disappeared - I now have to waste 5 minutes speaking to a chatbot that I know cannot solve my issue before it maybe allows me to type text to what it claims to be a human.

> LLMs are a pretty good advance over Markov chains, and stable diffusion can generate images which are only somewhat uncanny with sufficient manipulation of the prompt. Mediocre programmers will use GitHub Copilot to write trivial code and boilerplate for them (trivial code is tautologically uninteresting), and ML will probably remain useful for writing cover letters for you.

In a sense, most neural networks can be modelled as some form of Markov model. What's becoming more obvious is that the structure of these models is super important, and there is still a lot to be learned.
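
To make the Markov comparison concrete, here's a toy word-level Markov chain text generator (the corpus is made up); LLMs are "a pretty good advance over" exactly this sort of thing:

    import random
    from collections import defaultdict

    corpus = "the cat sat on the mat and the cat slept on the mat".split()

    # First-order Markov chain: each word maps to the words observed after it.
    chain = defaultdict(list)
    for current, nxt in zip(corpus, corpus[1:]):
        chain[current].append(nxt)

    def generate(start: str, length: int = 8) -> str:
        word, out = start, [start]
        for _ in range(length):
            followers = chain.get(word)
            if not followers:
                break
            word = random.choice(followers)  # sample the next word from observed followers
            out.append(word)
        return " ".join(out)

    print(generate("the"))
    # An LLM is loosely analogous, but conditions on a long context window with a
    # learned, far richer probability model instead of raw follow-counts.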

> Self-driving cars might show up Any Day Now™, which is going to be great for sci-fi enthusiasts and technocrats, but much worse in every respect than, say, building more trains.

Cars are a decentralised transport (as much as a transport system can be), whereas a train is a centralised transport system. The internet is also a transport system, but with packets instead of people, and this has had great success with a mixture of centralised and decentralised transport mechanisms.

The biggest problem with trains is that you create a single point of failure and an unnatural monopoly. Your bandwidth is also heavily reduced due to safety considerations (you want to travel fast over long distances, but need to increase the safety margin to do so). Unlike cars or internet packets, you can't divert a train. One can imagine a new protest group "just stop energy" (instead of "just stop oil") quite trivially bringing an entire country to a halt by placing cars on all of the tracks.

> AI companies will continue to generate waste and CO2 emissions at a huge scale as they aggressively scrape all internet content they can find, externalizing costs onto the world’s digital infrastructure, and feed their hoard into GPU farms to generate their models.

Interesting to see that none of the climate activists so far have gone for clear winners like crypto mining, or AI training. Instead they would rather keep making the life of the every-day person miserable, as if it isn't miserable enough already.

> You will never trust another product review.

You find that people pay for reviews anyway. Somebody I know gets sent Amazon products to review, and they get to keep the products. The more positive reviews you give, the more you get selected for future reviews. The only way around this is reputation - I find somebody you trust who has reviewed a product. It's why Linus Tech Tips (LTT) and the recent review scandal was important - they have a reputation and it does inform consumers about expensive computing equipment investments.


Socialists really are a dreary miserable lot, aren’t they.


I told chatGPT to replace reference to AI with references to computers. The arguments seem just as valid (and wrong). Here is a snippet.

"Of course, computers do present a threat of violence, but as Randall points out, it’s not from the computers themselves, but rather from the people that employ them. The US military is testing out computer-controlled drones, which aren’t going to be self-aware but will scale up human errors (or human malice) until innocent people are killed. Computer tools are already being used to set bail and parole conditions – it can put you in jail or keep you there. Police are using computers for facial recognition and “predictive policing”. Of course, all of these models end up discriminating against minorities, depriving them of liberty and often getting them killed.

Computers are defined by aggressive capitalism. The hype bubble has been engineered by investors and capitalists dumping money into it, and the returns they expect on that investment are going to come out of your pocket. The singularity is not coming, but the most realistic promises of computers are going to make the world worse. The computer revolution is here, and I don’t really like it."

The rest of the article:

There is a computer bubble, but the technology is here to stay. Once the bubble pops, the world will be changed by computers. But it will probably be crappier, not better.

Contrary to the doomer’s expectations, the world isn’t going to go down in flames any faster thanks to computers. Contemporary advances in computing aren’t really getting us any closer to AGI (Artificial General Intelligence), and as Randall Monroe pointed out back in 2018:

A panel from the webcomic “xkcd” showing a timeline from now into the distant future, dividing the timeline into the periods between “computers become advanced enough to control unstoppable swarms of robots” and “computers become self-aware and rebel against human control”. The period from self-awareness to the indefinite future is labelled “the part lots of people seem to worry about”; Randall is instead worried about the part between these two epochs.

What will happen to computers is boring old capitalism. Its staying power will come in the form of replacing competent, expensive humans with crappy, cheap robots. Language models are a pretty good advance over Markov chains, and stable diffusion can generate images which are only somewhat uncanny with sufficient manipulation of the prompt. Mediocre programmers will use GitHub Copilot to write trivial code and boilerplate for them (trivial code is tautologically uninteresting), and computers will probably remain useful for writing cover letters for you. Self-driving cars might show up Any Day Now™, which is going to be great for sci-fi enthusiasts and technocrats, but much worse in every respect than, say, building more trains.

The biggest lasting changes from computers will be more like the following:

- A reduction in the labor force for skilled creative work
- The complete elimination of humans in customer-support roles
- More convincing spam and phishing content, more scalable scams
- SEO hacking content farms dominating search results
- Book farms (both eBooks and paper) flooding the market
- Computer-generated content overwhelming social media
- Widespread propaganda and astroturfing, both in politics and advertising

Computer companies will continue to generate waste and CO2 emissions at a huge scale as they aggressively scrape all internet content they can find, externalizing costs onto the world’s digital infrastructure, and feed their hoard into GPU farms to generate their models. They might keep humans in the loop to help with tagging content, seeking out the cheapest markets with the weakest labor laws to build human sweatshops to feed the data monster.

You will never trust another product review. You will never speak to a human being at your ISP again. Vapid, pithy media will fill the digital world around you. Technology built for engagement farms – those computer-edited videos with the grating machine voice you’ve seen on your feeds lately – will be white-labeled and used to push products and ideologies at a massive scale with a minimum cost from social media accounts which are populated with computer content, cultivate an audience, and sold in bulk and in good standing with the Algorithm.

All of these things are already happening and will continue to get worse. The future of media is a soulless, vapid regurgitation of all media that came before the computer epoch, and the fate of all new creative media is to be subsumed into the roiling pile of math.

This will be incredibly profitable for the computer barons, and to secure their investment they are deploying an immense, expensive, world-wide propaganda campaign. To the public, the present-day and potential future capabilities of the technology are played up in breathless promises of ridiculous possibility. In closed-room meetings, much more realistic promises are made of cutting payroll budgets in half.

The propaganda also leans into the mystical sci-fi computer canon, the threat of smart computers with world-ending power, the forbidden allure of a new Manhattan project and all of its consequences, the long-prophesied singularity. The technology is nowhere near this level, a fact well-known by experts and the barons themselves, but the illusion is maintained in the interests of lobbying lawmakers to help the barons erect a moat around their new industry.

Of course, computers do present a threat of violence, but as Randall points out, it’s not from the computers themselves, but rather from the people that employ them. The US military is testing out computer-controlled drones, which aren’t going to be self-aware but will scale up human errors (or human malice) until innocent people are killed. Computer tools are already being used to set bail and parole conditions – it can put you in jail or keep you there. Police are using computers for facial recognition and “predictive policing”. Of course, all of these models end up discriminating against minorities, depriving them of liberty and often getting them killed.

Computers are defined by aggressive capitalism. The hype bubble has been engineered by investors and capitalists dumping money into it, and the returns they expect on that investment are going to come out of your pocket. The singularity is not coming, but the most realistic promises of computers are going to make the world worse. The computer revolution is here, and I don’t really like it.


I think a lot of people might agree with this also.

The reasoning when replacing 'AI' with 'computers' is the same, and is also valid.

This doesn't make a good argument against the article, because it supposes that since the article would be wrong about 'computers', it must also be wrong about 'AI'.

It is actually valid for computers as well as for AI.


Drew DeVault confirmed Urbanist Bro

Completely unserious comment but I enjoy the Alan Fisher reference, it's funny seeing my online spheres intersect.

EDIT: serious comment.

The whole post is a bit of a doomer one. The thing is, in the maximalist bad world that Drew DeVault poses, human interaction (customer support, human-written articles and opinions) becomes a premium, meaning the pendulum will swing as people realise the mistake. A lot of people will be hurt or even die in the medium term, which is true, but the world he posits seems to be one that leads to an unstable maximum.



