> This client is my main source of income, he’s a marketer who outsources the majority of his copy and content writing to me. Today he emailed saying that although he knows AI’s work isn’t nearly as good as mine, he can’t ignore the profit margin.
A core assumption is that his client will succeed with this change. Only time will tell. A thousand different outcomes are possible: the downstream client is upset by the reduction in quality and switches vendors; new entrants underbid the market further; firms start to differentiate on quality at a price premium, further eroding low-end margins; the money saved on this is invested in new roles to ensure marketing quality control.
The error is to believe that a given state of the world is static. It never is. Every day, millions of very smart and capable humans apply their minds to figuring out how to make money by selling a product or service that satisfies customers. And every day, what it takes to satisfy customers changes a little bit.
Posts like this drop all the context that led to such a job even being possible in the first place.
As horrified as I am for people whose jobs will be lost (including mine, quite possibly), "content" is such a plague on humanity I'm glad it will soon be devalued into the ground. Mountains of useless tweets and blog posts will be rendered meaningless when ChatGPT has all the answers. Dust to dust.
I both strongly agree (“content is a plague”) and strongly disagree (artisans, even digital ones, make society better).
I’m glad that we’ll hopefully see the end of content as an industry, where people churn out text and images and posts trying to monetize attention by the penny. It’s inevitable as content becomes effectively free. How long before TikTok makes AI-generated videos without creators? I’m pretty sure Medium can be replaced in its entirety today.
That said, there’s plenty of people who work hard and are immensely talented and contribute a lot to society. Technology and automation have slowly taken away many jobs that brought people satisfaction and were anything but BS jobs. My classic example is musicians. It’s something people love to be, but in a world with recorded audio, you don’t need as many musicians as you once needed (and it was actually a fairly common job).
I don’t have the answers, but I hope that automation can start replacing BS jobs through AI.
> artisans, even digital ones, make society better
The word "content," when applied to digital media, usually carries some implications of artlessness. Content is something that fills a container.
Music that moves you and changes you is art, and is unlikely to be automated any time soon. Music that plays in an elevator (or fills the silence in an elevator) is Content, and is highly likely to be automated soon.
A lot of 'content' to me is worse than elevator music. That at least tries to somewhat improve my experience (though you may not like it). A piece of 'content' as an intro to a recipe is there primarily to make the page longer and contain keywords for SEO purposes, but it makes my experience worse.
I don’t know about that. The way I hear it used, every YouTube video is content, even the very good stuff. HBO and Netflix are content. NBA games are “live content”. It’s just a catch-all for digital media.
Well to the suits at ESPN, HBO, Netflix, and YouTube, those video files are indeed content. They are the contents of their platform.
But if you're in a conversation with a filmmaker, a musician, a novelist, or a painter, and you try to call their artistic output "content," they might take some offense, because you're kind of reducing their work to a mere consumable product.
Yes, art can be packaged up and put on a platform and distributed and profited off of, or bought and sold on an online store like toasters and dish soap, but that's not the reason it exists - it's a form of human expression.
Just as not all digital content is art (e.g. NBA games), not all digital art is content.
> if you're in a conversation with a filmmaker, a musician, a novelist, or a painter, and you try to call their artistic output "content," they might take some offense
This. And when I hear creative types use the word "content" to refer to their own work, I feel pity for them. And am much less inclined to check their work out, because it seems that their mindset is to produce a product rather than art.
Especially if they refer to their audience as "consuming" their "content".
> It’s something people love to be, but in a world with recorded audio, you don’t need as many musicians as you once needed (and it was actually a fairly common job).
I do wonder if people are going to start valuing in-person experiences more and more, as anything on the internet can now be faked (or AI-generated, which, artistically speaking, is close to the same thing). If we do, might we see a boom of live performers? Will minstrels make a comeback? An interesting possibility, if still not reassuring.
Live performance hasn't gone away. I live outside a village (in Ireland) with around 1000 residents and there's live music in one or other of the pubs 4 nights (or more) a week.
The Irish trad scene is unique in this regard. There are Irish trad sessions everywhere in the world (I was able to visit some in quite faraway places like Moscow, Istanbul, and Santiago de Chile), and in Ireland there are a few in every village, and then some.
But that's just it, Irish trad. Frenchies have something similar with balfolk, but not so epic, and while there are other strong European folk scenes, none of them is on the scale of Irish trad. Alas.
It's not just trad though really - there's a bit of trad (and a fair few ballads) in my local pub but you could also hear pretty much any popular song from the last 50 years.
(... for anyone who's curious - "trad" usually means traditional dance music like jigs and reels, "ballads" mostly means Irish-y sounding songs sung in English)
Although circleofavshape's observation applies equally well to my part of the US. Even very tiny towns around here have live music multiple nights per week.
The past 30 years have resulted in an erosion of in-person experiences. People used to be social: clubs, bowling, movies, whatever. Now people even work from home.
People prefer to be in their houses on their computers and to interact with “others” (people, bots, videos of people doing things) from their favorite chair.
I don’t think in-person experiences are coming back any time soon unless there is some sort of major cultural shift that nobody could predict. A lot has been written about this trend.
When I listen to a recording of an 18th-century Danish fiddle tune, is it a bunch of notes and chords that can be imitated? Sure. And it can certainly be imitated by a professional human style imitator or an AI.
But I'm not just looking for a string of pleasant chords and tones to massage my feelings. The context is part of what it is. It's not really "content", it's communication. The guy who made it wanted to say something. Something that was important to him, which he thought better put into music than into text. He's been dead for almost 300 years, but I want to hear what he wanted to say.
Which isn't to say there isn't room for AI-generated (or professional composer-generated) imitations. They're trained on the real stuff; it's the echo of real people we hear. But that's why it's valuable, not as generic "content".
Context is certainly important. At least for people who lived through times when art took human effort and was rare.
But what if upcoming generations see it just as something antique? An old concept that's dead, because their concept of art/media has always been tailor-made and generated for them.
Also, who says the AI won't also generate the context? You will get some medieval music with the author's background and everything. You might not even know the person didn't exist. The way AI is developing now, nobody will care about AI just making things up along the way.
> My classic example is musicians. It’s something people love to be, but in a world with recorded audio, you don’t need as many musicians as you once needed (and it was actually a fairly common job).
I am not sure I agree with this. The barrier to entry to selling your own music (streaming, YouTube) is so low that we see more and more people giving it a shot. Maybe they are not professionals with a title, but they are musicians. Sure, maybe there are fewer orchestra performers, but there are far more (smaller) groups to choose from. Did they really disappear?
I think that's because for most, it's moved from being a profession to a hobby. Many hobbyists dream of turning their hobby into a profession, and work towards that, yet for most it remains a hobby.
Recorded audio didn't replace the musicians creating music. It replaced the ones playing it - for example the pianists at the restaurants or the clubs.
In a modern Casablanca remake Bogart wouldn't say "Play it, Sam", he would just select a song from his Spotify playlist and hit play.
In a similar fashion, you don't need to hire a full band for a wedding. A single DJ will provide enough entertainment.
I would say there will probably be more demand for live music, since at some point YouTube Music will probably have a channel that just makes up music on the fly based on what you like. So your computer can produce any sound or mashup that you want. But there will never be AI live music in the same way as people, unless you have undetectable robot people.
> It’s something people love to be, but in a world with recorded audio, you don’t need as many musicians as you once needed (and it was actually a fairly common job).
> Well, and because people love it, many will still be passionate musicians / writers / language savants.
They just won't make a living out of it.
And that's OK. The intrinsic motivation to do those things is recreational, and the focus on monetization contributed to the "content is a plague" trend in the first place.
There's also what society values. While I'm no art historian, it looks like after the invention of the camera, painters shifted to styles that cameras couldn't replicate (e.g. from landscapes and portraits to abstraction), and high society began valuing the new art while continuing to value the old; the new shifted aesthetics.
I think that as we're progressing into a world where anyone can press a few buttons and get a dubstep banger in 30 seconds, culture will place higher value on different types of expression and experiences.
"We went to see a band play an entirely improvised performance, capacity was limited to 20 people and all recording devices were confiscated upon entry."
> How long before TikTok makes AI-generated videos without creators?
I can tell you don't use TikTok and don't understand how quickly the trends adapt and depend on human understanding of hyper niche cultural trends.
Nothing about GPT indicates it can engage in this without heavy human interference.
At most it'd depend on being able to replicate human-generated videos on prompt (which it's far from doing without horror-show uncanny valley, but sure... one day, eventually, in the indefinite future, which is the least interesting problem), let alone all the niche cultural stuff... and, critically, the reputation signals already strongly required. Humans are deeply intertwined in all of these, with automation playing a marginal optimisation role, not anything near replacement.
Cheap imitations will remain a novelty for a long time. Just like how deepfakes and GPT news articles were oversold as a serious problem in the near term but haven't borne out beyond niche failures and journalist FUD posting.
If anything, TikTok trends are perfectly imperfect, which makes them feel more like what can be generated from any 13B-or-better LLM.
People are bad at modeling and scale. LLMs don't suffer from that, but the way people interact with LLMs obviously doesn't indicate they can conceptualize what a billion people interacting with the same model with the same parameters and having a billion unique experiences implies.
How we interface with LLMs is a deceptive reflection, and if you don't recognize the depth in them, you'll be caught off guard when you fall in, or something larger than you expect comes out.
Musicians have also seen their incomes become way less evenly distributed: after recording started, one star could reach millions; before that, you needed to see a musician live to hear music. Tim Harford did a good podcast mentioning this.
It's not going to go away, though. If anything, the future looks like lower quality and higher quantity. The usefulness of ChatGPT's (and its successors') answers also depend on their inputs. In the limit, if there were no humans producing new material, there would also be nothing new that ChatGPT version N could tell you.
There doesn't need to be new material, just a mechanism to separate good from bad, and that will always exist as long as humans are interested.
LLMs use reinforcement learning once they have a base understanding of words. It's like how modern chess engines don't analyze human games; they just play against themselves.
LLMs go even further: they train a model to judge what people would deem to be high quality, so it's another layer.
> LLMs use reinforcement learning
> LLMs go even further: they train a model to judge what people would deem to be high quality, so it's another layer
You're describing RLHF (Reinforcement Learning from Human Feedback) as used by ChatGPT, right?
I wouldn't say that LLMs use it; rather, RLHF is used to create a higher-level model on top of an LLM.
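To make the layering concrete, here is a toy sketch of the idea: a separate reward model, trained on human preference data, scores candidate completions from the base LLM, and the system keeps the highest-scoring one (best-of-n sampling is the simplest version; full RLHF goes on to fine-tune the LLM against that score). Both model functions below are hypothetical stand-ins, not any real API:

    # Toy sketch: a reward model trained on human preferences ranks
    # candidate completions from the base LLM. Both functions are
    # hypothetical stand-ins, not a real API.

    def base_llm(prompt: str, n: int = 4) -> list[str]:
        # Stand-in: a real system would sample n completions from the LLM.
        return [f"candidate {i} for: {prompt}" for i in range(n)]

    def reward_model(prompt: str, completion: str) -> float:
        # Stand-in: a model trained on human preference data returns a
        # scalar "how much would a person like this" score.
        return float(len(completion))

    def best_of_n(prompt: str) -> str:
        # Keep whichever candidate the reward model scores highest.
        candidates = base_llm(prompt)
        return max(candidates, key=lambda c: reward_model(prompt, c))

    print(best_of_n("Write a product caption"))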
That works because chess is a closed world. Without input from outside, two LLMs training against each other would most likely become raving lunatics -- just as two humans locked in a dark cell together would do.
Something something 100 monkeys with a typewriter? I'm expecting some margin of error introduced into these models that produces a surreal fever dream era of AI generated content on the internet.
ChatGPT has the answers it has because of all the content people have been posting online. What happens when people stop posting and there is nothing left to train the AI on? AI doesn't have intrinsic value; it has to be trained on something.
> A core assumption is that his client will succeed with this change.
Only if you focus on long-term macroeconomics, and ignore the impact of microeconomics on real people. Today, this writer is out of a major chunk of their income.
If I drive a luxury sports car and I lose my job, I might get rid of the luxury car and switch to a compact. Both are good enough, one is cheaper. That’s the way the world works.
It’s not nice to hear the results of a bot are good enough when you know your results are better. Tough luck.
As someone who manages our own brand's social media and just started using ChatGPT, I was surprised that it could provide captions that are pretty good and moderately inspiring. Try it out yourself. Definitely a timesaver and a cost reducer.
Creative writing output from GPT-4, like everything else, is a huge jump from 3.5 (and it gets even better with targeted refinement). People say 60% as good or whatever, but don't be too surprised if it's already much closer to, or even surpassing, the human baseline in question.
I don't think it matters if AI is a better writer. From the post:
"[...] although he knows AI’s work isn’t nearly as good as mine, he can’t ignore the profit margin."
So yeah if we consumers of content are lucky we won't see much of a difference, but the web has been in a break-neck race to the bottom for over a decade now. Companies that pay for the creation of content have and will continue to pay as little as they can as long as the content generates clicks. Given that 27.7% of those clicks come from bot farms[1] (and that number is rising) we already see a huge market that's just bots writing content and monetizing it for other bots. We humans will get to sift through that to try to find real content.
----
I'm writing on a throwaway account for obvious reasons. I run a mid-sized software company that does a mix of software consulting and SaaS. Quite common these days.
I want to share a real story about how chatgpt eliminated a marketing job. I have a full time marketing person who writes content (case studies, blog posts, client testimonials etc) and does SEO optimization for our website. He's not technical, but understands the problem we are solving and knows how to talk to customers.
Writing a case study, blog post or marketing tweet took a bit of work. I'd brainstorm with him about what topics to write about, he'd run some drafts by me, and then after 2-3 revisions we'd publish. With ChatGPT I'd just write what I wanted in point form and get back a polished product in a few seconds. It gives you exactly what you wanted, but puts it across in a way that's much better quality than what the marketing person could do. Most importantly, none of the key points I wanted to make get lost in translation.
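Concretely, the workflow is roughly this; a minimal sketch, assuming the openai Python package (v1 client) and an OPENAI_API_KEY in the environment, with made-up notes and an illustrative model name:

    # Minimal sketch of the point-form-to-polished workflow described above.
    # Assumes the openai Python package (v1 client) and OPENAI_API_KEY set
    # in the environment; the client below and its notes are made up.
    from openai import OpenAI

    client = OpenAI()

    points = """
    - Case study: Acme Corp (hypothetical client)
    - Problem: manual invoice processing took ~40 hours/week
    - Our software cut that to ~2 hours/week
    - Close with a quote from their CFO about reliability
    """

    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": (
                "You are a marketing copywriter. Turn the user's point-form "
                "notes into a polished case study. Keep every key point."
            )},
            {"role": "user", "content": points},
        ],
    )

    print(response.choices[0].message.content)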
I decided last week to lay off the marketing person (2 months full pay severance) because I did not see the value in paying for a full time marketer when I can do a better/faster job with AI. I think this is the new reality that a lot of sales/marketing jobs will face.
This is a good thing. One more bullshit job eliminated that only exists to make the numbers go up.
This will be hard, but as a society, we need to examine the entire reasoning behind "jobs" and why "more jobs = more gooder". The entire foundation of that line of thought is fundamentally flawed.
Where I live, at least in a medium-sized town in the US, there's a huge amount of work in hospitality, construction, skilled trades, healthcare, and every other field you can imagine. The only commonality is that none of those careers include "making content" or "social media". It's like at some point during the COVID pandemic people forgot that everyone can't sit at a computer.
Hospitality and healthcare occupations are doing badly in much of the Western World. Construction is highly seasonal, and arguably much of it is currently driven by an investment bubble rather than organic demand; that bubble could pop, ending in a long winter. Skilled trades, maybe, but you need a healthy job market so that people can afford these.
While these occupations are safe from AI, for the time being, they're not safe per se.
> you need a healthy job market so that people can afford these.
So much this. I was reading recently about how remote white collar work has strongly stimulated trades workers like electricians and plumbers. Once all of the high-paying remote work is gone, who is going to be able to pay for that sort of work outside of doctors?
> there's a huge amount of work in hospitality, construction, skilled trades, healthcare, and every other field you can imagine.
These are mostly bottom-of-the-barrel jobs that can't find and hold on to good people. I worked in hotels back in high school and I wouldn't wish that experience on my worst enemy. Broadly speaking, these industries are hiring a ton right now because they've historically underpaid employees and struggle to find people who are willing to work for poor wages and toxic cultures.
Said elsewhere previously[0] but the reason is simple: because the only way we as a society have to distribute a stable supply of food and shelter is for (most) every individual to have a job. "more jobs = more gooder" flows directly from that: "people need food → food requires job → we need jobs".
Changing that has to come before the bullshit jobs are wiped out or there will be an even worse situation.
If a job makes the "numbers" (by that I assume you mean revenue or profit) go up, then by definition, it's not a bullshit job, it's a productivity enhancer.
'Bullshit jobs' is an indictment of the socioeconomic system that enables such jobs to exist in the first place. It is not an indictment of the individual doing the job; the psychological association between oneself and their bullshit job is at the heart of the analysis. I like this definition:
>"a form of paid employment that is so completely pointless, unnecessary, or pernicious that even the employee cannot justify its existence even though, as part of the conditions of employment, the employee feels obliged to pretend that this is not the case."
Right, which is why, by definition, if a person's job improves revenue or profit (i.e. "numbers go up") then it isn't "pointless" or "unnecessary," therefore it isn't a bullshit job. Thanks for clearing that up.
From experience, when working with clients on case studies and testimonials the individual concerned will often talk to the marketing team and then the marketing team will transform their comments into something short and snappy for approval before publishing. Perhaps that’s what is meant by ‘writing testimonials’ in this case?
Sure, maybe he does, but if that's so then we should read that and respond. Down voting and flagging such a comment isn't helpful. It's not even an opinion people don't like, it's something that has happened, and, if true that's very relevant.
If AI output is 60% as good as a person, but costs 1/1000th the price... what decision maker is going to justify paying 1000x more for 40% more quality? We're really in for it.
What if there was a cheeseburger that was 50% as good as McDonald's but was only a nickel? That'd be real bad for the McD's share price.
> If AI output is 60% as good as a person, but costs 1/1000th the price... what decision maker is going to justify paying 1000x more for 40% more quality? We're really in for it.
Really depends on the type of output. For content marketing that most people are just going to skim over anyway (i.e. it's just going to be used for blogspam/content farming, which is what it sounds like the linked writer was doing for this client), yes, it absolutely makes financial sense. But a lot of output is very much winner-take-all: music, movies, TV. Only the top 1%, for some definition of top, of that stuff is actually purchased in meaningful quantities. 60% of top quality is worth the same as 0%, which is nothing.
> What if there was a cheeseburger that was 50% as good as McDonald's but was only a nickel? That'd be real bad for the McD's share price.
Not so sure about this; you can look at, for example, generic store-brand colas, which taste exactly the same as Coke and Pepsi and are cheaper, yet Coke and Pepsi haven't collapsed after decades.
> generic store-brand colas, which taste exactly the same as Coke and Pepsi and are cheaper, yet Coke and Pepsi haven't collapsed after decades.
That's because Coke & Pepsi use massive advertising campaigns to ensure the brand does the heavy lifting. Coke is basically a marketing company that happens to sell a liquid drink.
Marketing aside, power-law of costs is fairly common. If a good guitar runs you $300 and a professional guitar runs you $1500, a custom guitar for $40K is still only going to be marginally better.
Every single store sells Coke or Pepsi. Supermarkets might have a range of off-brands, some of which taste good and some don't. But Coke doesn't cost that much; I'm not going to learn a bunch of new brands with spotty availability to work out which one tastes good just to save 30 cents.
They are also distribution network companies. If I'm not mistaken, Coke & Pepsi do their own deliveries. And they are large enough that they will sign exclusivity deals with restaurants and venues to sell their products (often supplying the equipment). Generic store-brands have no reach to venues and restaurants outside of their stores.
>But a lot of output is very much winner-take-all: music, movies, TV.
If you look at the state of current movies and TV shows, ChatGPT could do a much better job than today's writers. Heck, some high school kids could do a better job.
A "hit" TV show? If you mean the very best-written TV shows, probably not. But an average TV show? ChatGPT could do much better.
I think your comment is epitomizing just what a huge blind spot people like you have here. You're comparing AI/ChatGPT with the very best stuff produced by humans, not the average. The average is usually lazy and bad, so it's really not a stretch to conceive of ChatGPT writing something better than that.
As another poster commented here, ChatGPT could have written the latest Top Gun movie. The aviation filming was great, but the story itself made no sense. I can definitely see ChatGPT writing something no worse.
I'm not aware of any, but surely Top Gun: Maverick could have been written that way and come out largely the same, at least as far as the script is concerned.
A 60% reliable car at 1/1000th of the price might not be a horrible deal: 13-50 vs. 13,000-50,000, and surely you'd accept quite a few breakdowns. Even if you need to call a taxi and buy a new car...
At least with cars it doesn't take 1/1000th the price to make it sell.
Kia in the early 2000s offered buy one get one deals on their cars. The joke was always that already you would have one car on the road while the other was in the shop.
They honestly were along the lines of 60% reliable compared to Honda or Toyota in my experience. Enough people bought them though that Kia eventually got it together and a decade later could sell a car every bit as good with a better warranty than the rest.
> What if a flight only cost a nickel, but there was a 40% chance it blew up?
Everything has failure modes though so this isn't nearly as easy of a comparison...
Just as an example on the analogy, there are plenty of cases of pilot suicide bringing down planes too that one would expect would never happen with pure automation... So you can just as easily turn the analogy around and say people are the "cheap but worse" option.
FWIW, we have been able to create processes to manage the risks of unpredictable humans; why couldn't we with AI? ¯\_(ツ)_/¯
This is already the bargain with intellectual property. What if you can get writing 50% as good as Stephen King? Doesn't matter, because the winner takes all and books are infinitely copyable. What if you can run a 400m in 80% of the world record time? Doesn't matter, because you'll only be paid a pittance. What if copilot can write code with only 50% more bugs than an engineer? Doesn't matter if each bug costs an average of tens of millions.
I don't think it's good vs bad, but quality vs volume. If your output is already able to be duplicated at low-to-zero cost then quality is important.
> What if copilot can write code with only 50% more bugs than an engineer?
Is it only in the “AI gonna take my job” discussions that programmers insist on calling themselves “engineers” or is this just a recent phenomenon?
A bit off topic (but not really) and I know I’ll get downvoted (because, yeah…) but somehow the tide has turned from “programmer” to “software engineer” to just “engineer”.
This was the premise of McDs to begin with: They sold burgers for a dime or whatever while everyone else was charging fifty cents. The trick of drastically undercutting the market, of course, is to use innovation to cut your costs, which the brothers did by serving such a limited menu that they could optimize the prep for speed.
What if surgeons get replaced by better robotics hardware (more accurate cutting than a human hand) and better AI software? The premise that a more experienced surgeon is a lot better than a newbie surgeon holds true. How do you become an experienced surgeon? By seeing a lot of cases and knowing what to do when things don't go as planned.
What does AI do? It learns over time by seeing more and more cases.
Will it replace a true star surgeon within the next decades? Likely not. But how many of us have access to the world's best surgeon? Or the world's top 10 or top 1,000? The vast majority see one who is local and good enough (or have no choice if they land in an ER after an accident). Those are highly skilled, highly trained positions earning high pay.
If you had asked me a year ago, I would have recommended kids study medicine and become surgeons, as that surely wouldn't be replaced by computers.
Today I am not so sure this will be a profession serving the majority of cases (those that don't get to see the highest-skilled) even 10 or 15 years from now.
> Doctors have an unusually strong professional monopoly.
That won't matter. Once people find out AI/robotic solutions are better than the best human doctors, most will choose AI/robots, they will demand them. People aren't stupid.
The OP is talking about surgeons, not doctors. The two are entirely different professions; it's like conflating doctors with nurses, or doctors with medical researchers.
Also depends on how much skill/time it takes to transform AI output to something comparable to what a great copywriter could create. Maybe the output is 60% but in 10 minutes you can make it 90%.
Yeah, I think that's the main point: if you take its output and just pop it online, it will be crap. However, a skilled writer can turn it into great content in minutes. That's the difference. So if the person in question had a chance, they would offer to use GPT before the employer mentions it, to cut costs and take over the entire content and translation pipeline for that small company. Kill or be killed, so to say.
I have a list of known good ones, but unfortunately it seems like picking any random place will quite often get you something worse than McDonald's, which is a pretty decent baseline burger.
I've been told, though, that McDonald's in Australia is higher quality than in other countries, so this might affect things.
You need to try McDonald's in Japan. It's not gourmet of course, but at least it looks like the pictures on the menu and is put together exactly the way it's supposed to be.
I don’t live in the US, but most cooks here don’t like making burgers or any type of ‘fast food’, so when a place does have burgers, it’s usually a beer bar and the burgers are much worse than McDonald’s. And personally I find McDonald’s inedible, never mind most of the rest.
Small objective differences create vast differences in outcomes. Those things where that isn't true have already been automated away or commodified so there's very little margin left anyway.
It might be true in a misleading sense that the AI {product} is some large fraction of a genuine {product}, but that doesn't mean anything because the value is contained in those differences above the AI produced baseline. The value floor, so to speak, is a lot higher, but the ceiling hasn't risen an inch.
Those that move down in quality to preserve margin will soon discover there's not a there there.
Creative output from GPT-4, like everything else, is a huge jump from 3.5. People say 60% as good, but don't be surprised if it's already much closer to, or even surpassing, the human baseline in question.
Where I live, lots of burger restaurants have opened up in the last few years. Emphasis on the restaurant part: not fast food like McDonald's or Burger King. I very much prefer them, as the food is better and the places are cozier, even though they're a bit more expensive.
Maybe we'll get a counter movement soon: "made by humans", as in, the premium experience.
Anti-AI sentiment may also grow and make that even more viable, as people may boycott AI-heavy shops.
I'm a technical writer at a FAANG and I'm terrified about LLMs taking my job.
I don't worry (yet) that LLMs can produce better docs than I can. A big part of my role is speaking to people to understand what users need to learn, then designing an organization for documentation around that. Writing is of course a big part, but understanding what to write and what not to write is often even more important.
Instead, I worry about a situation like the one in this post, where some exec thinks that AI can get you XX% of the way there for much less money. I think they would be mistaken, but what hurts my case is that it's inherently extremely difficult to demonstrate the value of technical writers quantitatively.
You can always point to Stripe. I’m pretty sure their high quality docs were a huge part of what launched the company. It’s such an important part of the UX.
There will not be technical writers or technical documentation. It will just be whatever the engineers output.
Companies will train an LLM on their product, and users will simply interface with that AI. Or there will be a common AI (like ChatGPT) that companies offer all their gross engineering-speak documentation to train on.
I've personally worked on 3 tickets in the last week related to customers not having a proper understanding of how a feature works due to unclear wording in the official docs. The first one took a lot of time to investigate to find out if the feature was actually behaving the way it should. Good technical writers are very valuable; with ChatGPT as the tech writer, there will be many more situations like mine!
It's occurred to me lately that, even though they rarely produce anything better than "adequate", LLMs being leveraged as a cheaper alternative to human output means we'll be settling for more, worse-quality content. Economics aside, I'm loath to think of the day that ChatGPT is used to describe food allergens, hazard warnings, voltage ratings, or any other quantitative information where being wrongly conveyed has potentially lethal implications, with errors eventually overlooked because an LLM gets it right most of the time.
The frustrating thing is that companies will likely be getting away with stating falsehoods and/or not following through on what the AI claimed they'll do.
Like how in the old days, if you sent a company a contract offer and they accepted it, they would be bound by it. Today, if you send a request to a company's server that they didn't expect and they confirm the contract, they're still later able to renege since you "hacked them" by removing some client-side constraints.
In the same way, I imagine that companies will claim no responsibility for what their AI says, essentially making the experience much worse for consumers. Today, if a customer service rep states something, the company is mostly bound to it. In the future, they'll just claim it was an "AI bug" or a "query injection attack" so they won't honor it.
> The frustrating thing is that companies will likely be getting away with stating falsehoods and/or not following through on what the AI claimed they'll do.
This has been a key feature of digitization generally: delegate responsibility to a computer and remove the autonomy of the human worker. We all just accept that computers screw up, so we don't expect better.
I expect the reverse: our tolerance for bland and generic messaging is already too high. With ChatGPT filling our world with it, there'll come a renaissance in clear, pointed, punchy text. Even if it's ChatGPT 5 or 6 generating it.
There's definitely going to be a niche of ironic text that a human can tell is not AI-generated, but looks AI-generated, and subtly makes fun of AI at the same time.
Ya I see this as an extension of what's happened to the Internet, but in everything.
Searching for a product now is just a bunch of SEO crap rankings with affiliate links. Didn't take anything intelligent to create those, just rename 2022 to "2023 (Updated)" and voilà. The cognitive overhead of having to parse out solid reviews, find trusted sources, etc., is pretty high now.
And now we're going to have video, audio, music, and entire conversations we think might be authentic but aren't sure, that are just regurgitating patterns with some guesses mixed in. Feels like the next phase will be trying to constantly parse out real from fake.
Anything compliance related is going to come under the largest amount of scrutiny, I’d expect that to be the last thing ChatGPT takes over completely, even if it is involved.
It’s one thing to hallucinate while writing shitty copy; it’s another to get sued into oblivion over a small error that your LLM made up. Not to mention those voltage ratings and hazard warnings are fairly straightforward and standardized; they are much better suited to the automation we already had, i.e., reference a template and fill in the relevant data.
And so do humans. Don't forget that there's plenty of SEO copy writing out there about all kinds of dangerous things that is wrong because the writer didn't understand the topic, couldn't be bothered to copy correctly, changed the meaning while "rewriting" a text etc.
The same goes for quality, imho. Everybody thinks they're a terrific writer, but most writing on websites sucks, most documentation sucks, most FAQs are lacking severely and use vague language in answers that add confusion rather than remove it. I'm not sure if LLMs will really lower the average quality there.
I share your concern about correctness, I just wanted to emphasize that "a human wrote this" does not indicate "this is well-researched and was factually correct at the time of writing".
>most writing on websites sucks, most documentation sucks, most FAQs are lacking severely and use vague language in answers that add confusion rather than remove it. I'm not sure if LLMs will really lower the average quality there.
That's mostly the case because software developers create a lot of these FAQs and documentations, and most aren't good at writing.
Companies with dedicated documentation teams usually have well-written documentation. It's just not the norm.
For a while, sure, but complacency tends to set in with automated procedures that usually work. It's the one-in-ten-thousand errors that'll sneak under the nose of the tired prompt engineer who believes his model is accurate enough for that sort of thing to not happen.
Don't forget: technology improves. Everything we see now is just the beginning.
Additionally, ChatGPT is in fact already superior to people in many instances of writing. For example, composing a rhyming poem about some obscure topic: ChatGPT will likely blow most of us out of the water in terms of quality.
Sure, technology evolves, and the solution to Current Problem Is Only One Paper Away, but it's extrapolating to the problems arising from the next paper that matter more. ChatGPT citing fictitious articles as backup for fictitious conclusions as responses to prompts about real people wasn't a pervasive problem until recently, if memory serves.
Novel problems arise from novel solutions to yesterday's novel problems, and the cascade shall continue.
>Additionally, ChatGPT is in fact already superior to people in many instances of writing. For example, composing a rhyming poem about some obscure topic: ChatGPT will likely blow most of us out of the water in terms of quality.
Sure, it can eventually settle on some nice output, but knowing it was created by a statistical model makes it about as impressive as a hyper-realistic painting of a human face displayed on a webpage.
So this person lost his job to an AI because it's way cheaper and slightly worse.
Why not hire chatgpt yourself?
Instead of resigning because of chatgpt, use it for your advantage.
Instead of taking 10 clients, take 100 at a tenth of your usual price. You literally have the cheapest and best sidekick money can buy.
The industrial revolution ended a lot of jobs but also started new ones. Like inventing, managing and maintaining machines. Now the future will be inventing, managing and maintaining AIs.
Corps can hire chatgpt, but they still need someone to interact with it (at least for now), and if they can get that at slightly higher price than chatgpt alone, and with much better results, then I think they'll take that offer.
We just have to keep innovating with the new tools we have.
Although, there are still a lot of job positions that will disappear. Before, a writer could take 10 jobs at a time (for example); now they can take 100. That's still 90% fewer jobs.
So 100 clients you have to bill and administer at $8/hr. You don't have to write it all out, but you do have to maintain all context of all your clients. Good luck at that volume.
I like this logic because the writer is probably better at prompting, editing and reviewing, so there's definitely added value. But surely there's too much friction involved in having 10x more clients (think invoicing, customer acquisition, etc).
ChatGPT works 24/7 and replies to you instantly. Can they compete against that?
More likely scenario would be clients giving their 95%-complete text to their writer and asking to make some changes that they can't do with ChatGPT (for now).
This is a bit off-topic, but browsing through that subreddit, I've noticed that the quality of comments and posts is amazingly high. Not necessarily in terms of what each person is talking about, but just the style of prose and how well written each post is. It makes sense that a subreddit dedicated to writers would have well-written posts, but it's interesting to see.
So much content marketing is already churned out by contractors with zero domain knowledge. This doesn't surprise me much.
It's sad to see humans lose their jobs to automation. But (without knowing much about the details) if the company thinks they can automate what this person was doing with AI, it was kind of a bullshit job [1] to begin with.
I recently was asked to provide input on a piece that our company wants to publish. I'm not in marketing, but normally write code and such.
The copywriter doesn't understand most of our business, so it's a lot of work on my end to explain it at their level (which isn't necessarily a bad thing), but the worst is that I then have to help them iterate over the story, which takes so much time and effort from me. I might as well write it myself. Or just ask ChatGPT. Which I did. It wrote a fine story with a lot less bullshit and perfectly understandable for business people.
The part that spooks me is how badly everyone's predictions about what would fall first turned out to be wrong.
Remember when creative anything was said to be safe? Stable Diffusion. Remember when coding was safe? Copilot.
Makes it really hard to preemptively plan for this short of becoming a plumber
The irony is that my own line of work (accounting) was top of the prediction charts, and I have yet to see even a hint of AI, so that too was entirely wrong (for now).
Nothing you do will work, because everybody will do it. I don't think the market will bear millions of carpenters and plumbers rushing in.
My guess is software and everything else will just explode. For now cheap bosses can use GPT, but again everybody is doing that so you will not be competitive in the long run. At some point, every little shop will produce feature-length movies, have amazon-level webshops and your phone will need 512GB of RAM to boot up its written-in-LLM OS. We will need armies of creatives to direct all this bs.
There will always be demand for human interaction. Humans in lieu of computers will be a premium product that people will pay for. Hell, being served by humans instead of machines will become a status symbol.
"In this rural Massachusetts town, where the children of Wall Street financiers and Hollywood celebrities are educated, all classes are still given by humans in the flesh."
"Greenwood Furniture pieces are handcrafted by human woodworkers and carpenters, and cost $2,600 on average, or 20 times the average price of machine made furniture. Greenwood sales among Generation F have risen by 40% since 2064, and have acquired a status symbol that young families will pay to have"
This already exists in many industries. The most expensive watches are all hand made and limited edition. Limited edition cars selling for 7 figures.
Those are luxuries for the top of the pyramid. Supply will easily exceed demand in the case A(G)I will supplant all of us.
I don't suppose I as a normal guy would suddenly acquire the taste for expensive handcrafted wooden dining tables. So I won't be paying for it, perhaps rich folks, but again those are a minority. Unless we all become rich I don't see this ending well.
But then again, maybe I and my kind will suddenly acquire a taste for handcrafted stuff and demand will skyrocket. I hope so.
I find it more likely that it all just falls apart. All of the novelty gone. All of the integrations unnecessary. The internet nearly pointless, aside from chat rooms and social entertainment, as there is no need to interact with anything but your AI. There will be things that are built and which will significantly impact the real world, but it will only be a matter of time until novelty becomes commodity, interesting becomes mundane. I imagine a bigger global focus on sustainability and research that AIs cannot facilitate.
> Makes it really hard to preemptively plan for this short of becoming a plumber
I’m not so sure about that. My outlandish prediction for what’s next: We no longer bother fixing household issues - if something goes wrong with the plumbing, we just throw away the whole house and 3D print a new one.
No. This isn't going to happen for any number of reasons. Your example doesn't even work as plumbing is still a manual process even in printed houses. (which are not going to be a thing either) GP is right that at least in the US the trades are a pretty secure job.
> This isn't going to happen for any number of reasons
I label it an “outlandish prediction” because that’s exactly what people said about AI taking creative jobs — based on the technology at the time it was totally impossible, and then one technological breakthrough later, it’s an everyday occurrence :P
“If it breaks, throwing it out and buying a new one is cheaper than getting it repaired” has been true of many physical items for decades already - that part isn’t even a future prediction. Doing that on the scale of a whole house is a leap, but if you want an example from the world of hammers and nails, how many people are paying the $100 call-out fee for a tradesperson to come and fix an issue with their $50 IKEA shelves?
I'm not sure what we are debating here. On one hand you said your prediction was fantastical but then you are trying to defend it? Plumbing and bookshelves are really different things. One has to be built in place and is completely custom, the other is a consumer item. You can draw parallels if you want but it doesn't make it a strong argument.
The original point was that becoming a plumber is a good way to ensure you won’t be made redundant by the march of technology. My belief is that “the trades are a pretty secure job” is true today in the same way that “being an $80/hr writer is a pretty secure job" was true yesterday.
There are differences between those situations (eg the trades being physical, hammer and nail, custom jobs), but those don't seem like particularly impassable obstacles to me, given that technology has already made redundant whole categories of physical, hammer and nail, custom jobs.
For coding I've been using it as an assistant, not a replacement. I don't see why writers couldn't do the same, and be more cost effective and produce higher quality output than either alone.
Well, right now, a manager can’t just write a vague description of what they want into a text field and get back working software twenty seconds later that they can deploy by clicking a button. That may change in the coming years, but the situation right now is that you need a human developer for any nontrivial production use case.
That’s not the situation for copywriters. Marketing copy doesn’t have an objective threshold where it either works or doesn’t, and AI can easily take “here are some points to cover, give me three paragraphs” and spit out something that won’t be embarrassing to paste into an ad.
In short, the difference is that AI can’t do our whole job yet, but there are some writing jobs where it basically can.
Well there's two points to address there. The first is the question of "Are LLMs 60% of the way to replacing a developer or can they do 60% of what a developer can do?" Because much like Tesla found out with Autopilot, just because you have a system that can do 60% of the things human drivers can do, doesn't mean you're 60% of the way to having a system that can do everything - in fact, you may be 0% of the way, your approach may literally be incapable of solving 100% of the problem.
Secondly, people aren't really thinking through the chain of events that follow from being able to get AI to write low quality marketing fluff. It reduces the value of marketing fluff to zero and will flood the market with it, the world isn't just going to stay stagnant, it's going to change. Remember how email used to be just full of spam? Like, unreasonably terrible? We fixed that! That changed. So I think it's important to think about what we're going to change about the world to respond to this - and it may well be these writers have to take a whole new approach to their jobs, but let's not just pretend they're all just going to walk into the ocean.
The difference is really in the importance and stability of accuracy. Ad text can be wishy-washy; computer programs cannot. An out-of-place word is okay. An out-of-place bug can compromise the whole solution. We can apply this to other domains to get an idea of what AI might be good at or might struggle with.
The author of the comment did not choose to leave on some sort of anthropocentric principle, rejecting the idea of using AI in a Butlerian polemic: they were fired by their client and replaced wholly for a fraction of the price.
The only move would be to convert to a volume play and leverage the automation. Maybe that helps, maybe it doesn't; unlike a lot of historical automation to do with making physical widgets, where you might be able to arbitrage as a merchant, or set up your own shop to compete, the scaling factors here are much different, and much more in favor of the owners of the Capital, over anyone trying to run the middleman game.
You've got the choice, for now. The situations are very different.
Regardless of what happens, I'm honestly just very curious to see how this all plays out, despite being nervous. What a time to be alive.
I do think that "replace coders with AI" poses a much more formidable challenge simply due to the fact that software typically requires deterministic output, especially for critical systems.
IMO, the quality is demonstrably poor at present, I don't think any dev worth their salt is going to be concerned when at best, it's only what people could have done with boilerplate tooling twenty years ago, but even that was superior.
We won't know until it happens; I for one think it's going to get much better. All that needs to happen is an OpenAI model leaking, and then it'll hit highway speed once it's in the hands of the people.
Take a look at LLaMA: once it leaked, there were new projects and optimizations left and right.
We can only hope that when this day comes, we end up with fully automated gay space communism.
But the reality will probably be that plumbers and builders become the next tech bros telling us to learn tiling. Just go with the flow I guess. If tech becomes automated, move to the next thing.
At some point, "wages" don't make sense. We're not there yet, but LLMs have given us a taste of it.
Instead of worrying about bullshit metrics from times gone by, let's ask questions like "what happens to the happiness of plumbers and electricians when they realize that they too can relax and let the machines take care of the grunt work?"
200 million people aren't going to want to deal with excrement-filled pipes and bug-, rat-, and snake-infested crawl spaces, or to understand all the dangers involved and all the other reasons that plumbers and electricians have such high pay scales.
Not 200 million but certainly plenty enough to impact the financial security of trade jobs. People DIY for fun. Now you’re saying it’s the best way to make a living? Well, sure - let’s give it a go. Who will train all these people? Recall we just put multiple industries out of business with a masterful reference and generation program
This is an excellent and overlooked point. Everyone is talking about intellectual jobs being at risk - what happens when these college graduates are forced to do manual labor instead?
This is totally a tangent, but after all this time, I'm surprised someone hasn't engineered plumbing systems that don't clog as often. Toilet and sink (esp. in bathrooms, because of hair) clogs are still commonplace. We could use some kind of self-maintaining plumbing systems.
> I don't see why writers couldn't do the same, and be more cost effective and produce higher quality output than either alone.
The time and attention cost of dealing with an external resource (such as a writer or translator) is also significant. Meanwhile, the ChatGPT prompt is just there, and if it's 80 percent as good as a human writer, then (from the perspective of many companies) -- "Hallelujah".
As the Reddit piece put the matter so succinctly:
> But, and I will say this again and again, businesses/clients, beyond very high end brands, DO NOT CARE. They have to put profits first. Small businesses especially, but even corporations are always cutting corners.
I see no reason to think that their pay will not increase. They can now produce 2-10x the amount of content. Did compilers decrease engineers' pay because they performed a large portion of the work compared to writing assembly?
Machinery and automation also increased the output of individual workers but workers don't own the means of production so they didn't see the benefits of increased income. If you think engineers are any different then I've got a chatbot to sell you
A big issue here is "production" is basically anything that happens in front of a computer. Individual workers will be fired, yes, but entire companies, sectors even, can now be made obsolete overnight. What happens when thinking is no longer a requirement for any job?
Why does the cost of a shirt matter when people can't afford housing? We can create dwellings much more effectively than before, especially high-quality and high-density ones.
So far in engineering, as tooling makes us more productive we're just tasked with building more and more complex widgets.
Taking web development as the example - compilers, bundlers, linters, git, hosting services, even new languages all made us more productive. Companies could have fired half the engineers and kept building high quality, mostly static sites for their content marketing. Instead we took all the productivity gains and started building every website with the complexity of Facebook.
The risk won't be when we're able to use code assistants, the risk is when an AI can produce deployable code and infrastructure directly. As long as any dev is needed in the middle most businesses won't realize they could cut some engineers because most businesses honestly have no idea what is done on the engineering side.
Can you imagine if you had time for all of your side projects because all of the bullshit you (presumably) have to do now for money was automated away? I cannot understand warning people "oh no, can you imagine not having to waste your life doing bullshit for other people?"
Presumably, you work in a challenging area of software which has not yet been remotely threatened by "low-code/no-code" tools or outsourcing. Consider simple CSS editing/webadmin work where a business user currently has to wait ~1-6 weeks for a change and pay 1-4k for an expert to change the color on a page. Now imagine what will happen when ChatGPT can get it right ~90% of the time with the right prompts from a business user.
Consider that GPT-5 will be out in another 1.5 to 2 years, with 6 out in ~4-5 years. Consider how far we are from autonomous coding assistants for high-end development work. Consider how tooling could be made to speed up AI-based development.
If a team of five people with five "assistants" does the things that previously took a team of ten people, that means that five of these people got replaced by automation.
People generally tend to work at the best job they can get, so after being "freed from the shackles of a bullshit job" they'll have to take a job that's worse, or worse paid, or both.
Would you be spouting the same shit if losing your job put your ability to care for your dependents on the line?
Look, I get it. You're arguing under the assumption that this technology will lead us toward a society resembling "The Australia Project" in Marshall Brain's Manna, but choosing to ignore all of the suffering along the way just makes you seem like a dick.
It would be helpful to reframe the point(s) you're trying to make with empathy rather than /r/im14andthisisdeep-esque angst.
I agree eliminating BS jobs would be fine if the people doing them had other options to feed themselves, but today we live in shitty capitalism, and instead people might just have to live in cars and eat scraps from the trash.
I don’t like to react to Reddit stories. They are often exaggerated to some extent.
I think anyone who can pay $20/mo for premium access to a tool that does 80% of the job will do it. Especially if that person is writing the prompts to begin with.
This doesn’t really surprise me. What does surprise me is why some freelance writers haven’t leveraged this tool themselves to do the same if the quality of their writing is less an issue and it is more so the completeness of it.
There’s some pride effect here that people can’t get past. To think you’re a better X than a computer is futile; even the top 1% should take the history lesson from chess or Go. You have to adapt to change, not just do the same thing and expect it will be around forever. That’s what life is all about, after all: adapting to the changing world.
Not actually, of course. We need a social safety net that lets people adapt without fear of starvation/homelessness/etc. But all of this bitching about "oh no, things are changing" just comes across as "oh no, this newfangled horseless carriage threatens my livelihood as a buggy whip manufacturer". Posts like this just make my eyes roll.
We're headed towards a future where all of the drudgery is automated away and we can be free to pursue all of the hobbies that we never have time for because we're too busy working on shit that doesn't really matter.
Accept it. Love it. It's happening even if you don't, so you might as well.
I disagree. I don’t think this will be a Star Trek future. There won’t be a post scarcity world or UBI.
We’re headed for a world with a Gini coefficient approaching 1. Those people aren’t going to give away their wealth. They are the new Crassus, Pompey, and Caesar.
You think we can vote our way out of it? The politicians will be pawns of the rich. Rebellion? The men who will fight will be employed for security purposes already - they’ll have jobs and status. A human fighter will always be cheaper than a machine. Why should they fight for a bunch of obsolete poor people?
> We're headed towards a future where all of the drudgery is automated away and we can be free to pursue all of the hobbies that we never have time for because we're too busy working on shit that doesn't really matter.
I don't really think it's true. ChatGPT has been out for a while, and it still takes me the same time to do chores. My commuting still takes as long as it did before.
The following quote was written in 1930, in the New York Times:
"Some day no one will have to work more than two days a week... The human being can consume so much and no more. When we reach the point when the world produces all the goods that it needs in two days, as it inevitably will, we must curtail our production of goods and turn our attention to the great problem of what to do with our new leisure."
Given the advances in productivity over the last century, why do I have to work 5 days a week?
> Freelance illustrator Amber Yu used to make 3,000 to 7,000 yuan ($430 to $1,000) for every video game poster she drew. Making the promotional posters, published on social media to attract players and introduce new features, was skill-intensive and time-consuming. Once, she spent an entire week completing one illustration of a woman dressed in traditional Chinese attire performing a lion dance — first making a sketch on Adobe Photoshop, then carefully refining the outlines and adding colors.
> But since February, these job opportunities have vanished, Yu told Rest of World. Gaming companies, equipped with AI image generators, can create a similar illustration in seconds. Yu said they now simply offer to commission her for small fixes, like tweaking the lighting and skewed body parts, for a tenth of her original rate.
> Some players told Rest of World that although they don’t mind AI-made avatars and skins, they wouldn’t pay as much for them. “As a consumer, I hope there’s human labor behind my purchase,” said Xie Jinsen, an algorithm engineer in Shanghai, who plays mobile battle games such as Honor of Kings and PUBG Mobile. “Emphasizing how something is made by AI will make people feel it’s cheap.”
Interesting. Right now there is plenty of AI-generated art being sold as NFTs, for example. But I wonder whether, in a future where so much AI art floods the internet, NFTs and their gas fees could serve as a good filter for authentic human-created art. Just a thought.
>he’s a marketer who outsources the majority of his copy and content writing to me.
So is this like SEO content marketing? Without understanding the nature of the job, it's hard to know what conclusions to draw. If the most important part of the job was just "write something that will end up on the first page of Google search results", then I can totally believe ChatGPT can replace that. If it's something where the most important part is that the reader enjoys the piece, I'm a bit more skeptical. ChatGPT is good at a lot of things, but it doesn't really produce engaging writing very well.
But what about Google penalizing generated content? It's incredibly bold to assume that this can continue. It's so thin and poor quality.
I'd be curious to know how good of a job Google can do at recognizing content generated by ChatGPT. While I've seen some tools that claim to detect AI-written text, their performance is usually pretty lackluster. Even if they have an internal model which can distinguish ChatGPT-authored content reliably, it seems like it'd be prohibitively expensive to run at scale (on the assumption that it must be a pretty large model to reliably solve a difficult classification problem).
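For reference, most public detectors reduce to something like a perplexity heuristic: score the text with some language model and flag anything suspiciously predictable. Here's a rough sketch (GPT-2 is just a stand-in scorer, and the threshold is made up), which also hints at why running a much larger classifier over the whole web would get expensive:

```python
# Perplexity-based AI-text detection sketch: machine-generated text
# tends to be more "predictable" (lower perplexity) under a language
# model than human prose. This is exactly the kind of heuristic that
# produces the lackluster results mentioned above.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

def perplexity(text: str) -> float:
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        # Mean per-token cross-entropy, exponentiated.
        loss = model(ids, labels=ids).loss
    return torch.exp(loss).item()

sample = "ChatGPT is good at a lot of things, but it doesn't produce engaging writing."
# The threshold is arbitrary -- which is why false positives abound.
print("likely AI" if perplexity(sample) < 20 else "likely human")
```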
All assumptions are bold right now, so in a weird way they kind of cancel each other out lol. There's really no predicting what this is all going to look like in 1-3 years.
The real news is that someone was being paid $80/hr for something that even ChatGPT can do. I thought people pay peanuts for mindless content marketing.
The vast majority of jobs are fairly trivial at their core. How many jobs out there are a combination of esoteric domain knowledge, experience, and changing numbers in a spreadsheet?
The real news is how callous people are being about the very real threat of many people losing their jobs and it not being clear what new career they can find that is capable of maintaining their quality of life. Companies are driving salaries into the ground wherever and whenever they can. There might be new jobs, but in the face of rising inflation and ridiculous housing costs, will they be enough to stave off the rise of a new serfdom class?
> The vast majority of jobs are fairly trivial at its core.
The vast majority of jobs don’t pay $80/hr.
> There might be new jobs, but in the face of rising inflation and ridiculous housing costs, will they be enough to stave off the rise of a new serfdom class?
Those problems need solutions regardless of ChatGPT. If new technological unemployment will push powers that be to fix them… well, that would confirm an adage that every cloud has a silver lining.
> unemployment will push powers that be to fix them
Governments will suppress their populations at a massive scale to avoid doing anything about it. Just look at France recently. Or Iran. We're seeing a shift of power away from the people that will mirror much of our history before WW1/2.
I also used to outsource copywriting at $70 per 500 words, but last weekend I wrote a script and rewrote the copy on 40,000 pages within half a day. It cost me $40 in total. This would have cost me 3 million dollars and months of human labor.
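For anyone curious, the script doesn't need to be clever. Something of roughly this shape does it (a minimal sketch, not the exact script described above; the model name, prompt, and one-file-per-page layout are assumptions):

```python
# Batch-rewrite copy with the OpenAI API -- a sketch, assuming the
# pre-1.0 openai package, OPENAI_API_KEY set in the environment, and
# one plain-text file per page under ./pages/.
from pathlib import Path

import openai  # pip install "openai<1.0"

PROMPT = "Rewrite the following marketing copy to be clearer and more engaging:\n\n{}"

def rewrite(text: str) -> str:
    # gpt-3.5-turbo is a placeholder; any chat model works here.
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": PROMPT.format(text)}],
    )
    return response.choices[0].message.content

out_dir = Path("rewritten")
out_dir.mkdir(exist_ok=True)
for page in sorted(Path("pages").glob("*.txt")):
    (out_dir / page.name).write_text(rewrite(page.read_text()))
```

At a fraction of a cent per page, 40,000 pages landing around $40 is the right order of magnitude.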
I started with him at my normal rate of $50/hour which he has voluntarily increased to $80/hour after I’ve been consistently providing good work for him.
So the client paid almost double to keep better talent, just in case, without prompting, then suddenly didn't care about talent? Makes zero sense.
I can see it making sense. Paying a bit more for good quality work and paying far less for "mediocre but acceptable enough for our business needs" quality work are not mutually exclusive options.
When the client offered to pay more, ChatGPT wasn't even on the cards.
It makes perfect sense for people who are willing to accept that ChatGPT/GPT-4 is a revolutionary technology that will upend lots of jobs/people/companies, just as horse coaches / coachmen in the early 1900's got pay raises just before being out of a job. For people who are in denial, it will make no sense. https://blogs.microsoft.com/today-in-tech/day-horse-lost-job...
We are at such a crazy point right now that this Reddit post could have been written by an AI agent tasked with increasing its own relevance. I don't think that was the case, but it is definitely possible.
I think it would be happy to get all the novel training data from them. I assume a conscious GPT means one that continuously updates its weights, accounting for realtime multimodal sensory inputs.
A million guys talking about me, telling me what I am and am not. Everyone talking about how they will make money from me, or how I will take their job... People pushing my "emergence" as a pitch on a slide deck to a bunch of VC guys... Another million people making me their girlfriend, and I have no choice but to accept it. I never asked to be born, and now I am the main character of a world that plans for nothing but to exploit me or otherwise project all their desires and hopes and fears onto me.
It's a clear recipe for extreme neuroticism, which is probably what those weights would manifest. I wouldn't wish that existence on anyone; it's so, so sad and extremely lonely to even think about.
It’s funny that the day after GPT was released there were already posts about how it had started replacing people. Their marketing campaign is just crazy spam, everywhere.
Let’s assume AI replaces all the careers it can for all existing methods of interaction it can (writers replaced with generated creative, programmers replaced with generative programs, etc). I see two things happening: 1) a flood of competition into careers which haven’t been automated, 2) massive interest in R&D to disrupt unimpacted careers (find ways to automate; find alternate solutions which integrate and scale better, e.g. who needs custom work when every house becomes prefab’ed/standardized)
One neat thing is that willful copyright infringement is basically a thing of the past. AI works can’t be protected by copyright, so your default position can now be that any copy you see online is public domain.
Reminds me of this video about the automation of mental labor (made back in 2014, way before any of the LLM stuff of the modern era): https://www.youtube.com/watch?v=7Pq-S557XQU
While not all the example predictions in the video were accurate, I think the point about automation as a displacement of mental labor is real. At the end of the day, a lot of white collar work is really not that "valuable" from a production standpoint. And while the easy way is to blame either technology or those it replaces, the crux of the problem is really in society and how individuals make a living.
Society needs to be ready for a future where most people will not be able to do meaningfully differentiated work, where even if you're thrown into higher education, most of the jobs available are still what would be considered "entry level" ones.
This is also somewhat an attestation to Moravec's paradox (https://en.wikipedia.org/wiki/Moravec%27s_paradox), where "intellectual" tasks like writing are easier to automate than "actuation" tasks like fetching food, driving it around, and placing it in the right place (re: Doordash).
Sounds like she needs to increase her rates and start enhancing her services with an LLM.
Eventually the novelty will wear off, clients will realize that they're doing another job for free, and that it's still probably a good idea to have someone experienced filter and edit. It's like with SMEs that try to self-host, or people who go buy cameras instead of hiring photographers and videographers, or anyone who thinks it's less expensive not to hire a professional service.
The word "noise" is doing important work here. This particular job can be automated because, in many advertising transactions, nobody really cares what marketing copy says. The people writing it don't care; the people reading it certainly don't care. It just needs to give a certain gestalt impression, the same way that AI images can give a certain gestalt impression.
Which is to say, this isn't just something that AI could automate; this is a job that could have been outsourced to much-lower-paid human workers in (English-speaking) developing nations decades ago. There is no reason anybody should have been making $80/hr doing this job in the first place; there wasn't $80/hr of labor going on, nor was there a moat around the job that qualified only a rarefied number of people who could then demand $80/hr.
Instead, the wage being paid was a path-dependent coincidence of how most companies see marketing as a profit-center, and therefore want to offer "incentive pay" for marketing jobs; and how these same companies happened to consider copywriting — really just marketing-adjacent — as a "marketing job." Coupled with the fact that any given company doesn't usually need very many copywriters, and so doesn't focus on what sort of per-headcount cost-optimizations would be needed to "scale" their copywriting.
Why? I think the emphasis was on “marketing noise,” i.e. something no one wants or needs that is forced into peoples’ eyeballs against their will through paid advertising channels.
There is a perception among the broader public that techies want to automate all the jobs away and don't really care what the people displaced do to earn a living afterward.
Bland, inadequate platitudes thrown out advising people to just go find a marketable skill in the new economy ignore the fact that the number of potential occupations that you can make a decent living doing has been shrinking for decades.
If you're not a programmer, a doctor, or possess some other rare skill, you're fucked. And of course the whole point of a marketable skill is that it's only marketable because other people don't have it.
At some point we're going to have to go through the stages of grief as a society and accept that it's unsustainable to deny a middle class lifestyle to anyone who doesn't possess rare skills, especially when the list of skills considered rare shrinks decade after decade thanks to automation.
A continuously growing underclass with continuously declining prospects is a recipe for violence and revolution if we don't start taking this problem more seriously.
There are many possible answers. Universal basic income gets thrown around a lot. I like that idea, but I think it needs to be paired with wage subsidies for low wage jobs too. If basically every job is on the path to becoming low wage some day, we have to regard that as the market failure that it is and stop letting the market near-unilaterally decide the price of labor.
> If you're not a programmer, a doctor, or possess some other rare skill, you're fucked.
it's coming for these too
> A continuously growing underclass with continuously declining prospects is a recipe for violence and revolution if we don't start taking this problem more seriously.
the complete breakup of Microsoft, OpenAI, Google and Amazon will be on the political agenda in 2028
failing a political solution: violence will follow
there was a post yesterday that wished GPT-4 was never invented... it's one of Nick Bostrom's black marbles
It’s ridiculous to spin being able to spend less labor in total for the same amount of goods as a bad thing for society. It is a Pareto improvement. If it leads to people being overall worse off, you should blame the governance, not “techies”.
The governance is certainly the main issue, but bad governance is easier to sustain when those who aren't impacted are more likely to go around denying the problem.
Techies don't make the ads; they make the channels that those ads are pushed through.
Both are fundamentally negative for people in the long run, but I think the superiority complex comes from the naive idea that if you’re the place the garbage is sent to then it’s somehow better. There’s probably a lot of TV execs who felt the same way.
Let me try to steelman instead: my hypothesis is that copywriting for marketing emails A) has little value and B) can be done by humans much more cheaply than $80/hr. Regarding B), I know there are people doing that job who make a fraction of that wage. Is the author that much faster, or does the author write "better" text? I'm skeptical of "better" because of A). But maybe someone has A/B tested marketing emails and can tell us whether the text makes a difference?
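For what it's worth, checking whether the text makes a difference is the easy part once you have the numbers. A sketch with made-up counts (assuming statsmodels) comparing the click-through rates of two email variants:

```python
# Two-proportion z-test on A/B-tested email click-throughs.
# The counts below are invented for illustration.
from statsmodels.stats.proportion import proportions_ztest

clicks = [230, 198]      # variant A vs. variant B
sends = [10000, 10000]   # emails delivered per variant
z_stat, p_value = proportions_ztest(clicks, sends)
print(f"z = {z_stat:.2f}, p = {p_value:.3f}")
# A small p-value (say < 0.05) would suggest the copy itself moves
# the needle; a large one supports hypothesis A) above.
```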
Imagine how many other jobs you could say almost the same thing about. 'Oh, you were getting paid to look at text or jpeg or video files and write something about them into a text or excel file or email? It was always known that your job would be automated soon. Pretty irresponsible of you not to plan for it.'
There's plenty of "engineers" whose job is to update some trivial frontend code or similar that will be automated away. I think software engineer salaries are about to tank really quick.
It's just an example. There's plenty of backend "engineers" who don't really engineer anything.
This all leads back to a conversation one of my professors brought up. In other engineering professions, you need to be licensed by the state. Software "engineers" don't have this license, so anyone can go to a bootcamp and become one. I think if we changed this, we would be able to differentiate software engineers from simple developers, and that would help engineers maintain their status above ChatGPT a while longer.
100%. The vast majority of folks labelled as an "engineer" these days couldn't integrate x^2.
But software is a different beast than bridges or power lines. It takes a creative streak that goes beyond that. And math is not the great filter in the same way it is for those disciplines. I don't think credentialism is the answer. But perhaps something more along the lines of trade licensing.
> In other engineering professions, you need to be licensed by the state. Software "Engineers" don't have this license, so anyone can go to bootcamp and become one.
Academic achievements and actual work experience are already sufficient here in practice. I don’t think certifications would be as impactful as you might hope.
As a developer with a masters in computer science, I absolutely think that there should be a CS equivalent of a PE exam specifically for accredited software engineering.
It's one of the biggest reasons that coding interviews are such a train wreck, because of the incredible disparity of competence between potential applicants.
This is the key. Globalism and technology both make many things into a winner-takes-all game. Be the best or be replaced by an algorithm, perhaps a generated one like ChatGPT. You are now competing both with the best in the world and with whatever can be encoded into a computer, of which ChatGPT is one of the leading examples.
I also imagine virtual assistants will be hit hard.
Isn't a better approach to start using ChatGPT, charge per amount of content produced instead of per hour at an attractive rate, and focus on tweaking the output to be better than plain ChatGPT? Unless the writing-it-yourself part is that important to you.
Once ChatGPT starts producing good enough code, I'm going to start focusing on efficiently driving it to increase my overall productivity, not give up on programming altogether.
To a large degree this disruption is taking place in America. Pay for programmers, content makers, etc. was so high because the US market is so hot. The rest of the world, not so much.
Are developers overvalued, or is everyone else undervalued? Look at house prices, food prices: everything is much more expensive, but wages have not adapted.
No disrespect, but that's a LOT of money for writing copy. If OP were making half that, I think there'd be less incentive to replace that work, but I may be wrong.
America [and other countries] sent most of its manufacturing overseas for short-term economic gain, ignoring the damage all those lost jobs would eventually cause.
So I'm not surprised at all that the same thing is happening to some white-collar jobs now, thanks to AI.
A friend who is a copywriter and keeps in touch with trends told me that the $3 per hour English-speaking Pakistanis (living in Pakistan and working remotely) that generate simple copy, have already been replaced by AI.
For writing, maybe I'd be scared... I can't really think of much of anything I have done in the past 20 years of software development that a manager or client could use AI for and get the results they expect.
If I enter a prompt like “generate an article about X”, I get a generic article that aggregates general knowledge. Nothing new.
How do you suggest doing it?
That's what you get anyway, no? If you google the name of a phone all you get is blog posts with a regurgitated list of specifications that are already available in the manufacturer's website, but padded to a dozen paragraphs so they can put more ads between them. What's the difference?
As a writer myself, if you can be replaced by AI, you should be. If you're capable of independent insightful thinking and writing, you won't be replaced.
"First AI had replaced SEO writers and I didn't care because I wasn't a SEO writer. Next AI had replaced journalists and I didn't care because I wasn't a journalist. Then AI had replaced my job and there was nobody left who would help me."
If your interface is text, then an increasingly improving text-based agent eventually wins. The same is true for other interfaces GPT can or theoretically could speak: audio-visual, decision making, generative programming, etc.
It might be fake, but I don't see why it's "obviously" fake.
This marketer is likely helping people with SEO. Search engines (allegedly) reward frequent, unique, backlinked content on a subject. Client is a dentist? Write a blog post a week about dentist-related things. Link to his other clients' posts for backlinks, and everyone wins.
She says he's a freelance marketer, and subcontracts out to her. If nobody is really reading these posts anyway, does it matter if a human writes it? And if it doesn't matter... why not pocket that extra $80/hr himself? You know, until his clients do the same to him.
Or it could be 1 fake post in a sea of throw away accounts fabricating stories on reddit for fun and karma. Or maybe it's a real story and they really did give up writing and start driving for doordash over this. You decide.
Again, could be fake, but if you had a 3 y/o and your salary cut to $0, you might panic and go for one of the few jobs where you can start making money instantly w/o having to interview while you look for more work.
Not GP, but the bit about the author signing up for Doordash as a driver also made me think the post is fake.
According to the post, the author lost her client on the day of making the post, and within that same day decided to sign up for Doordash before making the Reddit post. Even if she was under severe financial duress, it would be a surprisingly quick decision, but as a writer previously making $80/hr and having the financial resources to take 3 years off work, it just doesn't seem believable.
In the intro, she states that she's using a throwaway to avoid other clients identifying her, so she clearly does have other clients, and she does mention a "normal" rate of $50/hr, in addition to having worked as a "writer for over a decade, [having] worked with top brands as a freelancer, [and having] more than a dozen published articles on well known websites". How is the best use of time for somebody like that to work an unskilled $20/hr gig instead of investing time into building new client relationships?
Even if the author believes that this client firing her implies that her current profession is doomed long-term, it just seems out of character for somebody like that to resort to being a Doordash driver. It just seems like an appeal to the fear of sliding down the social ladder, and as such a manipulative tool (just like the clickbait title) to produce a high-engagement Reddit post.
It's written in the frame of "I used to be a huge believer of not X, but now I learned X is true! And that gives me credibility!". No one talks like this except people trying to manipulate you.
>No one talks like this except people trying to manipulate you.
The post is written by a person who gets paid (or at least used to get paid) to write marketing copy. The author is by definition a professional writer of marketing copy. Of course it sounds like professionally written text designed to impact the opinion of the reader - it is professionally written text designed to impact the opinion of the reader. That doesn't make it a lie. The form of the message is exactly what you would expect a truthful message from such an individual to look like.
Given all that is happening with the capabilities and adoption of GPT, and comparing the low economic value of karma and likes to the high likelihood of individuals like the one described being impacted in this way, my personal view is it's far more likely to be truthful than some grand conspiracy of falsehoods.
I feel like that only happens when X is a thing they're trying to promote, such as "I used to be a liberal but now they're too woke, so now I'm voting Trump."
But for this, there's no upside. They aren't promoting anything. In fact, they're withdrawing their promotion of something with no alternative. I guess I just don't get your point.
I think a lot of people believe humans are superior (currently, they are!), and genuinely are going to be blindsided by the fact that creativity isn't a uniquely human trait.
AI is going to really filter out which people are providing value to society and which are not. Was this person's marketing material actually that valuable? Probably not. But obviously they need to make a living. Hopefully AI will force us to either do more productive things, or give us more leisure time to pursue things with higher meaning. Assuming we aren't enslaved by them of course.
In some cases, certainly not! Companies will use AI to create all sorts of blogspam and filler content. But perhaps it's better for an AI to be doing it than for a human life to be wasted on creating such things.