The amount of 90% sensible, 10% ridiculously wrong computer-generated crap we're about to send into real humans' brains makes my head spin. There's truly an awful AI Winter ahead, and it consists of spending a substantial share of your best brain cycles on figuring out whether a real person wrote that thing to you (in which case it's worth working out what they meant despite any weird wording) or whether it was a computer-generated fucking thank-you note.
> The amount of 90% sensible, 10% ridiculously wrong computer generated crap we’re about to send
Agreed. Sooner or later a company is going to do this with its customers, in ways that are fine 95% of the time but cause outrage or even harm on outliers.
And if that company is anything like Google, it'll be almost impossible for customers to reach a human to rectify things.
And the funny thing is that actual people may be worse, but it still freaks me out to be moderated by an AI.
Also, once this is normal and ubiquitous, people will come along who game it, and the AI will be too dumb to recognise them. Meanwhile the real humans have all been fired: game over, we're stuck with shitty systems, and everyone goes crazy.
In some cases it would be impossible, since the model can sometimes output exactly what a human wrote, or something that sounds 100% like what someone would write.
But if you allow some false negatives -- a detector that sometimes fails to flag a bot as a bot -- I think that could work? Still, I feel like the technology to write fake text is inevitably going to outpace the technology to detect it.
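A minimal sketch of what I mean, in Python -- the scoring function is a made-up toy stand-in, not a real detector; the point is only the precision-over-recall threshold:

    FLAG_THRESHOLD = 0.98  # flag only on high confidence: few false
                           # positives, many deliberate false negatives

    def detector_score(text: str) -> float:
        # Toy stand-in for a real model: pretend probability that
        # `text` is machine-generated.
        tells = ("as an ai language model", "in conclusion,")
        hits = sum(t in text.lower() for t in tells)
        return min(1.0, 0.5 + 0.25 * hits)

    def classify(text: str) -> str:
        if detector_score(text) >= FLAG_THRESHOLD:
            return "generated"  # confident enough to flag
        return "unknown"        # human, or a bot we knowingly let through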
It depends on how people use the tools. For example, the thank-you note one -- if someone just prints off the output of this and sends it, yeah, that's bad.
But if someone uses this to do 90% of the work and then just edits it to make it personal and sound like themselves, then it's just a great time saving tool.
I mean, in this exact example, 70 years ago you'd have to address each thank-you card by hand, from scratch. 10 years ago you could use a spreadsheet just like this to automatically print off mailing labels from your address list. It didn't make things worse, just different.
> But if someone uses this to do 90% of the work and then just edits it to make it personal and sound like themselves, then it's just a great time saving tool.
This is still way too optimistic. Reading through something that's "almost right", seeing the errors when you already basically know what it says / what it's meant to say, and fixing them, is hard. People won't do it well, and so even in this scenario we often end up with something much worse than if it was just written directly.
There is a lot of evidence for this, from the generally low quality of lightly-edited speech-to-text material, to how hard it is to look at a bunch of code and find all of the bugs without any extra computer-generated information, to how hard editing text for readability can be without serious restructuring.
Just train another AI model to do it, then! I'm not joking -- Stable Diffusion generates some pretty grotesque, low-quality faces, but there are add-on models that can identify and greatly improve the faces as part of the processing pipeline.
Doesn't seem like a stretch to have similar mini-models to improve known deficiencies in larger general models in the textual space.
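A rough sketch of that shape in Python, with all three model functions as toy stand-ins (the detect-and-patch structure mirrors the face-fixer add-ons; nothing here is a real API):

    def generate_draft(prompt: str) -> str:
        # Stand-in for the large general model: fluent overall,
        # unreliable in known spots (marked [GIFT] here as a toy).
        return f"Dear Aunt May, thank you for the lovely [GIFT]. ({prompt})"

    def find_problem_spans(draft: str) -> list[str]:
        # Stand-in for a specialist detector mini-model that locates
        # the spans the big model tends to botch.
        return [s for s in ("[GIFT]",) if s in draft]

    def fix_span(draft: str, span: str) -> str:
        # Stand-in for a specialist fixer mini-model that rewrites
        # just the flagged span.
        return draft.replace(span, "candlesticks")

    def pipeline(prompt: str) -> str:
        draft = generate_draft(prompt)
        for span in find_problem_spans(draft):
            draft = fix_span(draft, span)
        return draft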
Gmail's autocomplete already works great for this, and it will only get better over time. The key is to have a human in the loop to decide whether to accept or edit on a phrase-by-phrase or sentence-by-sentence basis.
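Something like this loop, sketched in Python with a hypothetical suggestion function (any autocomplete model would slot in behind it):

    def suggest_next_sentence(text_so_far: str) -> str:
        # Hypothetical stand-in for an autocomplete model.
        return "Thanks again for thinking of us."

    def compose(opening: str) -> str:
        text = opening
        for _ in range(10):  # cap the loop for the sketch
            suggestion = suggest_next_sentence(text)
            choice = input(f"suggest: {suggestion!r} [a]ccept/[e]dit/[q]uit: ")
            if choice == "a":
                text += " " + suggestion              # human accepts as-is
            elif choice == "e":
                text += " " + input("your version: ")  # human rewrites
            else:
                break                                  # human stays in control
        return text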
I would classify that act of editing as "completing the remaining 10% of the work." Somebody has to do it, whether you're doing it from the writing side as in your example, or making the reader do it from their side, as in my grandparent comment's example. But it's usually the last 10% of anything that's the hardest, so if someone abdicates that to a machine and signs their name to it (claiming they said it, and taking responsibility for it) they're kind of an asshole, in both the schlemiel and the schlemozel senses of the word.
I could extrapolate in my extremely judgmental way that the person who does that probably has a grandiose sense of how valuable their own time is, first of all, and secondly an impractical and sheepishly obedient devotion to big weddings with guest-lists longer than the list of people they actually give a shit about. Increase efficiency in your life further upstream, by inviting fewer people! (Yeah right, might as well tell them to save money by shopping less and taking fewer trips. Like that would ever work!)
But I digress, and anyway don't take any of that too seriously, as 20 years ago I was saying the same kinds of things about mobile phones... like "Who do you think you are, a surgeon, with that phone?" Notice it's inherently a scarcity-based viewpoint, based on the previous however-many years when mobile phones really were the province only of doctors and the like. Now they're everywhere... So, bottom line, I think the thank-you notes are a lousy use of the tech, but just like the trivial discretionary conversations I hear people having on their mobile phones now that they're ubiquitous, this WILL be used for thank-you notes!
You got it! After seeing a few tweet storms and articles that turned out to be GPT-3 gibberish, I end up coming to HN more for my news, because usually someone in the comments flags it as a waste of time.
The software would save people 80% of the work, and most are lazy enough to release it as is instead of fixing the remaining 20%. That laziness will end up forcing legislation to flag, and eventually ban or deprioritize, all GPT content, which will result in a war of adversarial behaviors trying to hide generated stuff among the real. Can't have nice things!
By the fact that it was generated using GPT. Same way you would go about classifying something as e.g. not made with slave labour, or made with a production process that follows environmental pollution rules. That you can't easily detect it from the end product isn't necessarily an obstacle to legislation.
Not saying it should happen, but if abusive misuse continues, it likely will. Regulations could force labeling of generated content, with punishments for mislabeling (say, content that seems to make sense but turns to gibberish after you've already wasted time looking for the main point). Flagging could be done by the community (HN style), etc.
In the sci-fi movie "Her", the main character has a job with the "Beautiful Handwritten Letters Company", a service for outsourcing letter writing. It seemed bizarre to me at the time, but now I can envision a future where people are so tired of not knowing whether a letter is a fake generated by some descendant of GPT-3 that they feel great relief knowing their note was at least written by a human third party.
Maybe? Is it really going to be all that different from the past thousand years where we've had 90% sensible, 10% ridiculously wrong[0] human-generated crap?