It is incredibly disheartening to watch what was born as a non-profit dedicated to guiding AI towards beneficial (or at the very least neutral) ends predictably fall into the well-worn SV groove of progressively shittier behavior in the pursuit of additional billions. What is victory for OpenAI even supposed to look like by now?
I may be incredibly shallow but personally I am experiencing feelings of joy and validation.
Every time I tried to suggest that maybe LLMs and GAN tools don't make creativity easier, just lazier and emptier, or that this whole technology area is parasitic on human culture; every time an OpenAI junkie told me, "hey, perhaps humans aren't much different from LLMs", or someone said artists are derivative too and don't really deserve any more protections or are "gatekeeping art"...
... my anger at the time is vindicated every time these greedy, cynical wretches that the US tech industry has raced to anoint are taken down a peg because of their own very obvious greed and expedience.
I am loving this.
I may also be shallow in feeling a measure of glee that Microsoft is racing forward to shoehorn this utter toxicity into every corner of their product range, just in time for their customers to fully understand how it reeks of contempt for them.
This is a sentiment I'm starting to see more of, and have really started internalizing in recent months.
For every creative task I've given an LLM in the last 2 years, if I cared at all about the output, I ended up redoing it myself by hand. Even with the most granular of instructions, the output feels like a machine wrote it.
I have yet to meet anyone who felt any kind of emotion from generated art, except for "wow, it's cool that AI can make this". That's because (imo) art comes from experience, and experiencing is absolutely not what LLMs do.
Meanwhile, my dad, whose AI experience amounts to using MS Copilot "two or three times," is sending me articles about Devin, and how it's over for software engineers.
> I have yet to meet anyone who felt any kind of emotion from generated art, except for "wow, it's cool that AI can make this".
Have you ever observed how difficult it is to _remember_ AI generated pictures?
I can think of only one AI-generated art thing that has stuck with me, and it's because of the enormous amount of effort the guy using it put into generating really, genuinely creepy fake photos to go with a plausible but fake story (about a lost expedition in the early era of photography).
I thought at the time, OK, maybe people will do creative things with it. Maybe I am wrong.
Except that, months on, I can't remember any of the photos in enough detail to visualise them. Only the emotion and the feel, which could have been evoked by that talented person entirely without Stable Diffusion.
There is something about AI-generated photos in particular that confounds my ability to remember the image (speaking as a photographer).
That is very interesting; I'm picking up what you're putting down. YouTubers make rampant use of AI art. If nothing else, this era of YouTube will be recognizable from afar.
I do like that many people have learned to recognize the writing style and visual aesthetic, and are rejecting it.
> maybe people will do creative things with it
_Some_ people will do _some_ creative things with it, but most people will use it as a shortcut—as long as there's some kind of output, they couldn't care less about the quality. How much correspondence is now just one LLM summarizing what a different LLM wrote? If the internet wasn't dead before, this is surely killing it.
> I do like that many people have learned to recognize the writing style and visual aesthetic, and are rejecting it.
This is the thing that gives me hope -- inquisitive people who have no idea how ChatGPT does what it does can point out ChatGPT-generated text. It's more difficult with GAN-generated images but in the creative community I am part of, some people are very literate about this already.
I don't think this will hold for too long. We already had soulless art hanging on the walls of waiting rooms and bank branches, even before GenAI. Sooner rather than later, AI output will be indistinguishable (even today, enough of it already is for many people), so what then?
But quite a lot of people understand the difference, at a visceral level, between a painting made by an individual amateur artist and a painting made for selling at one of those Fine Art chains, or the difference between something rough and charming and a painting you might have seen in the 90s while trying to locate the loo in a UK branch of McDonalds.
People's instinctive artistic "literacy" is often surprising.
I think it’s more about uncanniness and some sort of latent, subliminal incoherence. Like maybe it is somehow disruptive to visual memory in a subtle but noisy way, because it doesn’t hang together quite right.
I have no science to back this up, mind you. But I struggle to recall details of these images (I also believe I have a limited form of aphantasia, so it could just be my flawed noggin).
i’m sorry, i just don’t understand what you’re trying to say. you’re happy that the leading AI firm is full of shit despite promises to the opposite? what makes you happy about that?
I'm happy other people can see what has been obvious to me from day one.
It's not just schadenfreude (which I admit is unattractive, if beguiling).
It also gives me hope that ordinary people are beginning to get to grips with the idea that they don't have to accept or be excited for new technologies just because they are new technologies, and that the people bringing new technologies don't have to be good people just because they are capable people. Seemingly smart people can be intellectually and morally lazy.
I have no obligation as a techie person to be excited about AI, or to be default-positive about the "leading firm", or to give the benefit of the doubt, or anything like that. There's no moral rule that one should be positive about new technology until it's proved bad. This is a classic tech industry false belief.
OK so the fall is not happening as quickly as Juicero. But it's a start.
They're gloating about being right about SV tech culture. Being right about the heel turn is some cold comfort, I guess.
But parent shouldn't feel too proud of their prognostication skills. OpenAI is a venture of Sam Altman and Elon Musk, so how could it be anything other than what it is? You'd have to be insanely naive about SV (and, more broadly, what "non profits" of billionaires in any sector even are) to assume this was ever born of altruism.
I'm not gloating. I'm just enjoying the spectacle.
I also don't profess surprise at who OpenAI have turned out to be. Rather I am surprised that other people are surprised.
It's not a heel turn, except in their wider cultural fortunes. It has been obvious to me from literally day one that everything to do with DALL-E and ChatGPT and onwards is bad for culture. There has never been anything other than creepy, dystopian, Black Mirror overtones.
But the valley falls for hucksters every time. And it's often the same hucksters.
> You'd have to be insanely naive to assume this was ever born of altruism.
Yet the vast, vast majority did, and a still-large proportion continue to proclaim that these projects were born of altruism, that they continue to serve those altruistic goals, and that these people are the most incredibly altruistic humans ever to grace this fine planet of ours.
Why? Why is it inevitable that the AI world must do LLMs, must steal all of culture without recompense, and must deride human ability in order to defend the limitations of the replacement technology?
Answer is: it’s not. To all three. And collectively we can decide to be better. This is why artists are pushing back. One day perhaps the tech world will understand that they aren’t Luddites but instead champions for humanity.
Because progress in AI depends on a great dataset. A dataset that does not exist in the public domain. And the progress in AI is worth too much to stop just because the laws haven't been enacted yet. My quality of work has improved significantly since getting access to GPT-4, and it has improved by leaps again with GPT-4o.
And I am loving the fact that LLMs and generative AI are showing what absolute nonsense it is for any artist to think that, just because they made a shitty drawing or song once, other people are not allowed to also make that shitty drawing.
I make art but I don’t sell it.
Don’t get me wrong, if a painter makes a painting that painting is his and he can sell it, I can’t steal it. But I can’t also paint that? Not download a copy of it? Not take a picture? How do you own the idea of flowers in a vase because you drew it once? That’s insane to me
Asking someone to license their voice, getting a refusal, asking them again two days before launch, releasing the product without permission anyway, and then tweeting post-launch that the product should remind you of a character from a movie they didn't get rights to from the actress or the film company is all sketchy, and, if the voice is similar enough to the famous actress's, it is a violation of her personality rights under California law and in various other jurisdictions: https://en.m.wikipedia.org/wiki/Personality_rights
These rights should have their limits, but they also serve a very real purpose: such people should have some protection from others pretending to be or sound like them in porn, in ads for objectionable products or organizations, and in all of the above without compensation.
> But hiring another actor to replicate someone who refused your offer is not illegal and is done all the time by Hollywood.
This could indeed let them "win" (or rather, not lose) in a legal battle.
But doing so will easily make them lose in the PR/public sense, as it's a shitty thing to do to another person, and hopefully not everyone is completely emotionless.
> But doing so will easily make them lose in the PR/public sense, as it's a shitty thing to do to another person, and hopefully not everyone is completely emotionless.
If an actor is saying no and you have a certain creative vision then what do you do?
Johansson doesn't own the idea of a "flirty female AI voice".
Find someone else? You think this is a new problem? Directors/producers frequently have a specific person in mind for casting in movies, but if the person says no, they'll have to find someone else. The solution is not to create a fictional avatar that "borrows" the non-consenting person's visual appearance.
First time I've heard of this case, but reading about it, it seems it actually changed the standard terms for actors to prevent similar issues?
> Rather than write George out of the film, Zemeckis used previously filmed footage of Glover from the first film as well as new footage of actor Jeffrey Weissman, who wore prosthetics including a false chin, nose, and cheekbones to resemble Glover. [...]
> Unhappy with this, Glover filed a lawsuit against the producers of the film on the grounds that they neither owned his likeness nor had permission to use it. As a result of the suit, there are now clauses in the Screen Actors Guild collective bargaining agreements stating that producers and actors are not allowed to use such methods to reproduce the likeness of other actors.
> Glover's legal action, while resolved outside of the courts, has been considered as a key case in personality rights for actors with increasing use of improved special effects and digital techniques, in which actors may have agreed to appear in one part of a production but have their likenesses be used in another without their agreement.
If they didn't use her voice at all, it doesn't seem like there would be a case, or even a concern.
Also, they proceeded to ask her for rights just 2 days before they demoed the Sky voice. It would be pretty coincidental that they didn't actually use her voice for training at all if they were still trying to get a sign-off from her.
If they used her actual voice for training the model that shipped then I agree with you. It seems like they used the voice from another woman who sounds similar though.
It doesn't "seem like" in this instance, "no no that is not what we did we commissioned someone else" without specifying who is their claim.
From a technical standpoint, a fine-tuned voice model can be built from just a few minutes of data and a little GPU time on top of an existing voice model, much like how artist LoRAs are built for images. So it is entirely within the realm of possibility that that is what happened.
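To make "almost like artist LoRAs" concrete, here is a minimal, hypothetical PyTorch sketch of the general LoRA idea (nothing to do with OpenAI's actual pipeline; the layer, sizes, and names are made up for illustration). You freeze a pretrained weight matrix and train only a tiny low-rank correction on top of it, which is why a small amount of target data can be enough to noticeably shift a model's output.

```python
# Hypothetical sketch of low-rank adaptation (LoRA) on a single frozen layer.
# Illustrative only; real voice-cloning pipelines are far more involved.
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # freeze the pretrained weights
        # Small trainable low-rank factors: A is random, B starts at zero,
        # so the adapted layer initially behaves exactly like the base layer.
        self.lora_a = nn.Parameter(torch.randn(base.in_features, rank) * 0.01)
        self.lora_b = nn.Parameter(torch.zeros(rank, base.out_features))
        self.scale = alpha / rank

    def forward(self, x):
        # Frozen base output plus the trainable low-rank correction.
        return self.base(x) + (x @ self.lora_a @ self.lora_b) * self.scale

# Only the adapter parameters (a tiny fraction of the model) get trained,
# which is why a few minutes of target data can be enough to shift the output.
layer = LoRALinear(nn.Linear(512, 512))
trainable = [p for p in layer.parameters() if p.requires_grad]
```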
I guess it takes more than a couple of days to organize things with an A-list star, especially if there's a studio recording session involved rather than just using existing material.
This strongly suggests they weren't trying to get her voice until the last minute (would have been too late for the launch) but, rather, they had already used the other actress, and realized they were exposing themselves to a lawsuit due to how similar they were.
It was a CYA move, it failed, and now their ass is uncovered.
Surely the company that has been gobbling up data and information without rights or any form of compensation has suddenly turned over a new leaf and decided to try to pay an actress who isn't even involved.
Like, let's be real here. This wouldn't be the first time they used material without the rights to it, and I don't expect this to change any time soon without a major overhaul of EVERYTHING IN THE COMPANY, and even then it will probably only happen after lawsuits and fines.
What I'm wondering is why they are doing this in the first place. Why is the best AI company in the world trying to stick a flirty voice into their product?
It pains me to say it, but I really think it pays dividends to consider the very obvious possibility that the people who are doing this are in general just not socially well-adjusted.
Everything about OpenAI speaks of people who do not put great value on shared human connections, no?
"Hey, I like that artist. I am going to train a computer to produce nearly identical work, as if by them, so I can have as many pieces as I like, to meet my own wishes."
Why is it surprising that it didn't really cross their mind that a virtual girlfriend is not a good look?
This is not an organisation that has the feelings of people central to its mission. It's almost definitionally the opposite.
Yes, it seEms a LOt of big Names in tech have this same problem. Curious that, isn't it?
I also think it is tipping their hand a bit. I know companies can do multiple things at once, but what might this flirty assistant focus suggest about how AGI is coming along?
...because human brains enjoy being talked to in a flirty voice, and they benefit from doing things that their customers like? Doesn't seem that mysterious
It is incredibly disheartening to see celebrities from traditional media expressing open disdain for the century's most revolutionary piece of technology.
Johansson was foolish to turn this down. This all sounds like she realized the mistake, regretted it, then sent her legal team to pursue this frivolous cease and desist out of spite.
I'm disappointed that OpenAI didn't see this for what it is, and decided to comply instead.
> It is incredibly disheartening to see celebrities from traditional media expressing open disdain for the century's most revolutionary piece of technology.
Even though it threatens their livelihoods and is parasitic off their work?
It's not disheartening at all: it's positive.
> Johansson was foolish to turn this down. This all sounds like she realized the mistake, regretted it, then sent her legal team to pursue this frivolous cease and desist out of spite.
Maybe I'm missing something obvious, but you seem to think it's disheartening when someone decides not to collaborate with a corporation, and that the right choice is for the corporation to ignore what the person thinks and "force" the collaboration anyway?! That seems outright crazy to me.
I'm not talking about collaboration or lack thereof. I'm talking about Johansson's foolishness, and her subsequent tantrum after she rightfully regretted that poor judgment.
Explain why you call a famous actress foolish if she refuses to give a corporation permission to use her voice. Your entire argument is built on this opinion.
> to see celebrities from traditional media expressing open disdain for the century's most revolutionary piece of technology.
Really? AI has lots of potential, but so far the big uses of this recent tidal wave have been an enormous increase in the creation of visual and text-based sludge, most of it barely usable for anything serious, by hustling online marketers and social media spammers.
Even where tools like GPT are used productively by people to simplify their business processes and so forth, every piece of information they produce has to be scrutinized for hallucinations, to the point that they're useless as much more than idea generators for contexts where factual correctness isn't important...