Not if damages are presumed, e.g., when the false defamatory statement published to a third party claims that the plaintiff has a venereal disease, is unfit for their profession, or is a sexual deviant. In those categories, damages are presumed to exist without proof of actual harm.
NathanBuilds on YouTube has a little robot which does the same thing with a large magnifying glass. A concept I am very eager to demo on an industrial scale.
If anyone at Spotify is reading this, I just cancelled my account. You guys deserve considerably less money than you have now, but this is the best I can do.
It's not a great move, but I don't think it calls for wishing harm on anyone, let alone a majority of people who at most had a hand in making something you did feel was useful enough to pay for.
It was possible to drive in silence before and it's possible now. Perhaps explore a radio station.
Don't denigrate him, he's done the one thing that is simultaneously most likely to effect change at a company, and also the hardest thing for an individual to do: vote with their wallet. He's right, Spotify does deserve less money for their godawful descent into the lower rungs of mediocrity.
Meh, when I boycott a place I just revoke my money and/or review the place accurately, but I don't posture about it. When I get bad food at a restaurant, I might mention it if it's really rough and ask for something different, but I'm not going to berate the server or kitchen staff; unimaginable. People have enough shit to deal with, like fearing impending layoffs and the likelihood that, if the product doesn't bring in enough money and things don't turn around, they'll be out of a job for god knows how long.
I have personal grievances with my iPad, but I'm not about to say "If anyone at Apple is listening, I hope you do worse financially".
I presume they were a bit upset and chose overly broad language. It's not a big deal either way; I don't believe they truly wished harm on anyone, it just seemed like a silly tone to choose.
> I'm not going to berate the server or kitchen staff; unimaginable.
Who is advocating berating staff? I don't see that in OP's post, nor my own. No one is suggesting being aggressive towards anyone. Saying the company deserves less money is a capitalistic statement aimed at the corporation, not its diligent employees. You're welcome to silently withdraw your money from a place of business and leave them with no feedback as to why. But 'posturing' about the reasons for a boycott while also withdrawing money is a great way to attach feedback to an action the company might actually listen to.
You’re right that a random person’s voice is not IP, but SJ is not a random person. She’s much more like Mr. Waits or Ms. Midler than you or I.
I don’t believe the burden would be to prove that the voice actor was impersonating, but that she was misappropriating. Walking down the street sounding like Bette Midler isn’t a problem but covering her song with an approximation of her voice is.
You are dead right that the order of operations recently uncovered precludes misappropriation. But it’s an interesting situation otherwise, hypothetically, to wonder if using SJ’s voice to “cover” her performance as the AI in the movie would be misappropriation.
> You are dead right that the order of operations recently uncovered precludes misappropriation.
I don't think that follows. It's entirely possible that OpenAI wanted to get ScarJo, but believed that simply wasn't possible so went with a second choice. Later they decided they might as well try anyway.
This scenario does not seem implausible in the least.
Remember, Sam Altman has stated that "Her" is his favorite movie. It's inconceivable that he never considered marketing his very similar product using the film's IP.
"We hold only that when a distinctive voice of a professional singer is widely known and is deliberately imitated in order to sell a product, the sellers have appropriated what is not theirs and have committed a tort in California."
Ford having multiple ads would not have changed the determination.
The voice doesn’t sound like her and the article shows there’s a fair amount of proof to back up the claim that it wasn’t meant to and that there was no attempt to imitate her.
And yet so many people think it does. What a weird coincidence.
> there’s a fair amount of proof to back up the claim that it wasn’t meant to
Sam has said his favorite movie is "Her". Sam tweeted "her" the day it was released. Sam wrote to ScarJo to try to get her to do the voice. OpenAI wrote to her two days before the release to try to get her to change her mind. A large percentage of the people who hear the voice think it sounds like ScarJo.
I think we're just going to have to agree to disagree about what the evidence says. You take care now.
I interpret it completely differently given that the voice actor does not sound like SJ.
1. OpenAI wants to make a voice assistant. 2. They hire the voice actor. 3. Someone at OpenAI wonders why they would make a voice assistant that doesn’t sound like the boss’s favorite movie. 4. They reach out to SJ who tells them to pound sand.
Accordingly, there is no misappropriation because there is no use.
The voice is different enough that anyone who listens to samples longer than 5 seconds side by side and says they can’t tell them apart is obviously lying.
All the reporting around this I’ve seen uses incredibly short clips. There are hours of recorded audio of SJ speaking and there are lots of examples of the Sky voice out there since it’s been around since September.
It doesn't even need to sound like the person. It's about the intent. Did OpenAI intend to imitate ScarJo?
Half the people of the world thinking it's ScarJo is strong evidence that it's not an accident.
Given that "Her" is Sam's favorite movie, and that he cryptically tweeted "her" the day it launched, and that he reached out to ScarJo to do the voice, and that the company reached out to her again to reconsider two days before the launch -
I personally think the situation is very obvious. I understand that some people strongly disagree - but then there are some people who think the Earth is flat. So.
I don't think the intent matters (though it's moot in this case because I think there is clear intent): if someone unknowingly used a celebrity's likeness, I think the celebrity would still be able to prohibit its use, since the idea is that they have a "right" to its use in general, not that they have a defence against being wronged by a particular person.
For example, if someone anonymously used a celebrity's likeness to promote something, you wouldn't need to identify the person (which would be necessary to prove intent) in order to have the offending material removed or prevented from being distributed.
"We hold only that when a distinctive voice of a professional singer is widely known and is deliberately imitated in order to sell a product, the sellers have appropriated what is not theirs and have committed a tort in California."
The passage you cited reads "we hold only that when" (compare with "we hold that only when"), which I understand to mean that they are defining the judgment narrowly and punting on other questions (like whether the judgment would be different if there were no intent), as courts often do. In fact the preceding sentence is "We need not and do not go so far as to hold..."
It might make sense for intent to be required in order to receive damages, but it would surprise me if you couldn't stop an inadvertent use of someone's likeness. In fact the Midler v. Ford opinion cites the earlier Motschenbacher case: 'The defendants were held to have invaded a "proprietary interest" of Motschenbacher in his own identity.' I think you can invade someone's "proprietary interest" inadvertently just as you can take someone's property inadvertently; and courts can impose a corrective in both cases, in the first by ordering that the invasion of the proprietary interest be stopped, and in the second by returning the taken property.
No. (I did cite the Ford statement about "proprietary interest" which I think supports my argument).
I'm not familiar with all the case law but I assume that no case has been brought that directly speaks to the issue but people can and do discuss cases that don't yet have specific precedent.
I don't think that's true. I can't cite them off the top of my head, but when I read about Supreme Court cases, often a big point of contention is whether the court decided to issue a narrow or broad ruling. Sometimes they decide to hear a case or not based on whether it would serve as a good basis for the type of ruling (narrow or broad) they want to issue.
And the legal universe is vast with new precedent case law being made every year so I don't think the corpus of undecided law is confined to well known legal paradoxes.
As for this case, it doesn't seem that odd to me that intent has never been directly at issue: I would expect that typically the intent is obvious (as it is in the OpenAI case), so no one has ever had to decide whether it mattered.
That's a different standard: "Can you tell them apart side-by-side" vs. "does this sound like person X" or "is this voice exploiting the likeness of person X". It's the latter question that is legally relevant.
The insurance-company-driven model of industry-specific safety improvement that worked so well for fire safety and auto safety has proven unworkable for cybersecurity because of three factors:
1. Cryptocurrency allows for unimaginably huge untraceable ransom payments that Amazon gift cards did not support,
2. No liability in tort for insecure software, and
3. Lack of computer security regulation (e.g., your car must have a seatbelt and ABS but your software can be arbitrarily bad without being prohibited).
The software company would argue that the lawbreaking hacker was a supervening cause, while the consumer would argue the criminal was foreseeable. In the case of security software, the consumer might have a point. In practice such a claim is not usually successful.
Fifty state insurance commissioners could make this more or less happen overnight, except to the extent firms are using something other than cyber coverage to pay ransoms.
In my eyes, this would do almost as much to improve cybersecurity as liability in tort for insecure software.
The “dial” on a rotary phone broke the connection in a rapid sequence of pulses, the number of pulses corresponding to the digit dialed (ten pulses for zero). If your dial broke, you could place calls by “flashing” the “hook” the right number of times, with pauses between digits.
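For anyone curious about the timing, here's a minimal Python sketch of that pulse-dialing scheme. The millisecond values are the rough North American conventions (about ten pulses per second, with a longer pause between digits), and open_loop/close_loop are hypothetical placeholders for whatever actually interrupts the line; it's an illustration of the idea, not a reference implementation.

    import time

    # Rough pulse-dialing timing (approximate): ~10 pulses per second,
    # with the loop broken for ~60 ms and restored for ~40 ms per pulse,
    # and a pause of ~700 ms or more between digits.
    BREAK_S = 0.060
    MAKE_S = 0.040
    INTERDIGIT_S = 0.700

    def open_loop():
        # Placeholder: interrupt the local loop (press the hook switch).
        pass

    def close_loop():
        # Placeholder: restore the local loop (release the hook switch).
        pass

    def pulse_digit(digit):
        # One digit is a train of hook flashes; zero is sent as ten pulses.
        pulses = 10 if digit == 0 else digit
        for _ in range(pulses):
            open_loop()
            time.sleep(BREAK_S)
            close_loop()
            time.sleep(MAKE_S)
        time.sleep(INTERDIGIT_S)

    def pulse_dial(number):
        # Dial a full number digit by digit, skipping non-digit characters.
        for ch in number:
            if ch.isdigit():
                pulse_digit(int(ch))

    pulse_dial("5551212")

Flashing the hook by hand works because the switch can't tell a finger on the hook from the dial's own contacts; you just have to approximate that cadence yourself.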