Not only that, but it opens the project up to having to deal with a trademark cease-and-desist letter and then having to rebrand. Perplexity would be obligated to send one in order to protect its trademark if it became aware of this. How are seemingly decent software developers so unaware of anything besides coding?
On my benchmark (NYT Connections), Phi-3 Small performs well (8.4) but Llama 3 8B Instruct is still better (12.3). Phi-3 Medium 4k is disappointing and often fails to properly follow the output format.
"when voice is sufficient indicia of a celebrity's identity, the right of publicity protects against its imitation for commercial purposes without the celebrity's consent."
Because it's meant to give the _appearance_ or _perception_ that a celebrity is involved. Their actions demonstrate they were both highly interested and had the expectation that the partnership was going to work out, with the express purpose of using the celebrity's identity for their own commercial purposes.
If they had just screened a bunch of voice actors and chosen the same one no one would care (legally or otherwise).
What OpenAI did here is beyond the pale. This is open and shut for me based off of the actions surrounding the voice training.
I think a lot of people are wondering about a situation (which clearly doesn’t apply here) in which someone was falsely accused of impersonation based on an accidental similarity. I have more sympathy for that.
But that’s giving OpenAI far more than just the benefit of the doubt: there is no doubt in this case.
> Sounds like one of those situations you'd have to prove intent.
The discovery process may help establish intent - especially any internal communications before and after the two(!) failed attempts to get her sign-off, as well as any notes shared with the people responsible for casting.
Not necessarily: because this would be a civil matter, the burden of proof is a preponderance of the evidence. It's glaringly obvious that this voice is emulating the movie Her, and I suspect it wouldn't be hard to convince a jury.
I am guessing it's because you are trying to sell the voice as "that" actor's voice. I guess if the other voice became popular in its own right (as a celebrity), then there would be a case to be made.
It's horribly useless for most use cases since half of it is people probing for riddles that don't transfer to any useful downstream task, and the other half is people probing for morality. Some tiny portion is people asking for code, but every model has its own style of prompting and clarification that works best, so you're not going to be able to use a side-by-side view to get the best result.
The "will it tell me how to make meth" stuff is a huge source of noise, which you could argue is digging for refusals which can be annoying, and the benchmark claims to filter out... but in reality a bunch of the refusals are soft refusals that don't get caught, and people end up downvoting the model that's deemed "corporate".
Honestly, the fact that any closed-source model with guardrails can even place is a miracle. In a proper benchmark, the honest-to-goodness gap between most closed-source models and open-source models would be so large it'd break most graphs.
Why is the link to this blog spam instead of to the paper or a better article? Hossenfelder lacks qualifications in neuroscience and is often confidently inaccurate.