
That's a great outcome. I was the producer of "The Edge of Democracy" - line 14047 of the Excel file, with 200,000 hours viewed. Although we were nominated for an Oscar and became a Netflix original, they've never disclosed any numbers related to how successful (or not) our film was on the platform.


Very interesting. Now that you know, how do you interpret those numbers?


If it’s okay to ask, does Netflix only pay a fixed sum (they have to disclose numbers if there are view-based royalties, right)? Is that normal?


View-based streaming royalties ("residuals") were also something won in the strikes. They previously didn't exist for streaming in the same way they did for cable TV or theatrical releases, no matter how successful the show. That's part of the reason streamers didn't want to release numbers in the first place.

There were still a lot of other factors, then and now, when it came to payment, mainly because the minimums are floors, not maximums. So it isn't exactly a "flat fee": there is a minimum-floor payment schedule based on a formula of number of episodes, genre, number of weeks, studio budget, season number, etc. Now it includes number of views, where previously it didn't.


I wonder if royalty agreements will affect which content the distribution platforms choose to promote.

If Netflix pays the creators of show A $X per 1000 views, and the creators of show B 2*$X per 1000 views, I can see them choosing to display A more prominently than B.


Wow! Thanks for sharing the insights.

Amazing how much data Netflix would have, but sharing it externally would hinder negotiations.

So you can't even get total views?!


There's an interesting Stanford webinar covering this topic. It's what they call "retrieval-augmented in-context learning": https://www.youtube.com/watch?v=-lnHHWRCDGk


Thank you for your comment!

I hadn't come across Qdrant before, but I will definitely check it out. I've been experimenting with milvus.io lately; these vector databases weren't on my radar when I first started exploring embeddings.

LangChain looks fascinating too! At first glance, it seems like it could become my go-to library for prototyping. I appreciate your suggestion.


Since OpenAI released the gpt-3.5-turbo model, I have been wondering how companies deal with the token limit when gathering insights from a large corpus of text without having to retrain the model.

To understand how it works, I created an open-source Jupyter notebook that creates a chatbot using vector embeddings (which is the workaround for the token limitation!). The chatbot connects Zendesk's knowledge base to ChatGPT, which can answer natural language questions using only the appropriate context.
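For anyone curious, the core idea can be sketched in a few lines. This is a toy illustration, not the notebook's actual code: it uses a bag-of-words "embedding" and cosine similarity so it runs standalone, whereas a real pipeline would call an embeddings API (e.g. OpenAI's) and a vector database. The knowledge-base articles and the `retrieve`/`build_prompt` helpers here are made up for the example.

```python
import math
import re
from collections import Counter

def embed(text):
    # Toy embedding: bag-of-words token counts. A real system would
    # request dense vectors from an embeddings API instead.
    return Counter(re.findall(r"\w+", text.lower()))

def cosine(a, b):
    # Standard cosine similarity between two sparse count vectors.
    dot = sum(a[t] * b[t] for t in a if t in b)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(question, articles, k=1):
    # Rank knowledge-base articles by similarity to the question
    # and keep only the top k.
    q = embed(question)
    ranked = sorted(articles, key=lambda a: cosine(q, embed(a)), reverse=True)
    return ranked[:k]

def build_prompt(question, articles):
    # Stuff only the most relevant context into the prompt, which is
    # how the approach stays under the model's token limit.
    context = retrieve(question, articles)
    return ("Answer using only the context below.\n\n"
            "Context:\n" + "\n".join(context) +
            "\n\nQuestion: " + question)

kb = [
    "To reset your password, open Settings and choose Reset Password.",
    "Refunds are processed within five business days of the request.",
]
prompt = build_prompt("How do I reset my password?", kb)
```

The resulting `prompt` contains only the password article, so the chat model answers from the relevant context rather than the whole knowledge base.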

I hope this can be helpful to anyone who is playing around with AI models in general :)

