Tyler Perry Puts $800M Studio Expansion on Hold After Seeing OpenAI's Sora (hollywoodreporter.com)
96 points by mfiguiere 7 months ago | 189 comments



We have found at least one critical real world use case for AI – to be an easy scapegoat for all of a company's bad decisions, layoffs, lack of profits, shortage of funding. Are you a shit businessman? No, it was all AI's fault!


You may not like his movies, but saying Tyler Perry is a "shit businessman" is a bit of a stretch...


That's fair, he's an excellent businessman, but I'm not shocked that his particular oeuvre is vulnerable to AI... it is after all painfully formulaic and lacking any creativity.


Yes, but there is definitely a 'this time it's different' feel every time a new demo comes out


How can he be that "shit" if he:
- is making drastic changes to react to market realities (probably before most other studios)
- is a self-made billionaire?


Seems like quite the overreaction to a research demo.

Yes, it will get better. But generating anything close to a coherent film or television show is a tall task.

For all of the progress in image generation, we still seem somewhat stagnant when it comes to fixing hallucinations. Video generation is a much harder problem than image generation, so it's not super clear to me that we're very close to this having a material impact on Hollywood.


Not for Oscar-worthy films, but definitely for a bunch of marginal stock-photography-ish material. Look at the state of generated videos 2 years ago to get a sense of the trajectory.


TV commercials will probably be the first to fall. They're short, and don't really need to be realistic at all -- just eye-catching. Right up generative AI's alley.


Great example. The best commercials will still have big budgets and location shoots but generated videos will raise the quality floor for local commercials.


He's a hypocrite: he says he's scared of the impact on the industry, but he IS the industry. He's gonna be putting set designers, sound stagers, et al. out of work if he opts to use AI.

AI is just going to make the rich... richer. We do need some protections in place.


“I am very, very concerned that in the near future, a lot of jobs are going to be lost. I really, really feel that very strongly.”

I'm very concerned about the impact of this thing that I get to control, and benefit from.


> We do need some protections in place.

i find it impossible to believe that people currently working in the music/film industries will not take advantage of these new technologies (see designers who went from pen and paper to software).

i think a decent analogy is music. software has completely taken over music production. but musicians are still making great music, analogue or otherwise.

these new technologies might even open up these industries to a lot more people since creation will no longer be hindered by learning different types of software.

yes, i'm an optimist :)


And given:

- the behavior of big tech over the last 10-15 years, minimum
- the arguable decrease in the quality of movies as corporations ignore artists, who already had to protest just to prevent themselves from being degraded from writers to AI prompt editors
- the history of artists in film since... forever

I'm honestly very pessimistic. In theory, this means fewer artists in a VFX studio can charge the same amount (which, as is, is way too little; remember that multiple award-winning movies had their VFX studio shutter months after the award), and each artist gets paid a proper living wage, now that there are fewer hands to redistribute the money to. In reality, this may shut down what remnants of VFX there are, reduce in-house artists, and leave the remaining artists making even less despite now being as productive as 10-50 artists from the decade prior.

A lot of optimism for such tech would need to come from trust, and every company involved has spent decades eroding that trust and then digging further underground.

>i think a decent analogy is music. software has completely taken over music production. but musicians are still making great music, analogue or otherwise.

music is a decent analogy. No one makes money making music anymore. You're an entrepreneur peddling merchandise off the emotional response to music that people listen to on Spotify, which pays pennies (or less, if you sign with a record company). Modern music is the classic "being paid in exposure" trope in action.


> No one makes money making music anymore.

when was "making music" a money maker? or for that matter, when was creating any art form a money maker? i think making money out of art is an anomaly, not a rule. and i think that's good, as money corrupts art.

the problem as i see it is that lots of us live in a capitalist world. and in that case it's very hard to hold my beliefs and still lead a fulfilling life.

i don't have the answer to this issue. but i think there was a similar issue a few decades ago, when making money from art was considered "selling out".


Money maker for individuals? Music started dying out around the time big bands fell off in demand, and I'd say by the early 2010s it became unviable without some side hustle or connections. So a very slow but steady decline. The whole American Idol craze wasn't that long ago, in the grand scheme of things. Other arts have their own individual histories as trades as well.

>i think making money out of art is an anomaly, not a rule. and i think that's good, as money corrupts art.

Until we live in a post-scarcity society, or until the arts are some random eccentric billionaire's hobby, most art will be trading a craft for compensation to survive. That isn't a recent or local phenomenon. Art made from love that somehow succeeds only on its own merits has always been the minority. As you said, it's hard to hold your beliefs (i.e. make exactly what you want) and still pay the bills. There isn't enough time in the day, or maybe there is now, but corporate demands more of our time than ever despite that.


sure, musicians are making great music, but how many of them are getting paid? and how much? the quality is irrelevant here -- the best songs ever written are on spotify making their creators $0.00031 per play.

it's like how excel didn't eliminate payroll or accounts-payable, but now instead of needing 10 people you need 3, and they'll still attempt to pay them peanuts.
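Rough math on that per-play figure (the income target below is just an illustrative assumption, not data from anywhere):

    # plays per month needed to clear a modest income at the quoted payout rate
    payout_per_play = 0.00031      # USD, figure quoted above
    target_monthly_income = 2000   # USD, illustrative assumption
    print(f"{target_monthly_income / payout_per_play:,.0f} plays/month")  # ~6.45 million plays a month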


The flipside with this particular application is that we're going to be seeing some excellent indie movies produced by enthusiasts who never had the means before.

But yes, there's about to be a massive shakeup. You can blame Perry, but someone has to fund these tremendously expensive human efforts. If everyone else is undercutting him with cheaper AI productions, where does that leave him?


we need a universal basic income and progressive tax rates urgently if we want to avoid the world turning into a Gotham City kind of unequal shitshow.


> we need a universal basic income and progressive tax rates urgently if we want to avoid the world turning into a Gotham City kind of unequal shitshow.

Not really. IMHO the idea of universal basic income is not an actual solution; it's a soporific to pacify people until all their power is drained from them. What we really need is a Butlerian Jihad, to make technology and technologists subordinate to society, instead of letting society be subordinate to technologists and their technology (and the capital they serve and/or control).


Thankfully, that's only your opinion, while facts from pilot programs demonstrate that UBI is an activator to do things, regardless of the activity, paid or unpaid.


AI is not good enough to take over most jobs. The hype is overblown and lacking in evidence.


> AI is not good enough to take over most jobs. The hype is overblown and lacking in evidence.

And I'm guessing once the evidence is strong enough to convince you, it will be too late to do anything about it.


We'll cross that bridge if we get there.


Yeah, last time that happened, Meta became a trillion-dollar company before some parts of the world got some semblance of privacy back. The bridge is currently burning and much of the world still doesn't notice it's being burnt alive.

I'm not gonna trust the pyromaniacs with the next bridge if I can help it.


What makes you think this time will be any different?

> The bridge is currently burning and much of the world still doesn't notice it's being burnt alive.

Citation required.


It's just a matter of time


I kinda agree with both of you. Given infinite time (and assuming humanity is around and willing to work on it for as long as it takes), I suppose we _should_ be able to automate anything. I honestly don't know how long that's gonna take, but I'm sceptical we'll see it in the next ten years.

Anyway, if we see it coming, it'd be silly not to prepare for it in some fashion.


It's odd that he's winding back on soundstages .. I suspect this is on hold while they reconsider the soundstage tech stack.

The other comments are correct about fully generative imaging from AI .. having that fluid and tight to a director's vision is still some way down the road.

As mentioned in the article, what will change is being able to generate fully digital sets with AI assistance, character makeup and "look" to tie onto a real actor, background non-player AI extras, etc.

This suggests an AV media future with a lot more work in giant green rooms and AI production .. and the $800M is most likely "on hold" while the big picture is re-evaluated - fewer carpentry shops, more server farms.


First, labor laws aren't the answer to technology. It would be like regulating the camera because it put portrait artists out of work, or steam drills because they put tunnel diggers out of work. Art improved dramatically in a short period of time with the advent of the camera, and became reflective of what the camera couldn't do: reflecting the nature of humanity back to us.

Second, if you've done much with generative imagery you know the raw experience is difficult to control to a specific visual outcome. Fine-tuning, inpainting, image-to-image, control nets, all these things help. It's easy to make something stunning, but difficult to make your vision come out of the machine. It's a tool, like any other. It'll require rethinking what's better to do with generative AI and what's better to shoot directly. But direction, cinematography, and acting are about extreme control to achieve a specific outcome. Generative AI doesn't help that. However, what I've seen it really do well is take imagery and add detail and nuance, interpolate scenes, and other feats that will likely make post production much more powerful, requiring fewer reshoots, etc. Creating CGI will be simpler, especially bootstrapping, and I will bet it'll be extremely heavily used in preproduction to create visual storyboarding.

But completely replace humans? The camera didn't replace artists, and if you've seen a tunnel being bored, there's no lack of humans there. A strong back isn't as important as a strong engineering mind, but that's a pressure on labor that's been building for centuries.

These new tools won’t change the dynamic, it’ll just up the game.


> First, labor laws aren't the answer to technology.

Laws have always been created to deal with new technologies and their fallout. Labour laws are no different.

You should read about the musicians strike of 1942 [1]. When the record player was invented some people thought 'why pay a singer for more than one performance if technology allows you to record them and sell infinite copies'? No one would try that today with Taylor Swift for example.

[1] https://jacobin.com/2022/03/1940s-musicians-strike-american-...


No law was created in your link, except ones that made it impossible to create laws like the one you're implying was created.


By "try that with Taylor Swift", you mean I can trivially torrent a copy of The Eras Tour?


What's the market for a talented saxophone player in a major metropolitan area? It's not zero, but thanks to Spotify, it's not as big as before the advent of recorded music. Recorded music didn't just up the game, it completely changed it. Now you have ultra mega stars who make an economic impact on an area when they tour, but the market for a talented saxophone player in a major metropolitan area just isn't what it once was. I'm not arguing for labor laws, but we'll just have to see where this goes.


I would argue the biggest issue for talented musicians is economic inequality and the rapid increase in the cost of living. If you want to do music as a main job you need to survive on your partner's income or have generational wealth. There's just no room for someone who is talented to keep themselves afloat playing gigs without having 10 roommates and a day job.


The biggest issue for everyone is economic inequality. You could make the same point about an increasing number of mainstream jobs.

The underlying question is: what are the values of the culture? Predatory selfishness is thrilling [1] for the winners, but ultimately self-defeating - literally suicidal from a sustainability POV, physically and socially.

The neoliberal definition of "freedom" is exactly synonymous with predatory selfishness, so everything else can only suffer.

But it's not everyone's definition of freedom. Plenty of people have other motivations, and this economic model aggressively disenfranchises their values.

[1] In a surprisingly unsatisfactory way.


Er, Taylor Swift?

Your analogy is wrong: Spotify, or really the phonograph, didn't put musicians out of work. The television did, because once that came along, "just" playing well wasn't enough to be a star; on-camera presence became more important.


I think the automobile is what left the saxophonist underemployed. As people moved away from the city core to the suburbs, live music venues suffered. Radio, recorded audio, and TV cemented it, but people would always frequent live music if the venues existed. And they do - where there are enough fans to support them. This is why we have a lot of live music places in NYC, etc., even with Spotify and instant gratification.

But I also don’t think the life of the musician is all that dire, at least compared to the portrait artist and tunnel digger. There are plenty of jazz clubs in any major city, and live music that pays enough to at least supplement a day job is a thing. Even at the height of live music your average musician would have needed a day job to get by.


>Second, if you've done much with generative imagery you know the raw experience is difficult to control to a specific visual outcome. Fine-tuning, inpainting, image-to-image, control nets, all these things help. It's easy to make something stunning, but difficult to make your vision come out of the machine. It's a tool, like any other. It'll require rethinking what's better to do with generative AI and what's better to shoot directly. But direction, cinematography, and acting are about extreme control to achieve a specific outcome. Generative AI doesn't help that. However, what I've seen it really do well is take imagery and add detail and nuance, interpolate scenes, and other feats that will likely make post production much more powerful, requiring fewer reshoots, etc. Creating CGI will be simpler, especially bootstrapping, and I will bet it'll be extremely heavily used in preproduction to create visual storyboarding.

The problem is not that there will be 0 humans needed. The problem is that something that used to take 100 people might be done with 5. And those 95 are still creative jobs, jobs that require education, skill, and grit.

You make a living either by hunting and gathering, or by creating value for others and exchanging that value for the things you need through the generic value token of money. The bar for being able to generate value that can sustain you (let alone a family) was already rising day by day. But with the sudden spike of generative AI, the bar shot up faster than at any time in known human history. With things as they stand right now, the amount of education and experience you need to generate value for others is going off the charts. It will be unfathomable in a few years. Yes, humans will be needed, but not as many to provide what we need. Yet those people will not simply stop existing. What is in the cards for them? That is the problem.


My hope, frankly, is that this nudges us into a post-scarcity world where turning the crank to eat isn't so damn meaningful. A WPA/CCC-style program might be useful, tapping people to enrich the world rather than press buttons for food pellets. But these are fantasies, unmoored from the present. We will see.


Given the current state of the world, I can't view this ending through an optimistic lens. At least not in the US.


That's exactly what they do with cameras: you aren't allowed to bring them, and in venues where you're legally allowed to have them, they confiscate them. You can't take a photo of the Eiffel Tower either.


I disagree; we are clearly in the end phase. Generative AI (not necessarily LLMs) is clearly in a completely different category from the things you describe. Suppose it becomes equivalent to a human using all the tools of today, but instantly. What would there be to do? I guess we still need to mine for the natural materials, so there's that.


Portrait painters and tunnel diggers said the same things. Impressionism, expressionism, modern art, etc. weren't obvious outcomes. The thing AI lacks, and will lack for the foreseeable future as we have a much weaker grasp on its mechanisms, is agency, and at the root of that, creativity. Most movie making today is a lot of mechanical craftsmanship, and a lot of that will probably disappear. The bar for good will be raised dramatically. But the demand for the best won't disappear; it'll likely amplify as the stakes get higher.

I will wager you we will see a dramatic realignment in the sorts of work that go into production, but the demand for those skills will be so high and the value so immense that we will see a profusion of new work, better output, and more employment in the arts.


Creatives are already paid very little, so if what you’re saying is true that is not a good sign despite your enthusiasm.

Most of white collar work is very much rooted in the “mechanical” tasks.


And those tasks will emphasize the creative in the future. At one time a human computer made a ton of money; today they're a novelty. Mathematics and computer science done with mechanical computers are a much more creative endeavor than rapidly tabulating large numbers in your head.

I do think rote learning is about to seriously take a step back in the world and the future won’t reward the ability to turn the crank nearly as much. The white collar readjustment will be realizing memorizing facts is much less valuable than understanding and synthesizing.


We will have to agree to disagree. The real world is about solving actual problems, not solving arbitrary “creative” tasks. And even if it were, go look at salaries in “creative” professions vs engineering.


"Actual problems" like crypto scams, NFTs, and both mainstream and "new" manipulations of value tokens that don't even exist? Or an industry like health insurance, which doesn't even need to exist?

As for salaries - you can be a hopelessly mediocre engineer, doctor, lawyer, etc, and still earn good money. You can even be a bad professional who is actively incompetent and still earn good money.

The fact that salaries can be hyped so they don't accurately reflect social value is a symptom of the problem, not an explanation for it.


Your examples are strange. Why do you think crypto or NFTs are actual problems I was referring to?

Why shouldn’t health insurance exist? Most lawyers make terrible money. Doctors don’t make a lot of money in most countries.


> It would be like regulating the camera because it put portrait artists out of work, or steam drills because it puts tunnel diggers.

Modern farm equipment put a whole lot of unfree* agricultural laborers "out of work", but most of us consider that to have been a good thing.

*serfs, slaves, indentured servants, etc.


I'm not sure I agree it's hard to create specific things. The tooling, imo, feels like it's in its infancy, but promising. And it can layer upon itself.

Right now ControlNet can do pretty good character pose control, but you have to make the pose. Well… the next step is to generate the pose from a request. Same for faces.
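For anyone who hasn't played with it, the pose-conditioned workflow today looks roughly like this with the open-source diffusers library (a minimal sketch; the reference file name and checkpoint choices are just illustrative assumptions, not a recommendation):

    # sketch: pose-conditioned image generation with a ControlNet (diffusers)
    import torch
    from controlnet_aux import OpenposeDetector
    from diffusers import ControlNetModel, StableDiffusionControlNetPipeline
    from diffusers.utils import load_image

    # today you still have to supply the pose yourself, e.g. by extracting it
    # from a reference photo ("reference_actor.jpg" is a placeholder)
    openpose = OpenposeDetector.from_pretrained("lllyasviel/ControlNet")
    pose = openpose(load_image("reference_actor.jpg"))

    controlnet = ControlNetModel.from_pretrained(
        "lllyasviel/sd-controlnet-openpose", torch_dtype=torch.float16
    )
    pipe = StableDiffusionControlNetPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16
    ).to("cuda")

    # the prompt steers content and style; the pose image pins the posture
    image = pipe(
        "a detective in a rain-soaked alley, film noir lighting",
        image=pose,
        num_inference_steps=30,
    ).images[0]
    image.save("posed_character.png")

The "generate the pose from a request" step described above would basically replace the OpenposeDetector part.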


When you're editing in an NLE, you can pick the exact frame where your transition begins. When you're editing in a DAW, you can change the volume by tiny fractions of a decibel, for a single frequency. Just about every piece of mainstream media (e.g., excluding social media) has had every single frame, and every second of sound, polished to absolute perfection (or at least one sound engineer's or editor's perception of it). How is AI going to attain the level of polish (via control) we expect from mainstream media?


I’m not clear what you mean.

If you're imagining that AI tools mean it has to be a straight text -> output model, that's not correct. They are very much iterative transformations.


There's an enormous gulf between just being iterative and control down to the last decibel/frame. Professional creative software is designed around this fine-grained control.


Ok but what’s your point? This is not outside the capabilities of AI enabled tooling. You can adjust whatever you want, at whatever granularity you want.

The goal is of course to make this unnecessary, because adjusting a single element doesn't benefit much from AI. But there's nothing saying you can't?


"Not outside the capabilities" isn't a strong argument, we want amazing, ten times better than the current tools, e.g., to use the language typical when describing what it takes to beat an incumbent tool.

The point is to look at the patterns in how creative work is performed: it's generally about fine-grained control. That's the commonality across a musician, film editor, animator, music producer, photographer, painter, etc. If you're making the claim that AI is suddenly going to replace this need for fine-grained control, then I'd put the burden of proof on the supporter of the AI tool to describe why fine-grained control suddenly won't be important.


I’m not making that claim. I’m claiming that AI tools can do fine grained too.


I thought that's what you were saying here? "The goal is of course to make this unnecessary, because adjusting a single element doesn’t benefit much from ai."

(And re AI also doing fine-grained yeah addressed that above: Just showing up isn't impressive, it needs to be absolutely amazing at fine-grained control to be a real creative tool in my opinion.)


Ugh. Ok

1) you can go as fine grained as you want. This is not a limitation.

2) going fine grained beyond a certain point is stupid. You don’t edit individual pixels in high resolution art. The tools allow it, but it is not a part of practical workflows.

3) the goal is of course to make fine-grained adjustments unnecessary, because the fine-grained details get well executed by high-level operations. Same as in (2).

If you still think this is worth debating I’d request a specific example of a practical detail that you think is fine grained and not servable by AI.

A frame of a film is just an image and is easily handled. A decibel is not a thing.


This of course all comes down to a philosophy of creativity and art. In my philosophy of creativity and art, fine-grained detail is where the art gets made. You can disagree with that, but I'd point out, as evidence, that the software and other tools creators use are designed to facilitate fine-grained edits. So if you think that fine-grained edits aren't where the creativity and art come from, then I'd ask why you think all the tools are designed to offer that level of control. (E.g., DAWs, NLEs, 3D modeling/rendering, photo editing, vector editing, etc.)

There are also "coarse" tools, and a market for them, e.g., iMovie and Canva, and I'd put AI tools in that same category of course. There's nothing new about coarse tools, but they're generally for a different audience, and for a different production level.

I.e., another way of saying this is that the important thing is not "high-level execution". Steve Albini didn't become a sought-after producer by being technically great. In high-level creation, technical excellence is usually table stakes. He became sought after by having a particular vision for his music production, and that vision was executed by having fine-grained control over every single audio element of what he was working on.

You can believe AI will negate the need for all this, but my point is simply that would be a mighty large coup to how high-level creation works today, ergo extraordinary claims require extraordinary evidence.


Blah blah blah. Please point to a specific example. Ideally for art or video so we have ai reference points


People make fun of this because it's Tyler Perry, but go to Atlanta and see how many people he employs. Both directly and indirectly.

All the major tech companies are laying off due to their ambitions/utilizations of AI. This is a major employer in a major city seeing the writing on the wall.


> All the major tech companies are laying off due to their ambitions/utilizations of AI.

Citation needed.

Tech companies are having layoffs because they spent the last decade under the assumption that money was free and profits were optional. Now they are hitting reality and investors/shareholders are demanding returns. AI has nothing to do with it. Heck the wave of layoffs started well before ChatGPT even launched. If anything AI is actually fueling another bubble and extending the charade. Look at the tens, maybe hundreds of thousands of people that suddenly work "in AI" now.


Citation Given:

Google's layoffs in ad sales are attributed to their leaning into AI ads

https://www.theinformation.com/articles/google-plans-ad-sale...


"are attributed"

By who? What is the source?


The Information. They do credible reporting. Did you read the article? I can't make them reveal their sources.


This feels more like a play for attention than an actual doomsday announcement. The guy is savvy and knows that being in the spotlight right now with AI next to your name is a win.


100% this. He is going to be the first one laying folks off. The interview specifically mentions eliminating the need to shoot on location, and his use of AI makeup. I guess he didn't feel bad about the makeup artists he put out of business.


Well, if Tyler Perry, the greatest technology prognosticator of our time, thinks Sora will supplant human filmmaking, it must be so.


You should probably go to Atlanta and see how many people this man employs both directly and indirectly.

He doesn't have to be "the greatest technology prognosticator of our time". He is a large capital owner and employer.


He's a very, very successful film producer who rose through the industry against significant odds. Your sarcasm is misplaced.


And if a very, very successful AI researcher who rose through the industry against significant odds got headlines for talking about the mechanics of film production, I'd discount him, too.


It'll take some time for Sora tech to be production-ready.

Big difference between getting it to generate a plausible snowy Tokyo scene vs. actually controlling the detail and style in a manner that is consistent and controllable.

No point in having a movie where every perspective and angle has a slightly different style


Sure but an $800M investment is supposed to pay off over 10-20 years. It seems like the industry will change in more like 1-2.


Yeah that’s fair.

I wouldn’t want to be in that biz right now


> No point in having a movie where every perspective and angle has a slightly different style

Michael Bay begs to differ ("Transformers: The Last Knight has aspect ratio changes throughout the film, often randomly, mid-scene and between shots.")


Yeah I don’t think openAI can get away with a “just add more explosions and lens flare” strategy :p


I fear we are going to slog through decades of low-quality content that looks amazing, but where the meat of the story is just empty. The truth is that most productions try to tell a human story at their core, and having AI generate it off a small description is going to leave it feeling empty.


It could have the exact opposite effect.

If movie-quality content is easy to generate, then what is left to separate the good from the bad?

Probably story, characters, originality, etc.

Generative models could, across the board, commoditize the aesthetic leaving only substance as a discernible trait.

This could be positive for the world in both encouraging people to create things of substance and consume things of substance.

When instagram pictures are so easily made perfect, why bother posting? Or watching? What credit do you get for looking the way you do?

When shallow political arguments are made in a comment, why not just assume they are computer generated?

We could become blind to vapid content like we are blind to billboard ads.


What happened to the quality of journalism once it went online? When news media had to compete against bloggers and social media. It went into the toilet.

What happened to the quality of video games once game frameworks became easily available? The market got flooded with low-effort garbage.

Youtube and Tiktok and other social media platforms make global video distribution easy, but most of it is uninteresting and uninspired, when not low-effort clickbait or ragebait. Making things easy doesn't lead to more substance, only exponentially more of what sells, which is shallow, low effort aesthetic.

What point is there in encouraging people to create things of substance when that substance will only be consumed, commoditized and shat out by AI? Every time you see authentic, human generated art online now, it just gets assimilated. It's a sign of something forever tainted and blighted from the collective human soul.

We can't be blind to vapid content when the systems and algorithms that generate our culture optimize only for it. The only hope is that somehow AI fails to follow through on its promise. Maybe it will never be able to create content of high enough quality, quickly enough or at a low enough cost. Given the evolution of technology, and the capability of LLMs in particular, that seems unlikely. Only another AI winter can save us. As long as AI remains deeply integrated into all possible electronic and online means of communication, creativity and expression (as it seems destined to be) then human expression just feeds the machine, and becomes as impossible to find in the ocean of machine-generated slime as non-commercial content on Google.


This was already the case for human made art before? The vast vast majority of 'art' online, including youtube videos, wasn't even worth spending a few seconds looking at in 2021.

So we go from 98% not worth looking at to 99.9% not worth looking at, or whatever the figure is.


Given the choice between garbage made by humans and garbage made by a machine, I'd still prefer the former over the latter, even when the AI stuff appears more technically polished.


Most of what is made is not that good. For every Shawshank Redemption we have 20 'Leonard Part 6' movies or worse. People just do not realize how much content there is out there in the big-budget space. In the low-budget space that ratio is even worse. Most of it is not good. We are going to see the volume of low-effort items cranked to 11.


And the Shawshank Redemption movies will stop being made. Or if they are being made, they will be ignored.

I read a blog today and it was high-quality long form tech journalism - something I haven't seen for a long time, tucked away on a no-mark site in a far corner of the Internet.

I didn't agree with all of the content, but it was so good to see something written at that level. 50 years ago it would have been normal for tech publishing. Today most tech journalism is gossip and clickbait. It's not thoughtful and referenced long form analysis of where the industry may be heading, written with original critical insights.


I picked that movie because it is considered one of the best movies on IMDb. However, it did very poorly on release; it was years later that it did much better. Gems are made, but they do not always get the recognition they deserve. With AI we are about to get a lot more rubbish to dig through. However, we may be surprised too. At this point it's kind of too soon to tell. But my gut reaction is 'a small bit of good, tons of bad'.


Anyone who's dealt with multiple 9s knows how big of a jump 98->99.9% is. This isn't making the point you think.


I wanted to respond to this because there's some truth to it. However, I think the mechanism of action is more layered and the outcomes varied so that both can be true.

So, yes, there are a lot of new video games because it's easy. It's also true that the most aggressively marketed and therefore quite popular games are bad. Mobile games now make up over half of the gaming revenue. This is a problem not of volume but of incentive. These popular games have created a cycle of [heavy monetization > more ads > more popularity]. They exploit people. This is a problem but a different kind of problem.

On the other hand, the indie games on steam have never been better. They're extremely creative, and fun, and they invent. They riff off each other. And most new things in gaming come first from here. The only thing not healthy about it is the lack of money they make. These games are so cheap that they have a hard time paying the developer's rent.

Platforms and cynical games with better marketing and monetization have stolen their lunch and poisoned their audiences.

The point is that bringing down the barrier of creation did allow much better creations. However, marketing and exploitation ruined a lot.

I won't go into it as deeply, but you can imagine the same pattern played out everywhere else.

YouTube has great content and would have more if it were more discoverable. It's hard to find because the algorithm, built to maximize marketing dollars, suggests not what we tell it we want but what it thinks will keep us watching. It resists helping us. Because marketing money.

With these new generative models, we can imagine decreasing money needed to create but we will all compete for the same finite source of attention. Like with games, the good stuff will still be there, but it might be hard to find.

A positive sum game (creation) turns into a zero-sum game (attention auctions) with negative externalities (ragebait, mobile game addiction).

Those that control who gets attention will be able to soak up all the gains from the positive sum game. It's already been happening with app stores and google ads. And there's no reason it shouldn't soak up all the margin gains.

The enemy is not bringing down barriers - that's the good stuff. It's reining in marketing and attention platforms so that we can all benefit from these creations, instead of the benefits being greatly attenuated and, in some cases, made negative by the attention gatekeepers.


This is already the case with animated movies for kids.

When I was young, there were not many, but most of them were amazing. And there was a real story. Now, on Netflix there are thousands, but most feel completely empty and stupid. After recently watching one with my kids, I'm much more careful in my selection.

Sadly, it looks like it's the case with everything that will be available on the internet soon.


Amusingly enough “didn’t get released on physical DVD” is becoming a bit of a quality bar; second only to checking if the library has it.

There are some very nice animated movies coming out of smaller countries - but finding them amongst the drek is the hard part.


This describes much of Hollywood's output for at least the last 10 years, probably a lot longer. You'd expect removing or lowering the barriers to entry & the cost would result in an explosion of content - there will be a curation problem, but the quality of storytelling will not automatically be worse than the current state.


Yeah, I started writing a post criticising the smeared-out average of popular media output that AI will put on the screen and realised Marvel films have been selling like hotcakes for years now. It seems like people actually love the idea of just mooshing every boring trope into a big cauldron, stirring it up and sampling the pathetic goo that results.


> I fear we are going to slog though decades of low quality content that looks amazing, but the meat of the story is just empty

Already happening without AI, see superhero movies.

What we are actually going to get is millions of stories, the vast majority of which will be terrible but a small percent that will be revolutionary.


Most major shows and movies released lately already feel AI generated since they were written in a corporate boardroom, if anything AI could be an improvement as it will trend towards giving regular viewers/fans what they want.


except for fully ai-generated everything, it'll still be people driving an ai to generate the content, so the advancement in technology will let more people tell their own stories. if you want a tv series of you and your friends getting up to hijinks, you can self-fund and make just that. you could always do that before Sora, but if the costs are knocked down an order of magnitude or two, it becomes a more attractive proposition to a wider group of people.


When film transitioned to video in the early 90s and many film school projects changed from $40K to $4K almost overnight, we didn't see a mess of new film makers because making good art is hard and almost always requires study and practice, even if the process is inexpensive (Picasso's brushes and black paint cost very little and were not difficult to acquire, it was his skill that was difficult to acquire.)

We're not going to get better movies and shows by unleashing the TikTok generation on movies and shows. We'll get more, but not better, because they are simply not capable of better. They don't have the training and the practice, and it doesn't matter how cheap the tools are if you don't have the ability, the skills that come with an education and a lot of trial and error. These skills are absent in all but the rarest of untrained persons, so your friend's tv series is going to be shit, along with any tv series or movie created by people without the skills of a trained tv or movie maker. We don't need more shit. We have enough from Disney, Apple, and Netflix, and I should also include YouTube and TikTok.

Wider is not better in art because art is a developed skill, not a party trick. As someone who went to art and architecture schools, I promise you that the camcorder did not bring a grand new wave of democratized citizen cinema. It just didn't, and neither will so-called A.I. because, once again, art is a skill, not a trick. There are no shortcuts and no secrets to bypass learning and practice.


I didn't go to art school but that's hard to take. things like the blair witch project, Napoleon dynamite, the original mad max, and rocky were made on shoestring budgets and were the result of, yes, art skills, but also camcorders. not to mention that there is, actually, good stuff on YouTube, some of it the product of artists with art school degrees. there's gonna be a ton of ai-generated shit out there, no doubt about that, but it also makes things more accessible. But okay, what I'm hearing is that it's going to be cheaper for Hollywood to pump out really good ones, putting people out of a job until they can retrain, from carpenters who were set builders to vfx artists, and Sora is just gonna let the low end pump out bad stuff while the high end will maybe use it to still pump out good stuff. in today's attention economy, I don't know what that means. there's a whole lot of SoundCloud out there and a lot of it's bad but some of it's good, and, like you said, it's a skill not a trick, so there's still going to be cream that rises to the top, even in a morass of ai-generated dreck.


There are ways around this. You could use a negative prompt of "empty, two-dimensional" to generate a story with more depth.


>but the meat of the story is just empty

I assume you've never seen a Tyler Perry 'Madea' movie before


The movie franchise that grossed $570 million US?

https://www.boxofficemojo.com/franchise/fr4115107589/

I think I can see where the meat is.


Wait, I thought we were talking about AI. You just described Marvel Studios


I don't agree. I think it will create new experiences, such as pairing it with the Vision Pro and "living" through a movie instead of watching one.

We're unlikely to be watching movies the way we have been within the next 10 years.


Can anyone here estimate what it costs OpenAI to generate, say, a 5 second video using Sora?


At a guess, less than $0.50?

> Pay-per-use: $0.01-$0.10 per second of generated video, based on existing tools and computational demands.

https://medium.com/@christianray.drapete/openai-sora-underst...
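Taking that quoted range at face value (it's a third-party estimate, not OpenAI's published pricing), the per-clip arithmetic is simple:

    # back-of-envelope cost for one 5-second clip, using the estimate quoted above
    low, high = 0.01, 0.10   # USD per second of generated video (third-party estimate)
    clip_seconds = 5
    print(f"${low * clip_seconds:.2f} to ${high * clip_seconds:.2f} per clip")  # $0.05 to $0.50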


Thanks for the link! Although this appears to be more focused on what it will cost the end user, as opposed to what it will cost OpenAI.


Which is your question; I didn't miss the subtle difference (and you pointed it out nicely :). If we assume OpenAI is making a profit on queries, it means that their cost to run this is less than what they're charging, at which point the guess for what they're charging is a ballpark for what it costs them. Training is impossibly expensive; inference is totally cheap.


That is actually a great question.


Just came to say that spending $800M on a sound recording studio expansion was literally insane even before the current AI boom.

Quite simply, there's no practical combination of top-tier hardware that you can buy which will put you even close to that number, even if replicated 8-10 times as the article implies multiple rooms.

That tells me most of the money would be poured into vanity aspects - literally gold toilets and the like - which would impact the ego, not the sound.

It's easy to spend $1M on gear. You would have to work really, really hard to spend $25M on gear. I'd bet that everything Steve Albini has in his two rooms is worth under $10M.


Soundstages are studios for movie production.

The word “sound” implies that it’s soundproof so you can record sound. This makes a soundstage generally much more expensive to build than the alternative, a silent stage.

$800M isn’t crazy if you’re building a dozen of these and all the supporting infrastructure.


Not only are these aircraft-hangar-sized buildings with soundproofing, they're also likely wired for hundreds of amps of lighting, including truss and catwalk systems to be able to position and distribute that lighting.

I could see it adding up pretty fast.


And all of that at Southern California real estate prices and construction labor costs.

Yeah, it's not gonna be cheap by a damned sight.


His studios are in Atlanta, Georgia.


Well, if I had read the interview it quickly would have been apparent that he's talking about video production.

Mea culpa; I honestly thought Tyler Perry was a music guy.


Wait, so you didn't read it and posted this comment?


I did mix up Steven Tyler and Joe Perry from Aerosmith on my side and couldn't see the link with Sora… before reading the article. I feel both ashamed and amazed.


Yes, and by the time I realized I was talking nonsense, the window during which I could edit or delete my comment had closed.

Just owning my mistake.


From the article: "And I think the only way to move forward in this is to galvanize it as one voice, not only in Hollywood and in this industry, but also in Congress."

Assume AI does pan out and widespread change is soon upon us, making a very significant percentage of labor unemployable. So one solution widely touted is universal basic income. Is that even possible? The US government prints money and then collects some of that money as taxes. Remove a large part of the taxes and it is just printing money. Maybe that works? I haven't seen any analysis, especially on the short term changeover. Maybe in the future only corporations and AI's will pay taxes and everybody else lives for free and starts a restaurant for extra cash? Congress today seems unlikely to pay to support people whose skills have become obsolete, but maybe if it happens to a lot of people at the same time...
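For a very rough sense of scale (the population and payment figures below are approximations I'm assuming, and this ignores clawbacks, offsets from existing programs, and second-order effects):

    # crude UBI scale check; all figures are approximate assumptions
    us_adults = 260_000_000   # roughly the US adult population
    monthly_payment = 1_000   # an often-discussed UBI amount, USD
    annual_cost = us_adults * monthly_payment * 12
    print(f"${annual_cost / 1e12:.1f}T per year")  # ~$3.1T, vs. roughly $6T in current federal outlays

Even before any offsets, that gross cost is on the order of half of current federal spending, which is why the question of who pays (corporations, AI, or the printing press) matters so much.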


Yes, but there is a line of people and corporations wanting government money. Right now, near the front of that line is Intel[1], because they feel they did not quite get enough to re-shore. It annoys me, because I recognize it has to be done (moving fabs to the US), but it does seem very... opportunistic.

[1] https://www.pcgamer.com/intel-is-asking-for-an-additional-do...


Long back, when the internet was rare and expensive and SMS cost per message, there were fewer messages, and mostly from people one could reply to. Then, in the time of abundance, when they get generated and forwarded in bulk, I see them get deleted in the same fashion.

I think the same will happen with AI. Once we get tons of autogenerated TV shows and movies, watching 3 minutes of a 40-minute show or 5 minutes of a 2-hour movie will become common. People will bulk-delete unwatched movies/shows and other material from their queue without ever playing them. Or maybe by then a personal entertainment AI bot will be tasked with processing 5,000 hours' worth of media every week and summarizing it for its owner.

The idea that a $5 million movie would generate $500 million in revenue would be hilarious just from a plain economic analysis.


The Electric Monk was a labour-saving device, like a dishwasher or a video recorder. Dishwashers washed tedious dishes for you, thus saving you the bother of washing them yourself, video recorders watched tedious television for you, thus saving you the bother of looking at it yourself; Electric Monks believed things for you, thus saving you what was becoming an increasingly onerous task, that of believing all the things the world expected you to believe.

- Douglas Adams, Dirk Gently's Holistic Detective Agency


Over the past four years, Tyler Perry had been planning an $800 million expansion of his studio in Atlanta, which would have added 12 soundstages to the 330-acre property. Now, however, those ambitions are on hold — thanks to the rapid developments he’s seeing in the realm of artificial intelligence, including OpenAI’s text-to-video model Sora, which debuted Feb. 15 and stunned observers with its cinematic video outputs.

I had no idea Tyler Perry movies made so much as to necessitate such a big studio


He has a large body of work, producing others' films as well as his own, plus a large TV series output.

https://en.wikipedia.org/wiki/Tyler_Perry#Filmography


[flagged]


Well the Left is generally opposed to the bourgeoisie so I don't see the disconnect there, but to call HN "far left" is laughable.


As an aside, this website has one hundred and sixty seven trackers and loading them caused my CPU temperatures to spike so high that my fans kicked into high gear.


A fantastic case of cognitive dissonance. He is afraid that this is going to impact artists and uses it as the reasoning to stop paying artists. Amazing. Can't wait for someone as talented as Mr. Perry to produce another quality movie on his MacBook.


The latest Indiana Jones used AI to make Ford’s face young for a whole lengthy scene.


$800mm = a lot of jobs


"Oh well" \s


I, for one, welcome our new silicon overlords.


Anyone saying:

   - it’s only short videos
   - it’s not fully consistent
   - it is hard to direct to get what you want
…is looking at where it is now, and failing to consider where it will be in literally a year or two.

Five years from now parents will be mashing up Winnie the Pooh and Avatar, starring their children, in immersive 3D, for the Apple Vision Pro 4 or the Quest 7.

Tyler Perry is making a wise choice.


>Five years from now parents will be mashing up Winnie the Pooh and Avatar, starring their children, in immersive 3D, for the Apple Vision Pro 4 or the Quest 7.

in five years, MAYBE expert creators will be able to do this in a satisfactory way, after a lot of polish editing out the rough edges and hundreds to thousands of prompts. And even then we're talking more about a 5-10 minute short, not a feature-length film.

in five years, for a parent with no technical/artistic expertise? sure, some will make 10-second shorts on par with a current college freshman's early film project. minus an understanding of the fundamentals of film, of course.


I really don't know how to reply to this.

Five years ago, there was nothing in existence that could respond well to a prompt like "Write a short story about my six-year-old daughter Emily traveling undersea to rescue her favorite sea horse, who's been kidnapped by the Shrimp King. Make her have several adventures along the way, engaging in side quests and making friends who ultimately help her succeed in her quest"

Today ChatGPT responds to the above with no difficulty, and part of the reason is that it has been trained not just to output reasonable responses, but aesthetically pleasing ones.

There is every reason to believe that video output will be more responsive five years from now than text is right now. Sure, it's possible it will be a little stilted, or a little generic, but it's possible it won't be.

Obviously we're both speculating, but I find it hard to believe that one minute today won't turn into hour-length in five years.


Well there are two issues right now:

1. Text has come far, but I don't think anything other than blogspam sites has heavily leveraged AI as a commercial product. There's a huge gulf of difference between "this looks okay enough that the common user won't question it" and "this is helping professionals make money on products consumers themselves seek out". AI is 90% of the way there, and it has another 90% to go, just like any human.

2. With all that noted, video is a lot more computationally expensive than text generation, with multiple orders of magnitude more information to edit/refine. That will only magnify the problem, since the common user is a lot better at recognizing defects in visuals than in text (where native speakers famously don't process every word while reading).

I'm not saying that this won't massively improve efficiency. But I think we both will agree that this tech won't lead to a large studio making content in months so much as letting a very lean team make content in 3-5 years for much less money when they lay off 95% of the staff.


The only parts of movies or TV that this will be useful for is non-character-driven filler that doesn't move the plot along. It will be great for things like action scenes and car chases, or staring wistfully at the ocean. But they still need actors and a story, and to tie those together seamlessly with whatever is generated. We're very far away from doing that in a way that will look or feel good.

Special effects are nothing new. But we all know special effects don't make a good movie.


Think more about the backgrounds used in shows like Mandalorian. Think about the size of the stage/studio/set that Mandalorian filmed on. It's going to get smaller, and the backgrounds and other details used in music videos, tv shows, commercials and films will be far more virtual than even today.

All the components will still be needed, but the levels and origins of them all will change.

If you go back and look at how the median productions have changed at 5 year intervals (going in the past), it'd be a poor choice to commit $800M to something that's probably going to remain exceedingly fluid for a while.

The SFX industry is probably the most prepared to shift, as it's already screens, pixels, and servers. The days of green dots are already (mostly) over (see the ILM SFX reel for The Creator -- wholesale human detail replacements without tracking dots).

The change will affect everyone else more.


You aren't thinking about generative voice AI or LLMs. Putting it all together, it is not surprising that creatives are afraid they will be done in under 5 years.


Very true, today, but how far are we from something that takes a book as a script and spits out a full movie? We have a huge library of ultra-cheap forgotten novels that will make excellent prompts for killer personal productions once technology allows it. Who will ever go watch a movie if they can upload the book to their server and get the movie in minutes?


Maybe an entire movie can be done on a green screen if the tools are good enough to get the details



Have you not seen Star Wars Eps 1 - 3?


Have you seen any of the movies that have come out after them?


Yes. What's your point? We've moved away from green screen and are now shooting real-time "projections" on LED screens that allow the actors to see the environment and that move in sync with the camera to show parallax in the image.

if you think we're still shooting green screen for anything with a budget, you'd be a bit out of touch


Somewhere in-between Star Wars Eps 1-3, and today's projection movie sets, are a whole slew of movies that were basically all green screen, and weren't as bad as Star Wars Eps 1-3.


> weren't as bad

that's one hell of an endorsement, and not a good one. pretty much sums it up. 300 and Sin City at least styled them to look like the comics they were based on, but the Willy Wonka and Alice ones were all just a bunch of nope from me. these movies just look/feel uncanny valley, and we were just expected to accept it.

compare Mad Max Fury Road. yes, there's some digital additions, but it feels real because so much of it was real.


ouch


Teenagers with Limewire: pirates. OpenAI with terabytes: pioneers. Guess file size matters more than file type in the court of public opinion.


This is a shallow take. Teenagers are copying bits for their own gratification. OpenAI built a fascinating tool that enables other people to create things by transforming bits.

Put another way, one group enables people to make things. The other does not.


How do you know teenagers were not using the content to create their own music?

Funny how BigCorp gets benefit of the doubt. By the way, despite the name, OpenAI and teens are both doing it for their own gratification.


Some people were making really cool remixes! It’s not all or nothing, but if you can’t see the difference between making a tool for others to use, and copyright violations, I don’t see value in continuing the discussion.


>if you can’t see the difference between making a tool for others to use, and copyright violations, I don’t see value in continuing the discussion.

If a tool I used were suspected of containing copyright violations, its maker would get sued. This happens even in software, as Google v. Oracle has shown (among dozens of other cases, maybe hundreds by now).

And lo and behold, OpenAI is getting sued on suspicion of copyright violation. "Tools for others to use" isn't a defense against copyright violation, and never has been.


OpenAI built it for a combination of their own gratification and cold hard cash. What do you think motivates AI researchers? What do you think motivates OpenAI employees now? Sam Altman? Microsoft?


I try not to guess what motivates complex animals such as humans when in an abstract discussion like “what motivates Sam Altman”. Do you have any inside knowledge of what motivates him or are you guessing because you have correlated things you don’t like with a company or individual?


I try not to guess what motivates complex animals such as humans when in an abstract discussion like “what motivates teenagers”. Do you have any inside knowledge of what motivates them or are you guessing because you have correlated things you don’t like with a certain demographic?


Not trying to be inflammatory, but it's not really about teenagers, or about OpenAI's intentions. We can look at what they are doing.

One group is downloading things other people made, sometimes transforming them - but we certainly haven't seen an explosion of remixes at the scale of OpenAI creations. The other group, OpenAI, makes tools that ingest copyrighted material and enable people to make a huge number of more complex transformations than the original "remix" culture, where the inputs are usually quite visible.

FWIW, I don't even really think the content pirates have as terrible a name as in GP's comment. I certainly have no criticism of them, especially considering that's how I got my start in technology. It's fine, it's just not as cool or as widespread as GenAI.


>but we certainly haven't seen an explosion of remixes at the scale of OpenAI creations.

yeah, we have. It's just that there was no Twitter/Facebook/Instagram/TikTok/Vine/YouTube/Reddit, or the 20 other sites that each have more people on them today than the entire internet had 20 years prior. But if you browse the LiveJournals and other relics of the early '00s, those remixes aren't hard to find at a proportional scale. They drove a lot of MySpace, to the point where the post-Tom era chose to pivot toward a music service rather than a Facebook competitor. And a lot of that was possible thanks to being able to easily access rips of CDs.

Ironically enough, the main thing holding back music from being as profitable as photos was the music industry itself. They were so aggressive in shaping copyright and hoarding everything into Vevo that they lost billions as new media took shape. They squandered talent instead of signing it to profit from, tried to remain the trendsetter instead of expanding into or leaning on emergent genres, and surrendered their waning control to subscription services (which consistently remain unprofitable) instead of building a platform of their own to profit from in-house talent and indies alike. So many missteps, and it ends with artists no longer being able to make money from the music itself.

>The other group, OpenAI, makes tools that ingest copyrighted material and enable people to make a huge number of more complex transformations than the original "remix" culture, where the inputs are usually quite visible.

by what metric? It's weird to talk about "remix culture" and argue that AI can transform it further... at which point it's no longer a remix but arguably an original song. Which people already make.

Some artists are fine focusing on remixing; for others, remixing is a step towards building the talent to make their own music, and hopefully the remixes establish a brand people want to follow.

>It's fine, it's just not as cool or as widespread as GenAI.

I don't think many of the people who started their careers in tech from the '90s through the early '10s would be here if "cool" were a prerequisite for their happiness.


Open AI is building things for its own gratification. Open companies build things so that others can create things.


>Open companies

sure do wish we had those.


Stability is more than good enough


limewire teens were pioneers too


What does piracy have to do with generative AI? I don’t understand your analogy.


Useful generative AI is only possible through the same kind of collection of copyrighted material that was previously deemed illegal. I imagine a model trained solely on material explicitly marked for AI use would be significantly worse.


On the flip side - almost all (say, 99.99%) of human engineers, artists, technicians, scientists, etc. have mental models trained on copyrighted material.

Nobody (to my knowledge) has ever said that you can't train on copyrighted material - what you aren't allowed to do is copy or directly plagiarize. Something all the generative systems are going to great pains to prevent where possible.

Are they doing a perfect job - nope. But they'll get better, and this is good - copyrights are supposed to prevent replication, not use of their material.


Lots of people have said that it's illegal to train on copyrighted material without a license to it.

Also, performance licenses for movies, plays, recorded music, and copyrighted scores are all required. The lack of copying is not relevant there; the performance alone can be infringing.


I'm intrigued - can you point me to any credible commentator or article that makes the argument "it's illegal to train on copyrighted material without a license to it"? That seems entirely contrary to the spirit (as I understand it) of copyright law, which is that it grants you a right to copy.

I like that you bring up music - almost every musician you have ever listened to (some exceptions of course) developed their talent by learning from others - chords, bridges, etc... And I'm just as certain that close to 0% of them had a "license to learn" from their material.


I'm talking about ML training. I think human training is expressly covered by fair use (i.e. copying for educational purposes). Sorry for the confusion, I misread your comment.


>Are they doing a perfect job - nope. But they'll get better

I lost all optimism in tech's promise that "they'll get better" quite a while ago. No, it's time to regulate them before they burn the bridge this time.


It uses a lot of copyrighted content without permission.


What permission would be needed? It's READING it.

Are there problems when it reproduces it verbatim? Absolutely, but that's not what the copyright maximalists are talking about. They're saying it's a violation of their right of reproduction when someone simply reads their work.

I remember when people were upset that someone would (gasp!) link to their site without permission. Especially if it was a "deep link".

Expanding copyright in this novel way is obviously going to lead to a whole slew of problems, not least of which is ensuring regulatory capture by the largest of the large, simply because only they will have the money to license reading.

It's a shakedown by industries that no longer have a viable business model.


>What permission would be needed? It's READING it.

and then storing it in a database, as shown by the ability to nearly replicate images with enough prompting before they band-aided a fix over it (which does not remove the material from their database). It's clearly not just "reading". You can argue the same about a human mind, but for now it's a lot easier to peer into a mind made of code (and honestly, by the time we can accurately read brainwaves, LLMs won't even be in the top 10 of ethical concerns anyway).

All that aside, web scraping has been legally contentious for over a decade. This mass scraping for commercial LLM usage is honestly a terrible look for an already dubious practice.

>Expanding copyright in this novel way is obviously going to lead to a whole slew of problems, not least of which is ensuring regulatory capture by the largest of the large, simply because only they will have the money to license reading.

It's probably for the best, since at that point at least the owners of the data are getting paid (though there are other grey areas to iron out, especially user-generated content being sold as if the site "owns" it while the site remains legally exempt from being sued for hosting it). The opposite outcome just means the corporations win indirectly instead of directly, with less money flowing around. A company that can outspend the competition on licensing can also outspend it on hardware to process faster, scrape more, and polish the final results. There's no endgame here where the corporation loses and the indies win, short of some absolutely radical policy changes.


Generative AI is computational plagiarism of diffuse, but still copyrighted, sources.


> computational plagiarism

Plagiarism is copying without attribution. Transformative works do not qualify as plagiarism.


when does it credit or attribute the who/what/when/where of the data it copied from?


You don't need to credit for a transformative work. LLMs don't regurgitate like you think they do.


> LLMs don't regurgitate like you think they do.

"Copilot has been found to regurgitate long sections of licensed code without providing credit — prompting this lawsuit that accuses the companies of violating copyright law on a massive scale."

https://www.theverge.com/2022/11/8/23446821/microsoft-openai...


Like I said, they don't regurgitate the way you think they do. That doesn't mean they can't regurgitate data when they're overfit to the training data.


>they don't regurgitate the way you think they do.

Copyright is about the outcome, not the process. Short of edge cases where two contemporary inventors in different parts of the world arrive at the same novel idea at roughly the same time, the method doesn't change how we interpret infringement.

So the fact that it's capable of doing it at all is enough to invite legal scrutiny.


[flagged]


For someone not from the USA, what's a culture war movie? Are you talking about America being the good guy in all the movies? Or the thing where there are more black and gay characters now and conservative people don't like it?

More importantly though - why would AI change the prevalent ideology in movies? Won't the same people who already make movies keep making them, just using AI?


“Culture War” is a dog whistle for any movie that includes women, gays, or blacks. Notice the comment you’re replying to was written by a throwaway account.

There is no culture war in America. There is only a group of people who feel threatened by the existence of content that isn’t targeted squarely at them. There is only a group of people who are terrified that, if they aren’t the majority voice in absolutely everything, they will end up being treated how they treat out-group members.


I think it's painfully obvious that there has been a shift toward making entertainment that is, above all, inclusive and preachy rather than entertaining. This is most glaring in the context of historical dramas. No, I am sorry, but the Victorian English aristocracy did not consist of Africans, Indians, and Southeast Asians. There were no acceptable forums for the 'yaas queeen slaaayyy' attitudes of today. Queen Elizabeth wasn't black. Hannibal wasn't Sub-Saharan African. These productions don't even try to be historically accurate. They're just minority power fantasies: poorly dressed-up shows with scenarios in which the minorities prevail over the racist white sensibilities of the day. It's genuinely pathetic.

>Notice the comment you’re replying to was written by a throwaway account.

And this brings up the next point: there has never been a time in media entertainment when both the industry and its professional critics have been so openly hostile, so fervent, so downright dogmatic in insulting and hunting down their detractors to punish them for holding the wrong opinion. They encourage their audience and supporters to go out of their way to harass and silence anyone who disagrees with their content. The fact that people don't want to be associated with that toxicity doesn't make them cowards or racists. Terms like "dog whistle" have become blanket dismissals of any criticism. You are attacking the speaker rather than earnestly trying to address the argument. You are dismissing an opinion because it comes from a throwaway account. Nobody is terrified of your shitty entertainment. They're terrified of losing their jobs because people can't take an insult over the internet and have to make it personal when their favorite manufactured-outrage garbage show gets criticized.


I use a throwaway account because any talk whatsoever about these things is enough to get fired from most left-leaning startups. So deal with it.

Original comment was upvoted but still got flagged, which is evidence of exactly the type of sensitive bullshit I’m referring to.


>which is evidence of exactly the type of sensitive bullshit I’m referring to.

disagreement =/= sensitivity. This kind of stuff is what many come here to avoid, because ultimately it leads to flame wars, and half of HN's policies and guidelines more or less come down to "don't flame".

You can talk about such issues without being flagged, but you need to actually talk about them, not sling mud. Maybe not without downvotes, because I'm not going to pretend HN doesn't have its kneejerk topics, but flagging, a good 95+% of the time, happens for good reason (at least on comments).


Tolerance for everything but intolerance?


Congratulations, you have rediscovered https://en.wikipedia.org/wiki/Paradox_of_tolerance


>There is no culture war in America.

ehh, the culture war is by far the most annoying part of modern internet discussion. Casting it off as "one side is right, one side is wrong" is exactly why we have such "wars". No war has ever had a side that rationalized itself as the villain.


A war implies there is something at stake. There’s not. Log off. Go outside. Volunteer in your community. Hug your kids. Make love to your partner. And remember these two immutable facts:

(1) The terminally online will always be outraged about something.

(2) You have a finite amount of time on earth and then you’re dead.


Basically, Americans have become obsessed with "diversity", so now Hollywood tries to hit an exact quota of races, sexualities, genders, etc. in every movie.

So each scene must have at least 1 Asian, 1 African American, 1 non-binary person, and so on. This takes priority over story building, acting, and all else, and thus the majority of these movies turn out to be flops.

Additionally, it arms the right (i.e. conservatives) to mock the left, and in turn the left (i.e. liberals) fires back, so we end up with both sides calling each other nazis, extremists, woke, etc., all of it amplified by social media.

It is both comical and sad at the same time.


[flagged]


We're giving you a better life, so we're allowed to abuse you! Sounds like a great society to live in.


Movie industry is $77B, television is $300B, and video games are $250B.

Movies are likely shrinking, since their revenue was theaters and movie sales, and we're all moving to subscriptions, which prioritize TV series and short-form videos on TikTok.

Scott G was guffawing at the Oscar kerfuffle because movies don't matter anymore and it's a shrinking industry. To some degree he may be right.

So independent of AI, movies are in for a rough ride. Which makes me sad: I love the arc of a movie in an engrossing 2-3 hour chunk, and movies usually have much more interesting cinematography and better production value than TV. But I also like manual transmissions and writing letters, so I accept I'm a dinosaur on many fronts.


In what way are Tyler Perry’s movies culture war movies?

They’re not “woke” at all. They’re all incredibly conservative and Christian. They’re just made for a black audience, which has clearly made Tyler Perry very, very rich.


Hm. Doesn’t AI make it easier to crank out any kind of movie?


Not a novel one.



