I know I'm in the minority here, but let me explain why it's not terrible audio mixes or bad actors.
The overall trend in TV and movies has been towards greater realism in all aspects. Less stage-audience sitcoms, more single-cam. Less "look at this actor" and more "look at this real person".
Actors aren't mumbling, they're speaking how real people speak. Trust me, screen actors have more stage training than ever -- they know how to enunciate but if they do, the director tells them to stop because they seem like they're giving a fake theatrical performance rather than being a slice of real life.
And because more and more people have huge TVs with great contrast, 5.1 surround systems, or are watching with AirPods with spatial audio... TV is becoming more film-like both in range of brightness and loudness. For many people (like myself) this is a godsend. It's like I'm at the cinema every time I watch an hour-long drama. It's amazing.
We could go back to low-contrast everything and overly enunciated actors, the way sitcom TV was, designed for small screens and terrible speakers. But that just feels so... backwards and limited and fake.
HOWEVER, I do firmly believe that modern TV content should be made more accessible, and that it's high time to build adjustable "end-user compression" into all TVs and video players. Let the user apply automatic volume equalization so quiet parts are just as loud as the loud parts when you're watching TV in your living room with lots of activity around. (And with 5.1 signals, you can even always keep dialog louder than sound effects and music.) And brightness equalization so you can see what's happening in the dark scenes of Game of Thrones when you're in a sunlit room.
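If you're curious what that would actually look like, here's a toy sketch of boost-only leveling in Python (my own illustration, not any real TV's firmware; it assumes mono float samples in [-1, 1], and a real implementation would use proper attack/release smoothing, the way existing "night mode" DSP does):

    import numpy as np

    def level_quiet_parts(samples, rate, target_rms=0.1, window_s=0.4, max_gain=8.0):
        # Short-term loudness: moving RMS over ~400 ms windows
        win = max(1, int(rate * window_s))
        power = np.convolve(samples ** 2, np.ones(win) / win, mode="same")
        rms = np.sqrt(power) + 1e-9
        # Boost-only: lift quiet stretches toward the target, never duck loud ones
        gain = np.clip(target_rms / rms, 1.0, max_gain)
        return np.clip(samples * gain, -1.0, 1.0)  # hard limit to avoid clipping

You'd run something like level_quiet_parts(audio, 48000) on the decoded track. The point is that boost-only leveling preserves the dynamics of the loud parts while rescuing the quiet ones, and the user could dial target_rms and max_gain to taste.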
> Actors aren't mumbling, they're speaking how real people speak. Trust me, screen actors have more stage training than ever -- they know how to enunciate but if they do, the director tells them to stop because they seem like they're giving a fake theatrical performance rather than being a slice of real life.
But if it translates into me not being able to understand them, that's less realistic, because in a real-world scenario I'd be able to understand them, since our ears are adapted for the real world. A real-world slurrer is about as easy to understand as an enunciator through movie speakers — at least, with how the meth-addled mixers do it now.
Bottom line: speech being incomprehensible is not realistic, and is not accomplishing some wonderfully artistic gritty realism.
Edit: To put it a different way: they may be speaking the way real people speak, but once you put it through a speaker, it will translate into something less comprehensible than that speech would be in a real-world scenario, and thus less true to how that situation would be in real life.
To the contrary, it is realistic -- no, you wouldn't understand them in real life either. Movie speakers aren't slurring anything, as any audio engineer could easily prove to you.
My mother complains about how she can't understand anything when there are Irish or Scottish or even northern English characters in a show.
Decades ago, in an American TV show those characters would have been speaking an understandable American accent with just a few accent "suggestions" to indicate they were "foreign".
Today, it's considered absurd if they speak in anything else but their actual full accent.
It's realism. My mother can't understand it and has to use subtitles. I understand it all perfectly, but I've traveled a lot and learned other languages.
But that's what subtitles are there for. And not all TV/film has to be made culturally accessible to everybody through fake ways of speaking.
>To the contrary, it is realistic -- no, you wouldn't understand them in real life either.
Yes, I absolutely would, because, in practice, people speak in a way that others can understand them, or else they have to quickly adapt. (If they’re not comprehensible, there will be a contextual reason why.)
>Movie speakers aren't slurring anything, as any audio engineer could easily prove to you.
That wasn’t what I was saying. My point was that conveying sound through speakers has inherent differences from a person actually being there, and our ears/auditory processing are optimized for the latter, taking advantage of things that aren’t present with speakers. So playing “the same” speech is going to be inherently less comprehensible, requiring some kind of compensation.
These brilliant sound engineers, as judged by actual audiences, are turning comprehensible speech situations into incomprehensible ones, one way or another.
>My mother complains about how she can't understand anything when there are Irish or Scottish or even northern English characters in a show.
That's a separate issue, where there are genuine cultural differences between the listener and the situation. That's not what's happening in Tenet.
> But that's what subtitles are there for.
No, it's not. Most sound engineers would consider it a failure if someone had to use subtitles for their own dialect, most cinephiles consider the presence of subtitles a failure in itself, and everyone agrees that having to read text off the screen to follow the dialog worsens the experience.
I don't know what to tell you... but in real life people misunderstand each other all the time. And don't adjust.
Listening to dialog in a movie theater is generally crystal clear in terms of audio engineering, coming from dedicated center speakers with very few audio artifacts due to the space. Directional sound waves are directional sound waves, and no, there's nothing our ears are optimized for that isn't reproduced by speakers, for dialog coming from a few feet or more away. Muddiness is introduced when the mix is downmixed into stereo on crappy TV speakers in living rooms with a lot of sonic reflection from hard surfaces. If you listen with spatial-audio AirPods with noise cancellation, for example, you'll get something much clearer, akin to a theater experience.
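To be concrete about where the muddiness comes from, here's the standard 5.1-to-stereo fold-down sketched in Python (my own illustration, not any particular TV's firmware; the channel arrays and the center_gain knob are assumptions for the sketch):

    import numpy as np

    def downmix_51_to_stereo(L, R, C, LFE, Ls, Rs, center_gain=1.0):
        # Standard fold-down: center and surrounds at -3 dB (0.7071), LFE dropped
        c = 0.7071 * center_gain * C
        left = L + c + 0.7071 * Ls
        right = R + c + 0.7071 * Rs
        # Normalize so the summed channels don't clip
        peak = max(np.abs(left).max(), np.abs(right).max(), 1.0)
        return left / peak, right / peak

Dialog sits almost entirely in the center channel, and the fold-down sums it at -3 dB together with music and effects into the same two speakers. Raising center_gain above 1.0 is exactly the kind of dialog boost a smarter downmix could expose to the user.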
My point is that subtitles help people who have trouble understanding similar dialog in real life, or in bad acoustics. It's an accessibility option.
When I listen to content with my AirPods, I never need subtitles at all. The audio is perfect. When I watch a movie in a friend's living room with kids running around, we absolutely put on subtitles. Because it's a terrible audio environment. So it all works out.
>I don't know what to tell you... but in real life people misunderstand each other all the time. And don't adjust.
Anyone who consistently speaks incomprehensibly and doesn't correct is soon cut off.
Mishearing does happen, but it's the exception, and having characters repeat themselves would take away from the presentation for little narrative benefit. That's why, when putting it on the big screen, they present the interaction in a way that avoids blowing time on repeats.
So yes, in a sense (that you weren't arguing), you are correct: in real life, there will be more mishearings (and repeats). But IRL, you also don't normally have to just carry on after mishearing something. To the extent that movies force that, it is a departure from realism.
>Listening to dialog in a movie theater is generally crystal clear in terms of audio engineering,
As judged by the repeated complaints of numerous people, to the point that periodicals are covering it, no, it's not, it's really really not. Perhaps you hear things okay but most people don't.
>My point is that subtitles help people who have trouble understanding similar dialog in real life, or in bad acoustics.
But these are people who have no trouble with similar dialog in real life! Hence why this is being covered, and why people are upset. Did you notice the title? "Why do all these 20-somethings have closed captions..." You must have missed the subtext: 20-somethings are not a special class with hearing disabilities. If they can't hear it, you can't dismiss it as "lol hard of hearing" or some exceptional case.
>When I watch a movie in a friend's living room with kids running around, we absolutely put on subtitles. Because it's a terrible audio environment.
But people are using subtitles when there aren't distractions, and they didn't need to do this 20 years ago. I just watched Seinfeld on Netflix, and the dialog clarity was thousands of times better than any more recent production. I turned off subtitles, which was unusual for me. How come it wasn't such a "terrible audio environment" back then?
Because sound engineering practices have regressed, and you shouldn't be rationalizing them.
I'm sorry, but that is just not true. That kind of naturalistic acting has been the dominant trend in movies since Brando. TV may have been more stage-like, but definitely not movies.
You're right that it started with Brando and has only come to TV more recently. But I don't think I said anything to the contrary. I even said "TV is becoming more film-like".
I don't watch a movie to get confused as I may in real life. The director is confused if they don't understand the fundamental rules of cinema - speak clearly and face the camera.
No matter the size of the screen, bad acting is bad.
> the fundamental rules of cinema - speak clearly and face the camera
You're literally describing the fundamental rules of cinema... in the 1940's. Back in the studio system, that was exactly what they did; to speak that clearly, they even used a fake "transatlantic accent" with heightened enunciation.
And these rules (minus the accent) are still followed to a large degree in middlebrow TV fare like what you watch on ABC or the CW Network. It's also more present in comedies.
But if you're watching a prestige drama on HBO or Netflix? If you're watching The Wire or Succession? If a drug dealer or cop suddenly starts speaking clearly and facing the camera, all believability is instantly shattered.
Your rules for "bad acting" are outdated by about seven decades. In fact, "method acting" in the 1950's was precisely a reaction to the stilted "speak clearly and face the camera" acting of the 1940's. By your standard, Marlon Brando would be a bad actor, since "speak clearly and face the camera" is the polar opposite of what he did.