I have a hard time believing this. Captions demand more attention if anything. I can passively "watch" a video by only listening to the audio, but I can't passively read.
I think that the phenomenon described in the article may actually be a symptom of a much deeper social change. Listening and auditory comprehension were critical skills for communication and preservation of knowledge in prehistory. Spoken word is inherently ephemeral. As civilizations developed or adopted writing systems, and the population became increasingly literate, text supplanted spoken communication and oral history in many areas. There are many obvious benefits to that change, but I believe that we also inadvertently sacrificed our listening and auditory comprehension skills in the process over many generations.
Text messaging/SMS is increasingly preferred to phone calls, with many of the younger generations experiencing high levels of anxiety if they're required to call someone.
This is completely anecdotal, but I've also observed several others who are unable to follow verbal navigation instructions - either spoken from another person or even live step by step instructions from a navigation app. They only feel confident if there's a visual representation.
I think we've mistakenly classified that behavior as having a "visual learning style", when it is more accurately the result of our species losing its ability to process auditory language.
I think you're right in absolute terms, that captions require more attention than audio generally, but that's only true if you're choosing one or the other, when most people are doing both.
With audio, if you missed a word because you were focussing on something else for a moment, you have to rewind. If you have captions on, you can glance quickly at the screen and read the word or two that you missed, allowing you to 'recover'.
That allows you to pay even less active attention to the audio, because you know you can always 'error-correct' later.
> I have a hard time believing this. Captions demand more attention if anything. I can passively "watch" a video by only listening to the audio, but I can't passively read.
I do this. I can parse what is happening on screen and read the captions in a fraction of the time the thing actually plays out, then I have a few seconds to do something else. Usually I would be reading HN or something while I tune out the video for a few seconds, before glancing at it again to catch the next bit.
Listening to a video and reading something else at the same time doesn't work for me. When I do that, I usually forget what I was reading or miss something in the video. Interleaving works much better for me.
Also I don't normally do that, just when there's some particularly boring part that I don't want to skip but which doesn't demand my undivided attention.
> I do this. I can parse what is happening on screen and read the captions in a fraction of the time the thing actually plays out, then I have a few seconds to do something else. Usually I would be reading HN or something while I tune out the video for a few seconds, before glancing at it again to catch the next bit.
That kind of context switching really does sound terrible to me. In my case, I speed up the content itself to 3 to 5x so that I can process more information at once, and any boring bits are basically sped through. It's helped me retain a lot more information than simply watching at 1x speed.
This is an interesting take, actually. If we can assume that younger generations spend more time communicating electronically via some form of text messaging, or at least that the majority do, are you saying that this just becomes the "default" form of communication naturally?
Whereas older generations would have spent far more time communicating verbally.
That makes some sense to me anyway. It would be super interesting to see some further studies done, and to philosophise about what the results could mean.
I was thinking along those lines, yeah. Extrapolating that out, I can imagine a future where writing is the only form of language. It'd make for some interesting dystopian fiction, if nothing else.
I'd be interested in some formal studies as well. My thoughts aren't well researched or anything. Just ideas.
I can read considerably faster than people speak. I can keep up with a scene by glancing at the image and reading the text while focused on something else primarily. I can’t do that with audio.
> Captions demand more attention if anything. I can passively "watch" a video by only listening to the audio, but I can't passively read.
If the dialogue is 60dB below the explosions (or even the music at times) and you're doing something like cooking with a 70dB noise floor and 80dB peaks, then you have zero chance of getting the dialogue without blowing out your windows.
I won't pretend I'm fully multitasking, but sometimes I want a comfort show on in the background that I've seen before while I play a game on my phone/iPad. In those cases it's nice to be able to look up and see what was said if you just missed part of it. But I also leave subs on 100% of the time when I'm watching normally, for the obvious reason of not missing anything; it's not just for background TV.
> Captions demand more attention if anything. I can passively "watch" a video by only listening to the audio, but I can't passively read.
> If the dialogue is 60dB below the explosions and you're doing something like cooking with a 70dB noise floor and 80dB peaks, then you have zero chance of getting the dialogue without blowing out your windows.
It's simpler than this. If you're watching a movie/show on a phone or laptop in a shared area, it'll probably be at low volume, so you can't hear the dialog. I've watched entire series with CC for exactly this reason.
With social media videos, CC is even more important because nobody wants their phone making noise in public, and any wise app will mute by default anyway.
I disagree; I have a harder time paying attention to people speaking than reading. Reading is faster, much faster than waiting for people to finish a sentence.
Then again, maybe you're right that I don't process auditory language: when I was in university, I skipped classes and instead read the textbook back to back, without a teacher distracting me by yammering on.
I can browse my phone/watch instagram videos with a NFLX show playing. If something interesting happens I can look up and quickly catch up on the dialogue by reading the caption.
It's not so much the verbal aspect as the memory aspect. Written instructions are more soothing because you can refer back to them verbatim, and they stand no risk of being forgotten.