I telecommuted for years and almost never used a webcam. A shared desktop has far more potential to show something really interesting and valuable.
Watching hordes of people hop on webcams to transmit choppy video of their face and home seems like wasted bandwidth to me.
Edit: addendum: get a headset; transmitting voice clearly with some decent noise cancellation is really important. I buy the cheap Logitech H390, about 25 bucks each.
Been remote for ~6 years now. I've used my camera maybe a dozen times, tops, during that time.
+1 for investing in a good headset. Get something that completely covers the ears, and/or has some noise cancelling. I bought a Logitech gaming headset with a good mic and it's made a BIG difference, especially on days when I'm on calls for 4+ hours.
Ironically, what matters most for your own audio quality is that other people use headsets — because feedback cancellation is an extremely difficult algorithm which often malfunctions, and if someone wears a headset that algorithm doesn't have to run.
But I'm shouting into the void about that. Most people won't adapt their behavior when it doesn't impact them personally.
Having done both for a long period of time, I find that the "face time" really makes a big difference for communication-type meetings. However, it's not as useful for instructional or co-working meetings where you really only care about the screen sharing.
Getting a headset is important, but it doesn't help with latency. That huge amounts of latency are somehow considered acceptable in these communication channels drives me mad.
We optimized for cost by making things packet-switched.
The latency for phone calls on copper wires back in the late 90s was great, but I am definitely glad that a modern video call doesn't require paying, say $3.00/min/person in the way that long distance did back then.
People are getting really excited about the future of WFH, but oh my god will it be awful for those with managers who need to see that your butt is in your seat at the office. Why should we expect this to change when the home becomes the office? Daily stand-ups are now on Zoom!
Any wired headset with a condenser microphone would do; just please don't use Bluetooth headsets. For duplex audio, a Bluetooth headset falls back to HSP mode, which has a worse bitrate than a landline phone.
One of the nice things about it is that you can also filter the output on your end - so if you're in a meeting with someone who is sitting outside near a busy road, or they have a roommate who is gaming on a mechanical keyboard, you can filter the noise out without missing something if that person decides to speak up.
You would think this sort of functionality would be implemented in sound cards but I can't think of any significant advancement in audio processing in years.
Aw man, I quite often find myself wanting audio compression (dynamic range compression, narrowing the difference in loudness of the sound output), but neither Win10 nor Kubuntu seems to be able to do this out of the box.
It would be really good to have on a TV too (no more super-loud explosions interspersed with barely audible speech), and I'm amazed that YouTube seemingly doesn't normalise sound levels at all.
As a relatively new Win10 user I find it crazy that access to mic levels is buried in Device Manager. Also, there's apparently no default output mix...
At least games seem to offer a good range of sound settings.
Youtube does some loudness normalization, but it only changes the volume for the entire video, not for specific sections. It also only makes videos quieter, it will never make them louder. You can see this in the "Volume / Normalized" item of the "Stats for nerds" menu.
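To illustrate, here's a minimal Python sketch of that attenuate-only behaviour; the target loudness value and the function name are my own assumptions for the example, not anything YouTube documents.

```python
def normalization_gain_db(video_loudness_lufs, target_lufs=-14.0):
    """One gain value for the whole video, applied uniformly.

    Assumed behaviour, matching the description above: a single gain is
    computed per video and clamped so loud videos get turned down, but
    quiet videos are never boosted.
    """
    gain_db = target_lufs - video_loudness_lufs
    return min(gain_db, 0.0)  # never amplify

# A video mastered 6 dB hotter than the target gets -6 dB applied...
print(normalization_gain_db(-8.0))   # -6.0
# ...while a quiet video is left at its original level.
print(normalization_gain_db(-20.0))  # 0.0
```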
> As a relatively new Win10 user I find access to mic levels being buried in device manager to be crazy.
Right-click the speaker icon in the notification area at the bottom right, then click "Open Sound settings".
If you want the legacy sound control panel that you may have been used to from pre-Win10, then from this dialog click "Sound Control Panel" on the right. (Or Win+R "mmsys.cpl")
Should be doable in any Linux distro using PulseAudio (which I think Kubuntu does):
With module-ladspa-sink you can apply arbitrary LADSPA plugins to your stream; there are plenty.
And I think ALSA also has native compression support, but I'm not sure.
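For example, here's a rough sketch of loading a LADSPA compressor (the SC4 plugin from the swh-plugins package) as a PulseAudio sink. The sink name and the control values are illustrative guesses, so check `pactl list short sinks` and the plugin's documentation for your setup; the pactl command works just as well on its own in a shell, Python is only a wrapper here so the values are easy to comment.

```python
import subprocess

# Placeholder: pick the real sink name from `pactl list short sinks`.
MASTER_SINK = "alsa_output.pci-0000_00_1f.3.analog-stereo"

# SC4 controls, in order (example settings, not tuned):
# RMS/peak, attack (ms), release (ms), threshold (dB), ratio, knee (dB), makeup gain (dB)
SC4_CONTROLS = "1,1.5,401,-30,20,5,12"

# Create a new sink that runs everything through the compressor
# before handing it to the real output device.
subprocess.run(
    [
        "pactl", "load-module", "module-ladspa-sink",
        "sink_name=compressed_out",
        f"master={MASTER_SINK}",
        "plugin=sc4_1882",   # shipped by the swh-plugins package
        "label=sc4",
        f"control={SC4_CONTROLS}",
    ],
    check=True,
)
print("Select 'compressed_out' as the output device (e.g. in pavucontrol).")
```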
If the theory is correct, Zoom fatigue exists because videoconferencing is a worse experience than in-person conversations. Media quality, latency, the inability to use most of our motor/sensory apparatus, all contribute to micro-frustrations which accumulate over the course of a meeting.
On the other hand, screen sharing with interactive control when working together on a shared task is actually better than sitting next to someone on their computer. In person, I can only talk and point at their screen. With interactive screen sharing, I can click, type, and even draw live on their screen.
I spend hours a day in interactive screen share sessions (quasi-pair programming but not really) and never feel the effects of Zoom fatigue. But when I have to use a product without the ability to easily draw or interact, or have a meeting where it’s just about faces in boxes, I immediately feel extra “drag”.
I’m curious to hear if anyone else has had the same experience.
If this is correct, there may be a way to sidestep the issues of Zoom fatigue with better tools and processes (e.g. don’t talk about work, instead do the work together).
> the inability to use most of our motor/sensory apparatus, all contribute to micro-frustrations which accumulate over the course of a meeting.
I wonder if VR or AR would help with this. If your stand-up was in a VR room then you could look people in the eye, and maybe even move around or gesture with your hands. Wouldn't it be ironic if something mundane like remote work turned out to be the killer virtual reality app?
This is an interesting comparison. I'm using uBlock Origin so I'm not sure how the ads compare. Layout-wise, theconversation.com is a little more narrow, and the links are not as highlighted. But the non-black font is a deal-breaker for me. (I'm sure there's an extension, but I find myself doing Inspect -> Uncheck Font Color a lot!) The only other difference is that PopSci.com seems to have less clutter in the columns, so for me, it's the better reading experience (before I give up and toggle reader view!)
Another difference is that with The Conversation, when I scroll to the bottom, I get a dialog that says, "Get The Conversation’s newsletter to understand the news" and "Subscribe now". Popular Science doesn't seem to have a dialog like that.
>The solution to Zoom fatigue is to eliminate meetings where the purpose is to share basic facts & information.
Save meetings for collaboration, relationship-building, and working on thorny problems.
Just an fyi to avoid derailing the topic...
The "fatigue" the author is talking about is not about frequency of useless and redundant meetings.
Her usage of "fatigue" refers specifically to bad sound quality and some ideas on how to change the acoustic environment to improve it.
Whether everybody in the press uses "Zoom fatigue" the same way I can't say. In any case, fatigue from suboptimal sound environments is how the author of this thread's article is using it.
Thank you! Props for the polite correction of someone straw-manning the OP. Otherwise I likely wouldn't have looked into this one.
I've definitely noticed mental fatigue from the changed aural environment in my house. I normally work from home, but now that my wife is also WFH, I've realized that hearing her on Zoom at the same time as I'm in a meeting (or trying to concentrate) just melts my brain. I can't actually comment on Zoom sound quality, as I have a pair of headphones that doesn't drive me completely nuts, and I didn't notice anything about it in the previous few years of WFH...
"bad sound quality and some ideas on how to change the acoustic environment to improve it."
I hope that if video conferencing stays more popular we will see a lot of progress there. Cell phone cameras have shown what can be achieved with enough computation, so I hope the same can be done for video conferencing. The current solutions are still very primitive as far as sound and image quality go.
I wonder if premium virtual conferencing solutions will pop up. People might pay more to be guaranteed that calls go over dedicated networks and high-performance servers; the service provider could send the participants custom hardware (perhaps even appliances) to improve quality at the endpoints. I'm unsure how much of the quality degradation is in the last mile, which this sort of service wouldn't be able to help with much.
As someone who has worked with remote teams for years and has spent many, many hours in Zooms and the like, my advice is to get used to it because it's great.
I used to dread conference calls. I can't stand listening to a room where I can't see faces. I never know what people are thinking.
What I really can't understand is the people who hate turning on their cameras yet will greet me with a smile and a handshake in person. What's the difference between a camera and being in person?
> What I really can't understand is the people who hate turning on their cameras yet will greet me with a smile and a handshake in person. What's the difference between a camera and being in person?
There's a huge difference! You can move around in-person without making sure you're "in frame", subtle body language isn't lost to the shitty framerates and compression of most webcams, speech doesn't get messed up from packet loss or shoddy echo cancellation code, and you are in a shared environment which makes it easier to communicate without having to STARE at the other person/camera the entire time you're talking to them.
For me, seeing people on webcam is a totally different experience from seeing them in person. I don't get any of the signals I get from being in the same room, so for me it's basically useless and even distracting.
I think with some image computation it should be possible to give a much better video conferencing experience. Add better backgrounds and maybe have several cameras and compute a more 3D image vs the weird angles we see now.
Yeah, I was talking about this with a friend recently. I think that straight video conferencing is now pretty commoditized and cheap to execute. FaceTime, Meet, Duo, Teams, and Zoom are fairly undifferentiated. And you see products like Slack just drop video calling in without a lot of fanfare. I think there's going to be another generation on the very near horizon where we see video software that is much more fit to purpose. One-on-one calls are not the same use case as business meetings, presentations, fitness classes, or classroom situations. Streaming video and audio is easy, but there is a lot of room in the user experience and modes of interaction to build more useful products than just turning on a camera and microphone. Solving things like "eye contact" or the equivalent could be done. And we definitely shouldn't stop at just trying to model in-person interaction; we should really look at what the medium allows that wasn't possible before.
"Solving things like 'eye contact' or the equivalent could be done."
I bet if you had several cameras you could compute a video feed where people have direct eye contact, instead of seeing them staring at a screen during a conversation.
You can track gaze with just a single web cam with something like webgazer.js although it's not super precise. There are companies like Tobii that make dedicated gaze tracking sensors in multiple form factors. The trick is figuring out what to do with that information.
> What's the difference between a camera and being in person?
That's your 'matter of taste'; for me it's 'what's the difference between text chat and voice chat?' (Actually, I find text chat far more efficient than voice chat.) I know many people who won't turn on their camera even though they're not shy in public, and you know many people who prefer voice over text, so I guess it's just down to what you like or don't like?
Most people brush their hair, put on make-up, or wear nice clothes when they go out. I've seen some people online after a few weeks and some of them look quite different at the moment. Some people are embarrassed. Some don't care. Some joke that they had to put on a shirt.
That's a great point. I think a lot of this "fatigue" is due to the huge eye contact problem most video setups have. I have solved it by having a secondary monitor and camera placed far enough back, but your typical laptop camera user can't do this...
All of these video fatigue articles ignore the eye contact problem. From a paper I wrote on mediating over video:
The most important element of body language is eye contact. “Gaze is vital in the flow of natural communication, monitoring of feedback, regulating turn taking, and punctuating emotion. The lack of eye contact shows timidity, embarrassment, shyness, uncertainty and social awkwardness. (Edelmann and Hampson [1]).” Having a camera on top of a monitor creates the appearance that participants are looking down. If you do look up into the camera, you aren’t looking at the other participant’s faces! Our minds are programmed to interpret looking down as gaze avoidance. Seeing someone look down makes them seem disinterested or even dishonest.
It is a hard problem to solve. I set up a studio in my office where I have a second monitor and external camera far enough back that it works. I have looked for solutions and they are generally inaccessible. Room-sized immersive systems from Cisco, etc. solve it, but they are too expensive for the plebs. I have seen some goofy hacks using see-through mirrors and video prompters. There are some productized versions of that, but they all seemed to fail. The latest Apple phones use ARKit to solve it by manipulating your video, but I have only read about it as a beta feature for FaceTime.
There is probably some money to be made here, but the gating factor is general awareness of this gaping hole...
Common wisdom is that you want your camera to be just above eye level slightly looking down on you. The camera pointed up at your face is "unflattering."
I am 6'4" tall and so everyone is extremely used to looking up at me and it even can feel very strange seeing a camera looking down at me ;P. But I am more asking about stuff like "there is research which shows that if you are looking down you look like you are hiding something but if you look up it looks like you are arrogant or distracted or [fill in the blank]", not "is this a flattering camera angle" ;P.
What's the evidence for this? I believe there is a psychological effect where people report more malaise when you ask them about it. Back when I first studied psychology I self-diagnosed myself with like 5 different mental defects. Confirmation bias and hypochondriasis?
Consider that as audio quality gets worse and worse eventually becoming indecipherable (from latency, dropouts, distortion...), it takes increasing effort to understand what is being said. The curve is non-linear, but the direction of the correlation is clear.
There is no question that auditory fatigue is real. The question is only to what extent poor audio is a contributing factor to "Zoom Fatigue".
There are a bunch of articles about it: National Geographic [1], Harvard Business Review [2], BBC [3]. One of the theories is that we have to work harder to pick up on non-verbal cues, which consumes energy.
A data point from the BBC article: "One 2014 study by German academics showed that delays on phone or conferencing systems shaped our views of people negatively: even delays of 1.2 seconds made people perceive the responder as less friendly or focused."
My team is spread throughout the US and for a few years now my professional life has been wall-to-wall Google Hangouts/Meet. It's been largely business as usual for us.
I wouldn't say there aren't other factors: general anxiety about one's health and paycheck, parents now largely serving as unpaid teachers, 24/7 coverage of generally bad news.
It probably all adds up and we just blame it on technology: 5G towers, video games, and now Zoom.
No. Like I mentioned elsewhere here, almost everyone I know hates it, and I hate it as well, even though I am a social monster (put me somewhere and I'm chatting to everyone in no time flat).
I wouldn't say I hate it, but I would say I find it tiring.
One of the great things about regular phone calls is you can multitask. Walk around the house and straighten up papers, load some dirty dishes in the dishwasher, etc. With video, it feels more like a performance.
I don't really get the point of keeping video open. I'm always either looking at emails, a shared desktop, an issue tracker, or some kind of document while we are on calls. Seeing people sip coffee and pick their nose doesn't really add anything except bandwidth.
Then let that be a presenter's view. Don't make everyone suffer through it. When I'm in a room with people in a meeting, I'm focused on the presenter. I'm not looking at everyone in the room at the same time.
Haven't messed with Zoom too much, but I find that in Webex and Hangouts, the default is to focus the main "camera" on whoever has been talking for the past few seconds. It doesn't engage immediately so there are no rapid cuts between feeds just because someone cleared their throat or clicked a mouse while someone was speaking.
Of course it isn't perfect. It depends on people taking turns to speak (as they should, but can't always do if latency causes two people to start talking at once). Still beats the "Brady Bunch" window I only see when I do social meetings with a handful of friends on Friday evenings.
With those it's less of an issue since we're often on the couch in front of the TV with a webcam mounted on top or in the kitchen making dinner with the laptop open on the table anyway.
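That "don't cut on every throat-clear" behaviour is basically a debounce on whoever is loudest. Here's a rough Python sketch of the idea; the hold time and the per-participant level inputs are made-up assumptions, not how Webex or Hangouts actually implement it.

```python
import time

class ActiveSpeakerSwitcher:
    """Switch the focused feed only after one participant has been the
    loudest continuously for `hold_seconds`, so a cough or a mouse click
    doesn't cause a rapid cut while someone else is speaking."""

    def __init__(self, hold_seconds=2.0):
        self.hold_seconds = hold_seconds
        self.focused = None          # participant currently shown large
        self.candidate = None        # loudest participant right now
        self.candidate_since = 0.0

    def update(self, levels, now=None):
        """`levels` maps participant id -> current voice level (e.g. RMS)."""
        now = time.monotonic() if now is None else now
        loudest = max(levels, key=levels.get)
        if loudest != self.candidate:
            self.candidate, self.candidate_since = loudest, now
        # Promote the candidate only once it has held the floor long enough.
        if (self.candidate != self.focused
                and now - self.candidate_since >= self.hold_seconds):
            self.focused = self.candidate
        return self.focused
```

Feed it per-participant levels a few times a second and show whoever it returns; the hysteresis is what prevents rapid cuts.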
From my experience: get headsets for people, they are cheap. Get dual-monitor setups and foster a culture of active screen sharing - that's a very powerful thing.
Picture quality tends to be so bad that not many non-verbal cues get transmitted anyway. Video may cause fatigue - you feel the need to look smart. When video is off you can lean back, stand, walk in circles, draw things, stare out...
Check out PulseEffects; it has a UI that lets you enable and configure effects for both input and output. Last I checked, it is in the default repositories of many distros.
Your computer mic will be biased towards vocal-range frequencies anyway. The components of the voice that are unpleasant are in the upper mids, around 3-4 kHz I think. For smoother voices what you really want is specialised compression to get rid of sibilance (the hissy "s" sound) and plosives (large low-frequency air movements caused by b, d, and p sounds). From listening to a lot of voice calls recently, though, I think most machines do quite a lot of this automatically. The key here, though, is not to get too close to the mic, as that exaggerates these effects.
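For what it's worth, the sibilance half of that is simple enough to sketch: watch the energy in a roughly 4-9 kHz band and duck the signal when it spikes. A crude Python/SciPy version follows; the band edges, threshold, and reduction are illustrative guesses rather than tuned settings, and a real de-esser would attenuate only the band itself rather than the whole window.

```python
import numpy as np
from scipy.signal import butter, sosfilt

def simple_deesser(samples, sr, lo=4000.0, hi=9000.0,
                   threshold=0.02, reduction=0.5, win=256):
    """Crude de-esser: when RMS energy in the sibilance band exceeds
    `threshold`, attenuate that whole window of the signal.

    `samples` is a mono float array in [-1, 1]; all parameter values
    here are illustrative, not tuned.
    """
    sos = butter(4, [lo, hi], btype="bandpass", fs=sr, output="sos")
    sibilance = sosfilt(sos, samples)          # isolate the harsh band
    out = samples.copy()
    for start in range(0, len(samples), win):
        seg = slice(start, start + win)
        if np.sqrt(np.mean(sibilance[seg] ** 2)) > threshold:
            out[seg] *= reduction              # duck the hissy stretch
    return out
```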
Not a Linux user, but a decent external mic or headset is going to make the most difference here; it's largely a factor of the tiny internal mics in laptops. External ones will also do better with sound insulation; internal mics are not well insulated from the fan, keyboard, etc.
Tone changes can help with audibility. I have a really low voice; it doesn't carry well over voice chat, so I'd push my register up a few notes. But yes, filtering tweety mic sounds from other people would be good too.
PulseEffects is a pretty neat tool for both playback and recording effects, including a compressor, gate, equalizer, filters, a de-esser (which can help quite a lot in reducing those sharp "s" sounds), and more. It has made a lot of presentations much more bearable for me, and made my relatively low-quality microphone in a noisy environment sound a lot better to others.
It only requires PulseAudio (almost all distros already use it) and is available in most package repositories and as a Flatpak.