Hacker News | alec's comments

I work for Google, but not on anything related to Meet. I'm most excited about this because the video/audio quality is much higher.

When I use Zoom with friends, everyone is usually 320x240, or maybe 640x480 if there's only one person. Doom was 320x200 in 1993.

In Google Meet, I usually see people at 1280x920. A 4K monitor will fit 6 people at that resolution.
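A quick sanity check of that tiling claim, assuming feeds are packed edge-to-edge with no window chrome:

```python
# How many 1280x920 video feeds fit on a 3840x2160 (4K) monitor?
tiles_across = 3840 // 1280  # 3 columns
tiles_down = 2160 // 920     # 2 rows (2160/920 leaves some slack)
print(tiles_across * tiles_down)  # prints 6
```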

I value this higher resolution because it makes me feel more connected to people.


Is this the same service I've been using if my company has G Suite and we've been using meet.google.com to run meetings? I ask because I've had the opposite experience - we've used Zoom for a few meetings recently and get massively better quality than we did on Meet, and I'm not sure what could be going wrong there. Or is this a new premium Meet service that's higher quality than both Meet and Zoom?

Zoom also displays a much much wider angle which feels like it explains the quality difference - the Meet one is a tiny portion of my webcam blown up and horribly artifacted.


meet.google.com is Google Meet, yep.


G Suite domain owner and manager here, huge fan. Using zoom and meet a lot these days. Zoom’s voice prioritization is very nice compared to Meet. And if the Brady Bunch view is coming, I welcome it.

My only other complaint is that my world (DoD health) blocks most of G Suite.


Meet already has a grid view; it can show up to 16 people.


Does it? I've looked and looked and looked for it. I'm not running Chrome/Windows, though, so presumably I'm a second-class citizen.


It released a week or two ago, so if you've been using Zoom recently, you might not have noticed. It's under the overflow in the bottom right > change layout.

I'm on Firefox/Mac and it has worked fine for me.


Yup, the rollout date depends on the release schedule of your G Suite domain: https://cloud.google.com/blog/products/productivity-collabor...


I'm on macOS/Firefox and it started showing a grid view for me a couple of weeks ago. I assumed they were just rolling it out to replicate Zoom, but maybe it could have been configured somewhere on the G Suite side too.


I'm not really excited that every meeting participant will be able to see my facial flaws in 4k HDR with dynamic surround.

The tech is not interesting, and the UX is less interesting than Zoom or WebEx. The first Hangouts version was far more exciting than Google Meet, Duo, or the latest Hangouts.

Like every other industry leader, Google lost its ability to push boundaries forward and act as a beacon for innovation. "Higher resolution" is definitely not innovation.


> I'm not really excited that every meeting participant will be able to see my facial flaws in 4k HDR with dynamic surround.

Do you meet people in person?


It's not really the same is it? Your face appearing an unknown size/quality to however many people are at the other end is a different experience.


How much innovation do we really need in this space? My primary request would be for higher fidelity - what do you feel is missing?


> I'm not really excited that every meeting participant will be able to see my facial flaws in 4k HDR with dynamic surround.

I recall there being an option in settings where you can limit the send quality of the video.


> I'm not really excited that every meeting participant will be able to see my facial flaws in 4k HDR with dynamic surround.

Then you are a fool because high quality audio and video is not just a "nice to have" in video conferencing. It makes the difference between a phone-like conversation where you can just talk, and "I.. sorry.. after you. Ok so I was g.. no go on - what were you going to say? Sorry can you repeat that I can't hear you?"

That said, Zoom has been much better quality than Hangouts when I tried it. Do Meet and Duo use the same tech or does Google really have three different video conferencing systems?


FWIW, I have found video quality on Meet to be by far the worst of any video chat app I've ever used. My company pays for G Suite, but all the video in my Google meetings ranges between 320x240 and flip-phone video from 15 years ago, and I swear I'm not exaggerating. This can happen even in a 2-person meeting. I've never had this issue with Zoom.


Weird, I have the opposite experience. Meet for years had much worse video/screen-share resolution/latency/quality than Zoom or GoToMeeting. In fact, it was so bad originally that a bunch of our team members requested we keep paying for GoToMeeting even though we had Meet for free in G Suite.

It's gotten a bit better to the point it's mostly tolerable now, so we might actually switch over. Still, I wonder how much the web client is holding it back.


With most people working from home using a crappy built-in webcam and mic, they don't have a camera capable of 1280x920, or the bandwidth and/or computer capable of streaming at that resolution.


Nice try, Larry.


I recently got a Pebble Time Round and love it. Never wanted a watch before.

Few watches provide vibrating alarms, but Pebble's helps keep me on track during the day without disturbing everyone around me with beeps. I wrote a tiny app to vibrate at different points of a meeting so I know how much time is left without boorishly checking a clock, and the development experience was smooth.
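The timing logic behind such an app is simple. Here's a rough sketch in Python; the real app would run on the Pebble SDK in C, and the function name and the fraction schedule below are made up for illustration:

```python
def meeting_cues(length_min, fractions=(0.5, 0.75, 1.0)):
    """Compute when to vibrate during a meeting of `length_min` minutes:
    once at each fraction of its length. Returns (elapsed, remaining)
    pairs in minutes, which the watch app would map to distinct
    vibration patterns."""
    return [(length_min * f, length_min * (1 - f)) for f in fractions]

# A 60-minute meeting: buzz at the halfway mark, with 15 minutes left,
# and at the end.
print(meeting_cues(60))  # [(30.0, 30.0), (45.0, 15.0), (60.0, 0.0)]
```

On the watch, each cue would be registered as a timer that fires a vibration pattern, so you can tell by feel roughly how much of the meeting remains.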

With the Android Light Flow app, I can control which contacts for which apps can send notifications, which helps keep me aware of things I need to be aware of without being a firehose of alerts.


Saving half a pound (225g) on a backpacking trip is enough that a frequent hiker would notice.


Some people do spend big bucks to cut out ounces for hiking/backpacking gear. I'm skeptical it's meaningful numbers in the context of a device like this.


You obviously haven't met any ultralight backpackers. Of course, I can't see any of them "lugging" around a Kindle on their hikes, because there's no way that extra "weight" is necessary.


I do know ultralight backpackers. But, yes, I can't see them carrying a Kindle as opposed to maybe some pages ripped out of a book :-) Maybe a certain class of long distance hikers but now we're getting into vanishingly small markets.


I don't think this offers an "incompatible experience" - my reading of the post is that the experience is fully compatible and that users of non-Chrome browsers will still be able to use Google Search. Currently, some/all non-Chrome browsers won't do the extra prefetches.


> there's nothing in this analysis that suggests they're doing otherwise

The article included a decompiled code snippet showing it running methods like "sendMMSLog" and "sendPhoneCallLog", apparently logging a bunch of private data and sending it back to Uber.


No, it shows that there exists a method that calls a bunch of methods. However, as others have explained, these methods appear to be from a library that's been loaded in wholesale, and it doesn't show that the method is ever called or that any data is ever transmitted.


The method is named sendMMSLog. The body of the method is not shown. Is it logging the messages sent to and from Uber? Or the messages you sent to your girlfriend? The difference between those two is massive.


My computer runs Debian, so I read debian-security-announce, which gets one email per updated package. Sample of messages from this year: https://lists.debian.org/debian-security-announce/2014/threa...



Google talks a fair bit about their internal systems at this level of "descriptions but not code" - Bigtable, MapReduce, Spanner, Flume, Chubby, and more have been influential.


In fact, they do more than just talk: they often publish papers describing how they work. The open source community has since recreated a lot of them, which has proven useful to a lot of people (e.g. HBase, Hadoop, Apache Crunch, etc.)


Works for me - I have a StartSSL personal cert and an Android phone (Nexus 5 / KitKat), and all of Chrome, Chrome Beta, and Firefox load up the site without any sort of warning or other indication. I also dug out a Jelly Bean phone (Galaxy Nexus) to try with the stock Android Browser and didn't have any issues.


re: "One way less for Google to track me."

Google says that they don't use Google DNS for tracking.

From the Google DNS privacy page: "We built Google Public DNS to make the web faster and to retain as little information about usage as we could, while still being able to detect and fix problems. Google Public DNS does not permanently store personally identifiable information."

They go on in some detail to say how and what they log.

https://developers.google.com/speed/public-dns/privacy


They go on to contradict themselves. Google Public DNS does not permanently store personally identifiable information, except for these 20 things:


Unless you happen to own an AS number, I don't see how that info is personally identifiable.


Why exactly should we believe anything Google says?

It's unverifiable and they don't exactly have a clean record. I wouldn't take their word for it, specifically for something privacy related.


Google say a lot of things. If you believe them, I have a bridge I can sell you.


But you can bet that at least the NSA and other three-letter agencies around the world monitor anything going to or coming from these two IPs. It's just too convenient a target.

More distributed resolvers (like with Cloudflare/Amazon datacenters directly linked to ISPs) would make this type of spying orders of magnitude harder (they must actively infiltrate the ISPs network instead of just tapping the DECIX/exchange switches, which e.g. German BND is ALLOWED to do!).

Shit, I'd pay for Cloudflare or any other service to build robust, interception-secured DNS servers. Or my provider - but providers have a shameful track record when it comes to building fast and reliable DNS servers.


We support DNSCrypt, which will encrypt your DNS traffic between you and us. That's the last mile, at least. We support DNSCurve for the other hops, but almost nobody else does.


That's probably enough for most uses, as the unencrypted queries entering the cache are mixed with millions of other people's.

Myself, I'm still wary of providing data to any third party. Maybe it isn't the case any more, but at least recently, OpenDNS stored identifiable logs forever and potentially resold that data.


How about DNSCurve for traffic between you and us? (client requests). That'd be nice!


DNSCrypt meets this need and is based on the same crypto from DJB. If you're running a full-blown resolver, I'm not sure if DNSCurve works if you forward to us... I'd have to find out.


> More distributed resolvers (like with Cloudflare/Amazon datacenters directly linked to ISPs)

Cloudflare has 24 datacenters[0]. Google Public DNS is deployed in 45 peering points over at least 16 metros[1]. I would be very surprised if Cloudflare/Amazon were more distributed than Google in this regard.

[0] https://www.cloudflare.com/network-map

[1] https://developers.google.com/speed/public-dns/faq#locations


Why would they track Google and then ignore other large DNS servers?


Google is by far the largest public one, next to OpenDNS. The rest are provider DNS servers, which can't be tracked that easily (NSLs and other "pseudolegal" stuff aside).

It is a shame that the Internet has descended from a place where everyone could implicitly trust everyone into a hellhole of spammers, hackers, spooks and other retards. One cannot even trust that private two-way communication STAYS private because our own fucking governments have done everything to erode that trust.

It is a sad state of affairs that one can trust Google to keep one's data halfway safe, but not one's government. It should be just the other way around!


It's not "our own" government, for any "our own" that purports to include me. It is a junta with enough guns to have their way with people across a continent. Pretending otherwise may sound "non extremist" and "responsible", but it's still ridiculously naive and bound to lead to nothing but disappointment. In ever increasing doses.

Practically, a good start is to recognize that not all valuable communication mechanisms benefit all that much from minimizing latency of packet delivery. IOW, not all, in fact not even most, means of communication really need the kind of "apparently real time" performance that telephony requires.

Moving services that don't, to a protocol where the focus is on making mixing and anonymizing simple, reliable and robust; rather than simply max throughput and min latency, would make end user security and anonymity guarantees much easier to make. And, for many types of channels, this can be done without much at all in the way of negative side effects, given how fast the underlying switching infrastructure has gotten.

Current protocols were necessary for any kind of usability when hardware was slow and expensive. And they were good enough privacy- and security-wise, when even the NSA didn't really have the means to do much wide-net spying at the network level. But neither of those realities of the original internet is true anymore. Instead, sorry for the pompousness, the new environment is so different as to require, or at least recommend, something almost akin to a "new internet", built with the "new" threats to communication in mind.

I'm not working anywhere, at a startup nor anywhere else, that could conceivably "profit" from any of the above ramble. If what I'm saying makes no sense, it's because I'm a moron (or at least misinformed), not because I'm a scumbag.


> It's not "our own" government, for any "our own" that purports to include me. It is a junta with enough guns to have their way with people across a continent.

Virtually all governments spy on their and other countries' citizens these days, not just the US. We Germans spy, the Brits and the rest of Five Eyes spy, the Russians spy, the Chinese spy, the Iranians spy, and I bet that even North Korea has some quite good hackers.

And for the rest of your comments: indeed, a "new internet" would be required. But as you can see from the adoption rate of IPv6, we're stuck with this mess unless quantum computing forces us to switch.


IPv6 doesn't fundamentally offer end users anything far beyond the current standards.

I'm imagining a protocol that less tightly coupled endpoints could be written to, while the "switches" merely translate traffic to route it on current infrastructure. A more application-agnostic version of Mixmaster or Tor, so to speak. The important part is really to get enough of a variety of end-user apps written to it to prevent anyone from knowing much about the traffic simply from the protocol spoken. Then, over time, to optimize away more and more crud, until we've got dedicated hardware. It may still be a bit utopian, but the current mess isn't really serving people all that well anymore, either.


> Virtually all governments spy on their and other countries' citizens these days

Virtually all governments have ever done so, these days it's just easier.


Which is a pretty good indication that all meaningful solutions to the spying problems need to work at a level more fundamental than government. Routing around them, or rendering them impotent, by design, if you wish.


> Google is by far the largest public one, next to OpenDNS.

"Next to"? Google serves 130B queries per day on average[0]. OpenDNS only serves about 50B[1].

[0] http://googleonlinesecurity.blogspot.com/2013/03/google-publ...

[1] http://system.opendns.com/


There's no one I know in between, except of course the DNS servers set up by providers like AT&T, Comcast etc., which are locked to their customers only.


"the largest next to" often means "the largest except"; I think this was the source of confusion


I think I got confused and thought you were suggesting switching to OpenDNS because of the NSA, rather than away from Google.


See also Delta Debugging for a line-based (non-syntax-aware) version of this. Very useful for cutting test cases down to size. I'm enthusiastic that C-Reduce is threaded.

https://www.st.cs.uni-saarland.de/dd/
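The core of Zeller's ddmin algorithm is compact. Here is a minimal Python sketch; the `fails` predicate stands in for running the failing test on a candidate list of lines:

```python
def ddmin(lines, fails):
    """Delta debugging: shrink `lines` to a 1-minimal sublist that still
    makes `fails` return True."""
    assert fails(lines), "the full input must reproduce the failure"
    n = 2  # current granularity: number of chunks
    while len(lines) >= 2:
        chunk = len(lines) // n
        subsets = [lines[i:i + chunk] for i in range(0, len(lines), chunk)]
        reduced = False
        for subset in subsets:                 # does a single chunk suffice?
            if fails(subset):
                lines, n, reduced = subset, 2, True
                break
        if not reduced:                        # try removing one chunk
            for i in range(len(subsets)):
                complement = [x for j, s in enumerate(subsets)
                              if j != i for x in s]
                if fails(complement):
                    lines, n, reduced = complement, max(n - 1, 2), True
                    break
        if not reduced:
            if n >= len(lines):                # finest granularity reached
                break
            n = min(n * 2, len(lines))         # refine and retry
    return lines

# Hypothetical failure: the test fails whenever items 3 and 6 are both present.
print(ddmin(list(range(8)), lambda d: 3 in d and 6 in d))  # [3, 6]
```

The result is 1-minimal: removing any single remaining line makes the failure disappear. C-Reduce layers language-aware transformations on top of this same search.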

