This -- is super freakin' cool. Real value add from Google, considering everyone is on the same very mediocre Qualcomm Snapdragon 835 platform. (Please PLEASE for the love of God, Google, build your own mobile SoCs so someone can remotely catch up to Apple's 3 year lead in mobile performance already)
Credible too because Pixel 2 had such stellar picture quality in the reviews.
Chip making isn't magic. The more transistors you have, the more things you can do: more cache, more decode units, more execution units, etc. All that added complexity has costs that other phone manufacturers may or may not want to pay for. Apple has a billion more transistors to work with. Will most consumers even notice the difference? Will they pay more? Apple has a high ASP so they can afford it. How many other manufacturers can do that?
A. They are already working on one. I expect it to arrive with Pixel 3 next year.
B. It's really hard to develop your own SoCs, let alone get them right. It'd be difficult to catch up with Apple's lead, but if I had to bet on someone, I'd bet on Google.
4214 versus 1965. And that's assuming you own the very very very latest Android device from this year. It gets far worse the further back in years you go.
Qualcomm is seriously lagging Apple, but so are Samsung and HiSilicon [0][1]; they all seem to be at about the same performance level. Android Authority [2] tried to explain why Apple is faster, and their best argument (among a few) seems to be that Apple's silicon is just larger. Qualcomm could probably make larger chips if they got paid for it.
That's somewhat over-simplifying their argument - the long and the short of it, though, was that Qualcomm has different business interests in chip making (i.e., bulk, cheap, quick to produce) than Apple does - so Qualcomm is extremely unlikely to ever match Apple.
Of course, if Google did in-house SoC design and used Qualcomm as a manufacturer, then they could do it, but that hasn't happened so far, and at that point it's practically Google's chip anyway.
One of the other theories I've heard is that it's partly because 4 is an unlucky number in Chinese, which is a major market. Thus why so many SoC vendors pushed to 8-core chips instead of taking those extra transistors and spending them on an L3 cache, which would have helped real-world performance far more than the extra 4 cores do.
Probably not very true, but it's a fun theory. More likely, it just looks better on the spec sheet to have more cores and ever-higher burst-only frequencies rather than the things that would have more bang for the buck on real-world workloads, like an L3 cache. It's why you ended up with ridiculous nonsense like the Helio X20 & X30 (10-core SoCs with only 2 good cores and 8 low-power ones).
Those low power cores don't take up a vast amount of die area, but they substantially improve battery life. Apple seems wedded to the idea that 12 hours of screen-on time is sufficient, so they're happy to keep pushing for maximum performance; other manufacturers are willing to trade off a bit of performance for a lot more battery life.
2 low-power cores help battery life, since they can handle all the screen-off and background activity. 8 low-power cores is just stupid. That 10-core chip will in practice function like a 2+2 quad; the other 6 low-power cores will be doing nothing of value at all.
Samsung could make a faster chip, but they pretty much have to put a Qualcomm chip in US phones due to the Qualcomm tax, so having an S8 with wildly different performance doesn't make sense. For example, the Exynos chips can do 4K60, but Samsung limits the camera app to 4K30 to keep it in line with US phones.
I don't see how Apple makes use of that power. After experiencing the general slickness of the Pixel and its awesome HDR abilities, then observing the iPhone calculator's input/animation lag, I would say that extra power isn't put to any use.
That’s a silly comparison. The iPhone’s calculator is sloppy programming, but it’s not indicative of any particular performance issue.
There are a bunch of neat real-time effects on iPhones, and ARKit is pretty cool. But it's really the opportunities that high performance offers for devs that matter!
That is a CPU score; most of the camera processing is done on the DSP, with the CPU only getting used for control, so this would not be the right metric to compare.
> Fused Video Stabilization on the Pixel 2 and Pixel 2 XL
Aside from gatekeeping, what does any of this have to do with the Pixel 2? Why is the headline not "Fused video stabilization for any Android device with a camera"?
(Also, I can honestly say that I've never seen a video recorded on a phone that had motion blur or shutter wobble anywhere near as bad as what they show in their examples, so those examples seem contrived, but obviously it's great to have an algorithm that can handle exceptionally bad cases.)
There's a difference between running algorithms and running them without lag or battery drain. You need to tune the whole pipeline and keep every single step in sync, without unnecessary data copies or conversions, which is why Google spent the last few years designing its own image sensor processor, among others. It wasn't until recently that phones exposed raw pixels through the Camera2 API, for example. Some manufacturers don't want to go through the motions of exposing camera internals because of the extra work involved, some just want to keep their secret sauce secret.
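For what it's worth, here's a minimal sketch of what "exposing raw pixels through the Camera2 API" looks like from the app side: it just checks whether a device advertises the RAW capability and offers RAW_SENSOR output sizes. The helper name is mine, and actual support still varies widely by device.

```kotlin
import android.content.Context
import android.graphics.ImageFormat
import android.hardware.camera2.CameraCharacteristics
import android.hardware.camera2.CameraManager
import android.hardware.camera2.CameraMetadata

// List the camera ids that advertise the RAW capability and at least one
// RAW_SENSOR output size, i.e. the ones that actually hand you raw pixels.
fun camerasWithRawSupport(context: Context): List<String> {
    val manager = context.getSystemService(Context.CAMERA_SERVICE) as CameraManager
    return manager.cameraIdList.filter { id ->
        val chars = manager.getCameraCharacteristics(id)
        val caps = chars.get(CameraCharacteristics.REQUEST_AVAILABLE_CAPABILITIES)
        val hasRawCapability = caps?.contains(
            CameraMetadata.REQUEST_AVAILABLE_CAPABILITIES_RAW) == true
        val configMap = chars.get(CameraCharacteristics.SCALER_STREAM_CONFIGURATION_MAP)
        val hasRawOutputs =
            configMap?.getOutputSizes(ImageFormat.RAW_SENSOR)?.isNotEmpty() == true
        hasRawCapability && hasRawOutputs
    }
}
```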
See, I don't believe that that is true. There might be some additional information gained there, but it's not _needed_, as evidenced by a decade of publications on the subject of correcting motion artifacts.
Without knowing where the lens is in relation to the sensor, you can't correct for the time-varying perspective distortion.
Perhaps the decade of publications have never involved optical image stabilization in the loop.
What you could have, however, is a simulation of where the OIS moves the optical axis of the lens to, when given the gyroscope inputs. In control theory terms, an open loop observer. But that would need to be calibrated to each model of phone.
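As a rough sketch of that open-loop observer idea (the per-axis gains and the first-order lag below are purely made-up stand-ins for whatever the real OIS actuator does, which is exactly the part that would need per-model calibration):

```kotlin
// Toy open-loop estimate of where the OIS has shifted the lens, driven only by
// gyro rates. All constants are hypothetical and would have to be measured for
// each phone model.
class OisEstimator(
    private val gainX: Float,          // hypothetical lens shift (mm) per rad/s of yaw
    private val gainY: Float,          // hypothetical lens shift (mm) per rad/s of pitch
    private val timeConstant: Float    // hypothetical actuator lag, in seconds
) {
    private var lensX = 0f
    private var lensY = 0f

    // gyroYaw / gyroPitch in rad/s, dt in seconds; returns estimated lens offset in mm.
    fun update(gyroYaw: Float, gyroPitch: Float, dt: Float): Pair<Float, Float> {
        val targetX = -gyroYaw * gainX        // OIS moves to counteract rotation
        val targetY = -gyroPitch * gainY
        val alpha = dt / (timeConstant + dt)  // simple first-order lag toward the target
        lensX += alpha * (targetX - lensX)
        lensY += alpha * (targetY - lensY)
        return lensX to lensY
    }
}
```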
> But that would need to be calibrated to each model of phone.
We needn't suddenly forget that every Android phone asks you to wave it around in figure eights to calibrate the compass (and then still only seems to be accurate to within +/- 180 degrees). Making every video you take come out better on every Android device with sufficient CPU power after a few seconds of recording would be significantly preferable to "Only works on our devices, neener neener." To me, obviously, maybe not to Google.
Cropping is still happening, and it can gobble up your content if you're not careful.
Even in their demo video of the guy jumping down sand dunes, we see the destructive price of 'fused cropping stabilisation'. The most important shot misses the moment his feet hit the sand at around 2 seconds in, when he does his biggest leap.
It's details like these, and the clouds of sand he kicks up behind him which make an interesting shot. I like my focal length untouched, thanks anyway, optical stabilisation all the way for me.
The terribleness and unnaturalness is precisely the problem they are trying to solve with this solution. The results look very impressive compared to previous solutions.
Idk about videos, but Google Photos automatically stabilizes live photos on iOS. I didn't realize it did so until about a month ago. The results aren't always great, but it's cool when it works well. When it doesn't, it's one button click to turn it off.
Hyperlapse from Instagram does some impressive stabilisation, but not for prerecorded videos. Hyperlapse (yes, same name) from Microsoft does, on prerecorded videos, but using an entirely different technique to Google's - IIRC it actually attempts to construct 3d spatial data about the scene.
I'm not sure FOMO is warranted, since Apple's so-called 'cinematic video stabilization' (introduced in the iPhone 6 days) is quite good. Additionally, the iPhone 6 Plus, 7 Plus, and both iPhone 8 models have optical image stabilization.
Two things left to improve for mobile phone video that would massively increase image quality: increased dynamic range and higher bit rate.
Increasing dynamic range is going to be tough. The only way to do that would be to make each pixel's electron well bigger, but phone cameras have limited pixel size. Image sensors will probably have to provide a separate stacked electron-well capacitor below each pixel, instead of using the photosensor's PN junction.
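To put rough numbers on it (illustrative figures, not any specific sensor): dynamic range is roughly the ratio of full-well capacity to read noise, so every doubling of the well buys about one extra stop.

```latex
% Illustrative only: assumed full-well capacity and read noise, not a real sensor.
\mathrm{DR} \approx 20 \log_{10}\!\left(\frac{N_{\mathrm{well}}}{n_{\mathrm{read}}}\right),
\qquad
N_{\mathrm{well}} = 6000\,e^-,\; n_{\mathrm{read}} = 2\,e^-
\;\Rightarrow\;
\mathrm{DR} \approx 69.5\ \mathrm{dB} \approx 11.5\ \text{stops}.
```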
Also, the industry really needs a standard raw video format that allows for high-quality post-capture exposure correction and white balance.