Pixel Visual Core: Google’s first custom-designed co-processor (blog.google)
172 points by dazbradbury on Oct 17, 2017 | 95 comments



In addition to still photos, I would love to see exposure fusion (or HDR+, as Google calls it) in mobile video too. RED Epic and Scarlet cameras do this with 2 exposures per frame and call it HDRx; I've found it adds about 4 stops of dynamic range in grading. RED doesn't do any fancy anti-ghosting like HDR+ does, though. Magic Lantern firmware for Canon DSLRs also does something similar with dual-ISO frames.

This, together with a 10-bit video profile for H.265 (I think phone hardware is capable of it now, but I haven't seen any phones with 10-bit video yet), would make mobile video a whole lot better; right now it frankly sucks for anything with high contrast or dynamic range.
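For anyone curious what two-exposure fusion looks like in the simplest case, here's a toy sketch (this is not RED's HDRx or Google's HDR+ algorithm; the weighting curve and function name are made up, and real pipelines also align frames, handle ghosting, and work on RAW data):

    #include <algorithm>
    #include <cmath>
    #include <cstddef>
    #include <vector>

    // Fuse one linear-light channel (values normalized to [0, 1]) from a long
    // and a short exposure. `stops_apart` is how much brighter the long exposure is.
    std::vector<float> fuse_two_exposures(const std::vector<float>& long_exp,
                                          const std::vector<float>& short_exp,
                                          float stops_apart) {
        const float gain = std::pow(2.0f, stops_apart);  // scale the short exposure up to match
        std::vector<float> out(long_exp.size());
        for (std::size_t i = 0; i < long_exp.size(); ++i) {
            // Trust the long exposure in the shadows, fade to the short one near clipping.
            const float w_long = std::clamp((0.95f - long_exp[i]) / 0.25f, 0.0f, 1.0f);
            out[i] = w_long * long_exp[i] + (1.0f - w_long) * short_exp[i] * gain;
        }
        return out;
    }

The whole game is in how you pick that per-pixel weight; that's where the extra stops of usable range come from.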


This is pretty much TensorFlow on bare metal. Considering that TensorFlow works pretty well on most mobile phone GPUs, I'm betting this feature could be turned on for most phones if Google wanted to.

I would love to see an apples-to-apples comparison of the Pixel's TensorFlow computation running on a vanilla Snapdragon GPU vs. this custom core.

It will also be interesting to know why Halide and not OpenCV.


Halide and OpenCV are not comparable. Halide is mainly about decoupling the description of a low-level image processing operation from implementation details regarding tiling, storage of intermediate results, etc. OpenCV is a library implementing high level algorithms like multi-view 3D reconstruction that Halide does not even attempt to address.


My mistake, I phrased my question incorrectly. Why not OpenCV with a Halide backend? The TensorFlow whitepaper compares the TF IR to the Halide IR. In fact, much of the work in TensorFlow core is around XLA https://www.tensorflow.org/performance/xla/ (which brings compatibility across various architectures).

It seems that this is more an attempt at locking the models to custom silicon, so that they can't be copied and made to work on other phones.

For example, the Google Camera NX version is a port of the HDR+ feature to compatible phones, even cheap Xiaomis, and it actually gives results comparable to the Nexus.

However, by locking the model to a very custom image processor (and custom IR), it can't be ported. This looks to me more like a walled garden than a performance win.


Halide is much more flexible. OpenCV is a fixed high-level toolbox, and combining things requires lots of intermediate buffers. Halide allows compiling arbitrary kernels in a fused way with extremely good performance. Sure, you could implement OpenCV functions using Halide, but it's better to expose the lower-level, more flexible API. Source: I've rewritten parts of an OpenCV app in Halide for performance.
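To make the fusion point concrete, here's a toy Halide pipeline (made-up stages and sizes, nothing to do with Google's actual HDR+ code): the algorithm says "brighten, then blur," and the schedule fuses the two so the intermediate never hits a full-sized buffer the way chained OpenCV calls would.

    #include "Halide.h"
    using namespace Halide;

    int main() {
        // A fake 8-bit input image; in real use this would come from the camera.
        Buffer<uint8_t> input(640, 480);
        input.fill(64);

        Var x("x"), y("y");

        // Algorithm: brighten, then a 3-tap horizontal box blur, clamped back to 8 bits.
        Func bright("bright"), blur("blur");
        bright(x, y) = cast<uint16_t>(input(x, y)) * 3 / 2;
        blur(x, y) = cast<uint8_t>(min(
            (bright(x, y) + bright(x + 1, y) + bright(x + 2, y)) / 3,
            cast<uint16_t>(255)));

        // Schedule: compute `bright` inside `blur`'s scanline loop so the intermediate
        // stays hot in cache, then vectorize and parallelize. Changing the schedule
        // doesn't touch the algorithm above; that's the whole point of Halide.
        blur.vectorize(x, 8).parallel(y);
        bright.compute_at(blur, y).vectorize(x, 8);

        // The blur reads x..x+2, so realize an output 2 columns narrower than the input.
        Buffer<uint8_t> out = blur.realize({638, 480});
        return 0;
    }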

Also, your argument that it's not about perf makes no sense to me. Sure, other phones have HDR; I bet it's way slower and more energy-intensive than on the Pixel 2. Except maybe the iPhone, which may also have custom silicon. If they had custom HDR tech in software they wanted to keep, they could just keep the source closed, or patent it.


That bet is the problem. TensorFlow on mobile already targets GPUs, and will even more so with XLA. This debate is going to be pure speculation unless we have an apples-to-apples comparison of the base TensorFlow code running on the Halide IR targeted at the TPU vs. a vanilla ARM GPU... even using OpenCV.

I find it telling that they compared CPU vs. TPU, not GPU vs. TPU.


I think you're confusing two separate things. There are two separate approaches to a computer vision task: classical and deep learning. Halide and OpenCV are for classical approaches; TensorFlow is for deep learning.

People don't use both for the same problem; nobody is running "TensorFlow code on the Halide IR".

Both approaches use a lot of basic operations applied to matrices, so the custom ASIC supports either, but separately. HDR sounds like a better fit for a classical approach, so their HDR filter is probably written in Halide, with no machine learning or TensorFlow anywhere.


I'm really confused why you keep bringing up opencv. What does opencv have to do with the (presumably) TF ML model running on the IPU that you want to benchmark against the TF ML model running on a GPU?


Because Halide is positioned as a high-performance improvement over OpenCV, while I believe it is really just an architectural decision to enable a non-portable ML model.


But Halide does have higher performance, is portable to many platforms, including GPUs and CPUs (with parallel SIMD), and isn't used for ML models...


Where did I mention halide?

My question is why do you care about opencv when benchmarking neural network performance on various hardware?


Curious if anyone here knows anything about this alternative for existing phones:

Ask HN: What happened to tensorflow lite? | https://news.ycombinator.com/item?id=15494368 (Oct 2017, no comments)

>zitterbewegung: They announced it at Google I/O in May. See: https://techcrunch.com/2017/05/17/googles-tensorflow-lite-br... I can't find another announcement; is this vaporware?



Maybe Google should start designing its own CPUs/SoCs as well, since Qualcomm doesn't really seem capable of keeping up with what Apple is doing. Snapdragon 835 came out 6 months after the A10, and scored about 2/3 as well at single core, and less than 10% on Geekbench 4. A11 scored twice what the 835 did on single core, and over 50% more on multi core, and the 835 is going to be the best chip available in the U.S. for Android until sometime next year. Plus, you've got Samsung getting exclusive launch deals for the newest Snapdragon chips now, so other manufacturers are just now launching their flagship devices with hardware that is increasingly far behind what Apple is putting out. This gap could end up being as bad as Intel vs. AMD pre-Ryzen if something isn't done about it. Imagine what the high-end laptop market would have looked like if only Macs had Intel chips!


The problem is there is no magic wand the Android SOC manufacturers can wave to match Apple on chip performance. It's a multi-front war.

Firstly, yes, Apple is well ahead in CPU core design and has been since 2013, when they brought out the first 64-bit ARM chip. And yes, I know 64-bit by itself doesn't account for the performance increase, but the 64-bit ARM core architecture is much more streamlined and efficient than 32-bit, which had accumulated all sorts of cruft over the years. The A7 wasn't just 64-bit; it was a step change up in performance, creating a gap that the other ARM vendors have never closed.

That's not the whole story, though: core design alone isn't the only reason Apple chips are faster. Another is that they have dramatically more cache memory on chip than other designs. The A10 had a total of 7 MB of L2 and L3 cache, more than double the maximum configuration of the 835. Apple SoCs are massive, which makes them expensive.

Finally, they can build customised capabilities into the hardware to support specific high-level features, such as the secure enclave and the neural engine in the A11 Bionic for image post-processing effects and face recognition. This is clearly what Google is trying to do, but in an add-on chip, because they're not (yet?) up to designing their own SoC.

So it's not just that Apple SoCs are faster; the reasons why they are faster lie along several different axes. Some SoC vendors apparently say their profit margin lies in shaving off just one or two square millimetres of die size. Market forces like that would never produce a chip like the A11. So really, this approach by Google of designing its own custom co-processor, and maybe eventually its own SoCs, could be the only way to close the gap.


The problem is that basically Qualcomm is strangling the entire cellphone space. So what if you built a completely new, much faster, much better-in-every-way SoC... if you still have to pay Qualcomm for the entire chipset because they refuse to sell you just the modem without you paying the full bill.

Multiple designers could run circles around Qualcomm, but the marginal cost of doing so combined with the number of CPUs you'd have to mint to do so makes it a losing proposition. Apple's in the unique position of being able to leverage Intel to bend Qualcomm over a barrel, and Qualcomm fires back with a gazillion lawsuits and New York Times articles that cry "woe is me, Apple is eating my lunch!"

On a different topic, benchmark navel-gazing is probably not going to help anyone. Every company that's been designing CPUs from the mid '80s on has been writing benchmark-defeating code, and I'm 100% certain PA Semi/Apple is no different; they certainly have the silicon space to waste on application-specific accelerators for acing benchmarking tasks, especially on a platform as opaque as Apple keeps the iPhone.


> The problem is that basically Qualcomm is strangling the entire cellphone space. So what if you built a completely new, much faster, much better-in-every-way SoC... if you still have to pay Qualcomm for the entire chipset because they refuse to sell you just the modem without you paying the full bill.

If Qualcomm is that big of a drag on Android (or not-Apple, but it amounts to the same thing) smartphones, maybe the solution is for Google to buy them outright and rearrange their practices.

Sure, it'd be expensive, but not anything Google couldn't afford.


Why not also blame Samsung for their Exynos or Huawei for their Kirin? Do you know what all three of these SoCs have in common? They all use the same reference design developed by ARM. None of these SoC makers is going to spend the additional billions in SoC R&D to deviate from the ARM design. If you really want to see a change, it must begin with ARM.

>Qualcomm doesn't really seem capable of keeping up with what Apple is doing. Snapdragon 835 came out 6 months after the A10, and scored about 2/3 as well at single core, and less than 10% on Geekbench 4. A11 scored twice what the 835 did on single core, and over 50% more on multi core, and the 835 is going to be the best chip available in the U.S. for Android until sometime next year.

Qualcomm and Apple are on different release cycles. The SD 845 will be in the same ballpark in multicore score and about 25% slower in single core than the A11.


You could blame them as well, it's just a lot less relevant to the U.S. smartphone market (where iPhone has the highest market share AFAIK) since everyone is on Snapdragon here.

As for Qualcomm and Apple being on different release cycles... that's exactly the problem! They're 6 months behind AND still have worse performance, plus, due to their deal with Samsung, it's going to take even longer than that for any other manufacturer to ship devices with the 845.

As another comment said, it's not really going to be profitable for any company to try and compete with Apple on this. Which is why if it is going to happen, it's going to have to be done by someone like Google who has a vested interest in it beyond profitability from the chips themselves. If ARM reference designs can't keep up with what Apple is doing, and nobody else is willing to invest to go beyond them, Android will fall behind in the high end of the market.


I don’t think google has enough volume to make that worthwhile. Wouldn’t you need to sell tens of millions of chips for that to pay off?


If they only used it in their own devices, it would probably be too expensive. If they sold those devices to other manufacturers, they could probably get enough volume to at least avoid taking big losses. It's less about making money from selling the chips, and more about keeping the hardware behind the Android ecosystem competitive.


I wonder if the other OEMs would buy it. I could see Samsung wanting to make their own or at least not strengthen their dependence on Google.


So what is that IPU exactly? Lots of buzzwords but few actual details about the architecture.

Is it like an integer GPU? The massive number of ALUs suggests something like that.

But later I see "Notably, because Pixel Visual Core is programmable, we're already preparing the next set of applications." So, is it FPGA-like?


It's an Image Processing Unit. Architecturally I'd expect it to look like a GPU, maybe with something like the new matrix-multiply unit from their TPU. https://www.slideshare.net/AntoniosKatsarakis/tensor-process...

Edit: wrong name


So I'm seeing this and another comment downvoted. Is that because you're wrong? If so, can one of the people downvoting (or someone else) expand and actually answer the original question?


It does not have a TPU matrix multiply unit.

That unit is massive, and wouldn't fit in a small die like this.

It seems likely it's optimized for the same kind of operations though, even if it doesn't have a matrix multiply unit.

I would guess the main intended use for this silicon is running neural networks, even though the initial use case is for photos.

Nearly all AI things on phones (voice recognition, google assistant's local features, keyboard predictive language model, offline translation, etc.) are severely compute limited, and could perform much better with this silicon.

Some features are obvious candidates to put on-device, like realtime recognition of the contents of a photo or realtime WaveNet voice synthesis, yet compute limitations preclude them.


[flagged]


You got down voted but the way Google has been acting I wouldn't be surprised.


Cool, but am I the only one who thinks the photos on the right (toward the bottom) are a little overexposed / bland / washed-out?


Yeah, agreed. I screen-captured the unprocessed image from the webpage (which has been JPEG'd and has therefore lost a lot of the shadow detail), loaded it into a paint package, and did a simple (and computationally cheap) histogram adjustment. IMO, the results were at least as good as theirs. And these are the results they cherry-picked. Not impressive.

I guess most people won't load the image into a paint package to fix it, so their aim is just to automatically do a job nearly as good as a trivial change a human would make. Still, I was hoping for more.
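(For anyone wondering what "computationally cheap histogram adjustment" means here: something like a percentile-based levels stretch, as in the sketch below. The clip thresholds are arbitrary; this is just the idea, not what any particular paint package does.)

    #include <algorithm>
    #include <cstddef>
    #include <cstdint>
    #include <vector>

    // Stretch an 8-bit grayscale image so the darkest ~1% maps to 0 and the
    // brightest ~1% maps to 255. One pass to build the histogram, one pass to
    // remap: trivial work for any modern application processor.
    void histogram_stretch(std::vector<std::uint8_t>& pixels) {
        std::vector<std::size_t> hist(256, 0);
        for (std::uint8_t p : pixels) ++hist[p];

        const std::size_t clip = pixels.size() / 100;  // 1% at each end
        std::size_t lo = 0, hi = 255, acc = 0;
        while (lo < 255 && (acc += hist[lo]) < clip) ++lo;   // find the low percentile
        acc = 0;
        while (hi > 0 && (acc += hist[hi]) < clip) --hi;     // find the high percentile
        if (hi <= lo) return;                                // flat image, nothing to do

        const float scale = 255.0f / static_cast<float>(hi - lo);
        for (auto& p : pixels) {
            const int v = static_cast<int>((p - static_cast<int>(lo)) * scale);
            p = static_cast<std::uint8_t>(std::clamp(v, 0, 255));
        }
    }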


I don't think the Pixel Visual Core is supposed to do anything new or revolutionary -- the primary purpose is to do these old things FASTER for the user.


But a simple histogram stretch can be done with very little processing power by modern standards. 50 ms on one core of the application processor would be enough.

I think they just chose bad examples. The results from this Google Research blog post are much much more impressive:

https://research.googleblog.com/2017/04/experimental-nightti...

I wish my smartphone could do _that_ automatically.


You end up with a lot more noise with a simple histogram adjustment from a single image, plus the dynamic range compression down to a final 8-bit image won't look as good for many use cases.

Exposure fusion from multiple images has much higher usable dynamic range and lower noise.


Thanks for that. That was amazing.


Faster and supposedly uses much less power.

> Using Pixel Visual Core, HDR+ can run 5x faster and at less than one-tenth the energy than running on the application processor (AP).


Not just faster but more energy efficient I presume. That's especially important when taking long videos.


BUT it is HDR, i.e. they take multiple photos and use the underexposed ones for the sky and the more exposed ones for the faces. So they are doing MUCH more than a histogram adjustment.


Sure, but why even do that if a histogram stretch achieves what is effectively the same end result?


Because in the long run HDR will be better for people who are horrible picture takers.


In your one example sure. But isn't that sort of short sighted?


No, I noticed it right away and I'm no photography expert. I suppose it's easy to control that in software, though, preferably before they actually enable the feature in Android 8.1.

I also don't know if Google actually consults with real photography experts when creating these machine learning techniques. Apple made a big deal out of doing that at the last event to create "beautiful photos". It's not enough to just "mathematically improve" something. It should look very pleasing to the human eye, too.

Perhaps if Google did that, too, it would have avoided issues such as this one, or the somewhat unrealistic look of its portrait-mode blur in photos. It's hard to explain, but to me the Pixel 2 portrait shots look like the person doesn't "belong" in the environment. Like they have a linear-blur wallpaper behind them and they photograph themselves in front of it. Or almost like they are photoshopped into the image. I wouldn't say it's that bad, and a majority of people probably won't notice it immediately, but I still think it needs to be improved (perhaps, I don't know, by actually using a dual-camera setup like Apple and Samsung?).

https://www.gsmarena.com/google_pixel_2_xl-review-1670p4.php...


I feel the same way in that their portrait mode just doesn't look right compared to what you would get from a real shallow depth of field and a fast lens.

But I think the (depressing) truth of it is that I notice it because I've had fast lenses and spent a lot of time taking photos, so I know what natural looking bokeh looks like. And bokeh is also just an artifact of a lens and aperture combo, so who is to say that extreme blur with a sharp line as they are doing isn't just going to be the new standard?

I suspect the next generation will come to prefer this look (if it continues) and find the "natural" effect with real glass as weird.


>Apple made a big deal out of doing that at the last event to create "beautiful photos". It's not enough to just "mathematically improve" something.

There are many non-obvious issues with this as well. You're most likely thinking of cases where, for example, a non-centered subject creates an interesting composition, and algorithmically centering the subject would not create a better-looking photo.

However, in film's history we've seen older technology briefly preferred for no other reason than momentum. An example off the top of my head: higher-frame-rate video does not have the "movie feel" that 24 fps film does. That doesn't mean 24 fps is ideal; it's simply familiar.


As landscapes, they may well be less interesting. As communicative portraits, they're (IMO) more successful. The first one is probably the best example: having a clearer and brighter picture of (for example) your partner and daughter would likely be preferable. If it were a candid and unposed shot, it might look weird and unnatural, though. This is why the fourth example is probably the least successful; that hazy summer glow in the "before" picture is somewhat desirable.

To my eyes, the shadow boost is a little too much. I'd ideally like a slider to customise exactly how much brightness you're adding. Personally, I'd probably turn most of these down a touch.

Google clearly has fantastic technology driving this image processing, but I think they need a little more work on their taste.


IMO, from a photography point of view the first is the worst, displaying very clear haloing around the subjects, one of the telltale signs of "not good" HDR. I guess it depends on what exactly you're optimising for, but the results aren't pleasing to my eye.


I would also like more controls. You can have that by shooting RAW and editing with Snapseed, but it's a bit more involved, and the usable dynamic range of a single mobile RAW image is below that of exposure fusion from HDR+.


The point is they recovered the shadow detail of people’s faces which would otherwise be lost.


Those are challenging circumstances for taking photos (backlit). The point here is to recover shadow detail.

Them looking bland is a function of them being made in challenging circumstances.


Their results are quite bad compared to a passably competent Photoshop user willing to spend 2 minutes per picture.

But if you need the computer to do all the color adjustment for you, so you can post your snapshot on facebook or print it at the local drug store with no manual intervention, this might be a good enough result.


I think you are missing the point with improvements of this nature. Take, for example: you have kids. You're at an event with said kids. While I still own a mirrorless Micro Four Thirds camera, I often don't have it with me. It's easy to snap hundreds of photos of random life events with a good smartphone camera. I personally have almost 2TB of family photos and video, and I've got a long way to go. Can you imagine me spending 2 minutes per picture on even a fraction of that? You've discounted not only the "shareability" aspect and the "instant gratification" aspect, but ultimately the tradeoff of results over time.

When phones started to wield cameras I intentionally avoided them because of their less-than-potato quality. Now a phone I'm buying must have a fast, quality shooter. I own both Google and Apple products, and the Pixel has pushed the bar up significantly in my personal opinion. Do I occasionally edit (mostly in-phone)? Sure. Simple grading and some vignetting go a long way. Is it professional? No. But the point is I'm not shooting the pictures to win anyone else over. It's an investment in preservation. If I have all of that overhead, as you suggest, it will never get done. I have more useful things to spend time on than editing a few dozen photos on my computer after I transfer them, then uploading them to SmugMug and sending out an email or note about it. Instead I can send a group message on Signal and everyone gets the gist in a fraction of the time.

This thread seems to be filled with amateur professionals thinking that their iPhone or Pixel will magically do something optically it can't (yet?). It's good enough, and in some cases a spectacular ROI (both Apple/Google/Samsung/LG/etc) for the price. The photography giants wouldn't sell or develop new products if they felt a lot of pressure in this space. You shouldn't have an expectation that your $1000 phone competes with a few thousand in camera, glass and filters in the right hands. It won't.

Edit: words


> This thread seems to be filled with amateur professionals thinking that their iPhone or Pixel will magically do something optically it can't (yet?).

This is key. I own some decent camera gear and I'm amazed at what is coming out of all of the top-end phones. Of course I can critique and find faults, but the pictures that non-photographers can take with their phones in a matter of seconds are downright amazing.

I actually wonder if the gearheads feel their domain is threatened? I'm waiting for the day when I don't have to lug around my camera and multiple pounds of lenses in order to get all the shots I want :)


> but the pictures that non-photographers can take with their phones in a matter of seconds are downright amazing.

But that just highlights that simply owning good gear doesn't mean you can shoot good pictures. You still need to learn lighting, composition, posing your subjects, etc., which was always the case!


Absolutely. Every photographer should be focused on those things as job one. My point above was that many wannabe photographers just focus on gear, and they are very quick to be critical of what phones are producing. My point was that phones can and are making great pictures without requiring a bag full of gear, and that it's highlighting that gear is secondary to the skills you listed.


I guess we have different personalities in mind when it comes to gearheads. I think of them as people chasing tiny (sometimes imperceptible) improvements which only require you to buy something as opposed to learning something (much like the casual bikers who chase tiny weight reductions or obsess over which bike is more "aero").

>My point was that phones can and are making great pictures without requiring a bag full of gear, and that it's highlighting that gear is secondary to the skills you listed.

I'd phrase it differently though, and I'm sure you'd agree. I'd say that gear isn't secondary, but that gear is a tool to be used when appropriate. You still need to carry an external flash/light/etc. if you want to light your subject in poor light. You still need to carry a long lens if you want to take a picture of an eagle perched in a tree. You still need a tripod if you want long exposures, etc., etc.


...or for the day that we have all that magic baked into our "decent camera gear"!


And the problem is that it's mostly software. The main camera companies (Canon/Nikon) are terrible at software. From what I understand Canon is a bit better, but all of the software I use from Nikon is horrible. I love the lenses, and my Nikon body, but any time I have to use their software I cringe. I also don't see how they fix this anytime soon. It's just not part of their DNA.


You're missing who this technology-focused article is for, and making a vague argument about a grandmom use case in a retail product. Sorry, but this article is clearly intended for people interested in photography, and also in photography technology. If you claim your tech is awesome, you will get criticized if people don't think it is all that great.

> You shouldn't have an expectation that your $1000 phone competes with a few thousand in camera, glass and filters in the right hands. It won't.

IMHO, it doesn't even beat $500 worth of camera equipment.


I wish my Nikon pro camera had a similar chip inside. Now we have two worlds - awesome pro cameras with incapable post-processing chips (I mean beyond RAW and initial processing) and horrible mobile cameras with outstanding post-processing chips.


Also the dual lens: one for wide angle, one for tele, snapping at the same time and combining the images into one.

It will be a looooong time until we see that on a pro camera with interchangeable lenses, if ever.


We already have zoom lenses that can do wide angle and take portrait shots. Having multiple full frame sensors with multiple lenses doesn't make much sense to me TBH. What is the problem you're trying to solve here?


Sony tried to do it with their Alpha series of cameras. You could install apps from Sony and do post-processing. The cameras were underpowered for this sort of stuff, and the experience of installing those apps wasn't very seamless.

https://www.playmemoriescameraapps.com/portal/


If you have burst mode and can get the raw frames, then shouldn't you be able to recreate the effect in post-processing? With the severe power constraints of a mobile phone a co-processor makes sense, but I'd think a general purpose desktop processor could easily handle this algorithm.


Can you burst 80 frames @ 60fps of 12 bit 12 megapixel RAW and save all that output, together with the output of gyroscopes and accelerometers timed to the start and end of exposure of every row of the image?

If you can, you might be able to postprocess it... But as far as I know, no DSLR cameras even have gyros, nor sufficiently high burst rates, so that kind of photography is out of the question.

It comes out to over a gigabyte of data per image (12 megapixels × 12 bits is about 18 MB per frame; 80 frames is roughly 1.4 GB)... and you can't compress it before processing...


Good point about the gyros and accelerometers. I'd think you could do decent lossless compression given that the frames should be very similar. I suppose though that once you are doing sophisticated compression you aren't that far off from implementing the post-processing we are talking about.

I guess the long and short of it is these camera companies aren't software people--except isn't Sony a big camera company? You'd think they'd be able to bridge the gap.


The left/right comparisons for HDR are pointless: they should show the same image, with the same contrast/exposure, with the two techniques at 100% pixel view. The difference between without HDR and with HDR should be less noise (otherwise, if it's just about having more exposure in the darkest parts of the image, then a little bit of image processing can do the trick).


Hopefully the idea is that it looks at the scene, decides which exposures are needed to capture the DR (for example 1/100s at ISO 800 for the foreground and 1/200s at ISO 100 for the background), and then quickly captures exactly those two exposures in rapid succession. Then the software would try to compensate for movement between the frames and stack these exposures into one.

So long as it involves multiple pictures and the inputs (exposure params) are computed when the photo is captured, then it does things you can't do with image processing on any single image.


Yes you can; there are multiple inputs, BUT the effect is only about extending dynamic range (not creating an artistic effect between the inputs).

You can always numerically extend the DR of any set of values, but you will stumble upon noise eventually. This HDR+ module is supposed to have less noise in the shadows OR have better sun effects / beautiful highlights (candles, for example) / good-looking clouds. A good test would be shooting in strong direct sunlight and looking at the shadows.


Right, the key "artistic" thing is identifying foreground/background, faces, interesting elements etc., and simply trying to balance the exposure to create a pleasing end result. I have no idea what this thing does, or even whether it uses multiple exposures (or even multiple cameras), but I was simply thinking that facial recognition/focus and automatic toning have been around for ages; it has to do something that requires a fast decision (at trigger time) about exposure params.

I couldn't see whether that was done, though; most of the same effect could be achieved by simply exposing at e.g. +2 EV and -2 EV and deciding afterwards how to blend those images into an HDR result given the image content (faces, ...)


> HDR+ addresses this problem by taking a burst of shots with short exposure times, aligning them algorithmically, and replacing each pixel with the average color at that position across all the shots.

This is almost exactly how you'd make super-resolution pictures (adding a nearest-point scaling-up step at some point)! Yet their super-resolution app (PhotoScan) has some sort of artificial upper resolution limit. If they do plan to expose this capability to 3rd parties, I hope someone makes a proper scan/super-resolution app! I also tried Office Lens and was also left disappointed :(
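For reference, the "align and average" part of that quote boils down to something like the sketch below (assuming a separate alignment step has already produced a per-frame pixel offset, which is the genuinely hard part; HDR+ actually does tile-based alignment and a more robust merge than a plain mean):

    #include <cstddef>
    #include <vector>

    struct Frame {
        int width = 0, height = 0;
        int dx = 0, dy = 0;           // alignment offset relative to the reference frame
        std::vector<float> lum;       // linear values, size width * height
    };

    // Average an aligned burst into one lower-noise image; averaging N frames
    // cuts random noise by roughly sqrt(N).
    std::vector<float> merge_burst(const std::vector<Frame>& burst) {
        if (burst.empty()) return {};
        const int w = burst.front().width, h = burst.front().height;
        std::vector<float> sum(static_cast<std::size_t>(w) * h, 0.0f);
        std::vector<int> count(sum.size(), 0);

        for (const Frame& f : burst) {
            for (int y = 0; y < h; ++y) {
                for (int x = 0; x < w; ++x) {
                    const int sx = x + f.dx, sy = y + f.dy;   // where this output pixel lands in frame f
                    if (sx < 0 || sy < 0 || sx >= w || sy >= h) continue;
                    const std::size_t o = static_cast<std::size_t>(y) * w + x;
                    sum[o] += f.lum[static_cast<std::size_t>(sy) * w + sx];
                    ++count[o];
                }
            }
        }
        for (std::size_t i = 0; i < sum.size(); ++i)
            if (count[i] > 0) sum[i] /= static_cast<float>(count[i]);
        return sum;
    }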


Super-resolution would want to utilize the OIS to artificially shift the lens around instead of cancelling movements. I don't think they are exposing anything like that.


Turns out one doesn't have to. The user's hand movements are at least one pixel as long as the phone is handheld. Just use the natural movement to do super-resolution.

Everything you need is available in the camera2 RAW API.

Using the user's hand movements helps cancel out various types of per-physical-pixel noise too.
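A sketch of how handheld jitter turns into extra resolution (offsets are assumed to come from aligning each frame against the first; this is the general idea only, not Google's or anyone's actual implementation):

    #include <cstddef>
    #include <vector>

    struct RawFrame {
        int width = 0, height = 0;
        float dx = 0.0f, dy = 0.0f;   // sub-pixel shift vs. the reference frame, from alignment
        std::vector<float> lum;       // linear values, size width * height
    };

    // Accumulate a handheld burst onto a 2x-denser grid. Because hand shake moves
    // the scene by non-integer pixel amounts, different frames fill different
    // sub-pixel bins of the output.
    std::vector<float> super_resolve_2x(const std::vector<RawFrame>& burst) {
        if (burst.empty()) return {};
        const int w = burst.front().width, h = burst.front().height;
        const int W = 2 * w, H = 2 * h;
        std::vector<float> sum(static_cast<std::size_t>(W) * H, 0.0f);
        std::vector<int> hits(sum.size(), 0);

        for (const RawFrame& f : burst) {
            for (int y = 0; y < h; ++y) {
                for (int x = 0; x < w; ++x) {
                    // Nearest bin for this sample on the 2x grid, in reference coordinates.
                    const int gx = static_cast<int>((x + f.dx) * 2.0f + 0.5f);
                    const int gy = static_cast<int>((y + f.dy) * 2.0f + 0.5f);
                    if (gx < 0 || gy < 0 || gx >= W || gy >= H) continue;
                    const std::size_t o = static_cast<std::size_t>(gy) * W + gx;
                    sum[o] += f.lum[static_cast<std::size_t>(y) * w + x];
                    ++hits[o];
                }
            }
        }
        // Average where samples landed; a real pipeline would also fill the gaps.
        for (std::size_t i = 0; i < sum.size(); ++i)
            if (hits[i] > 0) sum[i] /= static_cast<float>(hits[i]);
        return sum;
    }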


It seems odd to me that Samsung isn't doing things like this. After all, they have the chip-design talent in house (they design their own Exynos chips) and even have the fabs in house as well. They're not as well regarded for software, but they do at least have some in-house dev capability. A Samsung Galaxy with that sort of differentiation would, like this phone, make even a die-hard iPhone user like me take serious notice.

Or is Samsung actually right not to go down that path, and the economics of Android handsets just don't support the significant extra spend and commensurate price bumps that sort of investment would require? The Pixels look like great devices, but without dramatically upping its distribution and marketing game, is this really anything more than a vanity project for Google? I hope not; I'm crossing my fingers that, with the talent they acquired from HTC, Google is really intent on making a splash in the market with the Pixel and not just in the tech press.


I'm not sure if they have the software dev capability. Sure, they make nice UIs/apps on top of Android, but what about core computer science and domain specific skills? The new OS they tried to make was apparently full of kernel level bugs.

Google has ML experts, compiler experts, etc. who probably contributed a lot to this. I guess it is a perk of keeping domain experts in the company who are there to advise when the time comes.


Since half of Samsung phones use Snapdragon and half use Exynos, I've heard that Exynos is gimped to not exceed the features/performance of Snapdragon too much.


HDR+ is one of my favorite features of my 6P.

I am routinely blown away by how well it performs in all lighting conditions. Everyone I've shown pictures from low-light scenarios is always impressed.

HDR+ is a big reason I bought the Pixel 2; I know it's going to keep getting better.


Consider how inexpensive and high-quality modern image sensors are. With this SoC, you can use so many amazing image processing algorithms: ML-based object masking, deconvolution, focus stacking, exposure stacking, sensor fusion (e.g. FLIR), frame stitching across multiple cameras, color-compensating the camera flash. For serious heavy lifting you can send these bundles of images to the cloud for processing. All done without any more user input than tapping the take-photo button.

It's going to seem like magic to the user.


A tag-along comment: this SoC + imaging system is going to make it much easier to write the software, because you know all of the particulars of the optical system, sensor, position data, etc. It's not like implementing this stuff at the application layer, which is nightmare-mode difficult. Cameras are so fast, and their controls are so fast, that this is like some superhuman time-dilation photography and photogrammetry.

I've held out for so long, it's finally time to replace my HTC One m7.


Even with all the fancy ML stuff, the HDR "glow" is still clearly visible on most of these photos.

On the plus side, the co-processor seems programmable, which means quality can improve without the need to upgrade the hardware.


It seems odd to me that they’ve made this special part of the chip but it won’t be enabled until a future software update that will come out 1+ months after release.

The basic idea of using special hardware makes sense. Apple has done the same thing, and as you get more and more data off the sensor this seems like the only way to do analysis/adjustment in real time without wasting a ton of power.


It looks like they are releasing these bits and pieces before the finale: a custom ARM core, an SoC in its entirety.


I'm not so sure about that. I don't doubt they have the skill to do it, but I doubt they can make the numbers work. Apple can make custom CPU cores because they sell a shitload of phones. Google does not.


Maybe they would license it to manufacturers - use our core design AND our operating system gratis, as long as you send users to our services?

Google's ad revenue is so important that they should do absolutely anything and everything to protect it. Spending billions on R&D to keep Android at the forefront is essential, and at the moment they are only really competitive on the software side (the A11 smokes the latest Snapdragon chips, as does the A10...). Offering a custom chip would give Android the vertical integration that Apple leverages to be so dominant.


It also gives leverage over Qualcomm etc. wrt availability of security updates.


I'm not so sure... Have they bought or hired any talent like Apple did?


You're begging the question here. Apple's acquisition of PA Semi filled a void. You assume a similar void.


The sophistication of the team you need to plug components together into a workable SoC, or to design something like the TPU processor core, is much less than what you'd need to design a top-of-the-line out-of-order (OoO) application core. They clearly do have a team that can put together SoCs like the original TPU, and if they keep working at it I'm sure they'll be able to train up to doing that eventually, but I'd expect a decade-long learning process unless they bring on people who've done that sort of thing before.


Absence is the default for any ontology.


Is the HDR+ stuff running on the PVC already, or is it still running on the application processor until they enable it?

They say it uses Tensorflow, but is it a limited subset of Tensorflow designed only for image processing, like running a pretrained model on a TPU?


Wow. It feels like a new era of custom-designed SoCs. I should've studied an HDL like Verilog when I was an EE major in school.


Can't see any pics on Firefox Focus browser on mobile. Are they using some weird Desktop only CSS?


Works fine on my own Focus browser (Android). Weird.


Focus on Android currently uses WebView (aka chromium/chrome)


Looks like Movidius Myriad... I'm surprised Google would make their own chip for this.


> The camera on the new Pixel 2 is packed full of great hardware, software and machine learning (ML)

Is that hardware only accessible by the camera? If so, isn't that wasteful?


I think the following bit at the very bottom of the article suggests that it isn't:

"HDR+ will be the first application to run on Pixel Visual Core. Notably, because Pixel Visual Core is programmable, we’re already preparing the next set of applications. The great thing is that as we port more machine learning and imaging applications to use Pixel Visual Core, Pixel 2 will continuously improve. So keep an eye out!"


Does anyone else feel it's tech for the sake of tech? I don't even appreciate the HDR-processed shades. There's a lot more detail, that's true, but 1) it's fake IMO, and 2) pics don't have to be perfect; it's just a moment of life, and the details are precise enough.



