
I read the entire blog post and I don't understand the fanfare. It's a custom SoC, presumably meant to rival Apple's M1. I don't see anything innovative about this, besides heralding "begun, the neural net wars have".

What is the benefit of having all of the ML bits on device? Can models leverage them post training?




> What is the benefit of having all of the ML bits on device? Can models leverage them post training?

Yes, the whole point is to be able to do things like personalized speech recognition, image recognition, and translation entirely on-device.
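
For a concrete sense of what "on-device" means in practice, here's a minimal sketch using the TensorFlow Lite Python runtime (the model file name is a placeholder, and this assumes a model already converted to the .tflite format):

    import numpy as np
    import tflite_runtime.interpreter as tflite

    # Load a pre-trained, pre-converted model from local storage;
    # no network access is needed at inference time.
    interpreter = tflite.Interpreter(model_path="speech_model.tflite")
    interpreter.allocate_tensors()

    input_details = interpreter.get_input_details()
    output_details = interpreter.get_output_details()

    # Feed in locally captured data (e.g. an audio feature window)
    # shaped to match the model's expected input.
    input_data = np.zeros(input_details[0]["shape"], dtype=np.float32)
    interpreter.set_tensor(input_details[0]["index"], input_data)
    interpreter.invoke()

    result = interpreter.get_tensor(output_details[0]["index"])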


Reduced latency: there's a real performance boost versus sending data to Google for processing and waiting for a response (rough sketch below).

Potentially improved privacy (this is how Apple tries to sell it). Less data has to leave the phone to get the benefit of the ML models.

Improved device performance: reduced network use and specialized chips mean better battery life, faster results, or both.
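
To make the latency point concrete, here's a rough, hypothetical timing harness (both functions are stand-ins, and the simulated round-trip delay is illustrative, not measured):

    import time

    def run_local_inference(data):
        # Hypothetical stand-in for an on-device model call;
        # typically single-digit milliseconds on dedicated hardware.
        return sum(data) % 256

    def run_remote_inference(data):
        # Hypothetical stand-in for a cloud call; cost is dominated
        # by the network round trip, simulated here with a sleep.
        time.sleep(0.1)
        return sum(data) % 256

    payload = bytes(1024)
    for fn in (run_local_inference, run_remote_inference):
        start = time.perf_counter()
        fn(payload)
        elapsed_ms = (time.perf_counter() - start) * 1000
        print(f"{fn.__name__}: {elapsed_ms:.1f} ms")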


> Potentially improved privacy

On a Google device I would say the potential is reduced to zero!


I agree that the prospect is ludicrous. Hoping for privacy from an ad-tech giant? Nonsense.


> Can models leverage them post training?

Yes. The whole point is that a slow offline process trains the model on very large volumes of sample data, and the trained model is then used to make actual inferences on data you find in the field. As those models become bigger and more complex, it takes progressively more computing power to run inference on them. These ML accelerators are effectively the new GPUs: highly specialised processors designed to handle highly specialised workloads more efficiently.
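
To make the train-offline / infer-on-device split concrete, here's a minimal sketch of the deployment step using TensorFlow's TFLite converter (the directory and file names are placeholders):

    import tensorflow as tf

    # The expensive part happens offline: a model trained on large
    # volumes of sample data, saved in TensorFlow's SavedModel format.
    converter = tf.lite.TFLiteConverter.from_saved_model("trained_model_dir")

    # Optional quantization shrinks the model and maps well onto
    # integer-oriented ML accelerators.
    converter.optimizations = [tf.lite.Optimize.DEFAULT]

    tflite_model = converter.convert()

    # The cheap part ships to the phone: a compact artifact that the
    # on-device runtime executes for every inference.
    with open("model.tflite", "wb") as f:
        f.write(tflite_model)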


To consumers they have to promote the magic/wondrous chip, but the greatest benefit to Google is controlling more of the hardware stack themselves.


There is a huge benefit to being able to do common tasks like speech recognition and photo processing fully on device without a server round trip.


Hopefully this translates into the phones being supported for longer, since they are an in-house design.


Are you asking what the advantage of having ML-optimized hardware on device is? Yes: running inference is expensive too, especially for video, photo, and speech processing. I would expect this phone to show user-noticeable improvements in those three areas.
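
At the API level, "ML-optimized hardware" typically shows up as a delegate that routes supported operations off the CPU. Here's a minimal sketch with the TFLite runtime (the delegate library path is device-specific and a placeholder here):

    import tflite_runtime.interpreter as tflite

    # Loading a delegate sends supported ops to dedicated hardware
    # (NNAPI, Edge TPU, etc.). The library name below is a
    # placeholder; it varies by device and accelerator.
    delegate = tflite.load_delegate("libedgetpu.so.1")

    interpreter = tflite.Interpreter(
        model_path="model.tflite",
        experimental_delegates=[delegate],
    )
    interpreter.allocate_tensors()
    # From here, set_tensor / invoke / get_tensor work exactly as on
    # the CPU, just faster and at lower power for supported models.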


Latency, and inference cost to Google. Privacy, in theory.


Latency can be better if they get it right.



