Hacker News

Do you have a source for this, or are you just reading the statement I quoted differently than I am?

The way I read the quote, you use TF-Lite to produce a quantized TF-Lite model, and then use a cloud-based compiler to compile it for the actual chip.

This is why I asked "am I missing something." Do you have a reference for where the compiler exists in the open source TensorFlow project?

Mostly, I'm interested in learning what capabilities their TPU provides, to see whether it would be useful for similar kinds of kernels, such as DSP (which, like machine-learning kernels, involves a lot of convolution).
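To make the DSP connection concrete: a FIR filter is just a 1-D convolution, the same primitive a convolutional layer applies per channel. A tiny numpy illustration (the signal and filter taps here are made-up values, not from any TPU docs):

```python
import numpy as np

# A FIR filter in DSP is a 1-D convolution -- the same primitive
# a convolutional neural-network layer applies per channel.
signal = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
taps = np.array([0.25, 0.5, 0.25])  # a simple smoothing kernel

# 'valid' mode slides the kernel only over full overlaps,
# analogous to a conv layer with no padding.
filtered = np.convolve(signal, taps, mode="valid")
print(filtered)  # [2. 3. 4.]
```

So hardware that accelerates the multiply-accumulate loops of ML convolutions could, in principle, accelerate FIR filtering too, which is why the chip's instruction set is interesting beyond ML.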

So I'd like to see what the capabilities of the chip are and what could be compiled to it, but I haven't found those docs, or a compiler that could be studied. Maybe I'm not looking in the right place.

Here's an overview of the architecture of their Cloud TPUs, which has some good architectural details but doesn't document the instruction set:

https://cloud.google.com/blog/products/gcp/an-in-depth-look-...



