* They actually started deploying them in 2015, so they're probably already hard at work on a new version!
* The TPU only operates on 8-bit integers (and 16-bit at half speed), whereas CPU/GPUs are 32-bit floating point. They point out in the discussion section that they did have an 8-bit CPU version of one of the benchmarks, and the TPU was ~3.5x faster.
* Used via TensorFlow.
* They don't really break out hardware vs. hardware for each model. It seems like the TPU suffers a lot whenever there's a really large number of weights and layers to handle, but since per-model performance isn't reported individually, it's hard to see whether the TPU offers an advantage over the GPU for arbitrary networks.
It's something that keeps getting rediscovered. I know the embedded industry shoehorns all kinds of problems into 8- and 16-bitters; some even use 4-bit MCUs. It might be worthwhile for someone to survey all the things you can handle easily, or without too much work, on 8/16-bit cores. That might help people building systems out of existing parts or people trying to design heterogeneous SoCs.
The lack of real 8-bit comparison data makes the whole paper a little suspect IMO... it's sort of like the early GPU papers that claimed 100x improvement over the CPU while running scalar x87 instructions on the CPU. The benefits are definitely still there, but handicapping one architecture when it has features specifically built for this kind of work is a bit stupid. It's not like they didn't have to do a lot of work on TF to make it output TPU instructions. When you're down to a 4x improvement, the benefits of specialized accelerators start to become somewhat questionable.
I do like that they highlighted the importance of low latency output though...that's even more critical for future non "Web" applications which have to run in real time.
3.5x faster than CPU doesn't sound special, but when you're building inference capacity by the megawatt, you get a lot more of that 3.5x faster TPU inside that hard power constraint.
Anecdotally, it seems most models can be quantized to 8 bits without much loss of accuracy, and fixed point arithmetic requires much less hardware. Training is still done with floating point though.
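For anyone curious what that looks like concretely, here's a minimal sketch of symmetric, scale-only post-training quantization (real toolchains add zero points, per-channel scales and calibration; the numpy code below is just an illustration):

    import numpy as np

    # Minimal sketch: quantize float32 weights to int8 with a single scale.
    def quantize_int8(w):
        scale = np.abs(w).max() / 127.0                      # largest weight maps to +/-127
        q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
        return q, scale

    def dequantize(q, scale):
        return q.astype(np.float32) * scale

    w = np.random.randn(256, 256).astype(np.float32)
    q, scale = quantize_int8(w)
    print(np.abs(dequantize(q, scale) - w).max())            # worst-case error ~= scale/2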
The brain appears to spend about 4.7 bits per synapse (26 discernible states, given the noisy computing environment of the brain), so that seems to be plenty for general intelligence. This could, of course, merely be a biological limit; on silicon, more fine-grained weights might be the optimum.
Almost certainly, and depths would have to increase. As in any series expansion, the coefficients on later terms have less and less impact, so their dynamic range matters less and less to the final value; but the dynamic range of the initial terms is proportionately important. I expect the dynamic range of the weights will turn out to be logarithmic with respect to the overall depth of the network.
Yeah... you could implement 16-bit multiplication/addition as two 8-bit multiplications plus carry... so worst case, if you want 16-bit multiplies, you implement them yourself.
Yeah, in practice Karatsuba isn't a performant option for small operands unless your multiplier is catastrophically slow. (And it still doesn't get you to two multiplications.)
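For concreteness, the schoolbook decomposition takes four 8-bit products (Karatsuba trades one of them for extra additions); a quick sketch:

    # Sketch: one 16-bit multiply out of 8-bit pieces (schoolbook method).
    # a = a_hi*2^8 + a_lo, b = b_hi*2^8 + b_lo, so
    # a*b = (a_hi*b_hi << 16) + ((a_hi*b_lo + a_lo*b_hi) << 8) + a_lo*b_lo
    def mul16_from_8bit(a, b):
        a_hi, a_lo = a >> 8, a & 0xFF
        b_hi, b_lo = b >> 8, b & 0xFF
        return (a_hi * b_hi << 16) + ((a_hi * b_lo + a_lo * b_hi) << 8) + a_lo * b_lo

    assert mul16_from_8bit(0x1234, 0xBEEF) == 0x1234 * 0xBEEF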
There are hints that even less than 8 bits per weight might be usable (for certain cases and on custom hw). Not sure if it's practical but it is definitely interesting.
I wanted to have some basic idea about hardware so I did some "research" (googling) and ended up giving a short informal talk. My slides with some links are here:
I imagine that once they've trained the floating-point models, they'll quantize them into integers to make inference faster. It's not something I've done, but I imagine the limited range of the integers may cause problems (though they say in the paper that the 16-bit products can be accumulated into something 32 bits wide). The features to do this will be coming fairly soon to regular TensorFlow too.[1]
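If it helps, the shape of the computation at inference time is roughly an integer matmul with a wide accumulator plus a rescale at the end; a hedged sketch of the idea (not the actual TPU or TensorFlow code path, and the scales are assumed to come from whatever quantization step produced the int8 tensors):

    import numpy as np

    # Sketch: int8 activations x int8 weights, accumulated in int32 so the
    # 16-bit products don't overflow, then rescaled back to real-valued units.
    def quantized_matmul(x_q, w_q, x_scale, w_scale):
        acc = x_q.astype(np.int32) @ w_q.astype(np.int32)    # wide accumulation
        return acc.astype(np.float32) * (x_scale * w_scale)  # back to float

    x_q = np.random.randint(-127, 128, size=(1, 256), dtype=np.int8)
    w_q = np.random.randint(-127, 128, size=(256, 128), dtype=np.int8)
    print(quantized_matmul(x_q, w_q, x_scale=0.02, w_scale=0.01).shape)   # (1, 128)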
Note, however, that on Intel it's actually slower than a run-of-the-mill float32 linear algebra library like Eigen or OpenBLAS. Its main forte seems to be ARM.
I was going to say that as well. It seems like if caches are the bane of sequential processing (CPU), then routing has to be the counterpart on the parallel (FPGA/ASIC) side of the equation.
And what's amazing is that it was built on 28nm. So TPU 2.0 could increase by another 2x in perf/W just by going to 14nm (most likely) - even more if it's built on newer processes than that.
Intel's latest chips will be even further behind compared to the next-generation TPU than Haswell was compared to TPU 1.0.
28nm was quite a cheap fabrication technology even in 2015, but it costs a lot to have a completely custom production run. My guess is that it approximately works out in savings of power and space over the lifetime of the chip. It probably doesn't make sense for them to move to something smaller (and thus more expensive) whilst the performance benefit remains so substantial. If I were Intel, I probably wouldn't lose too much sleep over it either, because you still need something to attach the highly-specialised TPU to, and that'll be a Xeon for the foreseeable future.
Using approaches like OpenAI's recent evolution strategies paper would remove the need for backprop, likely allowing these TPUs to be used for training without any changes.
People have known that training NNs (for any purpose) using evolution works well since the 1990s. The rise of the NN frameworks has made doing differentiation much easier now than it was before (and having gradient hints is intuitively a good idea). But for OpenAI to allow their PR people to declare this as a novel advance is ... surprising.
Citation for training an NN on an image classification task where evolution works well?
Let's say you want to use a genetic algorithm to find a good set of weights: you generate, mutate, combine and select many random networks, and repeat this process many times. How many networks and how many times? That depends on the length of your chromosome and complexity of the task. Networks that work well for image classification need at least a million weights. The entire set of weights is a single chromosome.
You realize now how computationally intractable this task is on modern hardware?
> "You realize now how computationally intractable this task is on modern hardware?"
Here are the people who show it isn't computationally intractable: https://blog.openai.com/evolution-strategies/ - but to say they've discovered a new breakthrough method is over-selling the result.
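The core update in that post really is tiny; a minimal sketch (where `evaluate` is a placeholder for whatever reward/fitness function you're optimizing):

    import numpy as np

    # Minimal sketch of an OpenAI-style evolution strategies step: perturb
    # the flat parameter vector with Gaussian noise, score each perturbation,
    # and move in the reward-weighted average direction. No backprop anywhere.
    def es_step(theta, evaluate, npop=50, sigma=0.1, alpha=0.01):
        noise = np.random.randn(npop, theta.size)
        rewards = np.array([evaluate(theta + sigma * n) for n in noise])
        rewards = (rewards - rewards.mean()) / (rewards.std() + 1e-8)    # normalize
        return theta + alpha / (npop * sigma) * (noise.T @ rewards)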
You said: "training NNs (for any purpose) using evolution works well". I gave you an example of a purpose where it does not work well.
So, let me ask you again: can you give an example of evolutionary methods that work well when applied to training NNs, other than this breakthrough by OpenAI, which only works for RL?
Not really, it's not such a pity. It's not for us mere mortals, unless you have billions of predictions to make and megawatts of power to save. For your personal project where you make a few predictions per day, you can use a GPU or even a CPU.
TPU excited me too at first, but when I realized that it is not related to training new networks (research) and is useful only for large scale deployment, I toned down my enthusiasm a little.
Neither Google Cloud nor Amazon Web Services offers Maxwell-series GPUs. Both jumped, or, to be more precise, are in the process of jumping, from the K-series to the P100 series.
When I google around a bit, I see several results talking about the software licensing cost model for the M-series GPUs.
It's interesting that they focus on inference. I suppose training needs more computational power, but inference is what the end-user sees so it has harder requirements.
Most of us are probably better off building a few workstations at home with high-end cards. The hardware will be more efficient for the money. But if you're considering hiring someone to manage all your machines, power-efficiency and stability become more important than the performance/upfront $ ratio.
There are also FPGAs, but they tend to be much lower quality than the chips Intel or Nvidia put out, so unless you know why you'd want them, you don't need them.
They're also not very interested in making it easier for you to train models at home. Not that it's a big risk for them if you were able to do so: you don't have the data, and your models are only as good as your data, but they'd rather you came to their cloud and paid $2/hr per die for an outdated Tesla K80. Which, to their credit, they've made very easy to hook up to your VM. Literally, you just tell them how many you need and your VM starts with that many GPUs attached. Super slick.
Right. Or billions -- or trillions. Consider something like the Inception-like convolutional model that's one of the workloads in the paper. Training Inception is "relatively" easy -- one week on 48 K80 GPUs. (I'm lying, of course, because you retrain, and you train many times to do hyperparameter optimization, but still.)
Then consider the possible applications of that at Google scale -- there are "an awful lot" of images on the web, over 13PB of photos in Google photos last year [1], a gajiggle of photos in street view and google maps, an elephant worth in google plus, and probably a few trillion I'm not even thinking of. :)
Same applies, of course, to Translate, and to RankBrain, also mentioned as NNs running on the TPU. 100B words per day translated [2], and .. many, many, many Google Searches per day, even if RankBrain primarily targets the 15% of never-before-seen queries [3].
Add that to the fact that GPUs are poorly-suited to realtime inference because of the large batch size requirements, and it's a solid first target.
Looking at the analysis in the article, one of the big gains is a busy power usage of 384 W, which is lower than the other servers' while the performance remains competitive with the other methods (although only for inference).
I was wondering how it compares to other solutions in terms of performance/watt, luckily they address it in the paper[1]:
> The TPU server has 17 to 34 times better total-performance/Watt than Haswell, which makes the TPU server 14 to 16 times the performance/Watt of the K80 server. The relative incremental-performance/Watt—which was our company’s justification for a custom ASIC—is 41 to 83 for the TPU, which lifts the TPU to 25 to 29 times the performance/Watt of the GPU.
While this is interesting for TensorFlow, I think it will not result in more than an evolutionary step forward in AI, the reason being that the single greatest performance boost for computing in recent memory came from the data-locality metaphor used by MapReduce. It lets us get around CPU manufacturers sitting on their hands and the fact that memory just isn't going to get substantially faster.
I'd much rather see a general purpose CPU that uses something like an array of many hundreds or thousands of fixed-point ALUs with local high speed ram for each core on-chip. Then program it in a parallel/matrix language like Octave or as a hybrid with the actor model from Erlang/Go. Basically give the developer full control over instructions and let the compiler and hardware perform those operations on many pieces of data at once. Like SIMD or VLIW without the pedantry and limitations of those instruction sets. If the developer wants to have a thousand realtime linuxes running Python, then the hardware will only stand in the way if it can’t do that, and we’ll be left relying on academics to advance the state of the art. We shouldn’t exclude the many millions of developers who are interested in this stuff by forcing them to use notation that doesn’t build on their existing contextual experience.
I think an environment where the developer doesn’t have to worry about counting cores or optimizing interconnect/state transfer, and can run arbitrary programs, is the only way that we’ll move forward. Nothing should stop us from devoting half the chip to gradient descent and the other half to genetic algorithms, or simply experiment with agents running as adversarial networks or cooperating in ant colony optimization. We should be able to start up and tear down algorithms borrowed from others to solve any problem at hand.
But not being able to have that freedom - in effect being stuck with the DSP approach taken by GPUs, is going to send us down yet another road to specialization and proprietary solutions that result in vendor lock-in. I’ve said this many times before and I’ll continue to say it as long as we aren’t seeing real general-purpose computing improving.
Are people really using models so big and complex that the parameter space couldn't fit into an on-die cache? A fairly simple 8MB cache can give you 1,000,000 doubles for your parameter space, and it would allow you to get rid of an entire DRAM interface. It's a serious question, as I've never done any real deep learning...but coming from a world where I once scoffed at a random forest model with 80 parameters, it just seems absurd.
Yes. Each layer can have millions of parameters if your data set is large enough.
Convolutional networks easily get up there, especially if you add a third dimension that the network can travel across (either space, in 3D convnets for medical scans, or time, for videos in some experimental architectures). Say you want to look at a heart with a 3D convnet: that could easily be 512x512x512 for the input alone.
In fully connected models, for training efficiency, many features are implemented as one-hot encoded parameters, which turns a single category like "state" into 50 parameters. I think there is some active research into sparse representations with the same efficiency, but I've never seen those techniques used, just people piling on more parameters.
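As a tiny illustration of that blow-up (the feature names are made up for the example):

    import numpy as np

    # One categorical feature ("state", 50 possible values) becomes 50 input
    # columns, and every unit in the next layer gets a weight for each of them.
    STATES = ["AL", "AK", "AZ", "AR", "CA"]     # ... 50 in the real case
    INDEX = {s: i for i, s in enumerate(STATES)}

    def one_hot(state, width=len(STATES)):
        v = np.zeros(width, dtype=np.float32)
        v[INDEX[state]] = 1.0
        return v

    print(one_hot("AZ"))    # [0. 0. 1. 0. 0.]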
The latest deep learning models are indeed quite large. For comparison, Inception clocks in at "only" 5M parameters, itself a 12x reduction from AlexNet (60M), never mind VGGNet (180M)! (source: https://arxiv.org/abs/1512.00567)
A further point is that even if the model has relatively few parameters, there are advantages to having more memory--- namely, you can do inference on larger batch sizes in one go.
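To put the 8MB-cache question upthread in perspective, a back-of-the-envelope sketch using just the parameter counts above (weights only, ignoring activations and bookkeeping):

    # Rough footprint: parameters * bytes per weight.
    models = {"Inception": 5e6, "AlexNet": 60e6, "VGGNet": 180e6}
    for name, params in models.items():
        print(f"{name:9s} float32: {params * 4 / 2**20:6.1f} MiB   int8: {params / 2**20:6.1f} MiB")
    # Even the "small" 5M-parameter Inception needs ~19 MiB at float32, so it
    # won't fit in an 8 MB on-die cache; quantized to int8 (~5 MiB) it would.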
Not sure if you meant to laugh at a serious question. I am fully aware of my ignorance of the space.
Since it appears you're in the deep learning hardware business, what would be the impediment to using eDRAM or similar? eDRAM is too costly at those sizes for general purpose processors, but I imagine the reduced latency and increased bandwidth would be a huge win for a ridiculously parallel deep learning processor, and would definitely be a tradeoff worth making.
Sorry, that was more of a laugh at the state of deep learning model sizes than anything.
Okay, so about eDRAM. There are two types of eDRAM: on-die and on-package. On-die eDRAM refers to manufacturing DRAM cells on the logic die, which would be a big boon in terms of density, since eDRAM cells can be almost 3x as dense as SRAM. The problem, however, is that on-die eDRAM has been impossible to scale beyond 40nm, which undercuts the advantages you would get from using it.
On-package eDRAM is more interesting, but the primary cost in memory access is the physical transportation of the data, which is a physical limit and can't be circumvented. You can call it all sorts of fancy names such as "eDRAM", but the fact of the matter is that you're still moving data. For reference, the projected cost of moving a 64-bit word at 10nm (ON CHIP), according to Lawrence Livermore National Laboratory, is ~1pJ, while a 64-bit FLOP is also estimated at ~1pJ. In other words, even on-chip, moving the data already costs as much as computing on it, and the further the data travels, the more movement dwarfs computation.
Of course you gain a lot compared to regular DRAM, but HBM can offer the same efficiency gains.
Didn't mean to be rude with the first response.
Let me know if you have any other questions, I'd be happy to answer them :)
Interesting stuff; it really points to the complexity of measuring technical progress against Moore's law. It's really a more fundamental question about how institutions can leverage information technologies and organize work and computation toward goals that are valued in society.
This appears to be a "scaled up" (as in number of cells in the array) and "scaled down" (as in die size) version of the old systolic array processors (going back quite a ways - the 1980s and probably further).
As an example, the ALVINN self-driving vehicle used several such arrays for its on-board processing.
I'm not absolutely certain that this is the same, but it has the "smell" of it.
They're comparing against five-year-old Kepler GPUs. I wonder how it would have fared vs. the latest Pascal cards, since they're several times more efficient than Kepler.
[1] https://drive.google.com/file/d/0Bx4hafXDDq2EMzRNcy1vSUxtcEk...