MTIA v1: Meta’s first-generation AI inference accelerator (facebook.com)
110 points by thinxer on May 19, 2023 | 44 comments



Comparing MTIA v1 vs Google Cloud TPU v4:

MTIA v1's specs: The accelerator is fabricated in TSMC's 7nm process and runs at 800 MHz, providing 102.4 TOPS at INT8 precision and 51.2 TFLOPS at FP16 precision. It has a thermal design power (TDP) of 25 W and up to 128 GB of LPDDR5 RAM.

Google's Cloud TPU v4: 275 teraflops (bf16 or int8), 90/170/192 W, 32 GiB of HBM2 RAM at 1200 GBps. From here: https://cloud.google.com/tpu/docs/system-architecture-tpu-vm...

So it seems that the Google Cloud TPU v4 has an advantage in compute per chip and RAM bandwidth, while the Meta one is much more efficient (2x to 4x, it's hard to tell) and has more, albeit slower, RAM?
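Rough back-of-the-envelope with the numbers above, treating INT8 TOPS and bf16 TFLOPS as comparable units (which is generous) and just dividing by the quoted power figures:

    # Efficiency comparison using only the figures quoted in this thread.
    mtia_tops, mtia_tdp_w = 102.4, 25        # MTIA v1, INT8, 25 W TDP
    tpu_tops = 275.0                         # TPU v4, bf16/int8
    tpu_tdp_w = (90, 170, 192)               # the three TPU v4 power figures

    print(f"MTIA v1: {mtia_tops / mtia_tdp_w:.1f} TOPS/W")    # ~4.1
    for w in tpu_tdp_w:
        print(f"TPU v4 @ {w} W: {tpu_tops / w:.1f} TOPS/W")   # ~3.1 / ~1.6 / ~1.4

How big the efficiency gap is depends entirely on which of the three TPU power figures you compare against, hence the wide range.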


FWIW, you're comparing a training-specialized chip to an inference-specialized chip. It'd be more apples to apples to compare to TPU v4 lite, but I can't find that chip's details anywhere beyond some mentions in the TPU v4 paper: https://arxiv.org/abs/2304.01433


How does a training-specialized chip function? Forward mode is simple, just a dot-product machine. But how do you accelerate backprop in hardware? Does it have the vector-Jacobian transformation lookup logic and table baked into hardware?


Mostly you need to be able to stash intermediate products computed in the forward pass so that you can access them in the backward pass. This requires more memory, more memory bandwidth, and more transpose support, and training usually operates at slightly higher precision (bf16 instead of int8, for example).
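A minimal numpy sketch of that point (nothing MTIA- or TPU-specific, just a single linear + ReLU layer done by hand): the backward pass reuses tensors stashed during the forward pass and leans on transposed matmuls.

    import numpy as np

    # Forward pass for y = relu(x @ W), stashing what backward will need.
    rng = np.random.default_rng(0)
    x = rng.standard_normal((32, 256), dtype=np.float32)   # input activations
    W = rng.standard_normal((256, 128), dtype=np.float32)  # weights

    z = x @ W
    y = np.maximum(z, 0.0)
    stash = (x, z)                 # intermediates kept for the backward pass

    # Backward pass: given dL/dy, recover dL/dW and dL/dx from the stash.
    dy = rng.standard_normal(y.shape, dtype=np.float32)
    x_saved, z_saved = stash
    dz = dy * (z_saved > 0)        # ReLU derivative needs the stashed z
    dW = x_saved.T @ dz            # transpose of the stashed input
    dx = dz @ W.T                  # transpose of the weights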


What about the autodiff/VJP lookup table? What's the overhead like for those?


I think it's helpful to categorize the things that go into an ML accelerator into those that are big-picture architectural (things like memory bandwidth and sizes, support for big operations like transposition, etc.) and those that are fixed-function optimizations. In all of these systems, there's a compiler that's responsible for taking higher-level things and compiling them down to those low-level operations. And that includes the derivatives used in backprop: they just get mapped to the same primitive operations, plus a few more. While there are a few more fixed functions you need to add for loss functions and some derivatives, probably the largest difference is that you need to support transpose (and that you need all that extra memory and bandwidth to keep those intermediate products around in order to backprop on them).

This paper has a nice summary of the challenges of going from an inference-only TPU to the training-capable TPUv2: https://ieeexplore.ieee.org/document/9351692

Look for the section "CHALLENGES AND OPPORTUNITIES OF BUILDING ML HARDWARE"

But then things change more when you want to start supporting embeddings, so Google's TPUs have included a "sparse core" to separately handle those (the lookup and memory use patterns are drastically different from that of the typical dense matrix operations used for non-embedding layers) since TPUv2: https://arxiv.org/pdf/2304.01433.pdf
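A toy numpy contrast of the two access patterns (sizes purely illustrative, not anything from Meta or Google): the dense layer is one big regular matmul, while the embedding layer is a gather of a few random rows out of a large table per sample, which is why it stresses memory so differently.

    import numpy as np

    rng = np.random.default_rng(0)

    # Dense layer: compute-heavy, regular, sequential reads of the weights.
    acts = rng.standard_normal((128, 1024), dtype=np.float32)
    weights = rng.standard_normal((1024, 1024), dtype=np.float32)
    dense_out = acts @ weights                        # (128, 1024)

    # Embedding layer: almost no arithmetic, but scattered reads into a
    # large table (here ~256 MB; production tables are far bigger).
    table = rng.standard_normal((1_000_000, 64), dtype=np.float32)
    ids = rng.integers(0, table.shape[0], size=(128, 20))   # sparse feature ids
    emb_out = table[ids].sum(axis=1)                  # gather + pooled sum, (128, 64)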


Look at the massive diff in the TDP & RAM to start with. Meta's is 1/3rd the TDP and has very different RAM.


Is there something that compares these to more consumer-oriented offerings like Apple’s ANE?


This looks like a customized ASIC specializing solely in recommendation systems, possibly focused on ads ranking.

>We found that GPUs were not always optimal for running Meta’s specific recommendation workloads at the levels of efficiency required at our scale. Our solution to this challenge was to design a family of recommendation-specific Meta Training and Inference Accelerator (MTIA) ASICs.


What a tragic waste of human effort and potential.


Same thing was said about GPUs when they were just for games


I don't remember that. Games are art and offer at least some benefit to the world. Optimizing Facebook attention algorithms harms society.


You must be Gen Z because the prevailing attitude towards video games of all kinds was mostly negative in the 20th century. They even tried to blame Doom for school shootings.


I was born in 89. I'm talking about GPUs specifically, not just video games.


That, but the exact opposite. Games were the primary motivation for GPUs in desktop computers in the 90s.


I hope they take comfort in the fact that this is open-source.


It's curious why nobody is selling these systems yet


Probably the software needs to be optimized for the hardware, and the hardware may not be general-purpose enough even if it were offered. People demand Nvidia because CUDA is highly optimized for their GPUs and a lot of AI software uses CUDA.


Competing against NVIDIA must be exhausting.

You come up with a clever ASIC that is better than their current GPU for your workload… and by the time it comes out they’ve released the next year’s chip that just has like 50% more memory bandwidth or something ridiculous like that, and beats you by pure grunt.

“No replacement for displacement” actually seems to be true in compute.


For the same reason why it took a long time for crypto mining accelerators to actually ship. It is more profitable to keep them for yourself.


This is a popular myth. Bitcoin ASICs were shipping in 2012/2013.

Some companies definitely played games and mined with the ASICs themselves (and then shipped those used ASICs)... but in general, it was always a lot more profitable to sell the shovels than to mine the gold.


Check out https://coral.ai/products/ accelerators you can actually buy.


Why does the headline just mention inference when the acronym also mentions training?

Is it primarily for inference, with training just an afterthought?


These seem power- and density-optimized. This sort of custom hardware is all about supply chains and getting a lot of them everywhere. That favors the inference use case. For large training jobs it is more about turnaround time; running hideously expensive GPUs sucking down huge amounts of power is fine.


It looks rather general-purpose (for ML tasks) to me:

Each PE is equipped with two processor cores (one of them equipped with the vector extension) and a number of fixed-function units that are optimized for performing critical operations, such as matrix multiplication, accumulation, data movement, and nonlinear function calculation. The processor cores are based on the RISC-V open instruction set architecture (ISA) and are heavily customized to perform necessary compute and control tasks.


They designed it in 2020. Does that mean it is likely to have been in use for a while, or does the design lag by a few years?


It is ambiguous on that front. If you designed it in 2020, getting through test runs at TSMC and then to a final production run would take a while, so when it was deployed at scale at FB is unclear.


Can OpenXLA/IREE target it? Supposedly PyTorch 2.0's big shift was a switch to these new systems. Curious to know whether that has actually happened here.

Side note: the chip says Korea on it, so I expected it was Samsung... but these are TSMC-made chips? What's up with that?


Probably a Korean packaging company


>>>> fabricated in TSMC 7nm process and runs at 800 MHz, providing 102.4 TOPS at INT8 precision and 51.2 TFLOPS at FP16 precision. It has a thermal design power (TDP) of 25 W.

So two generations of immediate improvement are available.


TOPS might count a more complex operation as one unit, the same way "compute shader passes per second" might mean a very simple or a very complex computation each second.
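For example, under the common convention (an assumption here, not something either vendor states in this thread) that a fused multiply-accumulate counts as two ops:

    # If 1 MAC == 2 ops, the quoted INT8 figure implies half as many MACs.
    tops_int8 = 102.4
    macs_per_s = tops_int8 / 2
    print(f"{tops_int8} TOPS -> {macs_per_s} trillion INT8 MACs per second")  # 51.2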


Their goal is to beat GPUs


Yes, so three nodes ahead.


Have there been any rumors or statements from Facebook about eventually stepping into selling cloud compute? I'd be surprised if they are investing in building hardware accelerators just for their own services.


Their footprint for just their own services rivals some other public clouds.


Given that these chips seem to be power optimised and Facebook's recently released sensory model, I wouldn't be surprised to see them in their next iteration of VR devices.


I think they’d be bad at it for the same reason Google is bad at it. Enterprise sales is not in their DNA.


The AI inference/training market is so competitive that I doubt enterprise sales is going to be the problem. A company planning on spending $50M training a model is not going to be convinced by some smooth talking sales guy over a golf game. They will look at the actual price/performance.


You’d be surprised. Azure is growing like gangbusters on the backs of smooth talking salesmen taking CTOs golfing.


I want one. This thing could run LLaMA 65B in int8 easily.

Meta is going to use it in its datacenters; it is much more efficient than Nvidia's general-purpose GPUs. They are serious about putting AI everywhere.
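Rough weight-only arithmetic for that claim, ignoring KV cache, activations, and runtime overhead:

    # Do the int8 weights of a 65B-parameter model fit in 128 GB of LPDDR5?
    params = 65e9                  # LLaMA 65B
    bytes_per_param = 1            # int8
    weight_gb = params * bytes_per_param / 1e9
    print(f"~{weight_gb:.0f} GB of weights vs 128 GB of LPDDR5")   # ~65 GB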


Why are there so many Mini SMP (?) connectors on the board? (video time 1:21)


Just missed FP8 implementation on hardware


How do they compare to TPUs?


Just as incredible is the corresponding announcement of their RSC, which is purportedly one of the world's most powerful clusters.

Amazing times! Private companies now have compute resources that previously showed up only in government labs, and in many cases they use novel components like MTIA.

This feels like the start of a golden age and in a few years we will have incredible results and breakthroughs



