Hacker News

Google has had TPUs for a while. Most of its most powerful transformer models, like Parti and PaLM, are trained on them. With Pathways they can even orchestrate training across multiple TPU pods, which is how PaLM was scaled to 540B parameters (GPT-3 is only 175B).

https://blog.google/technology/ai/introducing-pathways-next-...
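A back-of-envelope calculation shows why that scale forces multi-pod training (a sketch, assuming bf16 weights at 2 bytes per parameter; the function name and the per-chip memory figure are illustrative, not from the post):

```python
def param_memory_gb(n_params, bytes_per_param=2):
    """Memory needed just to hold the weights, assuming bf16 (2 bytes each)."""
    return n_params * bytes_per_param / 1e9

palm_gb = param_memory_gb(540e9)   # PaLM, 540B parameters -> 1080.0 GB
gpt3_gb = param_memory_gb(175e9)   # GPT-3, 175B parameters -> 350.0 GB

# A single TPU v4 chip has 32 GB of HBM, so even the weights alone
# must be sharded across hundreds of chips -- before counting
# optimizer state and activations, which multiply this several-fold.
print(palm_gb, gpt3_gb)
```

Optimizer state (e.g. Adam moments in fp32) typically adds several times the weight memory again, which is why systems like Pathways spread a single training job across whole pods rather than single hosts.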
