Matrix multiplication is important for graphics, and equally important for finding the weights of a neural network.
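
A toy illustration of the point (my own sketch, not from the thread): the same matrix-multiply primitive drives both a vertex transform and a fully connected layer, here in numpy.

    import numpy as np

    # Graphics: transform a batch of homogeneous vertices by a 4x4 matrix.
    vertices = np.random.rand(1000, 4)            # (x, y, z, w) per vertex
    model_view_proj = np.eye(4)                   # placeholder transform
    transformed = vertices @ model_view_proj.T    # one batched matmul

    # Neural network: a fully connected layer is the same operation, with
    # the matrix holding learned weights instead of a geometric transform.
    activations = np.random.rand(1000, 4)         # batch of inputs
    weights = np.random.rand(4, 8)                # learned parameters
    outputs = activations @ weights               # again, just a matmul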



Yep, though it's hard to imagine that the original creators of the Nvidia TNT or Voodoo had any idea that GPUs would become fully programmable computing hardware used for non-graphical applications.


The creators of Voodoo (3dfx's Gary Tarolli and Scott Sellers) came from the world of fully programmable GPUs: Silicon Graphics workstations had full hardware T&L since ~1988 (http://www.sgistuff.net/hardware/systems/iris3000.html).

The whole point of Voodoo 1 was making it as simple and cheap as possible: remove all the advanced features and calculate geometry/lighting on the CPU (roughly the division of labor sketched below).

https://www.youtube.com/watch?v=3MghYhf-GhU
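
A rough sketch of that division of labor (hypothetical code: the 640-pixel-wide viewport and diffuse lighting model are my own assumptions, not 3dfx's API):

    import numpy as np

    def transform_and_light(verts, normals, mvp, light_dir):
        # CPU-side T&L, Voodoo 1 style: the host does all the geometry math
        # and hands the card nothing but screen-space triangles to fill.
        clip = verts @ mvp.T                         # 4x4 transform per vertex
        ndc = clip[:, :3] / clip[:, 3:4]             # perspective divide
        screen = (ndc[:, :2] + 1.0) * 0.5 * 640     # map to pixel coordinates
        intensity = np.clip(normals @ light_dir, 0.0, 1.0)  # per-vertex diffuse
        return screen, intensity                     # the card just rasterizes

    verts = np.random.rand(3, 4) + np.array([0, 0, 0, 1.0])  # keep w positive
    normals = np.array([[0.0, 0.0, 1.0]] * 3)
    screen, intensity = transform_and_light(verts, normals, np.eye(4),
                                            np.array([0.0, 0.0, 1.0]))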


Iris Graphics Geometry Engines weren't programmable in the modern sense. There was a fixed pipeline of matrix units, clippers, and so on that fed the fixed-function Raster Engines. You could change various state that went into the stages, but the pipeline's operations themselves were fixed.
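
To caricature the distinction (my own runnable sketch, not SGI's actual design): a fixed pipeline exposes state you can set, never operations you can replace.

    import numpy as np

    # Each stage's operation is baked into "hardware"; the application can
    # only change the state a stage reads, never the stage itself.
    def matrix_stage(v, matrix):
        return matrix @ v                       # fixed op, settable matrix

    def clip_stage(v):
        return v if v[3] > 0 else None          # fixed op, no knobs at all

    def raster_stage(v, viewport):
        w, h = viewport                         # fixed viewport mapping
        return ((v[0] / v[3] + 1.0) * 0.5 * w,
                (v[1] / v[3] + 1.0) * 0.5 * h)

    state = {"matrix": np.eye(4), "viewport": (640, 480)}
    for vertex in np.random.rand(10, 4):
        v = matrix_stage(vertex, state["matrix"])
        v = clip_stage(v)
        if v is not None:
            raster_stage(v, state["viewport"])

A programmable geometry processor replaces those fixed stages with user-supplied code.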

Later SGI Geometry Engines used custom, very specialized DSP-like processors, but the microcode for those was written by SGI and not end-user programmable.

There were probably research systems before it, but AFAIK the GeForce 3 was the first (highly limited) programmable geometry processor that was generally commercially available.


Uhm, weren't their later graphics systems heavily based on i860 processors?


Yes, later REs were i860s.


I don't think they'd have been super surprised. Just pleased.

AI accelerators have been a thing for decades - DSPs were used as neural-network accelerators in the early 90s - and the Cell processor was in development by 2001.

GPUs just became vastly more accessible for general-purpose programming in the last decade. People were doing it back in the 90s, but it was seriously hard.

We finally hit a tipping point where it's just kinda hard.


There were also various custom "systolic array" processor designs in the 1980s (the ALVINN vehicle, and the earlier projects that led to it, used these for early neural-network-based self-driving experiments).
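
A toy model of the systolic idea (my own sketch; real systolic arrays skew operand timing across the grid, which this glosses over): an output-stationary array where every processing element accumulates one partial product per cycle, computing C = A x B.

    import numpy as np

    def systolic_matmul(A, B):
        # PE (i, j) holds C[i, j]; on each "cycle" t it multiplies the A and
        # B values streaming past it and adds the product to its accumulator.
        n, k = A.shape
        _, m = B.shape
        C = np.zeros((n, m))
        for t in range(k):                    # one loop iteration = one cycle
            C += np.outer(A[:, t], B[t, :])   # all PEs fire in lockstep
        return C

    A = np.random.rand(3, 4)
    B = np.random.rand(4, 5)
    assert np.allclose(systolic_matmul(A, B), A @ B)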


I remember back in 2004, when I heard a fellow grad student was working on using GPUs as a co-processor for scientific computing, I thought, "Wow, that's esoteric and niche."


This reminds me of a comment I read here ages ago about a scientist using the "processors" of the university's PostScript printers because they did the work much faster than their scientific workstations.


Reminds me of some Commodore 64 programs running code on the 1541 disk drive to offload computation from the main CPU; both the C64 and the 1541 had 6502s running at 1 MHz (well, the C64 had a 6510, which added an I/O port). The original Apple LaserWriter had a 68k running at 12 MHz, while the Mac Plus, which came out almost a year later, had its 68k running at 8 MHz.



