>If you check the Twitter thread, he didn't create a CPU, or anything close to one. As far as I can tell, he just took some circuits that others created (e.g., debouncer, PWM) and hooked them up to create a 3-channel LED controller that can output a pulse signal. That's it. He did go through the rest of the flow (synthesis, STA, P&R, etc.), but it seems to be the core only: there are no pads or anything to connect the chip to a package, it's just the core logic.
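For anyone wondering how much that actually is: a single PWM channel is roughly this much Verilog. A minimal sketch of my own, assuming an 8-bit free-running counter; the widths and port names are illustrative, not his code:

    // One PWM channel: output is high while the counter is below `duty`.
    // Parameter widths and port names are illustrative.
    module pwm_channel #(
        parameter WIDTH = 8
    ) (
        input  wire             clk,
        input  wire             rst_n,
        input  wire [WIDTH-1:0] duty,    // 0 = always off, max ~= always on
        output reg              pwm_out
    );
        reg [WIDTH-1:0] counter;

        always @(posedge clk or negedge rst_n) begin
            if (!rst_n) begin
                counter <= {WIDTH{1'b0}};
                pwm_out <= 1'b0;
            end else begin
                counter <= counter + 1'b1;   // free-running counter
                pwm_out <= (counter < duty); // high while below threshold
            end
        end
    endmodule

Instantiate three of these plus a debouncer on the inputs and you have essentially the design described in the thread.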
His complaints about the inability to find any details on GPU architectures strike me as weird, because there's a wealth of information about them out there, especially if (as he clarifies in later posts in his thread) you're ignoring the fixed-function and graphics aspects of a GPU and just building a GPGPU, at which point the architecture is largely "a many-way SMT, in-order CPU architecture with wide SIMD execution units"--nothing out of the ordinary in any advanced computer architecture course.
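And the SIMD part really is just N copies of a scalar ALU running in lockstep. A hedged sketch (the lane count and op encoding are invented for illustration):

    // A SIMD execution unit: N identical lanes over one wide register.
    // LANES, LANE_W, and the op encoding are made up for this example.
    module simd_alu #(
        parameter LANES  = 8,
        parameter LANE_W = 32
    ) (
        input  wire [1:0]              op, // 00 add, 01 sub, 10 and, 11 or
        input  wire [LANES*LANE_W-1:0] a,
        input  wire [LANES*LANE_W-1:0] b,
        output wire [LANES*LANE_W-1:0] y
    );
        genvar i;
        generate
            for (i = 0; i < LANES; i = i + 1) begin : lane
                wire [LANE_W-1:0] ai = a[i*LANE_W +: LANE_W];
                wire [LANE_W-1:0] bi = b[i*LANE_W +: LANE_W];
                assign y[i*LANE_W +: LANE_W] =
                    (op == 2'b00) ? ai + bi :
                    (op == 2'b01) ? ai - bi :
                    (op == 2'b10) ? ai & bi :
                                    ai | bi;
            end
        endgenerate
    endmodule

The genuinely hard parts of a real GPU (the memory system, warp scheduling, divergence handling) are exactly the topics a computer architecture course covers.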
Interestingly, Anthropic's Claude Opus AI tools have been useful during this GPU design stage. "I've been proposing my ideas for how each unit must work to Claude, and then somehow it will guide me toward the right implementation approaches which I can then go and confirm with open-source repos," explained the engineer.
I'm sorry, what has this guy done? It really doesn't sound like he's actually done much. I've also read the part about fabrication, and that means jack all: it doesn't sound like this guy is doing his own fabrication. If anything, it sounds like pretty run-of-the-mill 2nd/3rd-year logic circuit design.
From other comments though I’m not sure it’s even that?
I can't tell, because Twitter makes it hard to read threads now, whether details are missing from the article or there's literally nothing to actually report on.
Also, the comments on GPUs are really confusing. It sounds like someone has bought into the mythology of GPUs being somehow magic, rather than simply many parallel micro-micro processors.
As best as I can tell, this is someone who's speed-run something like nand2tetris and an intro-to-EDA-tools tutorial, and is now trying to move into GPUs and is somehow distraught about how hard it is to make that conceptual leap. And this was picked up by a reporter trying to make a quick article who doesn't fully understand what he is reporting on and thinks it's something more impressive than it really is.
To be clear: I think a decently motivated person can go from "0 knowledge" to a toy implementation of a CPU in a digital logic simulator in something like 20 hours of focused work (especially if the simulator gives you prebuilt components for things like adders and memory). I've never done digital design for real circuits before, but I imagine it's a similar level of difficulty.
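For scale, here is roughly what a toy CPU looks like as HDL rather than in a visual simulator. The ISA below is invented on the spot for illustration (four opcodes, accumulator machine) and has no relation to anything in the thread:

    // A toy 8-bit accumulator CPU with a 16-entry instruction ROM.
    // Instruction format: [7:6] opcode, [5:0] immediate.
    module toy_cpu (
        input  wire       clk,
        input  wire       rst_n,
        output reg  [7:0] acc
    );
        localparam OP_LDI = 2'b00;  // acc <= imm
        localparam OP_ADD = 2'b01;  // acc <= acc + imm
        localparam OP_SUB = 2'b10;  // acc <= acc - imm
        localparam OP_JMP = 2'b11;  // pc  <= imm

        reg [3:0] pc;
        reg [7:0] imem [0:15];      // tiny instruction ROM

        initial begin               // hard-coded demo program
            imem[0] = {OP_LDI, 6'd5};   // acc = 5
            imem[1] = {OP_ADD, 6'd3};   // acc = 8
            imem[2] = {OP_SUB, 6'd1};   // acc = 7
            imem[3] = {OP_JMP, 6'd1};   // loop back to imem[1]
        end

        wire [7:0] instr  = imem[pc];
        wire [1:0] opcode = instr[7:6];
        wire [5:0] imm    = instr[5:0];

        always @(posedge clk or negedge rst_n) begin
            if (!rst_n) begin
                pc  <= 4'd0;
                acc <= 8'd0;
            end else begin
                pc <= pc + 4'd1;                 // default: next instruction
                case (opcode)
                    OP_LDI: acc <= {2'b00, imm};
                    OP_ADD: acc <= acc + {2'b00, imm};
                    OP_SUB: acc <= acc - {2'b00, imm};
                    OP_JMP: pc  <= imm[3:0];     // overrides the increment
                endcase
            end
        end
    endmodule

That's the whole thing; the 20 hours mostly goes into understanding why it works, not typing it in.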
From parsing his comments, the GPU confusion seems to stem from the Twitter user skipping any attempt to learn advanced computer architecture because he thinks it's not on the critical path to building a GPU, not realizing that the information he's struggling to find is really just computer architecture 301.
Apart from the already mentioned nand2tetris project, if you want to build a physical CPU from scratch I would recommend one of the kits from Ben Eater: https://eater.net/
Nothing here has anything to do with creating a "CPU" in 2 weeks. In the Twitter author's defense, he says "chip", not CPU, and I didn't see where he claimed to have actually successfully completed anything in 2 weeks, just that he was "challenged to." This article is pretty garbo.
Well, the interesting part here is that the tools (Verilog and OpenLane) seem to work pretty well and are highly accessible. Designing a CPU hasn't been a complicated exercise for decades, but implementing it has been.
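For a sense of how accessible: if I'm remembering the OpenLane 1 flow right, getting from RTL to layout is a short JSON config plus one command. A sketch, with paths and values that are illustrative (check the docs for the keys your version expects):

    {
      "DESIGN_NAME": "pwm_channel",
      "VERILOG_FILES": "dir::src/*.v",
      "CLOCK_PORT": "clk",
      "CLOCK_PERIOD": 10.0
    }

Then something like `./flow.tcl -design pwm_channel` inside the container runs synthesis, STA, placement, and routing for you.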
Verilog is kind of trash by modern standards. Unfortunately, we are stuck with it (well, SystemVerilog) until tool vendors support something else.
It's kind of a similar situation to JavaScript actually. And in a similar way, you can compile to Verilog, but just like with JS it makes debugging much more painful.
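A concrete example of the trash, for anyone who hasn't been bitten yet: an incomplete conditional in a combinational block silently infers a latch instead of erroring out.

    // Classic footgun: no else branch, so `y` must hold its old value
    // and synthesis infers a latch instead of combinational logic.
    module latch_footgun (
        input  wire       sel,
        input  wire [3:0] a,
        output reg  [3:0] y
    );
        always @* begin
            if (sel)
                y = a;      // missing else -> latch inferred
        end
    endmodule

    // The fix: assign a default first so every path drives `y`.
    module no_latch (
        input  wire       sel,
        input  wire [3:0] a,
        output reg  [3:0] y
    );
        always @* begin
            y = 4'd0;       // default assignment, no latch
            if (sel)
                y = a;
        end
    endmodule

SystemVerilog's `always_comb` at least gets tools to flag this; plain Verilog just shrugs.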
There was this interesting project but it seems inactive: https://llhd.io/
There are also various alternative HDLs that, to varying degrees, seem to be solving the wrong problem (SpinalHDL, MyHDL, Chisel). This one looks quite interesting, though: https://filamenthdl.com/