Everyone should go through this rite-of-passage work and get to an implementation of "Attention Is All You Need". It's an area where the engineering and the academic papers are very close and reproducible, and working through it is a must if you want to progress in the field.
(See also Andrej Karpathy's Neural Networks: Zero to Hero series on YouTube; it's very good and similar in spirit to this work.)
I would also recommend going through Callum McDougall and Neel Nanda's fantastic Transformer from Scratch tutorial. It takes a different approach to conceptualizing the model (or at least, it implements it in a way that emphasizes different characteristics of Transformers and self-attention), which I found deeply satisfying when I first explored it.
That magic moment in Karpathy's first video when he gets to the loss function and calls backward() for the first time - that's when it clicked for me. Highly recommended!
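For anyone who hasn't watched it yet, the moment looks roughly like the toy sketch below (my own illustration in PyTorch, not Karpathy's actual micrograd code): you compute a scalar loss, call backward(), and the gradients just appear.

```python
import torch

# Toy setup: a single linear "neuron" and a squared-error loss.
x = torch.tensor([1.0, 2.0, 3.0])
w = torch.randn(3, requires_grad=True)
target = torch.tensor(2.0)

pred = (w * x).sum()          # forward pass
loss = (pred - target) ** 2   # scalar loss

loss.backward()               # backprop: fills w.grad with d(loss)/dw
print(w.grad)                 # gradients appear "for free"
```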
Is this YouTube series also "from scratch (but not really)"?
Edit - it is. Not to talk down on the series. I’m sure it’s good, but it is actually “LLM with PyTorch”.
Edit - I looked again, and I was actually not correct. He does ultimately use frameworks, but he spends some time early on explaining how they work under the hood.
I appreciate you coming back and giving more details; it encourages me to look into it now. Maybe my expectations of the internet are just low, but I thought it was a virtuous act worth the effort. I wish more people would keep their skepticism but still be willing to follow through and let their opinions change given solid evidence.
As a chronic premature optimizer, my first reaction was, "Is this even possible in vanilla Python???" Obviously it's possible, but can you train an LLM before the heat death of the universe? A perceptron, sure, of course. A deep learning model, plausibly, if it's not too deep. But a large language model? I.e., the kind of LLM necessary for "from vanilla Python to functional coding assistant."
But obviously the author already thought of that. The source repo has a great motto: "It don't go fast but it do be goin'" [1]
I love the idea of the project and I'm curious to see what the endgame runtime will be.
Why wouldn't it be possible? You can generate machine code with Python and call into it with ctypes. All your deep learning code is still in Python, but at runtime it gets JIT-compiled into something faster.
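Here's a minimal sketch of that mechanism, purely my own illustration and not from the repo (assuming x86-64 Linux and the System V calling convention): emit a few bytes of machine code into an executable buffer and call it through ctypes.

```python
import ctypes, mmap

# x86-64 machine code for: int add(int a, int b) { return a + b; }
# System V AMD64 convention: a in edi, b in esi, result in eax.
code = bytes([
    0x8d, 0x04, 0x37,  # lea eax, [rdi + rsi]
    0xc3,              # ret
])

# Allocate a read/write/execute page and copy the code into it.
buf = mmap.mmap(-1, mmap.PAGESIZE,
                prot=mmap.PROT_READ | mmap.PROT_WRITE | mmap.PROT_EXEC)
buf.write(code)

# Wrap the buffer's address as a C function pointer and call it.
addr = ctypes.addressof(ctypes.c_char.from_buffer(buf))
add = ctypes.CFUNCTYPE(ctypes.c_int, ctypes.c_int, ctypes.c_int)(addr)
print(add(2, 3))  # -> 5
```

A real JIT would obviously emit far more than a four-byte add (matmul kernels, loops over tensors), but the calling path is exactly this: Python generates the bytes, ctypes calls them.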
The only problem is that it's implemented in Python. For one, I hate installing Python on my machine, and I don't know how to manage the dependencies. And macOS forces you to upgrade the OS just to install native stuff. Such a hell.