Somewhat related: it looks like Prof. Harry Lewis is also teaching a class on "Classic CS" at MIT during the Spring 2020 term.
"This subject examines papers every computer scientist should have read, with an emphasis on the period from the 1930s to the 1980s. It is meant to be a synthesizing experience for advanced students in computer science: a way for them to see the field as a whole, not through a survey, but by reliving the experience of its creation, relating the original work to the field as it exists today. The aim is to create a unified view of the field by replaying its entire evolution at an accelerated rate, giving students the opportunity to become sophisticated generalists"
Wow. Setup and hold times have been demoted to 'demystification only' and are no longer part of the formal curriculum. When I helped teach this course in the '90s, it was a major section, and if you could not answer basic questions about synchronous clock discipline, you could not get an A. It was as important as stack-based calling conventions.
I mean, I guess most 'computer science' folks today can have a fecund and profitable career without ever having heard of these concepts, but... I hope some people still wonder: why do we have clock speeds, and what alternatives might exist?
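For anyone who hasn't run into these terms, here is a minimal sketch (illustrative only, with made-up numbers, not anything from the course) of where setup and hold constraints show up in a Verilog model, and the inequality that ties them to clock speed:

    // Setup/hold in one picture: a D flip-flop whose input must be
    // stable shortly before and after each clock edge.
    module dff_checked (
        input  wire clk,
        input  wire d,
        output reg  q
    );
        always @(posedge clk)
            q <= d;

        // Simulation-time timing checks: d must be stable 2 time units
        // before the clock edge (setup) and 1 time unit after it (hold).
        // The limits here are made-up illustrative values.
        specify
            $setup(d, posedge clk, 2);
            $hold(posedge clk, d, 1);
        endspecify
    endmodule

    // The synchronous clock discipline in one line: the clock period must
    // cover clock-to-q delay + worst-case combinational delay + setup,
    //     T_clk >= t_cq + t_comb + t_setup
    // which is exactly why a chip has a maximum clock speed.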
This seems very comprehensive and beautifully comprehensible.
It reminds me strongly of The Elements of Computing Systems and its companion web-based incarnation, Nand2Tetris, available at https://www.nand2tetris.org/
I started trying to read through the slides in the link and I felt the same way - this looks like fascinating stuff that I wish I understood better, but I got lost trying to just read the slides without a lecturer going over them. There were a few textbooks listed (including Hennessy & Patterson's excellent "Computer Organization"); I might check out the other two instead.
I kind of like their Minispec HDL - I don't think I've seen it before, though I gather it derives from Bluespec. I have always liked Wirth's Lola as well.
Verilog and VHDL are serviceable, but I think there is an advantage to having a simple, friendly syntax without the verbosity and overhead of VHDL.
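To make the verbosity point concrete, here is the classic toy example, a 2-to-1 mux, in plain Verilog; most of the text is scaffolding around one line of logic. (As far as I can tell from its Bluespec lineage, the Minispec equivalent is roughly a three-line function with no module/port boilerplate, but treat that comparison as approximate.)

    // 2-to-1 multiplexer in Verilog-2001: the module header and port
    // declarations outweigh the single line of actual logic.
    module mux2 (
        input  wire a,
        input  wire b,
        input  wire sel,
        output wire out
    );
        assign out = sel ? b : a;
    endmodule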
I also liked how Wirth's course involved running on FPGA hardware. It looks like you might be able to do that in the MIT course as well, although I didn't see specific labs for it.
This is an undergrad course, "200 level" in the standard course numbering (which MIT does not follow). It's for people who know how to program but are relatively new to the field. It's mostly about getting people used to assembly and basic computational building blocks.
You might be interested in the new class: 6.812 Hardware Architecture for Deep Learning.
"This subject examines papers every computer scientist should have read, with an emphasis on the period from the 1930s to the 1980s. It is meant to be a synthesizing experience for advanced students in computer science: a way for them to see the field as a whole, not through a survey, but by reliving the experience of its creation, relating the original work to the field as it exists today. The aim is to create a unified view of the field by replaying its entire evolution at an accelerated rate, giving students the opportunity to become sophisticated generalists"
https://www.eecs.mit.edu/academics-admissions/academic-infor...