The class 6.004 at MIT basically steps through these abstractions one week/lab at a time. This was my favorite class in school. Each lab is a higher level of abstraction and eventually you've built a basic processor. It gives you a very clear picture of how "1s and 0s" can turn out to become what you see on your computer screen.
We had a class like this at my school (LTH in Sweden) and it was probably the most enlightening school experience I've had.
We started the first week with a handful of diodes and resistors, building latches, and slowly built an adder, memory, and a calculator, eventually ending up with a programmable traffic-light computer.
They were also phasing FPGAs into the curriculum, so you could pick between breadboarding everything, doing it in software on an FPGA, or even a mix. Very very fun.
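That "latches to adder" progression is easy to sketch in software too. Here is a minimal, illustrative full adder built from gate-level boolean logic, chained into a ripple-carry adder (Python used purely as a hardware-description stand-in; the function names are my own):

```python
def full_adder(a, b, cin):
    """One-bit full adder expressed as gate-level boolean logic."""
    s = a ^ b ^ cin                   # sum bit: XOR of all three inputs
    cout = (a & b) | (cin & (a ^ b))  # carry-out: majority function
    return s, cout

def ripple_carry_add(x, y, width=8):
    """Chain full adders bit by bit, like wiring them up on a breadboard."""
    carry, result = 0, 0
    for i in range(width):
        s, carry = full_adder((x >> i) & 1, (y >> i) & 1, carry)
        result |= s << i
    return result
```

For example, `ripple_carry_add(23, 42)` gives 65, and results wrap modulo 2^width just as a fixed-width hardware adder would.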
This is a good question. I sometimes comment about and help others on the topic of travel on Reddit, often writing a lot of content. The incentive for me is that it's fun and intrinsically rewarding to help others do something that I find meaningful (which would fit into sibling commenter Gatsky's reason #2). So in a way it's kinda self-serving, but in a good way.
As for why authors don't just link to another webpage: I think it's because a) those pages often don't fit what is being asked, if it's more in-depth; b) the author might have a way to explain it that they want to use; or c) you might presume that writing your own content will get more upvotes.
Meta-funnily enough, in this thread we have both approaches: myself and others offering opinions, another guy linking to a Quora question.
1. They are excited by chip architecture
2. They achieved profound and ecstatic insight once they worked all this out for themselves, and want to share that feeling with other people.
3. They just like helping others, and in particular helping others to understand.
From the tone of the piece, I suspect it is mainly 2.
I found this a very helpful write up. I am glad this person exists and that they took the time and effort to achieve 2.
Furthermore, at the end the author says he has started a blog on Quora about chip architecture. Why anyone would post this stuff anywhere other than their own domain is beyond me.
> I had this idea after writing this answer: Subhasis Das' answer to Computer Science: How does a computer chip work? which gained quite a few upvotes. I realized a single answer was far too little to talk about all the ultra awesome machineries that are hidden inside a processor.
I think answering the Quora question was what inspired him to start his own blog, which means he didn't really have a blogpost to reference when he answered the Quora question.
It's called gamification. You start counting points/upvotes etc. and suddenly everyone wants to collect more of them. Quora has two more twists: it allows you to post to social networks so your friends know that you are helping people with your knowledge (oh, I'm sorry, it just does that by default). Quora also has ties to online versions of some well-known publications where they post popular answers. So suddenly you are a "published", bona fide writer in the likes of Business Insider, InfoWorld, etc. I know of people who spend perhaps ~2 hours a day writing beautiful answers to silly questions like "why is this actor so popular", complete with screenshots, trailer videos, dialog transcripts and so on.
The same gamification strategy works on sites like StackOverflow, Reddit and HN. The most successful ones have twists like Quora's; StackOverflow, for instance, throws in the bonus of letting you put your points on your resume. I once saw that the top points earner on StackOverflow was Jon Skeet, who had earned 700k+ points. I looked at the count of his answers and the length of his average answer, and estimated that he must have spent 4 hours per day, each working day, for the past 4+ years to earn that many points. That was on the minimum side. If you convert that to an hourly rate, it's as if he donated about half a million dollars to StackOverflow.
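The back-of-the-envelope math behind that "half a million" figure works out roughly as follows (the hourly rate is my own assumed figure; the original comment doesn't state one):

```python
hours_per_day = 4        # estimated time spent answering, from the comment
workdays_per_year = 250  # ~5 working days/week
years = 4                # "past 4+ years"
hourly_rate = 125        # assumed consulting-style rate in USD (not from the comment)

total_hours = hours_per_day * workdays_per_year * years  # 4000 hours
donated_value = total_hours * hourly_rate                # 500,000 USD
print(total_hours, donated_value)
```

Any plausible senior-engineer rate in the $100–150/hr range lands the total in the same half-million ballpark.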
Now if you can upvote this answer, that would be great ;).
It doesn't look zero-sum; it looks more like a win-win game than a win-lose game, unless we count the number of hours lost assembling a long answer... But time/money/energy seems priceless when you love what you do! So it's poised to be a win-win: the source gets reputation/credits and the publisher gets visits too!
So from another perspective, it seems one's likes and dislikes can be used in more ways than one! But I guess the transition from use to abuse occurs when priorities change!
> I once saw that the top points earner on StackOverflow was Jon Skeet, who had earned 700k+ points. I looked at the count of his answers and the length of his average answer, and estimated that he must have spent 4 hours per day, each working day, for the past 4+ years to earn that many points.
That's interesting. Doesn't this defeat the very purpose of mentioning reputation points in a CV?
> That was on the minimum side. If you convert that to an hourly rate, it's as if he donated about half a million dollars to StackOverflow.
So that's how this so-called zero-sum game looks, but it's more driven by priorities than perspective. We don't know how many people were saved excess time and energy spent on understanding the same material, thanks to one person's generosity.
> Now if you can upvote this answer, that would be great ;)
Quite often, this kind of detailed reiteration can be an important part of maintaining one's own knowledge. And making the effort to explain a topic well keeps one aware of what is and is not well understood outside of one's own, narrow circle. This can be enormously advantageous when having to work with the larger group in some capacity that demands their trust in your ability and openness.
It amazes me that in almost every article of this kind, there is this huge leap from binary adder to "computer as a bunch of labeled boxes", where those boxes mostly describe the data path and all the control and sequencing is omitted. I think most people, after reading this kind of description, will still think it is some kind of impenetrable black magic (I certainly did for a long time). And it is not: showing how one could build a state-machine sequencer out of a register and some logic is neither hard nor complex.
Here is a project which gets you from relays to assembly language. With careful design (one cycle per instruction), you do not need much control and sequencing logic:
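To make the "register plus some logic" point concrete, here is a hypothetical traffic-light sequencer sketched in Python: the dict lookup plays the role of the combinational next-state logic, and the loop variable plays the role of the state register that is reloaded on each clock tick (all names are illustrative, not from any real design):

```python
# States of a hypothetical traffic-light controller
RED, GREEN, YELLOW = 0, 1, 2

def next_state(state):
    """Combinational logic: computes the next state from the current one."""
    return {RED: GREEN, GREEN: YELLOW, YELLOW: RED}[state]

def run(cycles, state=RED):
    """The 'register': holds the state between clock ticks."""
    trace = []
    for _ in range(cycles):
        trace.append(state)
        state = next_state(state)  # clock edge: register loads the next state
    return trace
```

In hardware this is just a 2-bit register and a handful of gates; the same register-plus-logic pattern, scaled up, is what sequences a real processor through its fetch/decode/execute steps.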
The subheading of the question was never addressed—"how do we get all of that information compacted onto that ever shrinking chip?"
That's always been the biggest mystery to me: what are chip manufacturers doing differently with each of these "process nodes" that lets them do photolithography at slightly smaller scales, but with the scale only shrinking a little per five-year interval?
Naively, I'd expect a process like photolithography to be mostly scale-invariant (you can lens a mask down to whatever size you like) down to a size where it hits a wall due to quantum effects. So when photolithography was invented, why didn't chips suddenly jump from 100um to 100nm scale?
> why didn't chips suddenly jump from 100um to 100nm scale?
Another point is that Moore's Law is at least partially self-fulfilling. If you are a chip-fabrication company, you need to spend money to develop new technologies, and the smaller you want features to be, the more you have to spend. You could spend a comparatively huge amount of money and leap ahead of all the competition, but then you'd have to charge more than the competition for your services. All of your customers expect things to progress according to Moore's Law, so they won't be prepared to spend the extra money. Ideally you want to be just ahead of the competition, not way ahead. I hope that makes sense; I found it hard to articulate.
> why didn't chips suddenly jump from 100um to 100nm scale?
Manufacturing processes. As you go to smaller scales, the investment required increases, and so does the error rate. When you produce millions of units you want a low reject rate, and your process needs to be fully controlled. That's why there's always a gap between what's technically possible and what makes sense in an industrial context.
There is a substantial amount of detail to be added, but if you just want to skim through some industry documentation, the International Technology Roadmap for Semiconductors (ITRS) describes a lot of the challenges involved in producing leading-edge process products. It comes from the working group that many of our companies participate in to coordinate what everybody will need from tool vendors and by when (e.g. when everybody wants to start pushing for 450mm wafers, all the tools need to already be able to accommodate a larger wafer).
Photolithography is very difficult to accomplish when the feature size is smaller than the wavelength of light used. They have to do all sorts of crazy things with interference patterns to get below 200nm. It's amazing that it has progressed to 22nm.
Ohh, I remember when I built my first Von Neumann machine with EEPROMs, GALs, flip-flops and multiplexers. It was a 16-button "Simon says" game; I used 6 protoboards and kilometers of copper wire.
OCW Link: http://ocw.mit.edu/courses/electrical-engineering-and-comput...