>While no one programming legend can possibly accomplish any big feat solo, there are programmers worthy of fame for their supreme productivity.
Actually, at least one programming legend did, according to many:
------
Woz designed all the hardware and all the circuit boards and all the software that went into the Apple II, while the other Steve spewed marketing talk at potential investors and customers on the phone. Every piece and bit and byte of that computer was done by Woz, and not one bug has ever been found, “not one bug in the hardware, not one bug in the software.”[15] The circuit design of the Apple II is widely considered to be astonishingly beautiful, as close to perfection as one can get in engineering. Woz did both hardware and software. Woz created a programming language in machine code. Woz is hardcore.
-Geek Sublime: The Beauty of Code, the Code of Beauty
- Donald Knuth (TeX, METAFONT); an even more surprising omission here given that he's mentioned in the article a few times in other contexts; https://en.wikipedia.org/wiki/Donald_Knuth
Right. You saved me from making this outstanding correction. So, Knuth did TeX, METAFONT, Web, Weave, and Tangle for literate programming, and published the literate version of the code. Essentially just one guy.
"[...] Knuth definitely wrote most of the code himself, at least for
the Metafont re-write, for which I have pesonal knowledge. However,
some of his students (such as Michael Plass and John Hobby) did work
on the algorithms used in TeX and Metafont. He also did have a
programmer (David something) working for him at one point, but not
as I recall at the time of the Metafont re-write."
That said, a solid source on the earliest phases, as well as on the particulars of the specific "algorithms", would be nice to have.
I intended to prove that wrong one day, given what I've seen Moore, Wirth, the LISP machine people, etc. do with no more than two people. However, if I'm to be honest, anything I come up with individually will be a rehash of what a bunch of others came up with. So it's hard to even say it would be my own work unless I did something radically different across the board, from the circuits up.
Possible. I just saw Steele Jr.'s and Sussman's names. Then I implied a LISP machine could be made with 1 person if it could be made with 2; it would just take more time. So, if not that one, then any that was only done with 2 people.
Errr, to my knowledge, neither of them made major contributions to the Lisp Machine, outside of perhaps ideas.
It sounds like you're referring to their Scheme chip project, which they did not have the resources to push to success, e.g. getting the microcode right in one or two tries (the computing resources to simulate it were not available).
The Lisp Machine proper was a project done with TTL and fathered by Richard Greenblatt, who probably did some hardware design and more likely microcode work, as well as system software as I recall. However, the principal hardware designer was Tom Knight. David Moon wrote a lot of microcode (the Lisp Machine's microcode did a lot, e.g. eval, GC, the bytecode interpreter); he and Dan Weinreb are the only authors listed on the cover of the 1981 4th edition of the manual. Weinreb wrote the first text editor for it, per Wikipedia and my faint memory the 2nd EMACS implementation, and the first with a GUI and done in Lisp. Howard Cannon developed the Flavors OO extension, with which I remember a lot of the GUI was implemented.
It was a pretty big project; hmmm, TempleOS is the only "comprehensive" OS I can think of that was done by one or two people.
Yeah, it was the Scheme chip I was talking about. I guessed that they were included under the banner of LISP machines because Scheme is a LISP. So, they never actually built it? That's disappointing.
No, they built it, but the microcode had enough bugs it wasn't viable as more than a proof of concept, and it was also a part of the brand new and very exciting Mead & Conway revolution (https://en.wikipedia.org/wiki/Mead_%26_Conway_revolution).
After that the MIT Scheme community concentrated on producing a good version of the language for the 68000, which of course developed an ecosystem a homegrown chip could never hope to achieve back then. This was to support Sussman's work, including SICP/6.001 (Steele went to CMU "to bring the light to the heathen" :-).
Ah, thanks for the Mead & Conway revolution link. That fills a gap in the hardware and EDA research I've been doing: the how of the transition from discrete to custom chips. So, they practically invented VLSI methodology and MOSIS service? That's pretty awesome.
Back to Scheme. Ok, so they built it but it was a buggy, throw-away, proof-of-concept. I'll try to remember that in future references. Then they transitioned to software and SICP. Ok, a working Scheme chip would've been neat for me but I concede they made the right call for the time. Plus, it's better to work out a concept and how it will be used before trying to put it into silicon. Lets you decide which parts are really worth putting in hardware.
"Steele went to CMU "to bring the light to the heathen"
Haha. That's funny. Guess that's my job now. Appreciate your clarifications on the Scheme chip and its context.
Look at Wirth's and Gutknecht's Lilith project. After seeing Xerox's PCs, they invented, independently from Apple, a PC and GUI line involving: a custom computer; a CPU built from bit-slice chips; an idealized assembly language (M-code); a high-level, safer language (Modula-2) built for it; a compiler; a primitive OS; basic tools. This work and much of its methodology later became the Oberon language and system, also a two-person job. They also implemented custom HDLs and hardware for it later on. I believe one person, esp. of Wirth's or Gutknecht's capacity, could've built all this, albeit with a lot more time required. The key was simplicity, layers, understanding each module, and incrementally building them.
One interesting aspect of Wirth's work, which is illustrated by Oberon, is that he always iterated: from Pascal (itself heavily influenced by ALGOL) to Modula, to Modula-2, to Oberon, then finally Oberon-2. Apple's Object Pascal is in there somewhere. For example, as I understand it, Oberon was basically bootstrapped with Modula-2.
They're all Pascal-like, and therefore ALGOL-like, sharing a lot of similarities; not just syntax, but also concepts such as ranged types, range-bound arrays, enums, record syntax, etc. You can know Pascal and be able to pick up Modula or Oberon very quickly, since so many keywords and concepts are identical. At the same time, each iteration tried to simplify the language (the BNF for Oberon has only 33 grammar rules).
That's a good point. A strength of his improvements in terms of learning curve and re-using prior work. He was always good at that part. However, I like best how he designed the assembly languages (P-code, M-code) and high-level languages (Pascal, Modula-2) to be consistent with one another. Some called it a hack, but I think it's brilliant. Imagine how easy it would be to learn inline assembler if its workings matched the language behavior already described to you. It's as if explaining stacks, frames, goto, and arithmetic in C were about all it took to know x86 programming. It would make both ends more effective.
He did it for ease of implementation and compilation. We still need that for formal verification of whole systems, for securing whole systems, for people who like to experiment with them, and so on. So, his principle stands the test of time.
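To make that concrete, here's a minimal sketch of the idea (Python, purely illustrative; the opcode names are invented rather than actual P-code or M-code): a stack machine whose instructions mirror the evaluation order the high-level language already describes, so once you know the language semantics, the "assembler" holds few surprises.

    # Toy stack machine in the spirit of P-code/M-code; opcodes are made up.
    # The point: "a := (b + c) * d" compiles to instructions whose behavior
    # matches what the high-level language already told you about evaluation.
    def run(program, variables):
        stack = []
        for op, *args in program:
            if op == "LOAD":       # push the value of a variable
                stack.append(variables[args[0]])
            elif op == "CONST":    # push a literal
                stack.append(args[0])
            elif op == "ADD":      # pop two operands, push their sum
                b, a = stack.pop(), stack.pop()
                stack.append(a + b)
            elif op == "MUL":      # pop two operands, push their product
                b, a = stack.pop(), stack.pop()
                stack.append(a * b)
            elif op == "STORE":    # pop a value into a variable
                variables[args[0]] = stack.pop()
        return variables

    # a := (b + c) * d
    program = [
        ("LOAD", "b"), ("LOAD", "c"), ("ADD",),
        ("LOAD", "d"), ("MUL",),
        ("STORE", "a"),
    ]
    print(run(program, {"b": 2, "c": 3, "d": 4}))  # {'b': 2, 'c': 3, 'd': 4, 'a': 20}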
Yeah the weirdness. It's funny because I'm generally sympathetic to the thrust of the article, but then there are these random sentences that just read like a Markov chain.
Seems mostly right on. I don't think you need to know one language, though. You just need to understand whatever tools you use to model the problem and solution. One can get pretty far by ignoring as much of what's out there as possible, using subsets, and using a consistent style across tools. Lets you get more in your head without mastery of every detail. I still use a Cleanroom-like approach and FSMs, with my programs looking similar across toolchains.
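For what it's worth, the FSM style I mean is nothing fancier than a transition table like the sketch below (Python; the states and events are made up for illustration). The same table re-expresses almost verbatim in C, Ada, VHDL, or whatever the toolchain happens to be, which is what keeps the programs looking similar.

    # Minimal table-driven finite state machine; states and events are
    # invented for illustration. The transition table is the whole design.
    TRANSITIONS = {
        ("idle",    "start"): "running",
        ("running", "pause"): "paused",
        ("paused",  "start"): "running",
        ("running", "stop"):  "idle",
        ("paused",  "stop"):  "idle",
    }

    def step(state, event):
        # One possible policy: unknown events leave the state unchanged.
        return TRANSITIONS.get((state, event), state)

    state = "idle"
    for event in ["start", "pause", "start", "stop"]:
        state = step(state, event)
        print(event, "->", state)   # start -> running ... stop -> idle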
re nobody accomplishing any big feat solo
Seems to be a few potential counterexamples on this list:
Bram Cohen's BitTorrent jumps out as having the traits: one programmer, a solid technical contribution, and impact. There are probably more, but one must assess whether they had help; the list is misleading on that part.
Bram has written a post (link: http://bramcohen.livejournal.com/4563.html) about what, in his view, separates great programmers from others (spoiler: it is the ability to come up with good architectural decisions). But then, his Codeville project went nowhere, so take it with a grain of salt ;)
That was an interesting write-up. I disagree with how he thinks people will come up with good architectural decisions. That blind trial-and-error is how most programming happens and most of it is anything but good architecture. It's a nice learning and exploratory process, though.
For learning architecture, I'd recommend people do what I did: look up all kinds of present and past solutions to problems that worked well to see how they did it. See what they worked with, constraints, goals, specifics, and what resulted. Most discoveries are re-hashes of old ones and there's plenty of good stuff to draw on in many subfields. Some are truly novel but studying old stuff for new applications will get you pretty far. Out of the box thinking for the rest.
This article is bizarre: "The greatest programming minds are capable of using software to multiply massive integers that are bigger than the universe." What does this even mean?
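If it means arbitrary-precision arithmetic, that hardly takes the greatest programming minds; any bignum library, or plain Python, happily multiplies integers with more digits than the observable universe has atoms (roughly 10^80). A throwaway example:

    # Python ints are arbitrary precision, so "bigger than the universe" is cheap:
    # the observable universe has roughly 10**80 atoms; these have 201 digits each.
    a = 10**200 + 12345
    b = 10**200 + 67890
    product = a * b
    print(len(str(product)))  # 401 digits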
Whether software is the most complex system humans have built very much depends on what you consider to be a system.
And whether you agree that software is the most complex thing humans have built also seems to correlate, to a remarkable degree, with whether you build software for a living.
"Whether software is the most complex systems humans have built very much depends on what you consider to be a system."
Indeed. To take just one example, the world economy is a system created entirely by human beings, yet seems to me to be much more complex than any piece of software. People try to model pieces of the economy with software, with varying degrees of success, but I don't think it's feasible (it may be impossible) to model the economy as a whole.
I don't know if software systems are the most complex systems we've built, but you can't compare them to the world economy, which is an emergent property that nobody has "built".
The ideas of personal property and wage labor are certainly not emergent properties of the universe. We could just as easily have chosen collective ownership with equal distribution of the fruits of socially compelled labor, trial by combat to see who gets the most stuff, serfs being fed by their masters who effectively own them, everyone feeding himself directly from the land which is owned by no one, everyone feeding himself directly from the land which was cleared of competitors by deliberate genocide, etc. All have been practiced at one point or another.
An economy based on property ownership, investment, and labor is absolutely a system created by humanity, though you could argue many of its observed properties are emergent.
The key in my comment is the difference between "built" (as in: purposefully designed and built, like it happens with software) and "created" (as in: you can accidentally create something). But I know that human language isn't so well defined and other people can give different meanings to those terms... because nobody "built" English!
Regarding "We could just as easily have chosen" I don't agree at all on "easily". Contrary to many theories which say or imply that, people's minds aren't blank slates that can be easily programmed (not to mention re-programmed) to adapt to any social engineering project. Countless examples in history prove this.
Not that I necessarily agree with the distinction made in the definition above, but assuming it's true, wouldn't the garden be an artifact? The soil and the tomatoes surely not, but the garden in layout and concept would be a human built thing.
The world economy absolutely was designed, just not by a single designer. It certainly isn't there by mistake.
Economic theories dating back to the late 1700s were arguably the first major attempts at retroactive design of the global economy. There have been other pockets of this happening, but not globally; more likely in the monetary and debt systems of various epochs and empires.
But after retroactively observing market exchange, most classical economic theory became pretty prescriptive, i.e. it wasn't so much a reflection of "human nature" as an easy way to promote free-market ideals.
For example, the "profit motive" has been basically disproven as something that never actually existed historically or anthropologically as a universal human driver. It was invented by economists as useful way to explain away certain aspects of their system design. This is not to say some people were not and are not motivated by profit.. sure they are. It's just not universal, like hunger, or breathing.
Markets also weren't omnipresent throughout human history until proto-economists noticed them, wrote about them, and businessmen began convincing governments to apply them by force to various communities in the 1800s (see Karl Polanyi, The Great Transformation).
One could argue that economics is really "software design for the world economy", except you need to test and debug in production. Religious wars about better designs and features abound (fiat currency vs. gold standard; Keynesians vs. Rational Expectations / Real Business Cycles; Austrians vs. Science; etc.)
The world economy is not analogous to "any piece of software"; it is loosely analogous to something like "all software running on the Internet" or some other broad set of heterogeneous systems.
The OP is referring to white-box systems where the programmer thoroughly understands everything. The economy may be artificial, but its immense complexity makes it a black box.
The point about internalizing what you are doing is sadly overshadowed here. I know folks love Bret Victor's talks about externalizing this visualization to teach people and make things easier to show to others. What that misses is that there is a great benefit to internalizing this, such that you don't have to find a way to visualize it.
The killer section for me is Thompson's quote regarding how hard it is to follow programs that are built with layer upon layer. Just a single paragraph really sums this up.
It's one thing to start solo when the field or subfield is new and a very different thing when it is mature. Game programming? Individuals or small teams developed great games for the Atari 2600, Sinclair X, and Commodore Y, but now that's almost impossible. You can follow the indie programming route, but it requires ignoring some dimension of the finished product.