It's not about LOC being cost-effective; it's about orthogonality. When you're gaining knowledge about something, you're expanding your data set. When you're gaining wisdom (understanding) about something, you're contracting it.
Everything that we're doing currently points to a huge scattering of knowledge but little-to-no understanding. So that's what Alan was shooting for: trying to express everything with as little as possible.
The reason we can't do it yet is that, while we think we know what we're doing, we still don't understand what we're doing.
You may not believe in a silver bullet, but very smart people believe we can do vastly better. I think you may agree that we need to at least try.
Thank you very much for the explanation. I think I get the point but remain skeptical. Is knowledge (even in a totally man-made subdomain like OS/App) fundamentally scattered in nature, or can it be modelled at 0.1% of its original size?
It's kind of odd that Alan Kay, the visionary behind the Dynabook, who advocates the importance of exchanging ideas, believes that a single person should be able to handle things at the scale of an industry. Karl Marx believed in the social division of labour, and I think he was right.
I think Kay believes that our current industries could be a lot smaller than they are, and that the reason they currently require us to divide our labour so extremely is incidental complexity we have built for ourselves over 60 years of small, short-sighted steps. If we could slow down and think about how to simplify what we are doing, each person should be able to handle large chunks of our current industries, and together we could move on to more interesting things. Instead of struggling to reproduce the functionality of a WYSIWYG document editor from the 90s in HTML5, maybe we could move on to more easily create new classes of software that do not resemble what came before.
Agreed, except I think you mean from the '70s. PARC invented WYSIWYG word processing; I think it was Butler Lampson's gig. As I recall, that went out to users before many of their other inventions.
As you know, three subatomic particles (electrons, protons, and neutrons) give us the 118 elements that construct everything. All hardware can be founded on a single NAND gate. I think he's looking for that kind of fundamental core to software -- and it's missing.
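To make the NAND point concrete, here's a minimal sketch (plain Python, all function names mine) of how the other basic gates fall out of a single nand():

```python
def nand(a, b):
    """The one primitive: true unless both inputs are true."""
    return not (a and b)

# Every other gate below is expressed purely in terms of nand().
def NOT(a):
    return nand(a, a)

def AND(a, b):
    return NOT(nand(a, b))

def OR(a, b):
    # De Morgan: a OR b == NAND(NOT a, NOT b)
    return nand(NOT(a), NOT(b))

def XOR(a, b):
    # Classic four-NAND construction
    n = nand(a, b)
    return nand(nand(a, n), nand(b, n))

# Exhaustive check over all possible inputs
for a in (False, True):
    for b in (False, True):
        assert AND(a, b) == (a and b)
        assert OR(a, b) == (a or b)
        assert XOR(a, b) == (a != b)
```

That's logical universality only, of course -- as the reply below points out, a real chip needs far more than Boolean completeness.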
I am personally quite sure that software construction will one day be a form of mathematics, with symmetry found in everything from biology and neurology to physics.
My only counter might be that Ulysses is a monumental work of art. We don't have anything like that in software -- and not just because we don't have a Joyce. We lack that standardized alphabet. That we compile/interpret programs, that we have a completely different language for data access, that we have in-memory data structures -- I could go on and on -- it's all getting in the way of creating something monumental. (I feel like I'm cheating a bit; I've seen the other side and it's beautiful.)
Whaaaaat?? Just a NAND gate? Nah, hardware is done with a combination of standard cell libraries, RAM, ROM/flash, and analog components. You can't even get the voltage right with just a NAND gate.
As far as symmetry goes, there's potential there, given some work is reducing it to math and simple constructs. Hardware too. I'll try to remember to give you a link when I'm off work.
When just about any system starts its lifecycle, it can fit within the brain of one or a few people. As the system grows, new people come in and start working on just one tiny piece. Each of those pieces then grows until it strains the limits of the person or people working on it, and usually is broken into further pieces, or has new parts built on top of it.
As this process unfolds, what once was a simple system that could be fully understood and reliably reasoned about balloons into a monstrosity with 20 layers and thousands of complicated moving parts which no person has any hope of reasoning about fully. The abstractions start “leaking” because the people working on them have gaps in understanding, which lead to interface mismatches all over the place, and all kinds of bugs and other software flaws.
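A contrived sketch of the kind of interface mismatch I mean (hypothetical code, units invented for illustration): two layers built by two teams, each correct in isolation, each with a gap in its understanding of the other.

```python
# Team A's storage layer records event times in SECONDS since the epoch.
def load_event_time(event_id):
    return 1_700_000_000  # seconds -- documented only in Team A's heads

# Team B's reporting layer assumes MILLISECONDS, like the rest of its code.
def age_in_hours(now_ms, event_time_ms):
    return (now_ms - event_time_ms) / 1000 / 3600

# The call site runs without complaint and quietly produces garbage:
print(age_in_hours(1_700_003_600_000, load_event_time("e1")))
# prints ~471,751,000 hours instead of 1 -- the abstraction has "leaked"
```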
But in general, most of the complexity is incidental, created out of the goop of the program itself and the institutional structures of the programmers working on it, rather than out of the inherent challenges of the problems it tackles.
Alan Kay’s idea is: what if we focus really, really hard on cutting that complexity down, and spend years of disciplined effort simplifying our abstractions until a full, usable computer system can have its source code printed at normal size in one medium-length book, instead of needing a whole library of books? Maybe such a system wouldn’t be used “in production”, day to day, but it might inspire us to simplify and refactor our foundations elsewhere.
According to your description, the root cause of the complexity is head count and lack of communication, with which I respectfully disagree. Most web apps have a `User` class, and it does more or less the same thing, yet I challenge this world to provide a single solution that makes everyone happy. Complexity comes from specific use cases, and those are not incidental.
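To make that concrete, here's a contrived sketch (hypothetical classes, not anyone's real code) of why two `User` classes that "do the same thing" still can't share one solution:

```python
# A bank's User: regulators require real identity and a retained audit trail.
class BankUser:
    def __init__(self, legal_name, tax_id):
        self.legal_name = legal_name
        self.tax_id = tax_id
        self.closed = False

    def delete(self):
        self.closed = True  # records must be retained, never erased

# A forum's User: pseudonymity and genuine erasure are the whole point.
class ForumUser:
    def __init__(self, handle):
        self.handle = handle
        self.posts = []

    def delete(self):
        self.posts.clear()          # right-to-erasure: data actually goes away
        self.handle = "[deleted]"

# Same name, same "delete" operation, irreconcilable requirements.
bank = BankUser("Ada Lovelace", "123-45-6789")
forum = ForumUser("ada_l")
bank.delete(); forum.delete()
```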
OK, let's say you are right, and Alan Kay has come up with a 20 kLOC functional system. Everyone starts coding on it and quickly produces 100+ distributions and millions of apps. Surprise: the old problem comes back!
It's an ecosystem/social/management problem that is about humans, not technology.