Less is Moore (samgentle.com)
76 points by sgentle on April 27, 2015 | 25 comments



The problem with this kind of YAGNI thinking is that it 1. commits everyone to understanding how to do everything and 2. requires everyone to do everything afresh, every project. The fact is, I don't know the gory details of all the Unicode encodings, XML Namespaces, multithreaded concurrency locks, database indexing strategies, or SVG rendering pipelines my code uses every day. While I can dive down to understand those things if need be, at any given moment I don't know even a fraction of those things my program directly uses. If I had to dive down on them, I wouldn't get productive work done. And as a group, we'd each be reinventing slightly different, mostly crappy single-use versions of The Wheel. Thanks, but I'm perfectly happy standing on the shoulders of giants--even if it gets a little wobbly sometimes.


When it comes to really supporting non-English languages well, you kinda do have to know the gory details of Unicode. (I had to.) Otherwise, stuff will mysteriously not work, and you won't know what to do. OS X / Windows filenames will surprise you. Python on OS X / Windows will surprise you. JavaScript will surprise you. URLs will surprise you. There are a lot of surprises in store.
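
To make the filename surprise concrete, here's a minimal Python sketch (not from any real project): the same visible string can be two different code point sequences, and older OS X filesystems stored the decomposed form.

    import unicodedata

    nfc = "café"                                 # one code point, U+00E9
    nfd = unicodedata.normalize("NFD", nfc)      # "cafe" plus combining accent U+0301

    print(nfc == nfd)                            # False, yet both render as "café"
    print(len(nfc), len(nfd))                    # 4 5
    # HFS+ stored filenames in (a variant of) NFD, so the name you read back
    # from disk may not compare equal to the NFC string you created it with.
    print(unicodedata.normalize("NFC", nfd) == nfc)  # True: normalize before comparing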

If you have some real-time large-ish-data need, you'll need to evaluate your options based on how they manage concurrency and locking. You may even have to implement a custom cache or archive layer. (I had to.)

I'll admit, I haven't dealt with SVG rendering, and it wasn't important to any company I've worked for. In cases like that, you can say "whatever, fast enough on today's computers, this one doesn't work so I'll replace it with a png." But if you really need stuff to work right, if you want it done right, you don't have to do it all yourself, but you do have to understand it all yourself; otherwise you can't correctly choose and marshal the pieces that do it.

EDIT: I feel like adding an addendum: yes, the specific views described in OP are historic relics. Still... the number of levels that exist today between do-it-yourself C programming with an alternative/minimal libc, and a ruby/rails/activerecord/auth-gems/assets-gems project, is really amazing.


the number of levels that exist today between do-it-yourself C programming with an alternative/minimal libc

Turtles all the way down. I'm working right now on an OS-less Forth system to run on a SoC, and keep thinking "man, how much easier would this be if I had a kernel and a library!"

Meanwhile, somebody else is probably working on a pure logic processor and thinking "wow, I wish I had a full SoC to work with..."


All these surprises you get with non-English characters are exactly why this needs to be abstracted and handled at another level, so we don't need to know how it all works.


YAGNI doesn't say to avoid using a general purpose framework, it says that writing one to solve the problem at hand is usually a bad idea.


You're doing a disservice to yourself and the people who have to work on (or more likely fix) your code by having this attitude. Plenty of people are capable of producing great work and still spending the time to actually understand the systems they are working on.

There aren't that many giants; there are a lot of people with a few years of experience that think/claim they're giants.


Mmmm... We'll just have to disagree on this. I can track a great number of octaves, from business strategy down to operating system scheduling and locking strategies, CPU instruction pipelines and scheduling, semiconductor fabrication techniques, and even down to the lifetime exergy of the whole equipment/software/facilities supply chain. But I can only do a little bit at a time, on an as-needed basis. I can't be too concerned with instruction set design or supply chains while I'm optimizing an SQL query or building a D3.js web visualization. It's just too remote from the task at hand. If you, in contrast, can simultaneously understand and reason about every technology along the continuum from delivered apps and services down to the impedance constraints in the CPU layout process, good show! That's a surpassingly rare skill, but perhaps there are some Sherlocks among us.


I think it's obvious that I'm not advocating you have innate knowledge of everything down to semiconductor fabrication in order to be a competent javascript programmer. I would argue you want at least a basic understanding of the internals of the software you're relying on.

You're writing SQL queries and claim you can't research how the database you're using works internally without destroying your productivity? You can write SQL without knowing the database's indexing strategy or how it optimizes queries, but it's likely to be of a lesser quality than the SQL of someone who has taken the time to do so.
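
A small illustration of why the indexing strategy matters, sketched with Python's built-in sqlite3 (the table, index, and queries are made up; the plan strings in the comments are roughly what current SQLite prints):

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT)")
    conn.execute("CREATE INDEX idx_users_email ON users (email)")

    # An exact match can walk the B-tree index...
    print(conn.execute(
        "EXPLAIN QUERY PLAN SELECT * FROM users WHERE email = 'a@b.c'"
    ).fetchall())  # ...SEARCH users USING INDEX idx_users_email...

    # ...but a leading wildcard defeats the index and forces a full scan.
    print(conn.execute(
        "EXPLAIN QUERY PLAN SELECT * FROM users WHERE email LIKE '%@b.c'"
    ).fetchall())  # ...SCAN users...

That one fact about B-tree indexes changes how you write the query, and you only learn it by looking under the abstraction.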


> The problem with this kind of YAGNI thinking is that it 1. commits everyone to understanding how to do everything and 2. requires everyone to do everything afresh, every project.

IIUC, is that how the Go team sets out to do stuff? I have no references, but I seem to recall reading something along the lines of "writing a Min(uint, uint) function is easy enough, so there's no real need for it in the stdlib (whereas Min(float, float) is there because it relies on optimized hardware), and you can always implement it and publish it" and "container types are best implemented on a case-by-case basis because their traversal is best suited to each specific case, so generics aren't needed that badly".

I do understand such arguments, yet encountering the fifteenth slice Mirror function (which is unconcerned about the type) being either non-optimal, or just wrong due to an off-by-one, or giving up on static typing, gets old fast. And I'm not even talking about tries, B*-trees, or whatever other interesting data structure there is.
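
For what it's worth, the off-by-one in question has the same shape in every language; here it is as a Python sketch (the function name echoes the "Mirror" above and is made up):

    def mirror_in_place(xs):
        # Tempting but wrong: looping over the *whole* list swaps every
        # pair twice and hands you back the original order.
        # for i in range(len(xs)):
        #     xs[i], xs[-1 - i] = xs[-1 - i], xs[i]
        for i in range(len(xs) // 2):  # only walk to the midpoint
            xs[i], xs[-1 - i] = xs[-1 - i], xs[i]
        return xs

    print(mirror_in_place([1, 2, 3, 4]))  # [4, 3, 2, 1]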


> I came into contact with a version of this philosophy even earlier, in The Mote in God's Eye by Larry Niven and Jerry Pournelle. In the story, there are aliens called Engineers who only build special-purpose things. To them, there's no such thing as a generic chair. They would instead build a Sam-chair to my exact proportions. If those proportions changed because of a series of brownie-related incidents, they'd rebuild the chair. Every item in their world is custom-made for its particular purpose. Both Moore and Niven's specialisation philosophies came from resource-constrained environments: the Engineers because of the limited physical resources on their home planet

To expand on this, and why Moore's views are a niche and will remain that way for a long time: in _Mote_, the aliens in question have been trapped on the same planet for millions of years, typically overpopulated, and have evolved to an extremely high degree to cope with living at (and sometimes beyond) the Malthusian limits. The Engineers are an example of these adaptations at work: because there are so many Moties, Motie time and labor is dirt-cheap (specifically, they are at the Malthusian limit where their wages equal how much it costs to live the most minimal life) while resources remain at their usual finite amounts; and so, it pays to have Engineers fine-tune and customize each product, for the same reason evolution leads to ultra-optimized (but often unclean and inelegant) solutions.

In contrast, as ugly incidents like the wage-fixing scandal at Google/Apple/etc. show, we face the opposite situation. Real resources are extremely abundant, and computation has never been cheaper, while programmer time remains expensive. So it makes little sense to take a Moore/Engineer approach, and instead the tradeoff of performance for more generality is usually made.

This will remain true for as long as the cost of programmers remains the main part of running software compared to the resources like CPU or RAM or joules consumed to run the software. (The software could be run at Internet-scale, in which case the possible efficiencies from specialization are worth the programmer time; or programmers themselves could become much more abundant, such as in a Robin Hanson upload/emulation SF-like scenario.)


The corollary to this is that sometimes a company hits a scale where having a few "Moties" customize/refine every level of the stack saves billions, and suddenly it's financially worth it.

It has always been a cost/benefit analysis. The hard part is figuring out the costs and benefits when it comes time to make that decision.

Case in point: my current work is in C# with some Go sprinkled around. Go happened because the C# Console and IO APIs are a hopeless mess. They are bloated with everything under the sun and special-cased in the API. And to top it all off, they are just about the farthest thing from composable. The creators of C#'s standard library created a monstrosity in the name of abstraction that was ultimately harder to use correctly. What is the cost of that decision? Longer development times than necessary for projects that do IO in C#. Late refactors when you realize you should have been using a TextReader instead of a StreamReader, and you have these seams running all through your app that have to change now.


I'll make a stronger case to Keep it Simple: even if you have infinite computing resources and memory, implementing the simplest possible thing is far easier for other programmers to understand than the ugly generalized 'abstractions' most of us come up with 99% of the time. That 'cognitive bottleneck' will remain no matter how far back you push the resource bottleneck, and it will keep an anti-abstraction ethos competitive.

(Now, the Forth way isn't the one I would choose to be easy to understand: http://yosefk.com/blog/my-history-with-forth-stack-machines..... But that's a separate story..)


Actually, if your language is designed right, abstractions become much cheaper, especially conceptually. As an example, I find a call to `map' or `filter' much easier to understand than a (C-style) `for' loop.

Part of your point still stands: the `for' loop is harder to read and write specifically because it is more general. `for' loops can do all kinds of wacky things.
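
A throwaway Python illustration (the data is made up):

    words = ["less", "is", "moore", "forth"]

    # The intent (keep the short words, uppercase them) is explicit:
    short = [w.upper() for w in words if len(w) <= 4]
    # equivalently: list(map(str.upper, filter(lambda w: len(w) <= 4, words)))

    # The loop computes the same thing, but nothing about its shape promises
    # that it won't also break early, mutate other state, and so on.
    short2 = []
    for w in words:
        if len(w) <= 4:
            short2.append(w.upper())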


Yeah. When I said 99% I wasn't just including what the language provides. That usually lies in the 1%, even for 'bad' languages. Generalization there is usually justified. No, I was thinking of the functions and interfaces we create atop the language and standard library.


Language, standard library, or custom module boundaries, it's the same problem: there are 2 kinds of generality.

The first kind is exhaustiveness. Being general by accounting for all special cases. That one is complicated, and rarely worth doing beforehand.

The second kind is genericity. Being general by ignoring all the special cases. That one is often simpler and more solid than any special case. Parametric polymorphism (or generics, or templates) is like that: ignoring the specifics of the parameters limits what you can do with them, making the generic thing simpler.
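
A tiny Python sketch of that second kind (the function is made up): precisely because it knows nothing about the element type, there's almost no room to get it wrong.

    from typing import Sequence, TypeVar

    T = TypeVar("T")

    def first(xs: Sequence[T]) -> T:
        # T is opaque: we can't add, compare, or format elements in any
        # type-specific way, so the only sensible move is to return one.
        return xs[0]

    print(first([3, 1, 2]))   # 3
    print(first("generics"))  # g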

The layer at which you choose to apply exhaustiveness or genericity is irrelevant in my opinion. C++ often leans towards the exhaustiveness end of the spectrum despite being a language, for instance.


Interesting. My take was that language, standard library and third-party libraries are part of the same state space, just organized by the amount of use they receive. Since user-space libraries receive less hammering (many have just one user) they're usually still in the "adding exceptions" phase and haven't yet attained (and perhaps never will attain) the simplicity on the other side of complexity when the requirements stabilize.

I think that maps to your distinction, except that I don't believe in 'exhaustiveness'. The state space of a program isn't some fixed thing for most programs. It evolves and grows in response to what we want it to do, which dimensions we choose to generalize along, and where we stay stable. I think it's equally reasonable to view the evolution of special cases as exhaustive at every point; it's the boundaries of the state space (the requirements) that are growing in strange ways.


So can calls to "map" and "filter", if your language allows side effects.

Not that I'm trying to be all dogmatic about referential transparency; more like, without it, is there a large benefit to functional style vs. procedural style?
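
In Python, for instance, nothing stops this (contrived on purpose):

    seen = []

    # Looks like a pure transformation, but smuggles in a mutation:
    doubled = list(map(lambda x: seen.append(x) or 2 * x, [1, 2, 3]))

    print(doubled)  # [2, 4, 6]
    print(seen)     # [1, 2, 3]  <- the hidden side effect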


You are right about side effects.

From all I've heard, people in impure languages have begun to embrace the notion of restricting side effects voluntarily.


The existence of poor designers isn't a good reason to never design.

In fact, the more design you do, and the more you're exposed to, the better you can get at design. And the better the design, the easier it is to understand.

An "ugly generalized abstraction" is pretty much by definition not a good abstraction. Moreover, a good abstraction is simple. It's easy to understand and it's easy to manipulate. [1]

But you need to practice to get better, so it may take a few ugly monstrosities before you can start creating masterpieces.

[1] Granted 99% of the code I've encountered in Java qualifies as ugly abstractions; Java's nature seems to encourage poor abstractions. Or attract poor developers. Or both. But even in Java it's possible to create beautiful and easy to understand abstractions.


Oh man, that word 'design'. You have a problem and you think, "I should design a good solution." Now you have two problems.

I was deliberately avoiding the d-word because it is vague enough that I'm not sure there's anything useful around there. The good designers are all good at designing within some domain. Jonathan Ive, Christopher Alexander, Ralph Lauren: they all had their specialty. You wouldn't ask Ralph Lauren to design a rocket ship. Software is an even wider set of domains than meatspace. So I'm not sure you get better at design by 'practicing design'. That's just a set of vague words like 'simple', 'good', 'beautiful', 'abstraction', 'decoupled', 'coherent', 'cohesive', etc., etc. I think you get better at design by better understanding a domain. You get better at design by avoiding premature design.


Abstractions that never leak are extremely rare. Because of this, trying to write a general purpose abstraction is very hard. A special purpose abstraction only needs to not leak for its intended purpose which is a lot easier to get right.


I want more examples before I accept this. Even the original "law of leaky abstractions" post did not have any good examples; using UDP rather than TCP will not save you if you pull out the network cable, and micro performance differences like the AND A = C example are rarely relevant.


UDP vs TCP: environments with significant non-congestion related packet-loss (not to mention buffer bloat). In both cases the underlying assumption of TCP is broken, so the abstraction leaks.


This part I don't want to forget:

"When you take on a framework, you're like a consumer buying a product: if it does a hundred things you don't need, or doesn't do things the way you want, well, tough. That's what we've got for sale. But as a programmer you're not a consumer. You're a producer. You aren't forced to accept an abstraction that doesn't work for you, or solves a problem you don't have. The option to build the Engineer-style specialised solution that conforms exactly and only to your needs is always there."


I liked this:

If you're solving problem X and realise "hang on, that's actually a special case of problem Y", it's actually the single most dangerous point in the development of your solution; you're only one step away from the logical conclusion of "I should solve Y instead". Now you're solving the wrong problem.

I think this resonates with most of us, but we will still pursue the general case. I think it's because problems are simpler when expressed in their general case than with all the hoary detail of a special case. It's close to a rule of nature: whenever we see complexity, we can discover simple elegant rules that govern it, and as programmers whose main challenge is taming complexity, we are prepared to risk anything in our pursuit of simplicity, including more complexity...!



