The Future Programming Manifesto (alarmingdevelopment.org)
101 points by jashkenas on Aug 27, 2014 | 106 comments



Nice to see another post addressing the biggest issue in Software Engineering head-on.

This of course is nothing new - Alan Kay has been telling us this for more than three decades [1], and he also has an enlightening talk addressing the biggest problem facing software engineering [2].

Before vanishing from the Internet, Node's Ryan Dahl left a poetic piece on how "utterly fucked the whole thing is" [3].

Steve Yegge has also dedicated one of his epic blog posts to "Code's Worst Enemy" [4].

More recently, Clojure's Rich Hickey has taken up the issue with his quintessential "Simple Made Easy" [5] presentation, explaining the key difference between something that is "Easy" and something that is truly "Simple".

[1] http://mythz.servicestack.net/#engineering

[2] http://www.tele-task.de/player/embed/5819/0/?iframe

[3] https://gist.github.com/cookrn/4015437

[4] http://steve-yegge.blogspot.com/2007/12/codes-worst-enemy.ht...

[5] http://www.infoq.com/presentations/Simple-Made-Easy


Some demonstrations of how simplicity can be pursued:

The Viewpoints Research Institute is trying to build an entire computing stack in 20kloc - http://www.vpri.org/pdf/tr2010004_steps10.pdf

The Berkeley Order-Of-Magnitude group reimplemented Hadoop and HDFS in a few thousand loc whilst remaining API-compatible and having similar performance - http://db.cs.berkeley.edu/papers/eurosys10-boom.pdf

Both efforts required questioning the assumption that the complexity of modern systems reflects inherent complexity in the problem. That is the point of this manifesto - to get such drastic improvements we have to step back and rethink our entire approach to building software systems.

I suspect that a large part of the problem is the different response curves for features and complexity. Adding a new feature for a little complexity seems like a good trade. Thousands of people make that decision in their own individual areas, and suddenly the cumulative complexity piles up beyond what we can handle. To get simplicity back we have to take a very high-level view of the tradeoffs and demand much more power in return for each unit of complexity we add.

EDIT "Programmers tend to overlook the fact that spring cleaning works best when you're willing to throw away stuff you don't need." - http://steve-yegge.blogspot.com/2007/12/codes-worst-enemy.ht...


The example from the BOOM group is interesting! However, it's not clear they are arguing for the same kind of "simplicity" as Jonathan Edwards.

The BOOM group seem to be using a Prolog variant, and their conclusions argue for declarative, high-level programming languages (I didn't read the paper in full; please correct me if I'm wrong!). But if you look at declarative programming languages such as Prolog, or functional languages like Haskell or OCaml (all languages which aim to make complex problems more tractable), these are precisely the languages many programmers reject as "too complex". It doesn't help that very often, "complex" is used as a synonym for "unfamiliar".

Here is another piece of the puzzle: in another post, Jonathan Edwards claims simplicity led him to reject higher-order functions for the "simple" language he is designing. But higher-order functions are a tool (not necessarily a simple tool) useful for reducing the complexity of code! It just happens that a programmer unused to them must first learn about them before becoming proficient with them.

So the trade-off between complexity and simplicity isn't so easy to define.


> he claims simplicity led him to reject higher-order functions for the "simple" language he is designing. But higher-order functions are a tool useful to reduce the complexity of code! It just happens that a programmer unused to them must first learn about them before becoming proficient with them.

Higher-order functions, or higher-order anything (e.g. predicates), are not complicated because they are unfamiliar; they are necessarily complicated: HOFs, for example, add indirection to control and data flow, and now you have to think at second or third order (hopefully not more) about what your code is doing!

Complexity is really just in the definition of "higher-order." It even shows up directly in our tools for automatically reasoning about program behavior (first order is easier to deal with than second or N-th order). There is a reason first-order logic is so popular: not because people don't get higher-order logic, but because first-order logic is easier to deal with. The problem is, of course, expressiveness (you can't do much with a functional language that doesn't admit HOFs).
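To make the order-of-reasoning point concrete, here is a minimal sketch (the names are invented purely for illustration): a first-order call only asks you to reason about data, while a higher-order call also asks you to reason about which function flows where.

    # Minimal sketch: first-order vs higher-order reasoning.
    def double(x):                  # first-order: data in, data out
        return x * 2

    def apply_twice(f, x):          # higher-order: a function is itself an argument
        return f(f(x))

    print(double(3))                # 6  -- reason about one definition
    print(apply_twice(double, 3))   # 12 -- also reason about what f is bound
                                    #       to at this particular call site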


I disagree that higher-order functions are "complicated necessarily". I believe this is actually a side effect of people usually learning to program using languages that don't support them. Weren't there success stories posted here on HN about novices learning to program with functional languages without difficulty?

It's not surprising you can do a lot with a programming language that doesn't support HOFs (though, of course, there wouldn't be much point in calling it functional) -- you can in fact do everything without them. The key issue here is whether HOFs are a useful tool that, once learned, will help make the complexity of the problem you're attempting to solve more manageable. I believe they are, by helping with modularity and composability.

Even if learning to use more powerful tools is more "complex" (that is, less "easy", or takes more time to get used to), once you master them, presumably you're better equipped to deal with the tasks you want to accomplish.

Maybe some power tools can be made easier to handle. This is a valuable goal. But if your goal is to never make a tool that requires the user to be trained, you're severely limiting yourself.


I think he is arguing that higher-order functions are more difficult to learn/use than first-order functions because they require higher-order reasoning. That's not to say that they aren't useful or that beginners can't learn them, but just that they impose an overhead and it's worth exploring alternative methods of reducing complexity that may end up having a lower overhead.

> ...once you master them...

Arguments of this sort tend to assume that a) the learning time can always be written off and b) that once you master a tool there are no more overheads. a) is false in the case where the learning time is too long or simply not available eg Joe Accountant may benefit from learning to program but he doesn't have the time to spend years learning. b) is false in the case where the abstraction is harder to reason about or makes debugging harder.

There is certainly an economic decision to be made here but it must consider opportunity cost, cognitive overhead and maintenance overhead as well. That decision doesn't always favour the slow-to-master tool.

It's like extolling the virtues of investing in an industrial nailgun to someone who is just trying to hang a photo. Sure, the nailgun will make driving in nails faster but the investment won't pay off for them.


The assertion is in the definition of higher-order: it is designed to be more powerful by allowing for variables that can be bound to procedures. It has nothing to do with the learning curve; there are actually trivial formal proofs that higher-order X is more complicated than lower-order X (they simply follow from the definition of order).

Once you have mastered using HOFs, you have simply become proficient at doing a higher-order analysis in your head, it is still more complicated than a lower-order analysis. Even SPJ's head would probably explode given a function that took a function that took a function that took a function... with HOFs, you really want to keep N (the order) fairly small.

There is actually a whole body of work on point-free programming, where the idea is to make something like f(g) look more like f * g: you aren't passing g into f (which you actually are, but that is an implementation detail), you are composing f and g. The semantics of that composition are then better defined (and hence more restricted) than naked higher-order functional programming (which is more like an assembly language at that point).
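A rough sketch of the contrast, with a hypothetical `compose` helper rather than any particular point-free library:

    # Hedged sketch of point-free style; `compose` is a made-up helper,
    # not a standard library function.
    def compose(f, g):
        """Return the function x -> f(g(x))."""
        return lambda x: f(g(x))

    def strip_spaces(s):
        return s.strip()

    def shout(s):
        return s.upper() + "!"

    # "Pointful": the argument and the nesting are spelled out at every use.
    print(shout(strip_spaces("  hello  ")))    # HELLO!

    # Point-free: the pipeline is a named value with its own (restricted) semantics.
    shout_clean = compose(shout, strip_spaces)
    print(shout_clean("  hello  "))            # HELLO!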


> However, it's not clear they are arguing for the same kind of "simplicity" as Jonathan Edwards.

Jonathan Edwards: "Complexity is cumulative cognitive load. We should measure complexity as the cumulative effort to learn and use a technology."

BOOM: "One simple lesson of our experience is that modern hardware enables real systems to be implemented in very high-level languages. We should use that luxury to implement systems in a manner that is simpler to design, debug, secure and extend — especially for tricky and mission-critical software like distributed services."

> The BOOM group seem to be using a Prolog variant

They are using a variant of datalog, which is dramatically simpler than prolog (no unification, no data-structures, no semantic-breaking cut etc). In our (admittedly limited) user experiments we have found that their data-centric approach is easier to teach non-programmers since it lends itself to direct manipulation, live execution and computer-assisted debugging (eg we can cheaply record the exact cause of every change to runtime state and make it queryable in the same language). We (Light Table) are working on a similar datalog variant with a front-end that is about as complicated as excel.
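For a flavour of the model (a toy illustration, not Bloom or BOOM syntax): facts plus rules, run to a fixpoint, which is why evaluation order stops mattering.

    # Toy fixpoint evaluation of two Datalog-style rules:
    #   reach(x, y) :- edge(x, y).
    #   reach(x, z) :- reach(x, y), edge(y, z).
    edges = {("a", "b"), ("b", "c"), ("c", "d")}

    def step(reach):
        derived = {(x, z) for (x, y1) in reach for (y2, z) in edges if y1 == y2}
        return reach | derived

    reach = set(edges)
    while True:
        updated = step(reach)
        if updated == reach:    # fixpoint reached: evaluation order never mattered
            break
        reach = updated

    print(sorted(reach))
    # [('a', 'b'), ('a', 'c'), ('a', 'd'), ('b', 'c'), ('b', 'd'), ('c', 'd')]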

> So the trade-off between complexity and simplicity isn't so easy to define.

I agree that there is often a trade-off between simple and easy, but I don't think that our current tools are anywhere near optimal in that sense. For the case of distributed programming the BOOM tools are both simpler and easier in that they allow you to focus on specifying high-level behaviour and not worry about most of the details that normally make up the bulk of a programmer's mental load.


> BOOM: "One simple lesson of our experience is that modern hardware enables real systems to be implemented in very high-level languages. We should use that luxury to implement systems in a manner that is simpler to design, debug, secure and extend — especially for tricky and mission-critical software like distributed services."

Yes, but this is accomplished using declarative high-level languages, precisely the kind of languages that many programmers will claim are "too hard" (i.e. sufficiently unlike traditional imperative languages).

I bet almost everyone agrees that the goal of most modern software development should be "[...] to implement systems in a manner that is simpler to design, debug, secure and extend — especially for tricky and mission-critical software like distributed services."

The disagreement here is on whether this can be accomplished by making programming languages simpler and easier to learn as the overriding principle. Sometimes you have to use a complex tool in order to make your work better.


> ...precisely the kind of languages that many programmers will claim are "too hard"

Your argument seems to be that Prolog and Haskell are high-level and Prolog and Haskell are complicated and hard to learn. I agree up to that point. That doesn't imply that any high-level declarative language must be hard to learn. SQL and Excel are declarative high-level languages (more so than Haskell and Prolog in terms of abstraction from the execution model). Both have had much more success with non-programmers than imperative languages have.

Bloom is a much simpler language than, say, Ruby. It doesn't have data-structures. There is no control-flow. Semantics are independent of the order of evaluation. The entire history of execution can be displayed to the user in a useful form. Distributed algorithms can be expressed much more directly (eg their paxos implementation is almost a line for line port of the original pseudo-code definition). We have actually tested Bloom-like systems on non-programmers and had much better results than with imperative or functional languages.

> I bet almost everyone agrees that...

The key part of that quote was "We should use that luxury to implement systems..." ie give up low-level control over performance and memory layout in exchange for simpler reasoning. Note that in both Excel and SQL the user doesn't have to reason about what order things happen in or where things are stored. The same applies to successful dataflow environments like Labview. In contrast, programmer-centric languages like Haskell and Prolog require understanding data-structures and Prolog requires understanding the execution model to reason about cut or to avoid looping forever in dfs.

Excel, SQL and Labview are all flawed in their own ways but we can build on their success rather than writing them off as not real programming.


> We have actually tested Bloom-like systems on non-programmers and had much better results than with imperative or functional languages.

Interesting! Do you have any link to your study? I'm interested to see your methodology and how you handled researcher bias :)

By the way, SQL is a pretty complex formal system well grounded on theory. I would never write it off as "not real". Most people don't understand SQL without training, by the way.


> By the way, SQL is a pretty complex formal system well grounded on theory. ... Most people don't understand SQL without training, by the way.

I think lots of people understand, or can understand, the theory of SQL. It's the "install this, go into this directory, edit this config file, make sure it starts with the system by doing this, install these drivers for the database" stuff that stops a lot of people before they start. Same applies to pretty much everything that runs as a service, like webservers.


Really? I agree installation is a major hurdle, but in my experience -- that is, coworkers and students -- most people are helpless with SQL beyond very basic queries unless they've been trained in it and truly used it. Most people don't understand the relational algebra it's based on, either. To make it clear this is not some kind of elitist statement: I struggle with SQL, too, because so far I haven't needed to write truly complex queries in my day job.

This isn't an argument against SQL, by the way. It's an argument against the notion that the overriding principle when assessing a tool/language is whether it's simple and easy to learn.


Ok sure. My point is that there should be a clear path from "I might want to use this thing, what is it about and can I give it a spin" to "ok I get it, where can I learn more." Not to say that everyone will _instantly_ understand the concepts, but that time spent on incidental tasks is pretty much wasted.

See also: http://www.lighttable.com/2014/05/16/pain-we-forgot/


UK schools teach 15-16 year old kids to build simple CRUD apps in MS Access. So the basics are definitely somewhat approachable.


But the basics of almost everything are approachable. The basics of Prolog, Haskell [1], Basic, etc, are all approachable. I learned programming with GW Basic, back when I was a kid; it's learnable, but I wouldn't enjoy building complex software with it.

SQL itself is a complex beast. Approachable in parts and with training, like many complex tools.

[1] http://twdkz.wordpress.com/2014/06/26/teenage-haskell/


We were also taught prolog, asm and (weirdly) pascal. We didn't manage to actually produce anything interesting in any of those. The Access CRUD apps and the little VB6 games were the only working software that came out of that two year class.


> I'm interested to see your methodology and how you handled researcher bias :)

Oh we totally didn't :) Our user testing is very much at the anecdotal phase so far. Fwiw though, the researcher bias was in the other direction - @ibdknox was initially convinced that logic programming was too hard to teach. We were building a functional language until we found that our test subjects struggled with understanding scope, data-structures (nesting, mostly) and control flow. That didn't leave a whole ton of options.


I'm all for simplicity, but often, programmers follow the same cycle:

1. X is bloated, let's reimplement it under the name Y with all the fat trimmed!

2. Notice that the problem may be a little bit harder than originally envisioned, start to add more and more features

3. Y is impossible to distinguish from X

Not to say that you should never tear down existing systems, but when it's warranted it's often because the times have moved on (eg, Wayland vs X11).

I'll note that 1. is (involuntarily) in the same category as the discourse of populist politicians who promise simple solutions to complex problems. I am wary of both.


Case in point: Notice that the web technology stack is really complex. So are the native GUI toolkits on all mature platforms. And something like wxWidgets that wraps all those toolkits is its own kind of complexity. "I know, I'll implement my own beautifully simple UI toolkit and bind it to the core graphics and input APIs on each platform." Congratulations; your beautifully simple solution is going to be off limits to some users, because you haven't handled accessibility, internationalization, etc. So some of that complexity you wanted to avoid was necessary after all. But maybe we can still categorically reject the web stack for applications that don't actually run in a browser.


I think you are not stepping back far enough to look at the problem. Are you actually suggesting the entire complexity of the current web-stack (html, css, js and server side code) is essential to make web-apps?


I wasn't clear. I know there's a lot of incidental complexity in the web stack. But I'm afraid that some developers, in response to a stirring call to simplicity such as this manifesto, will go too far, blissfully unaware that some of the complexity of mature UI toolkits (not just the web stack, but also the native toolkits of mature platforms) is actually necessary.


It looks like you're making a point that these things are all inherently complex.

Isn't it more probable that the tools we have today are just inadequate to deal with those problems? And maybe they are still inadequate after all these years because our industry is very stubborn and doesn't learn from its mistakes?

I see nothing complex about drawing interactive elements on the screen. Smalltalk with its Morphic interface offers a much richer and more flexible GUI toolkit, and that was out when, in the 60s? How many GUI toolkits have learned from those lessons? And Morphic/Smalltalk was cross-platform before Java, in ways Java isn't to this day. It seems to me what hampers evolution is the technology we choose (Java, C) and not the problem itself (drawing interactive elements on the screen).

For anyone interested in this discussion I recommend Alan Kay's "The Computer Revolution hasn't happened yet": www.youtube.com/watch?v=oKg1hTOQXoY


My point is that the inherent complexity of a UI toolkit is greater than many programmers realize. It's quite easy to implement your own UI that directly draws its controls on the screen and handles mouse clicks. Now, how are you going to make that accessible to blind users? Users with mobility impairments who can't use a mouse? How about right-to-left languages and non-Latin input methods? And there's probably more that I'm not aware of.


That's true, but there is still plenty of incidental complexity: poor layout languages with no composition, stateful manipulation of the UI, no way to express components. Tools like React.js and Apple's Auto Layout show that much of this complexity is not actually needed.


Agreed. I like the ideas behind React.js in particular.


I learned a great phrase over the weekend that perfectly applies to #2, "Chesterton's fence"[1]:

"[the] intelligent type of reformer will do well to answer: “If you don’t see the use of it, I certainly won’t let you clear it away. Go away and think. Then, when you can come back and tell me that you do see the use of it, I may allow you to destroy it."

In other words, that feeling when you realize that a thing you've rebuilt because you didn't like it was built that way for a reason.

[1] http://www.chesterton.org/taking-a-fence-down/


This manifesto pushes back hard against step 2. If the software is getting complex, it's worthwhile to reconsider your approach. And if the problem really requires complex software, it's worth changing the environment to simplify the problem instead of writing complex software!


> it's worth changing the environment to simplify the problem instead of writing complex software!

No doubt there is a fair amount of architecture astronautics out there, but thinking you're going to turn complex problems into simple problems by "changing the environment" is most of the time extremely naive. You can push against reality all you want, but reality tends to push back.


"Reality is going to push back." It's a tradeoff, but experience has shown that fixing the problem without software will cause fewer headaches in the long run than fixing it with complex software.


[citation needed]

More seriously, there are many classes of problems which range from extremely impractical to impossible to tackle without software. The question is also, who is getting the headaches? If you pile up enough man-hours on the most awful software, it will eventually work satisfactorily. Customers don't give a damn about the underlying code, nor should they; they want a product that works.


At the risk of belaboring the point from my other sub-thread...

Suppose you've developed an application that has a graphical user interface. For the sake of simplicity, you rejected all of the overly complex GUI toolkits out there and rolled your own, with the assumption that all of your users can see what you're drawing on the screen. So you didn't implement your host platform's accessibility API (assuming you're running on a host platform with an accessibility API.) Now, a blind person needs to use your software. How do you fix that problem without software?

Beyond that specific example, the point is that, as the GP said, trying to change the environment to eliminate inherent complexity is often not feasible.


I implemented my own UI framework once on top of WPF:

http://bling.codeplex.com/

But I did something different: rather than wrap high-level APIs in even higher-level APIs, I instead focused on making it "simple to express the math." So for a table, you could use data binding to simply express some constraints on cell width, height, adjacency, and so on. So I never bothered using what WPF calls a table (or list or whatever); I would just use a canvas and organize the cells using a few lines of math. That is quite liberating!
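To give a flavour of what "a few lines of math" means (a toy sketch with made-up numbers, not Bling's or WPF's actual API):

    # Toy sketch: table layout as arithmetic over indices, on a plain canvas.
    cell_w, cell_h, gap, cols = 80, 24, 4, 3

    def cell_rect(i):
        """(x, y, width, height) of the i-th cell."""
        row, col = divmod(i, cols)
        return (col * (cell_w + gap), row * (cell_h + gap), cell_w, cell_h)

    for i in range(4):
        print(i, cell_rect(i))
    # 0 (0, 0, 80, 24)
    # 1 (84, 0, 80, 24)
    # 2 (168, 0, 80, 24)
    # 3 (0, 28, 80, 24)

In the real framework these expressions would presumably be data-bound so the layout updates live as the inputs change; the static sketch just shows the shape of the math.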

It is not always the case that a GUI toolkit has to go in the direction of ever more abstraction.


As I mentioned, rethinking your approach is also valid. Making bad assumptions upfront is the cause of a lot of added complexity.

Maintaining complex software is also infeasible. Maybe we're doomed.


Most of the complexity in these GUI toolkits is unrelated to accessibility or other "essential but non-obvious" features, so I don't think this is a good point.


Most attempts at radical change will fail; does that mean it's not worth trying? Sometimes it does work, and the world becomes much better as a result.


As usual, the big issue is "do you actually understand the problem you need to solve?". Many problems are complex, either per se or due to constraints outside of your control. Maybe you don't like to have to implement a lot of complex code to interact with a terribly programmed legacy app, but if your customer pays you to do that, that's the kind of constraint you cannot really do anything about. When you look at it closely, my feeling is that you'll find a lot of apparently-unnecessary complexity is there for a good reason, and it's our job to make this complexity manageable.

It doesn't mean you shouldn't challenge existing paradigms. Innovate! Blow our minds! But just make sure you understand the problem you're solving first :)


As a researcher, I have freedom to constantly reinvent things. Complexity melts away the better you understand your problem: ya, it was there, it was needed, but now I know more and I can get rid of this. People in production don't usually have these benefits.

I have only made this letter longer because I have not had the time to make it shorter. - Blaise Pascal.


I think that's often true for blue-sky projects. And I agree that a better understanding of the problem can lead to a reduction in complexity, but you cannot always afford it.


As long as we are talking about reinventing programming, I think we can afford it. The problem is that once you are in production, you have to ship...this is what I always got from RPG's "worse is better!"


Of course, it's a matter of tradeoffs. It's particularly interesting to observe how Rust is moving along: since I started watching it, it has undergone a huge quantity of non-trivial changes. Which would tend to show that "worse is better" doesn't work for programming languages. Indeed, languages which have grown organically, like PHP, are not well-regarded. Maybe it's a question of surface: the smaller unpolished surface you expose to the outside, the faster you can ship something which will not turn into PHP later. If your CLI is not well thought-out, it's not a big problem, but you won't be able to make drastic, incompatible changes in syntax and semantics for a programming language with millions of existing lines (ask the Python 3 people :) ).


In which Y is usually prefaced with "micro"...


A Manifesto for Mediocrity.

Every few years someone comes up with the idea of making programming so simple that anyone can do it. This is not new; the concept has been around at least since the '60s. Look at some COBOL history:

"They agreed unanimously that more people should be able to program and that the new language should not be restricted by the limitations of contemporary technology. A majority agreed that the language should make maximum use of English, be capable of change, be machine-independent and be easy to use, even at the expense of power." [1]

Many attempts have been made to achieve this Silver Bullet. The examples that get closest are things like GameMaker:Studio [2] or Unity 3d [3], which are extremely domain-specific (for certain classes of 2d and 3d games, respectively).

So you CAN create a domain-specific language that anyone can use. But every attempt to create a completely general purpose language that anyone can use to do anything has failed -- or has ultimately (accidentally?) produced a domain-specific language that solves the specific problems that the authors are most familiar with.

[1] http://en.wikipedia.org/wiki/COBOL

[2] http://en.wikipedia.org/wiki/Game_Maker:_Studio

[3] http://en.wikipedia.org/wiki/Unity_%28game_engine%29


> But every attempt to create a completely general purpose language that anyone can use to do anything has failed

Excel is used by millions of people and powers large swathes of the world's economy. VB6 is still in use today and powers multi-million dollar companies (http://msdn.microsoft.com/en-us/magazine/jj133828.aspx). COBOL still lurks in the heart of many banks.

These systems have problems that cause untold economic damage. Instead of improving the tools to help people avoid these problems we sit around being smug about our uber-programming skills and declaring that programming is too complex for those people anyway. What many people here don't seem to realise is that the choice is not between the crappy VB6 app and a nicely architected system built by a 'real' programmer; the choice is between the crappy VB6 app and the same people doing the job manually.

Empirically speaking, given tools with an approachable learning curve many people are capable of producing simple programs that make them more effective at their jobs. Rather than lamenting that these people produce bad code we should be improving the tools to lead them down the correct path.


I agree with some of what you say about the damage these tools have done (but please note Excel is not a general purpose language, which the parent post was talking about, though it can be used to make pretty cool and surprising things).

However, I don't think anyone is arguing that nothing should be done about it. Some of us would argue that something must be done, but that the answer is not necessarily simplification, especially simplification of the "make it easy" kind.

Especially since, for many programmers, "simple" & "easy to learn" are actually synonyms for "similar to something I'm already familiar with".


What most people seem to be arguing is either that the people currently using Excel and VB6 should suck it up and learn a real language or that they shouldn't be programming at all (I'm not saying you are, but this is the general sentiment I encounter in eg http://developers.slashdot.org/story/14/07/09/131243/normal-...). There are a number of different spheres of programming with different requirements. For the sphere of end-user programming (which Jonathan Edwards explicitly calls out in this post) there are a huge number of people who have to choose between simple tools that don't work well and complex tools that they don't have the time to learn. This sphere has seen little change since the start of the millennium (Excel is still the best option by far, for all its limitations). The OP is arguing for efforts to close the gap and produce simple tools for their simple needs.


"What most people seem to be arguing is either that the people currently using Excel and VB6 should suck it up and learn a real language or that they shouldn't be programming at all"

And why is that such a problem? If you can handle Excel and VBA you can learn to program. A lot of existing languages are easier to learn than Excel+VBA.

Excel is not easy to use if you want to do anything moderately complex. Even after years of training all through school and in the workplace the average user can't do much programmatically with it.

Excel solved the input/output problem, the availability problem, and the business acceptability problem.

By that I mean business users were doing a massive amount of data entry and data manipulation mainly in order to do simple calculations and view data logically in tables. They did this without any thought of automation, but it made automation 10x easier.

As a business grunt I can't just install python and start cutting code. If I said "Hey boss I can automate this with python. It will only take me 2 weeks" I've got no hope. For starters corporate IT wouldn't even let me install python. But I can spend a month fiddling around in a spreadsheet to automate the same task and my manager will be happy.


@jamii

> It's not just programming that is hard, it's editing, debugging, compiling, version control, packaging, deploying, upgrading etc. Existing tools give you a huge amount of flexibility and power (that most end-user tasks don't need) at the expense of a brutal learning curve.

Sure, I agree all of those steps are painful, and I think everyone here agrees that anything that can improve them is welcome! At the same time, some of them only exist in specific contexts. For example, is the guy writing an Excel macro not bothered about version control because Excel is an easy-to-learn tool that achieves simplicity, or because he isn't building something collaboratively with a team of 5 other people, all working on the same task?

By the way, I read your article. I fully agree with this:

"Finally, environments can't be black boxes. Beginners need a simple experience but if they are to become experts they need to be able to shed the training wheels and open the hood. Many attempts at end-user programming failed because they assumed the user was stupid and so wrapped everything in cotton wool. Whenever we provide a simplified experience, there should be an easy way to crack it open and see how it works."

I'd add that they also need to be able to reach for the complex tools once they've mastered the simpler ones.


I think you misunderstood my point somewhat. I'm not saying that Excel solved the version control problem - far from it. Manual version control by emailing copies of spreadsheets was identified as a major cause of errors in various studies.

My point is that for end-user programming to work we need version control, deployment, packaging etc to scale down as well as up. The software industry has a tendency to fetishize scaling up and ignore scaling down.

Experienced professional programmers still struggle with git in their day-to-day workflow (http://www.sussex.ac.uk/Users/bend/ppig2014/13Church-Soderbe...). The average end-user has no chance. Even if the underlying data model is something like git, the experience for the user needs to be more like undo-tree or etherpad.

Excel smooths the learning curve for calculation/simulation - you can get started by just putting in some numbers and learning a few basic functions. If that's all you need, that's all you have to learn. I want to see the same pay-as-you-go approach for all our tools.


I made a long list of some of the difficulties one encounters in trying to use industrial-grade tools to solve home-repair-grade problems: http://www.lighttable.com/2014/05/16/pain-we-forgot/

It's not just programming that is hard, it's editing, debugging, compiling, version control, packaging, deploying, upgrading etc. Existing tools give you a huge amount of flexibility and power (that most end-user tasks don't need) at the expense of a brutal learning curve. They make complex things possible but easy things tiresome. Consider how long it takes for the average CS undergrad to reach a point where they could build, deploy and maintain the simple webapp I described in that post.

There is definitely room for a simplified set of tools that lets end-users build simple applications with a pay-as-you-go approach to complexity and learning curve.


>There is definitely room for a simplified set of tools that lets end-users build simple applications with a pay-as-you-go approach to complexity and learning curve.

If you want this to succeed, then pick a domain and solve that problem for people.

It's not the source control that is the primary problem. You can do Excel-like things on Google Drive, which has integrated revision control, shared editing, and a dozen other features to help the average user or team get things done.

No, the primary problem is that most people don't know what they want. They don't even know what's possible, much less how to accomplish it. Ask any consultant and they'll agree with me: The first thing you need to do is figure out what the users actually can use based on their needs.

It's like Henry Ford said: If you asked users what they wanted, they'd have told him they wanted a faster horse. [1]

The problems exist because they aren't easy to solve unless you have the expertise. I'm not pooh-poohing Excel or any other domain-specific solution. If you can create a general solution to a problem (say, for example, WordPress), then users can plug together the pieces they need and create an app.

But there is no "language" that will solve the "average person can't program" problem in the general case any more than there's a paintbrush that will solve the "average person can't paint" problem.

[1] http://blog.cauvin.org/2010/07/henry-fords-faster-horse-quot...


Excel==Domain-Specific.

I already said that Domain-Specific systems can be successful, and I have nothing against improving domain specific systems.

The original article implied that they were going to fix All The Programming. I'm saying it's been tried, and Every Single Time it fails utterly (or produces a domain specific solution).


Really now? Smalltalk, the precursor to much of modern object-oriented programming languages and development environments, started off as a language project for children: http://www.atariarchives.org/bcc1/showpage.php?page=61.

If that's the end result of mediocrity, our programming culture could use a whole lot more of that please.


Game Maker and Unity are trying to make the full stack as easy and accessible as possible. The programming languages they use are part of that, but their choice of language is mostly a pragmatic balance between speed, power, familiarity for existing programmers and learning curve for newbies.

I mean, Unity3D uses C# on top of .Net, which isn't at all what is described in the OP.


>Game Maker and Unity are trying to make the full stack as easy and accessible as possible.

That's pretty much my point.

What people need is a full stack to solve their problem, but they need the stack to be able to be customized to their particular variant of the problem.

That full stack will have a lot of code that is beyond 95% of people's ability to create. But even though Unity uses C#, artists and designers I know who are absolutely not programmers (by their own insistence!) can and do successfully create games on top of Unity.

And that is the future of making programming more accessible. Not creating a better language, which is what the original manifesto was about.


> Build compelling simplicity and performance will come.

Python and Ruby are notable counter-examples to this.

Both Python and Ruby were designed primarily for human-friendliness, as this essay advocates. Guido has described Python as "executable pseudo code". Matz has said that the primary purpose of Ruby is to make programmers productive and happy, and to design more for humans than for the machines.

People have tried to make the performance come. Python is well-loved, which has led to numerous attempts to accelerate it (Psyco, Unladen Swallow, PyPy). Many of these efforts have made notable progress, but none have met the bar of compatibility and performance necessary to displace the relatively slow CPython. Ruby is getting faster, but it's still among the slowest compared with its peers.

There are other languages where the performance did come (JavaScript, Lua). But this is in no way a given.


In the case of Javascript, the performance came at great cost. Javascript engines are monstrously complex. The language might be "simple", but the implementation is not.

Lua on the other hand is probably fast because it is simple. There aren't many other programming languages like that. (C and Scheme maybe?)


The lesson from both of these is that performance doesn't magically come from simplicity but it is enabled by simplicity. A complex system is always hard to optimise. A simple system may be easy to optimise.

Building simple systems whilst keeping performance in mind is a delicate balancing act. Local optimisations tend to add complexity that may prevent later optimisations. Sweeping generalisations that reduce implementation complexity often inhibit later optimisation due to lack of information (eg http://lucumr.pocoo.org/2014/8/16/the-python-i-would-like-to...).


The Lua interpreter is among the fastest in its class, but LuaJIT is also among the fastest JITs. I agree that a lot of this has to do with Lua being a simple language.


> Python and Ruby are notable counter-examples to this.

I beg to differ. You seem to imply that Python is a simple language. Quite the contrary. Attempts to accelerate Python failed exactly because it's an extremely complicated language. Same goes for Ruby and JavaScript.


> Python has a simple syntax

Python has a rather simple syntax indeed, but it has very complicated semantics. `foo.bar`, a syntactically very simple attribute-access expression, has a mountain of semantics behind it.
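A minimal sketch of some of that machinery (illustrative class names only): the one expression `foo.bar` can consult the instance dict, the class MRO, descriptors and `__getattr__` fallbacks before producing a value.

    # Illustrative sketch: machinery that a single `foo.bar` can invoke.
    class LoudDescriptor:
        def __get__(self, obj, objtype=None):
            print("descriptor __get__ ran")
            return 42

    class Base:
        bar = LoudDescriptor()

    class Foo(Base):
        def __getattr__(self, name):       # only runs when normal lookup fails
            print("__getattr__ fallback for", name)
            return None

    foo = Foo()
    print(foo.bar)   # descriptor __get__ ran, then 42 (found via the MRO on Base)
    print(foo.baz)   # __getattr__ fallback for baz, then None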

> Given that Python and Ruby are both simple to learn and use (as users, not implementors), they are both simple by the definition given in this article.

Good remark. I still tend to disagree though. It's easy to learn and use a very small subset of the language. Not the whole language. Even then, it's practically impossible to limit yourself to a certain subset. Not if you want to use any python libraries.


Python has a simple syntax yet a complex implementation, so the simplicity is not carried all the way down. An argument could be made that this is in part the fault of C: a stated sub-goal of Python is easy integration with C, and back then the way they did it was complex; also, C is worse than Python and a lot of code is written in it. So you have a simpler language on top of a messy one (in contrast with something like, for example, Pascal).

The problem when trying to make something simple is that carrying the idea across boundaries/subsystems eventually breaks down. If, for example, you want to make a performant language with clean syntax and intuitive behaviour for a multi-processor setup, but the base system/OS/CPU doesn't make it easy, you eventually need to break the simplicity along the way.

This also touches on the problem that making something simple requires skill and mastery across the whole problem. So perhaps Python/Ruby are the way they are because their authors are great at building an approachable language but not at building the VM. You need to be both.

Contrast with Erlang: amazing VM, but wacky syntax. Or Haskell: amazing ideas, but the way they are taught... seriously? A functor, a monad, who talks like that? Having simplicity along ALL of the path... that is hard!


This essay defines complexity like so:

> We should measure complexity as the cumulative effort to learn and use a technology.

Given that Python and Ruby are both simple to learn and use (as users, not implementors), they are both simple by the definition given in this article.


I feel like this is a kind of naive approach that might work for some end user programs, but isn't a good principle in general. Large/monolithic programs like databases, operating systems, compilers, etc. all can benefit from added overall complexity. (new data structures for storing data on disk, better memory management and program separation, better/more aggressive optimization+static analysis+etc.) You can argue that monolithic programs like those should be broken down into smaller parts (they already are, though), but in the end adding an R*-tree to a database allows you to do more types of lookups efficiently and makes your program /better/.

The thing that no developer really benefits from is added state complexity. But we've spent the last 30+ years coming up with ways to hide that kind of complexity from a programmer. That's /why/ we have our programs and processes divided into boxes (PL, OS, etc.) and /why/ we use things like higher level languages instead of assembly, object oriented programming, and general data hiding. On the other hand, it sounds much less sexy to say "People should do a better job of following best practices, and a lot of the problems in our industry are because people prefer to glue things onto existing code instead of being willing to do a higher level restructuring of a code base when a new need is established."

I'd agree that a lot of programming is not science. We shouldn't be treating it as such. The harder parts of programming are an application of theoretical mathematics (Dijkstra pointed this out years ago, and it's still true), but very little of CS is 'science' the way something like physics or chemistry is 'science'.


> That's /why/ we have our programs and processes divided into boxes

http://db.cs.berkeley.edu/papers/cidr11-bloom.pdf - by unboxing distributed systems we can statically predict whether a given program is eventually consistent (http://arxiv.org/abs/1309.3324 - and automatically add coordination protocols where necessary)

http://db.cs.berkeley.edu/papers/dbtest12-bloom.pdf - by unboxing distributed systems we can use static analysis to more efficiently explore the space of message interleavings in unit tests

https://infosys.uni-saarland.de/projects/octopusdb.php - by unboxing storage, databases and application-side queries we can treat the entire pipeline as a single optimisation problem

http://www.openmirage.org/ - by merging the OS box and the PL box we can improve performance and security for server applications

I don't disagree with you exactly, but I would point out that a) boxes are a means of dealing with complexity - less complexity means we can have bigger boxes and more opportunities for cross-layer optimisation b) the places we have drawn those boxes are largely arbitrary and shift over time - the existence of the boxes does not imply that we can't benefit by moving the lines around or by merging some of them.


Interesting papers! I appreciate the links.

>a) boxes are a mean of dealing with complexity - less complexity means we can have bigger boxes and more opportunities for cross-layer optimisation b) the places we have drawn those boxes are largely arbitrary and shift over time - the existence of the boxes does not imply that we can't benefit by moving the lines around or by merging some of them.

I agree completely. However, the manifesto seems (to me) to advocate all-over unification as the end goal, which I think is naive. I read it as "if a program needs to be divided into separate boxes, perhaps you need to make it simpler", which to me seems like the wrong way to go about things.


Oh, I read it in context with:

> We should concentrate on their modest but ubiquitous needs rather than the high-end specialized problems addressed by most R&D

A lot of the things we do in programming give us power and flexibility at the expense of increasing the learning curve: eg separate tools for version control, compiling, editing, debugging, deployment, data storage etc. IDEs can show all those tools in one panel, but that can't change the fact that they were designed to be agnostic to each other, and that limits how well they can interface.

My current day job is working on an end-user programming tool that aims to take the good parts of excel and fix the weaknesses. We unify data storage, reaction to change and computation (as a database with incrementally-updated views). The language editor is live so there is no save/compile step - data is shown flowing through your views as you build them. We plan to build version control into the editor so that every change to the code is stored safely and commits can be created ad-hoc after the fact (something like http://www.emacswiki.org/emacs/UndoTree). Debugging is just a matter of following data through the various views and can also be automated by yet more views (eg show me all the input events and all the changes to this table, ordered by time and grouped by user). We have some ideas about simplifying networking, packaging/versioning and deployment too but that's off in the future for now.

Merging all these things together reduces power and flexibility in some areas but allows us to make drastic improvements to the user experience and reduce cognitive load. It's really a matter of where you want to spend your complexity budget and how much value you get out of it. We think that the amount spent on the development environment is not paying for itself right now.
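As a toy illustration of what I mean by incrementally-updated views (invented names, nothing like our actual implementation): the view is maintained on every write rather than recomputed by a separate query step.

    # Toy sketch of an incrementally-maintained view (not our real system).
    from collections import defaultdict

    events = []                         # base table of (user, amount) rows
    total_by_user = defaultdict(int)    # "view": kept up to date on every insert

    def insert(user, amount):
        events.append((user, amount))
        total_by_user[user] += amount   # incremental update, no rescan of events

    insert("alice", 3)
    insert("bob", 5)
    insert("alice", 4)
    print(dict(total_by_user))          # {'alice': 7, 'bob': 5}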


In that case, wouldn't programming to minimize or remove state help a lot? That's a main goal of functional programming, right?


As an experiment, I took a few paragraphs and replaced the words "software", "software and programming", "programming" with the word "mathematics" and replaced the word "programmers" with "mathematicians":

Most of the problems of mathematics arise because it is too complicated for humans to handle. We believe that much of this complexity is unnecessary and indeed self-inflicted. We seek to radically simplify mathematics.

and

Much complexity arises from how we have partitioned mathematics into boxes: <put various branches of mathematics here>; and likewise how we have partitioned mathematics development: <put various phases here, from "Eureka" moment to copy-editing of proof>. We should go back to the beginning and rethink everything in order to unify and simplify. To do this we must unlearn how to do mathematics, a very hard thing to do. (I paraphrased by replacing the original "to" with "how to do".)

and

Revolutions start in the slums. Most new mathematics platforms were initially dismissed by the experts as toys. We should work for end-users disenfranchised by lack of mathematics expertise. We should concentrate on their modest but ubiquitous needs rather than the high-end specialized problems addressed by most R&D. We should take inspiration from end-user tools like <replace with your favorite tools, like pen-and-paper, whiteboard, abacus>. We should avoid the trap of designing for ourselves. We believe that in the long run expert mathematicians also stand to greatly benefit from radical simplification, but to get there we must start small.


If you've seen some of Edwards' work and listen to what he is saying about aiming for non-programmers you may get an idea of how different his idea of programming is from what most programmers do (I admit I am a traditional programmer).

I think the problem is that the basic definition of programming relies on editing textual and mainly static source code that describes processes at a low level. As soon as you start to get away from that your tool by definition is not a programming tool and therefore programmers don't want to be associated with it.

It's a problem of a failure of imagination and worldview, and a psychological issue of insecurity. Programmers are like traditional wood workers who look upon automated manufacturing with disdain.

Ultimately superior artificial general intelligence will arrive and put those mainstream programmers with their overly complex outdated manual tools out of work.

But programmers will hang on to their outdated paradigm until the end of the human age. Which by the way is coming within just a few decades.


I think you have a mistake in your analogy. We're more like mechanical engineers drawing complicated CAD diagrams. It's just that we don't need a whole factory to produce our product, because it's not physical. All we need is a compiler and copy the bytes of the output.

There are 100 decisions every minute that need very complicated thought processes to solve; this is as true in manufacturing as it is in software development. Do you make this button out of plastic or metal? What kind of metal or plastic? How big should it be? Should it click or not? Does it have multiple functions based on the state of the vehicle? Does it have a label? Where does the label go? What does it say?

You could write this all into a spec, but the spec would end up being the same as the CAD drawing / program. So you end up relying on the experience of the engineer to make all the decisions without it all being 100% specced out. Maybe it's a button on a top end Mercedes that gets pushed a lot, so it should be metal to complete the feeling of luxury. Maybe it's a button on a mid-end Nissan that gets pushed a moderate amount - make it a decent plastic, etc.


But we aren't using CAD programs to develop software, are we? Look at the tools used to develop software and compare them to CAD programs.


> We're more like mechanical engineers drawing complicated CAD diagrams.

In engineering of any kind, you have functional requirements and you have outputs that you design to satisfy them. In software engineering, you may be trying to help people manage finances; in mechanical engineering, maybe you're trying to remove water from a mine shaft. Your outputs are a graphical computer program and a pump. You have certain resources available, such as operating system X or a steel mill. Your job is to bridge the gap between resources and outputs by creating the minimal viable instruction set that results in the output.

The instruction sets are, respectively, source code and a CAD drawing.

The mechanical engineer has a CAD program that already knows a lot about pumps, and a library of standard screws, threads, and sockets. The program can infer constraints, suggest placement of parts, substitute components based on formulae, read data from Excel or CSV files, simulate stress due to different loadings, and even knows a bit about manufacturing so that it can tell if the part can be made or not. It can help generate drawings that the steel mill can use to create the pump. The mill, too, is pretty smart and has a lot of leeway to make the pump however they want, as long as it works.

All this is basically equivalent to the what the software engineer has available: test suites, frameworks, reusable functions and libraries, cross-platform compilers. The difference is workflow:

While working on the design, the engineer can switch between a variety of layouts, color schemes, transparencies, and zoom levels. S/he can visually show constraints (using symbols and colors), and directly manipulate parameters and positioning of objects. The engineer doesn't have to memorize commands or syntactic conventions in order to be productive. In a modern system, these kinds of interactive operations are intuitive to perform and distinct from the manipulations of the source object (the CAD file). The equivalent for software engineering doesn't really exist. It's like instead of separating engineering and implementation, you're trying to do everything on the factory floor.

The effect of this is that you have plenty of smart people (in science, business, whatever) frustrated in translating their ideas into "code"[1]. In the long run, what we should, and probably will, see is smarter compilers, more sophisticated IDEs -- abstractions above the level of implementation (the problem level) rather than below (the machine level). This is similar to what happened with the process of putting ideas into words for people to read: at first, only scribes could read and write; then, anyone could read and write, but only printers could publish books; now, anyone can read, write, publish, and distribute anything via the internet.

None of this is to say "you're doing it wrong," but rather "we're not there yet." Software engineering is not going away in the same way that mechanical engineering is not going away because of CAD and simulation programs. More sophisticated tools are a path towards "Simple things should be simple, complex things should be possible" (quote from Alan Kay).

[1] There is a reason it's called code, after all.


This "manifesto", for lack of a better word, neatly exhibits the main problem I have with so many efforts to "improve programming" of this style: they focus on ease of learning as the be-all and end-all of usability. Coupled with the unfortunate rhetoric¹, it left me with a negative impression even though I probably agree with most of their principles!

Probably the largest disconnect is that while I heartily endorse simplicity and fighting complexity—even if it increases costs elsewhere in the system—I worry that we do not have the same definition of "simplicity". Rich Hickey's "Simple Made Easy"² talk lays out a great framework for thinking about this. I fear that they really mean "easy" and not "simple" and, for all that I agree with their goals, that is not the way we should accomplish them.

How "easy" something is—and how easy it is to learn—is a relative measure. It depends on the person, their way of thinking, their background... Simplicity, on the other hand, is a property of the system itself. The two are not always the same: it's quite possible for something simple to still be difficult to learn.

The problem is that (greatly simplifying) you learn something once, but you use it continuously. It's important for a tool to be simple and expressive even if that makes it harder to learn at first, since it will mostly be used by people who have already learned it! We should not cripple tools, or make them more complex, in an effort to make them easier to learn, but that's exactly what many people seem to advocate! (Not in those words, of course.)

So yes, incidental complexity is a problem. It needs addressing. But it's all too easy to mistake "different" for "difficult" and "difficult" for "complex". In trying to eliminate incidental complexity, we have to be careful to maintain actual simplicity and not introduce complexity in other places just to make life easier for beginners.

At the same time, we have to remember that while incidental complexity is a problem, it isn't "the" problem. (Is there ever really one problem?) Expressiveness, flexibility and power are all important... even if they make things harder to learn. Even performance still matters, although I agree it's over-prioritized 99% of the time.

Focusing solely on making things "easy" is not the way forward.

¹ Perhaps it's supposed to be amusingly over the top, but for me it just sets off my internal salesman alarm. It feels like they're trying to guilt me into something instead of presenting a logical case. Politics rather than reason.

² http://www.infoq.com/presentations/Simple-Made-Easy


I've observed a definite correlation: people who like Hickey's simple/easy framework tend not to agree with mine. Personally I don't find it useful because it tries to separate knowing from doing.

I also seem to disagree with people who emphasize "expressiveness, flexibility, and power". I think they are mostly a selection effect: talented programmers tend to be attracted to those features, especially when they are young and haven't yet been burned by them too often.

With such fundamental differences we can probably only agree to disagree.


I don't think this manifesto and the simple/easy framework are even really talking about the same things beyond the basic point around avoidance of incidental complexity. I think both viewpoints outline worthy goals with staggeringly different levels of scope. In the case of the manifesto there's hardly anything actionable beyond doing lots of mostly messy research. I think people find this frustrating, but so what? Lofty goals often arise out of the conviction there's far too much momentum in the wrong direction. In contrast, I think the simple/easy framework is something a working programmer can apply to everyday tasks, and while it's unlikely to result in a radical shift, it may bring some of us closer to seeing that even larger goals are possible.


"Personally I don't find it useful because it tries to separate knowing from doing."

What do you mean? Learning and doing are quite different.

From a professional programmer point of view: If it takes me 6 months to learn a tool, and then the tool allows me to complete future work twice as fast (or with half as many defects etc) that is a great trade off.


Rather than just agreeing to disagree, you could defend your beliefs with the best arguments and examples you have. You're opposed to expressiveness, flexibility and power? That's a somewhat surprising view. I'm interested in why.


You think that Edwards doesn't know the difference between simple and easy to learn/use?

> It's important for a tool to be simple and expressive even if that makes it harder to learn at first, since it will mostly be used by people who have already learned it!

Why is that important? Why can't a tool be simple, expressive, easy to learn and easy to use? What studies do you cite for your viewpoint? There has been a lot of research in this area. Please reference the research that supports your claim.

Reason has been tried by Edwards and many others for decades. It hasn't worked.


"Why can't a tool be simple, expressive, easy to learn and easy to use? What studies do you site for your viewpoint?"

Perhaps it can be. But they are all design choices that are often at odds with one another. E.g. I've frequently used software that was easy to learn but hard to use.

Likewise I've used tools that were hard to learn because they had new abstractions but once you understood the new abstractions they were really easy to use. Etc etc etc.


> ...they focus on ease of learning as the be-all and end-all of usability.

I see people jump to this conclusion on pretty much every post of this type. In this case it is clear from the author's work (http://www.subtext-lang.org/) that his focus is not on making programming familiar/easy for non-technical users but rather on having the computer help manage cognitively expensive tasks such as navigating nested conditionals or keeping various representations of the same state in sync.
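
To make the nested-conditionals point concrete, here's a toy sketch in plain Python (not Subtext, and the shipping example is invented) of the same decision written as buried branches versus a flat table that a tool could render as a grid and check for missing or overlapping cases:

    # Toy sketch: nested branches vs. a flat decision table (invented example).

    def shipping_cost_nested(weight_kg, express, international):
        # The usual buried logic: every path has to be traced by hand.
        if international:
            if express:
                return 45 if weight_kg > 2 else 30
            return 25 if weight_kg > 2 else 15
        if express:
            return 20 if weight_kg > 2 else 12
        return 8 if weight_kg > 2 else 5

    # One row per case: (predicate over (weight, express, international), cost).
    SHIPPING_TABLE = [
        (lambda w, e, i: i and e and w > 2, 45),
        (lambda w, e, i: i and e and w <= 2, 30),
        (lambda w, e, i: i and not e and w > 2, 25),
        (lambda w, e, i: i and not e and w <= 2, 15),
        (lambda w, e, i: not i and e and w > 2, 20),
        (lambda w, e, i: not i and e and w <= 2, 12),
        (lambda w, e, i: not i and not e and w > 2, 8),
        (lambda w, e, i: not i and not e and w <= 2, 5),
    ]

    def shipping_cost_table(weight_kg, express, international):
        for predicate, cost in SHIPPING_TABLE:
            if predicate(weight_kg, express, international):
                return cost
        raise ValueError("uncovered case")

    assert shipping_cost_nested(3, True, False) == shipping_cost_table(3, True, False) == 20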

> ...you learn something once, but you use it continuously.

Empirically speaking, the vast majority of people do not learn to program at all. In our research we have interviewed a number of people in highly skilled jobs who would benefit hugely from basic automation skills but can't spare the years of training necessary to get there with current tools. There does come a point where the finiteness of human life has to come into the simple vs easy tradeoff.

You also assume that the tradeoff is currently tight. I believe, based on the research I've posted elsewhere in this discussion and on the months of reading we've done for our work, that there is still plenty of space to make things both simpler and easier. I've talked about this before - https://news.ycombinator.com/item?id=7760790


I explicitly advocate crippling tools and making them more complex if it results in them being easier to learn.

The cost of a barrier to entry is multiplied by everyone it keeps out who could have been productive / creative / or found their passion.

The cost of a limited set of tool features is, arguably, that people will exhaust the tool and be limited. However I have never found this argument convincing given what was achieved with 64kb of memory, or even paper and pencil.

The typewriter, the Polaroid camera, the word processor, email: all are increases in complexity and massive decreases in effort to learn, and they all resulted in massive increases in the production of culture and the exchange of ideas. Some inventions are both easier to learn and less complex (Feynman diagrams), but if I had to pick one, I pick easy to learn, every single time.


>> It's important for a tool to be simple and expressive even if that makes it harder to learn at first, since it will mostly be used by people who have already learned it!

Not sure if I agree. Steep learning curves significantly hurt user adoption. This is especially true for tools that have lots of alternatives.


<brain dump> Complexity really is the enemy. A couple of thoughts:

- In programming, our job is to move over the "complexity hump." We hear a problem, we analyze it, we code it, then we simplify. Most really bad code comes from programmers never pushing past the hump. They just slash away at whatever problem is in front of them.

- When we move past the hump, we push complexity off. Sometimes this is done by abstraction, sometimes by a reduction in terms. In either case, our job is to make the complexity go away. If we're still dealing with arcane issues a month from now? We're not past the hump.

- Many times we believe we've simplified a problem, only to have the complexities jump out and bite us again later. That's why it's best to "exercise your code" to make sure that your abstractions or re-organizations will hold up in the real world. Use it in different contexts. When we don't exercise our code enough, we get complexity debt. The old "works on my machine" is now "works in these 12 cases I tried". (A small sketch of what exercising looks like follows this list.)

- Every tool we pick up has some degree of complexity debt depending on how much it has been exercised in various contexts. Stuff like *nix command line programs? They rock. The reason they rock is because they have a billion different scenarios in which they've been proven.

- Most modern programming environments involve programmers plugging into x number of tools, all of which have varying degrees of complexity debt. Many a time I've watched a programmer walk down the "happy path" of using a tool or suite. What a happy look! Then they realize they need something that's off the happy path. So they've got to pull the source code, inspect the deployed javascript, take a look at the IL, get a new patch, spend an hour on Google. It's at this point that the tool/framework stops being a help and becomes a liability.

- For this reason, most tools and frameworks are actually liabilities instead of assets. I think there are a lot of programmers that would be amazed at what you can do in C, JSON, simple HTML, and Javascript.
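
On the "exercise your code" point, here's a minimal sketch (the slugify function and its test cases are invented for illustration) of pushing past "works in these 12 cases I tried":

    # Minimal sketch of "exercising" an abstraction in contexts beyond the one
    # it was written for; slugify and its cases are invented for illustration.
    import re
    import unicodedata

    def slugify(title):
        """Turn a title into a URL-safe slug."""
        ascii_title = unicodedata.normalize("NFKD", title).encode("ascii", "ignore").decode()
        return "-".join(re.findall(r"[A-Za-z0-9]+", ascii_title.lower()))

    # The "works on my machine" case:
    assert slugify("Hello World") == "hello-world"

    # The contexts that usually bite later:
    assert slugify("") == ""                                # empty input
    assert slugify("  lots   of   spaces ") == "lots-of-spaces"
    assert slugify("Café au lait!") == "cafe-au-lait"       # accents, punctuation
    assert slugify("C3PO & R2-D2") == "c3po-r2-d2"          # digits, symbols
    assert len(slugify("x" * 100000)) == 100000             # large input still works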

</brain dump>


This part seems really difficult to do well:

We should work for end-users disenfranchised by lack of programming expertise. We should concentrate on their modest but ubiquitous needs rather than the high-end specialized problems addressed by most R&D.

The single biggest win for end-users "disenfranchised by lack of programming expertise" in the past 20 years was probably PHP, because all you really needed to know was HTML plus a tiny smidgen of CSS, PHP and SQL. Anything else could be cobbled together with help from StackOverflow. The results were a mixed blessing: A lot of people built sites to suit their needs, but a lot of those sites degenerated into pretty ugly hairballs. Before that, the biggest win was probably Excel: The world runs on useful (but often buggy) spreadsheets. I have nothing against these tools. They're important and they fill a critical need.

Now compare a tool like Python: it's simple and it scales from novices to experts. But it's mostly limited to programmers and scientists, and unlike PHP, you actually need to learn some basic programming to use it in most cases.

I like using tools designed for professional programmers. It's great if they also work well for novices who are willing to learn a bit of programming. But the stuff I build is a lot bigger than the typical amateur PHP website, and I need tools for managing large amounts of complexity in my problem domain, and for managing requirements that will change significantly over the course of years.


Highly recommended: Code Complete by Steve McConnell. A big part of the book talks about how to reduce complexity, in the literal sense and by using abstractions.


A.k.a. the Suckless philosophy a.k.a. back in the day, the Unix/Plan 9 philosophy.

Unfortunately the market has shown that people vastly prefer ready-made, comprehensive, integrated solutions to a loose kitbag of nice, simple, composable parts out of which one might build a solution. See: Outlook (mail, contacts, and calendaring in one!) vs. Unix mail programs, many of which didn't even have an editor built in but punted that to your $EDITOR. See also Apple's domination of everything. So you are going to see less and less of the KISS principle apply as the market demands more complexity for the sake of making their lives easier. And this applies to open source too. Systemd is evolving into a tightly-coupled mass that monitors and controls everything you might monitor or control on a Linux system; and soon it will be everywhere. There's also a whole generation of young programmers who do not understand Unix and so reimplement it poorly in JavaScript.


The key quote from the OP is "Complexity is cumulative cognitive load".

From the point of view of an end-user, using Outlook is vastly simpler than learning to program in order to cobble together their own solution out of command-line tools with harsh learning curves. Even for a professional programmer the tradeoff is pretty clear.

People only have so much cognitive capacity and so much time to live. Every hour spent messing with configuration or customising behaviour has to be weighed against the lifetime gain of that improvement (let alone the lifetime cost of maintaining the custom solution as the environment changes).

That's not to say that I don't want software to be configurable and composable, just that the cost of doing so with our current tools is too high for most users.


I think a crucial clarification here is the distinction between simplicity of interface and simplicity of implementation, as pointed out in "The Rise of Worse is Better". The components of a Unix system may individually have simple implementations, but it adds up to a complex interface for the user.
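
A toy contrast, if it helps (invented example in Python, not from the article): the first function keeps the interface simple by absorbing the messy cases, the second keeps each implementation simple and pushes the composition onto the caller.

    # Invented example: two ways to count words in a file.

    # Simple INTERFACE, busier implementation: one call, and the fiddly cases
    # (missing file, odd bytes) are handled behind the curtain.
    def count_words(path):
        try:
            with open(path, encoding="utf-8", errors="replace") as handle:
                return sum(len(line.split()) for line in handle)
        except FileNotFoundError:
            return 0

    # Simple IMPLEMENTATIONS, busier interface: each piece is trivial, but the
    # caller must know how to compose them and handle the edge cases themselves
    # (the Unix-pipeline flavour of simplicity).
    def read_lines(path):
        with open(path, encoding="utf-8") as handle:
            return handle.readlines()

    def word_count(line):
        return len(line.split())

    # Caller-side composition:
    #   total = sum(word_count(line) for line in read_lines(path))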


Well said. That's the distinction that causes so much tension between programmers and users - it's why we have this rift between unixy software which is wonderfully composable but infuriating to use and windows/mac software which is easy to use on the common path but refuses to talk to anything else.


The article seems to be cross with Computer Science in some way, while all the points it makes are about software engineering. The last paragraph in particular seems to be railing against something or other.

The manifesto declares boldly "we are not doing science". Right. This is completely and trivially correct. No one thinks you are.


> "Perhaps we can learn from the methodologies of other fields like Architecture and Industrial Design. This is a meta-problem we must address."

Something about history and being doomed to repeat it springs to mind.


The entire concept of Design Patterns stems from a (largely failed) attempt to create design patterns in architecture. [1]

So, yes.

[1] https://en.wikipedia.org/wiki/Pattern_%28architecture%29


It seems everybody is talking at a different level of abstraction. Reality is complex, so the closer you are to it, the more complex your code will be (e.g. a kernel supporting a zillion different hardware devices).

If the idea is to abstract even more, in a simple and powerful way, so that more people can create more/better software, go for it. But don't assume that everything built up to this day is complex simply because nobody cared about simplicity. I'm not saying the complexity is completely unavoidable, but avoiding it takes a lot of work.


I would not say that we are not doing science.

I would rather say we are doing science of the artificial---which looks a lot like design.

See the Sciences of the Artificial by Herb Simon (a true legend):

http://www.amazon.com/The-Sciences-Artificial-3rd-Edition/dp...


What does the acronym 'PL' stand for in the sentence: "...partitioned software into boxes: OS, PL, DB, UI, networking..."?

OS = Operating System, DB = DataBase, UI = User Interface, PL = I can't work it out but I'm sure it will be stupidly obvious once it's pointed out...


Programming language?


Haha! Stupidly obvious indeed. Thank you both.


Programming Languages


There's another thing that encourages complexity: vested interests. Once a complex paradigm is established you'll have vendors with an interest in maintaining or even increasing that complexity.

The corporate IT ecosystem is rife with this around areas like security and network administration.


Is Hypercard always mentioned in 'make programming easier' manifestos because it was really easy or because we remember it nostalgically? I used it a few times in school but can't remember much about it. Never really got into programming in it much.


I used Hypercard for a solid year in high school. It was black-and-white and ran on Macs that were old even for 1994, but you could do stuff that was just awesome.

That was 1994--20 years ago--and my anatomy teacher somehow had a lab of computers for Hypercard. He had the entire class design animated anatomy presentations that you still couldn't match today using 2014 mainstream presentation software.

There's a lot of nostalgia in programming, but Hypercard was the real deal.


I think part of what made Hypercard so useful was that it made it really easy to make one specific kind of application, which at the time was 90% of what any single person could be expected to do by themselves. Most of the "multimedia experiences" people were making and buying were basically interactive slideshows, roughly as complex as a DVD menu. Hypercard made this easy, and you knew before you started whether or not it would be a good fit for your project. Under the covers, it was a Turing-complete language, but I'd imagine trying to make anything more complicated than Myst would be really painful. I suspect the developers of Myst probably ran into its limitations before they were done.
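
For anyone who never used it, here's a rough sketch in plain Python (not HyperTalk; the card names and text are made up) of the card-and-stack model that made that one kind of application so approachable:

    # Toy model: a stack is a set of cards, each with text and buttons that
    # jump to other cards. Invented illustration, not real Hypercard.

    class Card:
        def __init__(self, text, links=None):
            self.text = text
            self.links = links or {}   # button label -> destination card name

    stack = {
        "home": Card("Welcome!", {"Start": "intro"}),
        "intro": Card("This is card two.", {"Back": "home", "Next": "end"}),
        "end": Card("The end.", {"Home": "home"}),
    }

    def run(stack, start="home"):
        card = stack[start]
        while True:
            print(card.text)
            choice = input(" / ".join(card.links) + " (or quit): ")
            if choice == "quit":
                break
            card = stack.get(card.links.get(choice, ""), card)

    if __name__ == "__main__":
        run(stack)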

I think the main obstacle nowadays is deployment. Back in the day, you'd have Hypercard export a native application, and then copy it onto a disk that you passed around. Nowadays, the expectation is that people can run it in a browser, or download it from an app store. Web-based deployment means you have to embed it in a website using Javascript, while app stores require setting up a developer account, and building with complicated toolchains.

This isn't to say it definitively can't be done, just an explanation of why it hasn't yet.


"Inessential complexity is the root of all evil" this should be the wallpaper of every programmer.


Of course, why hasn't anyone thought about that?!

/s


Natural language programming.



