A Generation Lost in the Bazaar (2012) (acm.org)
444 points by akkartik on Aug 8, 2016 | 341 comments



Slightly OT, but I enjoyed the discussion about (physical building) construction in the book "the Checklist Manifesto". In The Mythical Man Month, Brooks advocates the use of something like the "master builder" model for software. It turns out that in actual (physical building) construction, the master builder model is no longer used, because the construction process has become much too complex to be understood by a single person.

Instead, in construction, they use a system of checks to ensure that different experts consult one another so that every decision is reviewed by a relevant expert.

I suspect that the "chief architect" approach that Brooks advocates may have become obsolete as well since the Mythical Man Month was written. Perhaps software developers could learn something from the newer methods that replaced the "master builder" model in construction.


David Brooks himself wrote as much in 1986:

I still remember the jolt I felt in 1958 when I first heard a friend talk about building a program, as opposed to writing one. In a flash he broadened my whole view of the software process. The metaphor shift was powerful, and accurate. Today we understand how like other building processes the construction of software is, and we freely use other elements of the metaphor, such as specifications, assembly of components, and scaffolding.

The building metaphor has outlived its usefulness. It is time to change again. If, as I believe, the conceptual structures we construct today are too complicated to be specified accurately in advance, and too complex to be built faultlessly, then we must take a radically different approach.

Let us turn to nature and study complexity in living things, instead of just the dead works of man. Here we find constructs whose complexities thrill us with awe. The brain alone is intricate beyond mapping, powerful beyond imitation, rich in diversity, self-protecting, and self-renewing. The secret is that it is grown, not built.


(psst... Fred Brooks is the software engineer; David Brooks is the New York Times' conservative commentator.)


Thanks for the correction! Sadly I can't edit it anymore.


That quote is beautiful, but it reminds me a lot of tales of hubris. Although I do not believe in any god(s), I have always felt some truth in that mythical sin.

Can we grow designed software? Or, can we design truly growing software? I guess it depends on the definition of growing you use: "scalable"? "alive"?


Refactor, refactor, refactor. Growing things adapt to changing circumstances. Software must adapt to changing requirements. Refactoring is the only way.


This is so important. We all know beautifully "designed" APIs or libraries.

The secret is that the "elegant" API design you see is usually the n-th iteration.

Growing code is much like growing trees. The tree grows itself; what it requires is to be pruned and helped to reshape in a fashion that will allow it to stay alive for a long time and bear fruit.

But one needs to constantly prune the damn thing.


I like the tree metaphor.

You start planting trees, maybe it's in an empty field, or maybe it's in an old forest. At some point it takes root and multiplies. People help plant your forest in unexpected places and it expands. At some point you try pruning and controlling the trees. And at some point a forest fire destroys it making way for a new forest to grow.


software is gardening, not sculpture.


Is it?

To continue the metaphor, one of the most important pieces of evolutionary growth is death.


We can - but if we keep the analogy of grown things we can only grow stuff that has already been grown before and make only tiny changes. It's worth remembering that evolution has neither purpose nor direction. Even "adapting" is an aimless process, the changes are predictable only for very small adaptations. Major changes may lead to anything, including a "restart" (extinction event followed by something different).

So the model has its limitations for human projects where we don't want to have quite that propensity for surprising outcomes.


See, I've always seen the statement that evolution is intentionless to be a bit arrogant. Consciousness, as we are just starting to learn, is a property of emergent networks. Further, we are just starting to discover that phenotypical expression is so much more complex than 1/2mom + 1/2dad = baby. Evolution is calculating something, even if it's inscrutable to us.


    > I've always seen the statement that evolution is intentionless to be a bit arrogant
Assuming intent without any proof is by far the more outrageous statement. I have no intention of entering any discussion based on "belief", though.


In a rapidly changing environment, growth also implies learning. So perhaps we can also use learning as a new analogy for software development?

An ongoing example of learning and growing a design is "Swift Evolution Process"

https://github.com/apple/swift-evolution/blob/master/process...


Individual components must be designed to fail without taking down the entire system.

This is the Free Market vs. Planned Economy debate all over again.


> The secret is that it is grown, not built.

s/is grown/has evolved/


Both happen.

By designing systems in layers of interacting components, we can grow more and more complex systems. The internals of different components are irrelevant so long as the external interface is consistent.

We evolve them by improving the interfaces (see APIs deprecating functions/methods/messages) and by refactoring/rewriting the internals or implementation.
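
To make that concrete, here is a toy sketch (the names and the key/value store are invented for illustration, not from any particular codebase): callers depend only on the get/put interface, so the internals can be rewritten freely and old entry points retired gradually via deprecation.

    # Toy sketch: callers depend only on the get/put interface, so the
    # internals can be rewritten without touching callers, and old entry
    # points can be retired gradually via deprecation.
    import warnings

    class MemoryStore:
        def __init__(self):
            self._data = {}

        def get(self, key):
            return self._data[key]

        def put(self, key, value):
            self._data[key] = value

        def set(self, key, value):
            # Old name kept temporarily so existing callers keep working.
            warnings.warn("set() is deprecated, use put()", DeprecationWarning)
            self.put(key, value)

    # A DiskStore exposing the same get/put methods could replace MemoryStore
    # anywhere, which is the "consistent external interface" point above.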


>Perhaps software developers could learn something from the newer methods that replaced the "master builder" model in construction

I don't know. I get the feeling that large construction projects suffer from many of the same issues that affect software projects.

And every time something goes wrong enough to end up in the media, the reason is inevitably that the various contractors/experts didn't coordinate properly, that no one really feels responsible, that major design flaws were overlooked early on and now there's no way to undo them without going even further over budget. (Yes, I am aware of the selection bias.)

Big projects are hard and their hardness increases disproportionately with size. If a project's goals are such that there can be no "master builder", then things become a lot riskier and they go wrong more often.

So I think what we actually want is a bazaar of projects where each project is architected and overseen by a master builder.

Like with the endless state vs market debates, it's always a question of balance, not one of either or.


What you're advocating for seems like a SOA (or micro-services) approach?

I agree with your reasoning. It resonates with what, for example, Richard Branson says about companies: split them up when you reach ~50 employees.

By splitting large projects into smaller sub-projects, we've now created the need for a higher level master builder to oversee how the sub-projects integrate, haven't we?


>What you're advocating for seems like a SOA (or micro-services) approach?

No, I think SOA and micro-services are technical considerations that have relatively little bearing on division of labour or organisational/project structure. You can statically link a library created by someone you don't even know, or you could have one huge dysfunctional in-house team split up a monolith into fifty different micro-services without creating any new ownership rules at all.

So I think splitting up projects into smaller sub-projects alone is not good enough if all it does is create a deeper organisational hierarchy based on exactly the same command-and-control principles as before. As you say, it just creates new planning issues on the next level.

We need manageable units of planning, and we need to know when things become too complex for planning and require some principle of self organisation. The difficulty is that self organising systems are not easily steered towards a single goal.

It's interesting that you quote Branson, because the most important feature of a separate company is that it is a unit of economic responsibility and ownership. It usually serves more than one customer, unlike a micro-service that is a fragment of some in-house monolith. And it sets its own goals, mostly independently.


Yes, microservices merely shift the complexity into the "it's somebody else's problem" (to integrate all the services) corner.


The book "To Engineer is Human" by Henry Pertroski describes many, many examples of this. Every software engineer should read it.


...Wait. That sounds a heck of a lot like Linux's leadership model (although Linux has somebody on top as well).


I have posted multiple times and in multiple venues that I think large software projects need to adopt a proper systems engineering process. Rather than a "master builder" you have a "lead systems engineer" whose job is to manage the complexity of the project. It goes beyond just making sure the experts and team leads communicate. It includes deciding how functionality will be divided between the functional boxes of the system, how those boxes will interact, and managing changes to the plan so everyone affected will understand what is going on and have input into how things change.

Traditional multidisciplinary engineering domains (aerospace, automotive, marine) have been using this model for decades. It's about time that kind of engineering rigor came to software.


I'll hijack your OT comment to just mention that all this talk of Cathedrals and Master Builders reminds me of the historical novel "Pillars of the Earth" which I found a truly fascinating read.


When reading the Mythical Man Month I got the impression that their idea of a "chief architect" is more like today's "product manager", not today's "chief architect".


Except that Brooks' chief architect is a person who has the chops to build everything himself but doesn't have enough time.

While the product manager in practice is usually a glorified secretary getting beaten by both management and engineering.


The Checklist Manifesto was a great book. Probably time I read it again.


It was too breezy, too slim and too hyped for my taste. The insights themselves still are valuable, but perhaps worth only half the size of the book.


But would you have paid $15 for a 25 page pamphlet? :)


That's a very good point - I've always felt it was a shame that in order for a thesis to be "publishable" you have to frame it in 300 to 500 pages. The Design of Everyday Things by Don Norman is a great example. It's an awesome book, and anyone who's designing user-facing-anything should read it, but I think the story could have been told in about half as many words.


I felt this way about Thinking: Fast and Slow. One of my favorite books, but could have been half the length. At least they included the original white paper in the back.


Are you offended that you got those extra words for free? I enjoyed every page -- should Norman have published two versions of the book?


Are you offended when you get extra lines of code for free?


No, I liked the book. My point is that there are other great ideas out there that we're not exposed to because they're 100 page ideas rather than 300 page ones. It's less of an issue as we go more digital and rely less on traditional publishing channels, but there's still a knee there right now.


I've thought this way about other books, and seen this thought expressed about The Checklist Manifesto ("it should have been a 20 page pamphlet"). But I feel that most of the length (say, at least 75% of the book's published length) is necessary and justified.

Not because the concept itself requires so many pages to understand, but because it takes several repeated high-profile examples of checklists making a big difference, before the feeling of "it's just a checklist, what's the big deal" - that a lot of people express upon hearing the idea first - is replaced with understanding and internalization.


I actually did. I bought it (accidentally) from http://www.goodreads.com/book/show/19327621-summary which is a summary of the book. So glad I did - I thought it was a little long even in this format.


Something to consider: modern buildings are often terrible. Checklists are great for building the minimal possible thing that works, but deeper insight is often necessary for enduring projects.

Microsoft could probably rewrite Windows at great expense to use 10% as much code and it would still be a bloated mess. But, well why would they waste the money?


Cathedrals are beautiful. They represent the vision of one person or a very small group made real by the hard work of hundreds or thousands of skilled and semi-skilled workers over dozens of years.

Cathedrals are not, generally speaking, profitable. They represent the expenditure of lots of capital over a long period of time.

Bazaars don't cost much to start. You can start quite small and have a functioning system that does useful things for people. They can grow quite large, and when they grow too large it becomes difficult to find exactly what you want without a really good map. But you can probably quickly find a bunch of things that are more or less close to what you want.

Cathedrals are not easy or cheap to repair, but the investment is so large that people usually prefer to repair them. A bazaar that doesn't work out makes some local people sad, but they will go to another bazaar that is a little less convenient for them, and perhaps do better there.

It's nice to have some cathedrals, because they feed the soul. But you need to eat every day, so there will always be bazaars, and if you need to make a choice, the bazaar is going to win unless you have a lot of resources stored up to fall back on.


Perhaps what we need are shopping malls. Aren't they, after all, the modern cathedrals of consumerism? (And aren't they just what we need more of in this world?) :P

Shopping malls aspire to be beautiful (primarily on the inside), but they can never be as beautiful or as well-architected as cathedrals, in form or in spirit.

Shopping malls have a bigger overhead than bazaars, and are less flexible. But their overhead is much less, and their flexibility much greater, than that of cathedrals.

Shopping malls provide more foundation and infrastructure than bazaars do. A bazaar minus its stalls is just a dusty field. A shopping mall minus its shops still has multiple levels, maps / directories, elevators / escalators, parking, loading docks, etc.

It costs more to set up shop in a mall than in a bazaar, but your shop will be more trusted, you won't have to squabble for space with your neighbours every day, and you won't get blown away by the next storm.

Shopping malls are designed with dedicated spaces that are tailored to different businesses. A supermarket has very different needs compared to a shoe shop.

Shopping malls are bazaars with a tonne of design, engineering, and regulation thrown in. But they're still bazaars. They're still where you get your groceries.


A shopping mall is also an example of what some theorists call "antimarkets": places which resemble a typical market with competition and choice but which are heavily regulated and controlled to exclude open participation.

Kind of like if the iOS App Store only had room for 15 apps, and Apple Inc decided which, and the roster was only updated every few years at most.


To a first approximation, it does only have room for the top 10/20 apps per category. Discovery is notoriously difficult and the benefits of buying your way onto the top 10 are validated.


you're on to something good

are you the one who writes "The Shopping Mall and the Bazaar" and leads us into a better era?


"Software development is hard. Let's go shopping!" -Tech Talk Barbie


> "Software development is hard. Let's go shopping!" -Tech Talk Barbie

...or Yahoo, or Google, or IBM, or Hooli, or <Big-Fat-Tech-Giant>.


> Cathedrals are not, generally speaking, profitable.

This is getting lost in the metaphor instead of the topic. Architected software is profitable.


Architected software would be profitable in fewer scenarios if we (software developers) didn't maintain such a high barrier to entry. We design our tools for ourselves and other professional programmers so that we don't have to compete with non-professionals with a better sense of the requirements. We force people to come to us, to defer to our understanding of how much architecture is necessary. We make our codebases just easy enough for a pro developer and no easier, refactoring and simplifying only when it becomes painful for a pro developer, and no sooner. That way they have to keep us as gatekeepers to all code.

It's essentially like you only made building tools for cathedrals, and no tools for thatched stick homes, and you create a culture around always building a cathedral so no one really knows how to build a thatched home. Essentially, creating a market irregularity through cultural expectations about how serious software has to be and who is going to be writing it.

(The complaint people always make when I say this is that making software isn't easy, it's hard, and novices couldn't possibly build useful software. I would half agree: some software problems are hard, and require a grizzled developer and some hard planning. But much of software involves no difficult computer science problems, and is more about understanding requirements well enough to be able to assign them to the right basic programming primitive, or a handful of common libraries. This is the kind of code that we use cultural friction to keep inside Engineering, and build cathedral-style, even though it could be done in thatched-home style by novices, if we structured our codebases for that.)


I almost spit out my drink. High barrier to entry? The barrier is lower than it ever has been. Any nut-job with a few weeks of training can make a shitty website with PHP or Node.js and have it instantly accessible to most of the English-speaking world.

Most of the tools I see for software development seem to be organized around the needs of the bazaar (or thatched huts) not the needs of the cathedral. A million toy languages which might solve your problem well, but don't scale to a million users. Websites like GitLab and GitHub so you can share last week's 1 kloc project with collaborators. Libraries that do that one weird thing you need for your project, and nothing else.

By comparison, the cathedral builders (Google, Facebook, Apple, Microsoft, etc.) seem to be building a lot of their own tools. This includes programming languages, frameworks, build systems, version control, operating systems, and so many other things. They build their own stuff because the tools of the bazaar don't work quite well enough for cathedrals.


> Most of the tools I see for software development seem to be organized around the needs of the bazaar (or thatched huts) not the needs of the cathedral. A million toy languages which might solve your problem well, but don't scale to a million users

Not the OP, but when I see comments like this one I do realize that sometimes HN has a very strong echo chamber. The world doesn't need more than 100 (give or take; maybe 1,000, maybe 10,000) apps/websites which need to scale to "millions of users", a situation which doesn't keep that many workers occupied (Google & FB and the like employ far fewer people compared to the industrial giants of the early 20th century).

But the world does need millions of apps for the 10-100-1000 users, if needed built using the "toy languages" you decry. If we make it easy enough for people to build these apps, using "toy languages" if need be, the world would be in a much better place (we'd have higher productivity).

I'll give you my example from the company I used to work for in the early 2000s (when "The Cathedral and the Bazaar" was written). I was doing some office work, along with my 20 or so colleagues, which involved having to check that two separate folders on our computers had the same files. This took each of us about an hour, so there were 20 man-hours spent each working day on this mundane task. Luckily I was a (already close-to-dropout) CS student, and I had heard about Python and about how easy it was to do stuff with it, and lo and behold, it really was. Just:

    import os
    l1 = os.listdir('first folder')
    l2 = os.listdir('second folder')
    a_call_to_a_custom_function_which_was_comparing_l1_to_l2()  # probably quadratic, but it didn't matter

then use py2exe to put it all up in an .exe file which could also be run on my colleagues' computers (along with some inputs and the like), and that was about it.

A task that used to take an hour each day now only required a script/program call. I fail to see how this program would have required a grown-up language that scales to "millions of users", even though it proved to be pretty useful. And there are countless examples like the anecdotal one I gave above: you just need to go into any institution or company office, look at how people work on their computers, and realize that the world needs millions of small programs like mine that would substantially increase productivity. The problem is, like the OP said, that we "programmers" like to keep the playing field to ourselves.
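
For the curious, a complete version of that throwaway script might look roughly like this (a minimal sketch; the command-line arguments and the choice to compare file names only, not contents, are my own assumptions):

    # Minimal sketch: report files present in one folder but not the other,
    # comparing names only (not contents), much like the original script.
    import os
    import sys

    def compare_folders(first, second):
        l1 = set(os.listdir(first))
        l2 = set(os.listdir(second))
        for name in sorted(l1 - l2):
            print("only in %s: %s" % (first, name))
        for name in sorted(l2 - l1):
            print("only in %s: %s" % (second, name))

    if __name__ == "__main__":
        compare_folders(sys.argv[1], sys.argv[2])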


I think there's a huge unmet gap there that's actually being blocked by big businesses for business (not engineering) reasons: the ability of end users to quickly and simply tune their devices to their needs and automate stuff.

The very concept flies in the face of today's accepted UX "best practices", i.e. making software trivial, engaging, and masterable in 5 seconds. That naturally happens by removing anything there is to master.

The task you performed with Python should be easily scriptable at the OS level. It shouldn't require one to know complex programming languages and toolkits. Similarly, I think that a tool like Tasker[0], maybe with a bit better interface, should be available by default in vanilla Android. We're vastly underutilizing the power of computing devices by restricting end user's ability to work with them.

[0] - https://play.google.com/store/apps/details?id=net.dinglisch....


> And there are countless examples like the anecdotal one I gave above: you just need to go into any institution or company office, look at how people work on their computers, and realize that the world needs millions of small programs like mine that would substantially increase productivity. The problem is, like the OP said, that we "programmers" like to keep the playing field to ourselves.

You're going to need a better example than that. This program already exists; it's called diff(1), md5sum(1), or cmp(1). You could wrap its use up in a shell script to make it even easier, or the companies/people could spend some money/time learning how to use the tools already at their disposal. In a lot of cases, lack of training is the issue that should be addressed. I've said before: "Those who don't learn /bin are doomed to reinvent it, poorly."

This isn't to say you're wrong about a lot more little, customized programs that could be written. The focus needs to be on the specific processes of a particular company, because every company's processes are unique (maybe not on all axes, but on more than one). Enterprise software often errs toward one of two extremes: either it's overly customizable and doesn't fit anyone's needs completely, or it's highly specific and tries to force its own way of working. And this is done out of a desire, from the software vendors, to capture market share. Customized software is highly expensive, and people expect something tangible for that purchase. What they should be concentrating on (at least from an efficiency standpoint) is empowering their own people to automate the processes they do: after all, they are the experts in those processes.


In the business software world, the barrier to entry for end-users seems a lot higher than it used to be. Learning PHP, standing up a database and getting it all hosted is a lot more difficult than taking an application suite like MS Access, creating some tables, and then putting forms up on top of them.

Microsoft created a tool a few years ago called LightSwitch that allowed end-users to throw together CRUD apps quickly, and it seems to have been met with deafening silence. I wonder if managers and CIOs in BigCorps would tolerate their end-users throwing together little apps that solved their problems in today's equivalent of VB6 or MS Access (the ultimate agile experience, since users are solving their own problems). Experience suggests not, and although those apps could have become unmaintainable, it seems that there is little effort being made by vendors to address that market, and to provide ease of use with better maintainability and scalability.


In the business world the barrier to entry is often imposed (and for many good and bad reasons) by the IT/OPs/Security teams.

Going to the PHP example, you could pick one of a number of deploy and hosting providers and have your code running and world visible in minutes for less than a Starbucks coffee a week (specific example Laravel Forge + Digital Ocean).

The problem is that even mediocre software developers with a couple years of experience can miss critical things in any language with any framework that can leave them incredibly vulnerable to attack.

For homegrown internal systems, the barrier to entry isn't the code, it's putting it somewhere people can access. In ye olde days you could slap together some VB6 and throw it in an Excel template and have a workable product - but have you ever inherited something like that? I have, multiple times. It's AWFUL - but I also have made a lot of money on not making it awful.

As an engineer, my rapid prototype basically means I eschew some things like a cache layer or performance optimization for just getting the concept out - but at an organization with no real devs, I can see the value in someone who can hack together anything with whatever they have to prove the idea, then calling in the mercenaries like myself to make the concept a real thing. The problem (and expense) usually lies in the fact that they wait until the concept is completely untenable in its current state and everyone is in a panic.


I agree with you largely. I've inherited plenty of unmaintainable code, including a few user-created abominations. But that doesn't necessarily mean that the idea of end-users writing applications is bad per se, rather, that more effort should go into making it harder for them to shoot themselves in the foot.


I think yes and no. For example, for public web application development the barrier is already low enough that you can put your entire company at risk pretty easily. I think there is little reverence for what it actually means to craft a proper web based application, and that it's not even all about the code, and then there's the never ending maintenance and administration of the server(s).

Now, if you're a spreadsheet jockey and you just need to gather and display your data in a non-trivial way, there are quite a number of things already out there. Business Objects (or whatever it's called now) and Tableau have basically formed large companies upon this idea and there's open source options like Jasper Reports.

I think the days of being able to slap some VB together and write a desktop application are just about completely dead in most situations, which means you really do need a vast breadth of knowledge that a weekend warrior developer didn't need to have a number of years ago.


>Microsoft created a tool a few years ago called LightSwitch that allowed end-users to throw together CRUD apps quickly,

LightSwitch relied on Silverlight and Visual Studio, which made it useless for almost everyone.

The problem with bazaar culture is its obsession with tools and systems, and its lack of interest in users. When you get a product that inverts that - like WordPress - it's often incredibly successful, in spite of its many technical shortcomings.

The hierarchy of value in bazaar-land is:

1. New tool/framework/language/OS (that looks good on my CV)

2. Elegant, powerful product for customers

3. Fully productised, reliable, scalable, and easy-to-maintain combination of 1 & 2.

2 and 3 are more or less on equal levels. 1 is far, far ahead.

Because the culture is so tool-obsessed, a whole lot of makework and work-around fixing is needed just to get things to build, never mind work well for customers.

Basically there are dumb tools, dumb products, and occasionally elegant commercial products fall out of the combination - but usually only when they're designed by someone who cares about the user experience.

Hacking culture massively undervalues the user experience, and massively overvalues tinkering and tool-making as ends in themselves.

There's a basic disconnect between the talent needed to write code that works, and the talent needed to design a user experience that's powerful but elegant - whether the user is a non-technical user, or another developer.

The cathedral/bazaar metaphor is utterly unhelpful here, because neither really captures the true dynamic.


manyxcxi summed it up nicely. I want to highlight part of what (s)he wrote.

I've watched this play out for 25 years with dBase, Paradox, Access, and countless other tools intended to empower end users. Typically only one person in a User Area (UA) has the gumption to want to develop an application. It's wildly successful at first. As time goes along, that person develops the app based on new requirements, as is true with any app. At some point, the complexity exceeds the user's skill and time. Often, it's when they want the app to support multiple concurrent users.

I saw that one play out around 1995 with an app built on Access 2.0. The department had a copy installed on each of 20 desktops. The manager came to realize it needed to be a shared app. The power user didn't know how. My colleague spent the better part of a year doing it.

Whatever the reason, IT gets called in. Then we have to salvage a good-for-an-amateur app. Usually the app has become critical to that department so the developer resource has to be pulled from other priorities to salvage the situation.

The problem isn't the lack of tools or CIO's protecting their turf. It's IT being left with messes when a power user gets into trouble. Whether it's Oracle Glue, Access, Gupta SqlWindows, Crystal Reports, or Frontpage, the scenario consistently plays out the same way.


I don't see a problem here. Basically, the amateur built the MVP and validated the use case. And when it was shown that the software actually served a need (maybe one people couldn't even articulate before, but when they saw the app they knew "that's fine, I just need this feature too"), the app got used more. At some point the app will have to be replaced. Software ages and rots. My software, your software, everyone's software.

So, now we are at the point where the app is breaking down under its own weight. What do we have now?

- Clear specification: The users already know what they want from the app, something very rare in our business

- Proven value: The app is not something someone designed by looking at people from the outside and saying "I think that can be done better ..." but something which stems from their own daily needs and pains.

- Experience with likely extension points: From the history of the app and where new features had to be bolted on, you can already see where new feature requests will likely come in, so a new design can accommodate that

And last but not least: A working app, so you have less stress to finish something, but instead can iterate on your new version until it really is better than the current version, without anyone bothering "when is it finished? when is it finished? We need that yesterday. When is it finished?!"


...plus a long, long list of new requirements such as, "it has to be blue" and "it must send email, which must be received, but only on Thursday when the stars are right."

And it must work exactly like the existing semi-manual system, including the ability to make random edits on legal records.

I've done these a few times before, and usually pulled it off, but there are solid reasons why they say, "don't rewrite software".

In particular, the "clear specification" usually has to be thrown out immediately and previous extensions are no guide to extensions for a new system.

And no one wants to do a serious job of it until the absolute last possible moment, so "when is it finished?" is the most important question.


If the organization is set up such that empowered super-users develop apps to the extent of their knowledge and then have a scheduled handoff to a developer in IT, what you’re describing can work quite well. I haven’t seen it work that way in any organization. Usually a department decides to let their super-user develop something without informing IT, or they inform us in the vein of “We’re doing this one on our own because we’re tired of waiting for project approval.”

The Access example, from my previous comment, was the “we’re tired of waiting” vein. The app was a critical part of their work day: they used it while on the phone with customers. We had to get involved when the app had become unusable. The developer had to be drawn from another project to “throw it on a server” so it could be shared. Unfortunately, Access 2.0 had a primitive locking scheme that prevented it from being shared between 20 or so people. To compound the lunacy, they fought recommendations, like migrating to a relational database, every step. We had a developer unavailable for the better part of a year while she had to make the desktop app into a department-level app. She had to make the changes while the app was in active use. This example is not one of a partnership for a planned MVP handoff to IT. It was, probably unintentionally, a way to jump the queue to have their project done.

I’m all for a partnership like you described. But, it has to be a partnership with the parties involved agreeing on some kind of a schedule so resources can be available without hurting other projects/UA’s.


Maybe this is an argument for the inherent complexity of the solution being (at least) an order of magnitude more than the tools themselves?

Honestly, I'm very unimpressed with how well today's tools solve actually useful problems, BECAUSE they're so dependent on their assumptions about the simplicity of the problem space.

I don't think we're disagreeing, necessarily. Just speculating on how to put a conclusion on the end of your thought.


> High barrier to entry? The barrier is lower than it ever has been.

I agree with you, but everything is relative. It's expensive to produce a custom microprocessor, but it's cheaper than it's ever been.

> Any nut-job with a few weeks of training can make a shitty website with PHP or Node.js and have it instantly accessible to the most of the English-speaking world.

The barrier to entry can be much lower than that. Someone without any programming experience could fork and deploy a Node service in 60 seconds if the tools were designed for that. I think you and I are just putting our parameters for "low" and "high" in different places. You are comparing Google (cathedral) to entry-level programmers (bazaar). I am comparing a random engineer in your company (cathedral) to one of your customer support staff who is requesting a copy change (bazaar).

Two totally separate conversations.


> It's expensive to produce a custom microprocessor, but it's cheaper than it's ever been.

My impression is that it's more expensive these days, which is why we don't see as many startups like MOS or Acorn, and see instead partnerships between larger companies. It also seems less likely for anyone producing an ASIC to get funded in the first place these days. I couldn't find good data to settle the cost issue, though.

> I am comparing a random engineer in your company (cathedral) to one of your customer support staff who is requesting a copy change (bazaar).

I don't understand this argument. I'm not sure what "copy change" means in context, and I don't know how customer support relates to the discussion.

I guess the main point I was trying to make was that the tooling for bazaar-style development is at your fingertips from the moment you sit down at a computer, but the cathedral is harder to make and the publicly available tools aren't as good.


Customer support are the people who know which words in the software should change to confuse customers less. When I say "copy change" I mean changing some words in the software. The barrier to entry I'm talking about is the one preventing that support person from making that change, instead of having to ask their boss to ask one of the engineering bosses to ask one of the engineers to do it.


Okay, but if you lift that barrier there is still a major fundamental problem: the people in customer support don't know how to code. The few people in customer support I've known who knew how to code changed jobs in fairly short order.

The fact is, even in the bazaar model where the barrier is low, when does customer support make code changes? I'm talking here about instances where customer support for open-source projects exists.


> We design our tools for ourselves and other professional programmers so that we don't have to compete with non-professionals with a better sense of the requirements.

In all my decades of programming, I have never met a non-professional with good, let alone better sense of requirements. A layman does not think in terms of details; they think in terms of abstractions, often in terms of castles in the sky. The problem is that computers are the exact opposite of abstractions and castles in the sky: exact, unforgiving, and dumb.

In fact, in all my decades of programming and working with computers, in my journeys across two continents, the number of professionals with a good sense of requirements I have met can be counted on the fingers of my one hand. If that is not disheartening, I do not know what is. It's emotionally and psychologically devastating to me personally. It's extremely depressing to even think about it. What does it say about our profession?

As for writing tools for ourselves, learn UNIX, and then you'll learn of the UNIX programming model:

write programs which work with other programs; write programs with the notion that the output of your program could very well become another program's input. Write programs which accept ASCII input from other programs, for that is a universal interface. Be liberal in what you accept, and conservative in what you send.
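
A minimal sketch of that model in practice (the uppercasing transform is just a placeholder): read lines from stdin, write lines to stdout, and the program composes with anything else in a pipeline.

    # Minimal UNIX-style filter: text in on stdin, text out on stdout,
    # so the output can be piped straight into the next program.
    import sys

    for line in sys.stdin:
        sys.stdout.write(line.upper())  # placeholder transform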


I mostly agree, though:

> As for writing tools for ourselves, learn UNIX, and then you'll learn of the UNIX programming model

And then learn some history, understand how UNIX actually was a huge step backwards for computing, and how we utterly fucked up the industry. Modularization is fine, and programs that work with other programs are great (for many definitions of "program", not just "UNIX process"). However, unstructured text communication is a waste of resources and a cesspool of bugs, and we knew better in the past. We're regaining some modicum of sanity with the lightweight structured text formats of today, but it's sad we had to take a decades-long detour to rediscover that.


If you're referring to structured records, I saw the mainframe, I used the mainframe, and I was unimpressed.

As for unstructured text communication, say what?!? Every good UNIX engineer knows: build in a -m switch for versioned machine readable output, and if possible, make that output a stable interface. That's clear, at least to me. That isn't clear to you?

And I hope by structured text, you don't mean garbage like JSON, one of the most inconsistent and idiotic formats I have ever seen?

Hopefully you also don't mean XML, which is terrible to parse with standard UNIX tools like grep, sed, and AWK. More complications for negligible gains.


>Hopefully you also don't mean XML, which is terrible to parse with standard UNIX tools like grep, sed, and AWK. More complications for negligible gains.

I'd prefer a type system so I can use these tools like a library. Most of them only work on piped data or files.

A recent example is that I needed to diff files. There are existing programs and I didn't want to reinvent the wheel, I just needed that particular wheel to build something else.

To use the existing programs I had to write to a file, which is too slow for my use case. It would be much easier if I could hand these tools a pointer to my in memory data structures and get the diff back in another structure.

This is one reason why we often see libraries replicating /bin. PowerShell did a good job of solving this (but was too flawed in other ways).
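
For what it's worth, Python's standard-library difflib already works in that library style, diffing in-memory sequences with no files or external processes involved (a minimal sketch with made-up data):

    # Minimal sketch: diff two in-memory lists of lines with difflib;
    # no temp files or external diff(1) process needed.
    import difflib

    old = ["alpha\n", "beta\n", "gamma\n"]
    new = ["alpha\n", "beta 2\n", "gamma\n", "delta\n"]

    # unified_diff returns a generator of diff lines (another data structure),
    # which the calling code can consume directly.
    for line in difflib.unified_diff(old, new, fromfile="old", tofile="new"):
        print(line, end="")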


Textual interfaces enforce decoupling. In a lispy system, with richer interfaces, you can couple your apps and functions as tightly as you want. In Unix, the textual interchange limits you.

However, if you have more complex data to send, text may be problematic. And if you're going to send structured data via text, you need a standard, easily parsable format so that people can easily parse your data without having to roll their own, incredibly buggy, parser. JSON and DSV are both easy to parse, and so those are the formats people use, like it or not. And no, it's not inconsistent. It wouldn't be so easy to parse if it was.
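
As a rough illustration of how little code consuming either format takes (a minimal sketch; the sample data is made up):

    # Minimal sketch: parsing JSON and colon-delimited DSV from plain text,
    # using only the standard library.
    import csv
    import io
    import json

    record = json.loads('{"name": "disk0", "size_gb": 512}')   # -> dict
    rows = list(csv.reader(io.StringIO("disk0:512\ndisk1:1024\n"),
                           delimiter=":"))                     # -> list of lists

    print(record["size_gb"], rows[1][1])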

Also, I have never seen a tool with -m. Maybe it's because I'm running Linux.


"If you're referring to structured records, I saw the mainframe, I used the mainframe, and I was unimpressed."

You saw a mainframe. I saw a number that were quite different from each other. The parent said "a step back," though, not mainframes or a specific mainframe. There were many architectures that came before or after UNIX with better attributes as I list here:

https://news.ycombinator.com/item?id=10957020

If we're talking minimal hardware, let's look at two other approaches. One was Wirth's. They do an idealized assembly language to smooth over hardware or portability issues. It's very fast due to being close to bare-metal. Simple so amateurs can implement it. They design a safer, system language that's consistent, easy to compile, type-checks interfaces, can insert eg bounds-checks, and compiles to fast code. They write whole system in that. Various functions are modules that directly call other modules. High-level language, rapid compilation, and low debugging means that two people crank out whole system & tooling in about 2 years. Undergrads repeatedly extend or improve it, including ISA ports, in 6mo-2yr per person. A2 Bluebottle runs insanely fast on my 8-year-old hardware despite little optimization and OS running in a garbage-collected language. Brinch Hansen et al did something similar in parallel on Solo OS except he eliminated data races at compile time with his Concurrent Pascal. Later did a Wirth-style system on PDP-11 with similar benefits called Edison.

On the functional end, various parties created the ultimate hacker language in LISP. Important properties were easy DSL creation, incremental compilation of individual functions, live updates, ability to simulate any development paradigm, memory safety, and higher-level in general. The LISP machines implemented most of their OS's and IDE's in these languages. Imagine REPL-style coding of an application that would run very fast, whose exceptions, even at IDE or OS level, could be caught, analyzed at source form, and patched while it was running. Holy. Shit. They targeted large machines, but Chez Scheme (8-bit) and PreScheme (C competitor) showed many benefits could be had by small machines. Jonathan Rees even made a capability-secure version of Scheme which, combined with language safety benefits, made it one of the most powerful for reliability or security via isolation. A project to combine the three concepts could have amazing potential.

So, yeah, UNIX/C was a huge step back in compiler speed/consistency, speed/safety tradeoffs in production, flexibility for maintenance, integration, debugging, reliability, security, and so on. Tons of architectures or languages better on each of these with some having easier programming models. That Thompson and Ritchie's perfect set of language features for C replacement were collectively an Oberon-2 clone (Go) is also an implicit endorsement of competing system. Plenty of nails in the coffin. Sociology, economics, and luck are reasons driving it. The tech is horrible.


UNIX was the best thing at the time. It had good interfaces for IPC, could run on most systems, not just big, expensive ones, and was relatively portable. And sometimes, Worse really is Better. Wirth's architecture was late, and more expensive computationally. Lisp was VERY expensive computationally, and was often highly unportable, being written in asm, and lispms all implementing their own version of the language: more elegant, less practical.

Unix was and is successful because it was good enough, and far more platform, language, and technique agnostic than the competition. Unix recommends a lot, but ultimately prescribes little.


"Wirth's architecture was late"

You're missing the point: abstracting some machine differences behind a system module then building on it in a safer, easy-to-compile language with optional efficiency/flexbility tradeoffs. Thompson and Ritchie could've done that given prior art but they wanted a trimmed-down MULTICS with that BCPL language Thompson had a preference for. Around 5 years later, Wirth et al had a weak system to work on and did what I described with much better results in technical aspects. His prior work, Pascal/P, got ported to around 70 architectures ranging from 8-bit to mainframes in about 2 years by amateurs. Imagine if UNIX had been done the Wirth way then spread like wildfire. Portability, safety, compiles, modifications, integrations... all would've been better. Safety stuff off initially where necessary due to huge impact on performance but gradually enabled as a compiler option as hardware improved. As Wirth et al did. I included Edison System reference because Hansen did Wirth style on PDP-11, proving it could've been done by UNIX authors.

"Lisp was VERY expensive computationally, and was often highly unportable, being written in asm, and lispms all implementing their own version of the language: more elegant, less practical."

Choices of the authors. Similar to above, they could've done what the PreScheme and Chez people did in making an efficient variant of LISP with or without GC's. Glorified, high-level assembly if nothing else. PreScheme could even piggy-back on C compilers given they were prevalent at the time it was written. Took till the 90's before someone was wise enough to do that, although I may have missed one in LISP's long history. They also formally verified it for correctness down to x86, PPC, and ARM. Would've benefited any app or OS written in it later. Pulling that off for C took a few decades... using Coq and ML languages. :)

"Unix reccomends a lot, but ultimately perscribes little."

My recommendations do that by means of being simple, functional or imperative languages with modules. Many academics and professionals were able to easily modify those compilers or systems to bring in cutting-edge results due to tractable analysis. UNIX is the opposite. It prescribes a specific architecture, style, and often language that made high-security or high-integrity improvements hard to impossible in many projects. The likes of UCLA Secure UNIX failed to achieve their objective even on simple UNIX. Most of the field just gave up, with the result being some emulation layer or VM running on top of something better to get the apps in there. That's also the current approach in most cloud models leveraging UNIX stacks. It wasn't until relatively recently that groups like CompCert, Astree, SVA-OS or Cambridge's CHERI started coming up with believable ways to get that mess to work reliably & securely. It's so hard people are getting PhD's pulling it off, vs undergrads or Masters students for the alternatives.

So, yeah, definitely something wrong with that approach given alternatives can do the same thing with less labor. Hell, ATS, Ocaml, and Scheme have all been implemented on 8-bit CPU's with their advantages. You can run OpenVMS, MCP, Genera LISP, or MINIX 3 (self-healing) on a desktop now directly or emulated. You can get the advantages I mentioned today with reasonable performance. Just gotta ditch UNIX and pool FOSS/commercial labor into better models. Also improve UNIX & others for interim benefits.


You can't run Genera in anything like a sane manner. I've tried.


You can run it, though, which is the point. It doesn't require a supercomputer or mainframe. It can be cloned with a combo of dynamic LISP (flexibility/safety) and static LISP (low-level/performance), where the latter might use Rust-style safety as in Carp. You can still isolate drivers and/or app domains in various ways for reliability, as in JX OS. The necessary components are there for a modern, fast, desktop LISP machine with its old benefits.

People just use monoliths in C instead & call it good design/architecture despite limitations. Saying "it's good enough for my needs" is reasonable justification for inferior technology. Just not good to pretend it's something it isn't. When you don't pretend, you get amazing things like the BeOS or QNX desktop demos that did what UNIX/Linux desktop users might have thought impossible at the time. Since UNIX/Linux were "better." ;)


Who said writing monoliths was a good idea? Because that wasn't me. Monoliths are bad. And yeah, you shouldn't write your app in C.


I agree. But I think the reason that laypeople don't have a good sense of what is required is that we wall them off from the software so that they have little sense for how the existing stuff is constructed.

I do think that other people in the organization usually have a better sense of needs. And so if they could have a better understanding of the materials, they could do a better job of managing requirements than an engineer who looks at code all day, and is typically not observing the customers.

Your advice about UNIX is good. I try not to write modules larger than a few hundred lines of code. Anything bigger than that gets split into fully isolated modules with well defined interfaces.

Also: I'm sad you're sad. And I'm sad because I feel my tools isolate me from the people I would like to be working closely with. But I'm very optimistic about solving this problem. I think all of the building blocks are there to solve it, we just haven't made a concerted effort as a community because we're mostly under the impression that it's impossible for non-coders to understand code.


> I feel my tools isolate me from the people I would like to be working closely with.

I don't understand this tools argument. It seems akin to saying the reason I find myself isolated from collaborating with particle physicists is due to the fact that I don't know how to operate a large hadron collider, while completely ignoring the fact that I can't even read a Feynman diagram.


> Write programs which accept ASCII input from other programs, for that is a universal interface

Can we maybe update that to UTF-8?


You make it sound like we deliberately make our code hard to work with. That's absurd. Code being hard to work with is the natural state, unless you work really hard to avoid it.


My thoughts exactly. I'll be damned if I'm going to put additional time and effort into meeting some "just hard enough" spec in order to secure my job for the future.

1) I'm lazy

2) I am not worried about job security

Does anyone actually, honestly, incorporate "how can I keep the application of my skills here the right amount of inaccessible to others?" into their time spent on project? Shame on you.


Unless you are taking extra time beyond the requirements to make your software easier for other people in the organization to access, then you are doing exactly what I'm saying: making it just easy enough for you to manage.


I think you have a blind spot for just how much shantytown style software development goes on in tools like MS Excel, MS Access and Oracle APEX.


I'm aware of it, but it's a ghetto... In the sense that it is largely kept separate from professionally maintained codebases.

For me, I don't like working in Excel, and I want to have better working relationships with designers, customer support people, business folks, etc. I want us to be able to work on the same projects together, which means not Excel because Excel is extremely limited and difficult to work with. Not difficult for a random person to use for some calculation. Difficult for me to get the things I want to get done in Excel when I'm trying to build arbitrary web apps.


Software is probably one of the most accessible disciplines, constantly trying to simplify its tools for less skilled practitioners. The information is mostly free and easily searchable. The thing is, no company DIYs important use cases. The cost of waiting longer for an inferior solution is rarely cheaper than the price of a qualified professional.

What businesses want is not for software to be easier to build, that's what developers want. Businesses want software that can be more quickly used to solve their use cases. This is not an easy problem. That's why they hire engineers to solve it.

To be clearer: software is quite easy to build, but easy does not mean quick. All the hard problems are solved through a library or a framework. The computer science problems left over are too hard for even the professional developer to solve.

Software Engineers should specialise in knowing a lot of already-written quality software, and they should be good at figuring out quick ways to reuse it, combine it, and adapt it to the business's use cases.


Unfortunately, it seems to have a high failure rate.


A few points just for fun:

1. Christianity, the group who builds Cathedrals, is probably the most profitable organization in the history of mankind.

2. Bazaars blow away when the wind picks up. If too many people show up, things start falling down. Cathedrals, and their close cousins, castles, last centuries.

3. I think Bazaars have their place, when you really need to ramp up something to show. We used to call that prototyping. If it gets past that, you gotta build it right eventually.


2. Bazaars pop back up seconds after they blow away. http://youtu.be/MENjFkEAj9g


Yeah, but nothing's quite where you left it, your favourite curry stall is gone for good, and you'll never be able to get a refund on that thing you bought that stopped working the day before the storm.


An epic win for cross platform compatibility ;)


Point 3 is what I consider a good model for the building metaphor. Let Bazaars spring up where needed and wanted. When the ground is proven, then clear it and build a well-architected Bazaar, not necessarily a full-blown cathedral unless it really calls for one. This sometimes happens when a codebase is re-written from scratch with the old code as reference, not just by cleaning up old code.

I think AutoDesk did this when Inventor came along, but I am not sure. A big re-write if ever there were one.


Inventor is still pretty much a Bazaar. If you go far enough down you'll find dialogs and menus at least a decade old. Very similar to Windows, actually.

Like someone else said about Python, "it's a cathedral where it counts".


> 1. Christianity, the group who builds Cathedrals, is probably the most profitable organization in the history of mankind.

I don't need to be the richest organization ever. I'd be happy for 1000th place and those billions rather than overreaching for number one and the risk that entails.


> Cathedrals, and their close cousins, castles, last centuries.

Is that even a good thing? Aside from being beautiful artistic and historical works, I think that Cathedrals have outlived their usefulness. Just look at St. Patrick’s in NYC - that thing cost 177 million USD over 3 years just to restore it. I mean, obviously it's nice, but in terms of functionality it's a huge waste. You could build another 2 Lakewood Churches (each with a capacity of 16k people), using modern steel building techniques, just for the restoration costs of St. Patrick’s.

I can see the case for over-engineering on things that are effectively "solved" problems (hashing algorithm implementations, knife design, non-electrical hand tools, JSON parsers...) but when it comes to complex systems like buildings or operating system architectures, I don't think we're at a point of stability where anything should be expected to last centuries (or more than a few decades, in the case of software).

We're still seeing fundamental shifts in the assumptions that these systems are built on - whether it's the Cathedral that couldn't have possibly foreseen its stone arches being replaced by steel girders or the early mainframe OS that was designed before cheap computer clusters became the norm.


2. Cathedrals, and their close cousins, castles, last centuries

Business problem spaces don't last centuries. Algorithms do.


You can build a cathedral around the stalls of a successful bazaar, no? Like OSX or Android.


Android a cathedral?

If you wanted tabs in Android prior to version 13, you'd set up a TabHost with a LocalActivityManager ( https://developer.android.com/reference/android/widget/TabHo... ).

Then in version 11 ActionBar.Tab was added, and LocalActivityManager was deprecated in version 13.

By version 21 they became bored with that and deprecated it ( https://developer.android.com/reference/android/app/ActionBa... ).

You wouldn't know it was deprecated, though, as the most current training documentation still recommends using this now-deprecated method ( https://developer.android.com/training/implementing-navigati... ). All you are left with is a pointer to a vague, ambiguous page in the deprecation notice. Welcome to the cathedral of Android development.

BTW the release dates for these APIs:

    API 11 - February 2011
    API 13 - July 2011
    API 21 - November 2014
The annoyance isn't that they changed it, it's that they didn't like the old way, so they changed it, then three years later they changed it again. They do this all the time. Plus they deprecated it, but still recommend the deprecated method in the tutorial, and leave it as a mystery how it should be implemented in current best practices.

The current most up-to-date tutorial on their web site is chock full of recommending things which they deprecated already.


Agree. Definitely no cathedral.

And what about customizing a Spinner's font and background color and popupBackground color and putting a non-standard element at the top?

And the amount of time it took before Android got percentage based spacing in layouts?

And the fun of remembering what UI attribute-values like android:gravity="center_horizontal", android:layout_centerHorizontal="true" and android:layout_gravity="center" mean?

And the awkwardness of using non-standard typefaces and using things like the RecyclerView and its weird Adapter?

And the fact that for some reason they used an XML element named "layout" when they implemented their data binding feature?

And the fact that some attribute names are camelCased while others are underscore_cased?

And the ease with which memory leaks are introduced by inadvertently closing over context instances, because you need context instances all over the place because the Activity is a god object?

It's all so very ... not ... like ... a cathedral.


Sometimes I have this feeling that they just offshore everything and do some kind of integration at Mountain View.


interesting. i often wonder how the Android SDK got the way it is too.

my hazy guess was that Google simply hired a bunch of extremely smart programmers and let them code whatever they wanted. but there is no equally smart architecture/design team to guide, organize and filter the work. there's just a build system.

maybe that's essentially the same as your theory.


I wish they would at least put dates on the documentation. I find myself reading halfway through an outdated tutorial before some name or version makes me realize it.


I wish Google would favor newer, more accurate solutions. Or at least provide a way to flag no-longer-useful content.


And that is just one example, each Android release is full of such examples.

Then there is also the whole story with the NDK: three years after officially deprecating Eclipse, only now are they finally getting something comparable to the CDT.

The experimental Gradle plugin for the NDK still doesn't work that well and just got replaced by something else in the stable Gradle plugin, which is itself a kind of legacy support, because the way forward seems to actually be CMake. All because that is what CLion knows anyway.

And as usual for these things, it is documented across Android samples, Git commit messages and blogs.


macos (nee osx) is the result of combining freebsd and the mach kernel, both very cathedraly technologies.


And now we see just how confused this discussion is. It's conflating a bunch of concepts via a historically weighted analogy, which itself has a lot of baggage at this point.

For more, see this perceptive comment: https://news.ycombinator.com/item?id=12251705


I'd consider what's on top the more cathedraly part. It has the consistency in design and implementations. Merging FreeBSD with Mach might have been centralized developers looking for good results but actual results seem a bit more bazaar. QNX was the older cathedral of mixing UNIX and microkernels with good design + performance. There were others but it stands out.


> macos (nee osx) is the result of combining freebsd and the mach kernel, both very cathedraly technologies.

Not to mention the remains of NextStep. Also very cathedraly.

Edit: thinking about it, I wonder why Apple didn't keep using that name/brand. It's obviously there all over the API. Was the NextStep name/brand tainted as a doomed technology at the time?


Cathedrals are very profitable, as long as there's religion.


And yet... I think this critique gets weaker as time goes on.

The amount of productivity available to Mr. Kamp for free today is conservatively double or triple that available in 1999. Databases, web frameworks, scale know-how, IDEs, hosting platforms, the list goes on.

He harkens back, sadly, to an era in which codebases like Genuity Black Rocket cost $100k in licensing, and ran on $30k/month Sun servers. Seriously.

Languages are faster, development times are shorter, and chips are WAY faster. And, code can be pushed out for tinkering and innovation onto github for free. Combine that with his estimate that we have 100x more people in computing, and the combination is a riot of creativity, crap, fascinating tech and everything in between.

The bazaar is messy, but I'm not aware of any solid critiques which show cathedrals are more efficient at the multiples-of-efficiency kind of gains we get from legions of self-interested, self-motivated coders.


> The bazaar is messy, but I'm not aware of any solid critiques which show cathedrals are more efficient at the multiples-of-efficiency kind of gains we get from legions of self-interested, self-motivated coders.

The article isn't about efficiency, it's about quality. The assertion is "Quality happens only when someone is responsible for it."


> Languages are faster, development times are shorter, and chips are WAY faster.

This is due to Moore's law, not the software design choices that the article bemoans. Those $30k/month Sun servers were many times faster and cheaper than the earlier machines they replaced as well.


While Moore's law helps, languages are more expressive, safer, more performant and have more batteries included yielding a whole bunch of improvements.

We've had software and hardware gains, massive ones, and they compound.


> While Moore's law helps, languages are more expressive, safer, more performant and have more batteries included yielding a whole bunch of improvements.

I have to disagree. Compilers may have gotten a bit better at making faster binaries, and new languages are increasing in expressiveness and safety, sure, but very rarely in efficiency. Go and Rust are not faster than C or C++ and likely never will be (for one thing, C has decades of lead time). Go and Rust may be faster than C was 20 years ago, but that doesn't matter.


If Rust is significantly slower than equivalent C or C++, it's a bug. Please file them.

(And yes, sometimes, it's faster. Today. Not always! Usually they're the same speed.)


My point is more like this chart [0]. C has so much lead time that Rust will probably never be able to catch up. Be close? Sure. But C has decades of lead time.

[0] http://www.viva64.com/media/images/content/b/0324_Criticizin...


> My point is more like this chart <

As steveklabnik noted that is old data (which you would normally be able to see from the date-stamp in the bottom-right corner, but that's been hidden).

This web page is updated several times a month, and presents the charts in context --

https://benchmarksgame.alioth.debian.org/u64q/which-programs...

(You might even think that you can tell which language implementations don't have programs written to use multi-core and which do.)


That chart is extremely old. We are sometimes faster than C in the benchmark games, with the exception of SIMD stuff due to it not being stable yet. (and, it can fluctuate, depending on the specific compiler version, of course.)

For example, here's a screenshot I took a few months ago: http://imgur.com/a/Of6XF

or today: http://imgur.com/a/U4Xsi

Here's the link for the actual programs: http://benchmarksgame.alioth.debian.org/u64q/rust.html

Today, we're faster than C in one program, very close in most, and behind where SIMD matters.

  > But C has decades of lead time.
Remember, Rust uses LLVM as a backend, which it shares with Clang. So all the work that's gone into codegen for making C programs fast also applies to Rust, and all of the work Apple and whoever else puts into improving it further, Rust gets for free.


I mean, true, I'm playing devil's advocate here. I respect the Rust community (heck, of all the nu-C languages I respect it the most; I even did a poster on 0.1 of it for my programming languages class), and I will be quite impressed if they can pull off what has so far been an insurmountable task (and they are the most likely to be capable of it, in my opinion): beating an old-guard language in general-purpose performance (Fortran, C, etc.), languages that have every advantage but design foresight. If they do it, it will be a great historical case study on how to build a new programming language.

As an aside: as someone who has used LLVM to build a compiler, it doesn't quite work that way. Yes, Rust has access to those gains, but it may not be able to use them effectively (due to differing assumptions and strategies).


Totally hear what you're saying on all counts :)


> languages are more expressive, safer

Not generally, no. Maybe the popular ones become so, but that's mostly by rediscovering the languages of old, which had better safety and more expressive power.


Moore's law is just an observation, and the only way chips can actually be made is through sustained, coordinated, and meticulous teamwork.


How many of those languages were developed in the bazaar style? All the ones I can think of came from a single person designing things from first principles and taking inspiration from other cathedrals around them. Lisp, Scala, Ruby, Smalltalk, Prolog, Typescript, etc. are cathedrals. The one bazaar IDE I can think of is Eclipse and it's terrible. Visual Studio and Visual Studio Code on the other hand are much more sensible and again it's because they're cathedrals.


Another bazaar IDE is Emacs, and I'd say it's not terrible, although YMMV.

I'm not sure how you can say Lisp is a cathedral; it's not even "a" anything. Common Lisp, Racket, Clojure, Emacs Lisp, etc., many of which are themselves bazaars. Ruby, for another example, may have started as one person's vision, but now the canonical implementation is a big multisourced effort, and there are other implementations with lots of uptake that aren't directed or blessed by the mainline Ruby.


A cathedral is never built by one person. It's not even entirely designed by one person. But you have people responsible for quality, instead of "anything goes" ad-hoc development.

You mentioned Common Lisp - it's a great example of a cathedral. A language carefully designed by a committee, which took into consideration all the previous Lisps that were in popular usage. You can tell there was a lot of thought behind the process.

As for Emacs and the bazaar, I think this is a good case study of the good and bad aspects of bazaars. On the one hand, you have an incredibly flexible tool, which makes it a perfect environment for experimenting with and optimizing workflows for text-based tasks. You have people writing Emacs modes for anything including the kitchen sink, and it turns out many of those experiments offer a superior workflow to standard, dedicated applications (especially when it comes to interoperability and maintaining focus/flow).

On the other hand, Emacs often requires you to hot-patch stuff here and there, and its language support is usually worse than that of a cathedral-like IDE dedicated to a particular programming ecosystem. And I say this as an Emacs lover. I still prefer Emacs to IDEs, but that's because of the flexibility benefits, which are unparalleled. But I'm not deluding myself that Emacs has better support for Java than IntelliJ, or better support for PHP than Eclipse, or whatever. For language ecosystems requiring complex tools to back them up, it's a PITA to set up your working environment in Emacs. Hence the negative side of the bazaar: you don't get as much focused effort to make something of high quality.


> You mentioned Common Lisp - it's a great example of a cathedral. A language carefully designed by a committee, which took into consideration all the previous Lisps that were in popular usage. You can tell there was a lot of thought behind the process.

Common Lisp was designed as a unified successor to Maclisp, in response to an ARPA request.

Not to Scheme, Interlisp, Lisp 1.6, Standard Lisp, Lisp 2, LeLisp, ....

Scheme was further developed. Interlisp died, Standard Lisp had a Portable Standard Lisp variant and then mostly died. Lisp 2 was dead before, LeLisp died later.

The core of Common Lisp was designed in 1982/1983, decided mostly by a small team of Lisp implementors (those had their own Maclisp successors) with a larger group of people helping out.

1984 a book was published on the language and implementations followed.

Standardization then came as a more formal process later with goal of creating an ANSI CL standard - again it was mostly US-based, paid by ARPA. Areas were defined (language clean-up, objects and error handling), .... Proposals were made (like Common LOOPS by Xerox) and then subgroups implemented and specified those (CLOS, ...).

> You can tell there was a lot of thought behind the process.

There were a lot of people involved. Not just the X3J13 committee. It was also a community effort at that time.

https://www.cs.cmu.edu/Groups/AI/html/cltl/clm/node3.html#SE...


A fun thing to do when you find someone who really likes Common Lisp: ask them to explain the loop sub-language in detail.

JK. They might starve to death before they finish.

The greatest thing about CL is that it has so many features that you can use to make a CL-lover look like a deranged nutbar.


It's entirely unclear why I should explain all the features in detail to someone. I'd never do that, and I know nobody who would be interested in it.

You don't need to know all of it in detail. It's good to have an overview and look up details as needed.

In a numerics library, I don't need to know every function in detail. I just look it up on demand.

Hyperbolic tangent for complex numbers? I don't know the details. Learning all is fruitless. When I need it, I look it up.

> They might starve to death before they finish.

Teach Yourself Programming in Ten Years. Why is everyone in such a rush? http://norvig.com/21-days.html

Java JEE in detail? Oops.

The Haskell type system in detail? Ooops.

> The greatest thing about CL is that it has so many features that you can use to make a CL-lover look like a deranged nutbar.

Wait until you get 'Scheme R7RS large'. Ever looked at recent language specs for languages like Scala, Haskell, Java 8, Fortress, Ada, the coming Java 9, Racket, C++, ...

One thing you need to learn about Lisp languages: the language is not fixed. It can be an arbitrary amount of features, even user supplied.

What you need to learn is not all the features of one construct. What you need to learn is how to learn the details incrementally on demand, while having an overview of the concepts.

If you think LOOP is large and complicated, have a look at Common Lisp's ITERATE: even more features and more powerful. Even designed to be user extensible.

https://common-lisp.net/project/iterate/

And it's totally great.


Now look, I like Lisp. Common Lisp was the first language I fell in love with. I remember it fondly, even though at this point I prefer Scheme.

But you have to admit that there's something a little...off...about having an iteration sub-language with a 43 page (in PDF) manual. And I mean sub-language literally; one of the advertised features of ITERATE is that it has a more Lispy syntax, so your editor has a hope of indenting it correctly.


I find it very handy and use it all the time. Luckily there are LOOP versions for Scheme and Racket as well:

http://wiki.call-cc.org/eggref/4/loop

https://planet.racket-lang.org/display.ss?package=loop.plt&o...

LOOP is actually not a CL specific language construct and did not originate there. It was invented by Warren Teitelman for Interlisp. There it was called FOR. From there it was ported/reimplemented and extended to several Lisp dialects.


> You mentioned Common Lisp - it's a great example of a cathedral.

Sure, but I didn't say Common Lisp was a bazaar, either. I said it didn't make sense to say "Lisp" was a cathedral, because there are many Lisps, and some of them are bazaars.


You are confusing a concerted effort with a bazaar. Just because a committee or a group of volunteers work on a project as an open source project does not mean it is a bazaar. Others have already said this but the distinction is about the quality and vision, not how many people work on it.


Haskell is famously the work of a committee, with multiple implementations and multiple partially-compatible extensions.


Having seen the source for a non-Bazaar OS (Windows), I can say that they are not built like cathedrals:

Most of the original developers have long since moved on, there are design problems, various teams and managers rebuild or duplicate work, and management sometimes imposes big changes just before release.

Software quality is hard to judge from the outside, and takes longer to build.


That still sounds like a Cathedral. The whole point of a Bazaar is that there is nobody to impose big changes. Also, in the original essay the Cathedral wasn't Microsoft, it was GNU.


This is a good point. I wonder if cathedrals really exist in the real world? I suspect they are all just isolated bazaars.


Windows (at least for userspace components) has significantly fewer self-contained modules (which thus contain larger amounts of functionality) than any modern unix-based system, and there are almost no modules that are widely used and not supplied by Microsoft (excluding stuff that is ported from unix and various third-party hardware APIs). One of the reasons for that is probably that before .NET there was essentially no support in VS for building applications as anything other than big self-contained .EXEs, and the current solution/project mechanism leaves much to be desired.


What the hell are you talking about?

Just take your Unix mentality and make a few substitutions:

* gcc -> cl.exe

* ar -> lib.exe

* ld -> link.exe

* make -> nmake.exe

* libfoo.so -> foo.dll

And there you have it, the world that "didn't exist before .NET" ... This is crazy amounts of ironic because Windows had DLLs at a time when shared libraries were not so much a thing on Unix. Not to mention things like COM which are all about creating de-coupled components.


I still miss COM. It had its problems, sure, but it worked really well.

I haven't seen anything since that allows such decoupled development.


I think you have rose-tinted glasses on.

COM is/was a rat's nest of confusing and frequently duplicated APIs with insanely complicated rules that by the end really only Don Box understood. CoMarshalInterThreadInterfaceInStream was one of the simpler ones, iirc. COM attempted to abstract object language, location, thread safety, types, and then the layers on top tried to add serialisation and document embedding too, except that the separation wasn't really clean because document embedding had come first.

Even just implementing IUnknown was riddled with sharp edges and the total lack of any kind of tooling meant people frequently screwed it up:

https://blogs.msdn.microsoft.com/oldnewthing/20040326-00/?p=...

The modern equivalent of COM is the JVM and it works wildly better, even if you look at the messy neglected bits (like serialisation and RPC).


I think the good ideas from COM are: IUnknown, consistent error handling through HRESULT, the coding style that emerges from being clear about method inputs and outputs.
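
For anyone who never wrote COM by hand, here is a minimal sketch (mine, not from the thread) of the IUnknown/HRESULT calling pattern being praised, in C with Win32 SDK names and error paths trimmed to the essentials:

    /* Sketch: the COM calling convention in plain C. Every method returns an
     * HRESULT, and you obtain further interfaces by asking IUnknown for an IID.
     * Build against the Win32 SDK; link with ole32.lib and uuid.lib. */
    #include <windows.h>
    #include <objidl.h>

    static HRESULT use_stream(IUnknown *unk)
    {
        IStream *stream = NULL;
        HRESULT hr = unk->lpVtbl->QueryInterface(unk, &IID_IStream,
                                                 (void **)&stream);
        if (FAILED(hr))
            return hr;                     /* uniform error propagation */

        /* ... call IStream methods here ... */

        stream->lpVtbl->Release(stream);   /* IUnknown reference counting */
        return S_OK;
    }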

Some things done not as well as these core ideas: registration done globally in the registry, anything to do with threading, serialization, IDispatch.

I think in many situations you can take lessons from the good parts and try to avoid the bad.

I don't see how pointing out common bugs helps your argument though. You can write bugs in any paradigm.


Yes you can write bugs in any paradigm, but some designs are just empirically worse than others when it comes to helping people write correct code.

IUnknown is a classic case of something that looks simple but in fact a correct implementation is not at all trivial, yet COM developers were expected to get it right by hand again and again. COM itself didn't help with it at all, so the ecosystem was very dependent on IDE generated code and (eventually) ATL and other standard libraries.

None of the things you highlight were good ideas, in my view, although probably the best you can do in C.


> The modern equivalent of COM is the JVM and it works wildly better, even if you look at the messy neglected bits (like serialisation and RPC).

Actually it is the WinRT introduced in Windows 8.


Well yes, except adoption of WinRT in industry appears to be zero.


Well, except that it is the only way forward for new Windows applications; even classical Win32 is being coerced into it via Project Centipede.


New Windows applications that only target Windows 10+. Backwards compatibility is a major limit to its adoption.


It came back with a vengeance in Windows 10.


Really? I must have missed that... I stopped doing Windows dev around Vista, because Vista.

Any details?


The UWP programming model, which was introduced in Windows 8 as WinRT, is COM.

Basically it is the original idea of .NET, which was called the COM+ Runtime, before they decided to create the CLR.

WinRT is nothing more than COM+ Runtime, but with .NET metadata instead of COM type libraries.

Also, since Vista, the majority of new Windows native APIs are COM based, not plain C-like ones.


I'm not saying that it wasn't possible, but that it wasn't exposed in Visual Studio in usable way.

As for COM, it seems to me that the most common reason why something developed by some ISV is a separate DLL is that it's a COM component.


Er... Prior to .NET, the previous "enterprise targeted" language for Windows was Visual Basic. Visual Basic absolutely supported creating COM DLLs. In 6.0 (and perhaps earlier) there were even some nice templates for "ActiveX DLLs" (ActiveX being the Microsoft buzzword for certain COM objects at the time).


Firstly, I always remember VS having click through wizards for creating libraries. So I think you are even wrong on that.

Moreover, what a sad world for our profession when an IDE doesn't do something for you and people start to doubt it exists. Reminds me of kids on here saying a programming language without a package manager might as well not exist at all.


Do you remember DLL hell? Pepperidge Farms remembers /s


Library ABIs are hard. Same is true with shared libraries if you're not careful. I don't think it's necessarily the fault of the tooling.


The tooling didn't include version numbers.


On Unix there is nothing to stop you from creating libfoo.so without a version number in the filename. Conventions for doing the latter part came about over time. As they did on Windows too (see msvcrtxxx.dll, where xxx is a version number). These days there is also WinSXS on Windows which I don't really expect HN users to know about and is probably not relevant to the historical context.

Microsoft did really fuck it up historically (IE overwriting shell32 comes to mind) but the mechanism didn't have problems when applied by the right hands (sometimes a first party is the wrong hands :P)

If you want to talk problems in the mechanism, ask me sometime how it's possible for a Win32 process to host multiple incompatible malloc implementations in the same address space.


"As they did on Windows too (see msvcrtxxx.dll, where xxx is a version number)."

I didn't use Windows enough to know more about it than that it let you have multiple copies of the libraries. Did it let you upgrade a compatible library?

"If you want to talk problems in the mechansim, ask me sometime how it's possible for a Win32 process to host multiple incompatible malloc implementations in the same address space."

I think I'll pass, thanks. :-)


At my first job (94-97) we were writing a (16-bit) Windows application that was a relatively slim EXE and several hundred DLLs. This wasn't necessarily the mainstream approach, but it was supported just fine by the tools. What didn't really exist was a mechanism to safely use DLLs from 3rd parties, and sometimes even from MS. DLL hell was very real, but shared objects in any system will always have the same challenge. You are at the mercy of the module writer not breaking binary compatibility without bumping a version number (or part thereof).


Well, in all fairness, that's a 30+ year old codebase. There have been a lot of fingers in there and a lot of hard deadlines.


(Author of the essay here)

It is discussions like this which make me truly admire Douglas Adams for his insights and ability to express them.

For instance, when I read through the debate here, I can't help noticing how many of the arguments are really variations of "It's a bypass! You've got to build bypasses! Not really any alternative."


For me the highlight was:

'That is the sorry reality of the bazaar Raymond praised in his book: a pile of old festering hacks, endlessly copied and pasted by a clueless generation of IT "professionals" who wouldn't recognize sound IT architecture if you hit them over the head with it.'

I was hoping for some kind of expansion or attempt at a solution here (which of course would be non-trivial).


The (only) solution is for people to care about quality in computing.

There's still too much money to be made on kludges for that to happen.


.. and no money to be made on quality.

It's Akerlof's "Market For Lemons" writ large. Users can't assess the quality of software before they buy it and sink ages of their own time into learning it. Often users can't assess quality problems even after they've bought it. So the market isn't going to reward quality.

(The original paper was about cars; now that we have software in our cars, the problem is twice as bad. VW 'defeat devices' and Toyota 'unintended acceleration' passim).


Thanks for your words and your code, phk. Thanks especially for Varnish, a beautiful cathedral.


I've been thinking a lot about this in the context of software development tools. It is now just expected that IDEs, compilers, tooling etc are free and OSS. On the one hand this enables bottom-up innovation and shorter development cycles. On the other hand setting up a development environment is a royal pain in the butt. And a big turnoff for newbies -- they begin to think that programming is some sort of IT job about installing and troubleshooting software. Even when you manage to set everything up there is constant maintenance cost as you update software and as new things come out.

At the very least, I would love to see companies created around popular open source tools and verticals to create designed end-to-end experiences. Download, double-click, start coding, and see something on the screen.


There are non-terrible development stacks. Some of them are even free. Unfortunately too much of it is endless turtles-all-the-way-down yak shaving marathons, like in the JS world. I forget whether I'm supposed to grunt, gulp, babble, or barf.

You just have to get out of the churn-for-churn's-sake cesspools. There are high-quality, stable software stacks out there, where the Cambrian explosions and rediscovery of ideas from a generation ago have already passed.


>Unfortunately too much of it is endless turtles-all-the-way-down yak shaving marathons, like in the JS world. I forget whether I'm supposed to grunt, gulp, babble, or barf.

I don't use any of 'em. If you can afford to give the finger to those not running in a near-POSIX environment, you can just use makefiles or npm scripts: write your code, and run shell scripts to build it, the way God, Doug, Dennis, Brian, and Ken intended.

As for good dev environments, I will not leave my beloved emacs (C-x C-e in geiser-mode means that you can run your software as you write it, and I love it: Most dynamic languages have something similar), but that would intimidate newbies. Gedit and a shell is probably the best environment to start them with: It's about as simple as you get, and every developer is expected to have a rudimentary knowledge of shell, so best to start early.


> Unfortunately too much of it is endless turtles-all-the-way-down yak shaving marathons, like in the JS world

Laughed for 5 mins at this. So true! Somehow we are expected to take this in stride.


And if you work behind a corporate proxy, expect all of the above to be an order of magnitude worse. Oh you set the proxy setting in the IDE/Shell/Package Manager? You forgot about x, y and z; and websites foo, bar, and bas are blocked!


Transparent proxies and 802.1x auth have been around for a while. Even my high school got off the "you must configure a proxy server in every application" train. IMO that is just poor IT.


I remember a few years ago, before Git had NTLM support (for proxies), I had to use all kinds of terrible hacks to get online. Recently, the proxy I'm behind for at least 8 hours a day has been changed to accept Basic and Negotiate, but I didn't realise that until I'd spent a not insignificant amount of time yelling at NPM.

Even if you do get online, don't forget to configure the MITM CA cert!

Next up, apps that try to execute from %LOCALAPPDATA% (Squirrel installers). This is blocked by most AppLocker configs.

Isn't "enterprise" computing fun?


Package management's come a long, long way in the last couple of decades though; setting up a dev environment (assuming you're connected to the internet and can write to the appropriate directories) is easier than ever.


It's a very nice read with many good points, but any person with some experience in IT projects could argue with it. The author is taking one side without any self-criticism.

It is true that configure scripts are probably doing some useless things, "31,085 lines of configure for libtool still check if <sys/stat.h> and <stdlib.h> exist, even though the Unixen, which lacked them, had neither sufficient memory to execute libtool nor disks big enough for its 16-MB source code", etc. But then what is the alternative? Every programmer who wants to release some software writing a configure module from scratch each time? This is called code reuse and yes, it's not perfect, but it saves time. By not reinventing the wheel again and again. By reusing something that is stable and has been there for some time. Probably such a thing over-generalizes across many architectures and does some useless work, but then again, who cares about some extra 5-10 seconds of the "configure" command when you are covered for all those strange corner cases that it already handles?


The alternative?

You could start out by removing the autocrap checks for <sys/stat.h> and <stdlib.h>.

Then you could eliminate all the other autocrap checks which come out one and the same way on every single OS in existence.

And in all likelihood, you would find out that you don't actually need autocrap at all, because the remaining two checks can be done with an #ifdef instead.
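
A rough sketch of what that end state looks like (the platform macros and extra headers below are only illustrative, not an exhaustive portability layer):

    /* No generated configure test needed: these headers exist on every
     * system anyone still targets. */
    #include <stdlib.h>
    #include <sys/stat.h>

    /* The handful of genuine platform differences can be handled inline. */
    #if defined(__sun)
    #  include <sys/filio.h>      /* e.g. where FIONBIO lives on Solaris */
    #elif defined(__linux__)
    #  include <sys/sendfile.h>   /* Linux-only sendfile(2) */
    #endif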


Yes, why can't we have

  Makefile.template
  Makefile.SunOS
  Makefile.Linux
  Makefile.FreeBSD
and then if a new port is desired, just

  cp Makefile.template Makefile.`uname` && vi Makefile.`uname`
UNIX: keeping it simple since 1970.


I built a fair bit of OSS on SUA back when that existed. Autoconf projects were wonderful: download the tarball, if the tarball is older than the OS I'm building on then replace config.sub, ./configure, make, make install. All standard, all scriptable; never once had a problem with a project that used autotools (or, in fairness, CMake or SCons - all the big reasonably standardized build systems work).

People who'd "kept it simple" like you suggest were the bane of my life. I spent more time debugging each of those builds than all of the builds that used actual tools combined.

(Ironically enough "all projects must use autotools" seems like a quite cathedrally attitude though)


Yeah, nowadays I just embrace OS agnostic languages with rich runtimes and don't care about those things anymore.

There are of course occasional glitches, but not like those ones.


Two problems:

1. That just pushes the issue back to the language implementors. Ever discover that your favorite language is unavailable on your new platform?

2. You get restricted to the least common denominator. That one OS feature that would make your app 10x faster? Sorry.


The first problem is common to all languages.

Do you think C had any major meaning outside UNIX, before it got largely adopted in the industry?

It was just yet another language with dialects that had some kind of compatibility, with Small-C being the most used one.

For the second point, also not an issue unless the language doesn't support FFI.

The beauty of runtimes, instead of bare-bones C, is that they can be tuned to make the best of each OS's APIs, while keeping the code portable.

This is nothing new, it was quite common outside AT&T.


Last I heard, Microsoft Windows became obsolete something like ten years ago. Is there any operating system other than UNIX left out there that your application would have to be agnostic about?


Yes, Web browser as OS, MacOS X, iOS, Android and all the IoT ones.

There isn't a standard UNIX way to do GUIs, bluetooth, NFC, GPGPU, touch screens, cryptography, printers, medical devices, ...

All that UNIX has is POSIX, http://pubs.opengroup.org/onlinepubs/9699919799/ , which only focuses on CLI and daemons as applications.

Anything else is a non-portable API that doesn't necessarily have anything to do with UNIX.

It doesn't matter that the kernel is UNIX like, if what is on top of it isn't.

And anyone that only knows GNU/Linux as UNIX, should read "Advanced Programming in the UNIX Environment" from W. Richard Stevens/Stephen A. Rago, to see what actually means to write portable UNIX code across POSIX implementations.


Android? In this context, that's UNIX. MacOS X is UNIX. Even the z/OS on the mainframe has a UNIX03 compliant UNIX subsystem. You were saying...?

GUI is dead; if your application doesn't run on the server and display the results either on the command line or in a web browser, you're doing it wrong.


> Android? In this context, that's UNIX. MacOS X is UNIX. Even the z/OS on the mainframe has a UNIX03 compliant UNIX subsystem. You were saying...?

So you use VI and Emacs on Android, generate postscript and troff files, configure /etc/passwd and /etc/init.d

What does your Android .profile look like?

Yes, z/OS has a POSIX subsystem; it also doesn't support anything besides CLI, daemons and batch processing.

Mac OS X is a certified UNIX; however, none of the APIs that matter (you know, those written in Objective-C and Swift) are UNIX.

> GUI is dead; if your application doesn't run on the server and display the results either on the command line or in a web browser, you're doing it wrong.

Better go tell all of those who earn money targeting infotainment systems, medical devices, factory control units, GPS units, iOS, Android, game consoles, smart watches, VR units, POS, ... that they are doing it wrong.


So you use VI and Emacs on Android, generate postscript and troff files, configure /etc/passwd and /etc/init.d

I don't use Android, because it's a ridiculously hacked-up version of GNU/Linux (as if being based on GNU/Linux isn't bad enough).

Have you spawned a shell on it? The filesystem is a royal mess, the likes of which I've never seen before. Could I run vi and groff and even SVR4 nroff on it? Yes, if I wanted to waste my time with it, I could.

What does your Android .profile look like?

I didn't touch .profile because I don't care for bash one bit, but it was there.

However, in this context, it's still UNIX. A hacked-up, ugly UNIX severely mutilated to run on mobile telephones and tablets, but conceptually UNIX nevertheless (honestly, I have never seen anything as hacked-up and mutilated as Android, and you can bet that in 30+ years of working with computers, one sees all kinds of things).


Depends on if his employer decides to start targeting OS/2 ATM's, THEOS desktops, non-IBM mainframes, non-POSIX RTOS's, or the lonely market for Amiga's. ;)


Even AmigaOS has web browsers (just surfed the web from Aweb under AmigaOS). And a shell, so stdout/stderr works too.


Hey, that's cheating if you're putting all the OS-specific functions in their own app that moves data to/from those. It's what high-security did for Ada, Java, and UNIX runtimes on separation kernels. Significant performance penalties in many desktop applications.


Desktop is dead!!! The '90's of the past century called and said they want the desktop back!

People don't want a clunky computer any more; except for computer people, I don't know anybody from the general population who has one. I'm offended that we're even wasting time discussing desktop anything!


People just happen to carry their desktops in their pockets and use this thing called apps on them.

Also, I don't see anything here in these APIs,

https://developer.android.com/guide/index.html

https://developer.apple.com/reference/

https://developer.mozilla.org/en/docs/Web/HTML/Element

that relates to these ones:

http://pubs.opengroup.org/onlinepubs/9699919799/


People just happen to carry their desktops in their pockets

You mean they carry their portable UNIX servers in their pockets with them. Since they all come with a web browser, there's your application's or your server's front end.

and use this thing called apps on them.

I have a few of those on my mobile UNIX server as well. Stupidest thing I've ever seen or used, "apps". What for, when they could have used a web browser to display their front ends, or could have run on a backend server and just sent the display to the web browser? Most of those "apps" I use won't function without an InterNet uplink anyway... pure idiocy.


Ironically, the web browser as application runtime doesn't have anything to do with UNIX; it can even be running on bare metal.

Which means the OS running on the server is equally irrelevant as long as it can serve pages, CSS and JavaScript.

Which ad absurdum makes UNIX APIs irrelevant for cloud computing.


It's only ad absurdum if you're completely unaware of the fact that UNIX (in this case not GNU/Linux, but illumos / SmartOS) is a high performing operating system with extensive facilities for preemption and traceability, which makes it ideal for web applications, and at scale, too. Haven't heard of SmartOS yet, have you, since you claim UNIX unfit for cloud applications?


An example from high-assurance, security field to back up your claim:

https://www.usenix.org/legacy/events/osdi10/tech/full_papers...


Interesting read, thanks.


The 90's desktop market was more interesting. Yet, you must have never met anyone doing applications that require fast CPU's and plenty of RAM. Or looked at the desktop sales figures that are well above zero.

Hell, I built one for a guy's gaming rig a little while back. That replaced his hand-me-down from a company designing & installing sprinkler systems. Why did he have that? They had just bought new desktops for all their CAD users. Lots of CAD users out there probably have one too. ;)


The 90's desktop market was more interesting. Yet, you must have never met anyone doing applications that require fast CPU's and plenty of RAM.

Plenty of RAM? Yes, but on supercomputers. My machines were sgi Origin 2000's and 3800's running a single compute intensive application doing finite element analysis and using 16 GB of RAM, across all the CPU's in the system. A single calculation would usually take a month.

On the desktop, you couldn't be more wrong: I was part of the cracking / demo scene, and we literally counted clock cycles in order to squeeze every last bit of performance out of our assembler code, me included.


I'm jealous that you got to play with SGI Origins. I wanted one of them or an Onyx2, but they cost too damn much. At this point, though, you're partly supporting my claim: certain workloads necessitate either a desktop, a server, or a bunch of them. These include artists, scientists, designers of various equipment, gamers, well-off laypersons wanting instant response time, privacy nuts needing cores for VM-based browsing, and so on. Not relegated only to "computer people" as you falsely claimed.

One can also look at the numbers. Last year, over 17 million PC's were sold in the US. Think the buyers were really all computer people? Even with a 3-year refresh cycle, at the low end that would be an estimate of around 50 million computer people in this country buying desktops over 3 years. Think they're really that big a demographic?


Well, I'd argue that all those people you listed are either professionals in diverse fields, or enthusiasts. If you take 17 million PC's sold, just in the United States, that's 17 / 300 * 100 = 5.66% of the population. And I was conservative in using 300 million as the total U.S. population, when I've read it's more like 321 million, so what does that tell you?

But if you look at the number of PC's sold year over year, the number is dwindling at the rate of roughly 15% - 18% per year. Look, for example, under the "Global Computer Sales" column, here:

http://www.statisticbrain.com/computer-sales-statistics/

The average layperson doesn't want a computer any more, and the sales reflect that. For their needs a tablet or a mobile telephone with a web browser is pretty much all they need, and the web can and does now deliver pretty much any kind of application they could ever need or want. And that's precisely where most of the sales of desktops were. Professionals using computer aided design and people like you and me are few and far between, in comparison.

On an sgi related note, I myself owned several Octanes and even an sgi Challenge R10000 (with a corresponding electricity bill). I must have torn and rebuilt that Challenge four or five times, just for fun. My primary workstation for years (which I fixed and put together myself) was an sgi Indigo2 R10000, with an hp ScanJet II SCSI scanner, a 21" SONY Trinitron, and a Plextor CD-RW SCSI drive, back in the day when CD-RW was "the thing". With 256 MB of RAM when most PC's had something like 16 or 32 MB, it was a sweet setup. Ah, IRIX 6.5, how much I miss thee...


Show me a unix system as nice as the Surface Book and I'll be interested.


Is that one of those failed Microsoft tablet thingies? Why would anyone care about a GUI in the 21st century, when everything runs either on stdout/stderr or on the web?

Anyway, the answer to your question is iPad Pro by Apple Computer. It runs an operating system called "iOS" which is a heavily customized FreeBSD on top of a custom CMU Mach kernel. And it's UNIX03 compliant. UNIX! It's everywhere!


> Anyway, the answer to your question is iPad Pro by Apple Computer. It runs an operating system called "iOS" which is a heavily customized FreeBSD on top of a custom CMU Mach kernel. And it's UNIX03 compliant. UNIX! It's everywhere, it didn't go away, and it won't die!

I'm aware of it; it's not good enough. Its UI is terrible when you need to work with multiple applications, it's a pain to customize anything, and it's even more of a pain to run your own programs.


How do you run Xcode, sign apps and upload them to the store on an iPad Pro?

What does your .profile on iOS look like?


I don't, what for? All I'd need is SSH to connect to my illumos based systems, and a web browser to use my applications. Compiling things on a lone desktop like back in the '90's? No, that's what I did when I was a kid and didn't know any better. I have infrastructure for that now. Cross compilers, too.


> who cares for some extra 5-10 seconds of the "configure" command

For me, it's closer to a minute. "configure" is good enough that it does the job, and it's hard to replace it. "configure" is bad enough that I loathe it with emotions that words cannot describe. Its design is terrible. It's slow. It's opaque and hard to understand. It doesn't understand recursion (module code? pshaw!)

automake is similarly terrible. I looked at it 20 years ago, and realized that you could do 110% of what automake does with a simple GNU Makefile. So... that's what I've done.
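
For the curious, a minimal sketch of that idiom (not the author's actual Makefile, just the usual GNU make pattern): built-in pattern rules plus compiler-generated dependency files cover most of what automake generates.

    # Recipe lines must start with a tab; paths and target names here are
    # illustrative.
    CFLAGS += -O2 -Wall -MMD -MP

    SRCS := $(wildcard src/*.c)
    OBJS := $(SRCS:.c=.o)

    prog: $(OBJS)
            $(CC) $(LDFLAGS) -o $@ $(OBJS)

    .PHONY: clean
    clean:
            rm -f prog $(OBJS) $(OBJS:.o=.d)

    # -MMD writes a .d file next to each .o; pulling them in means a header
    # edit rebuilds only the objects that include it.
    -include $(OBJS:.o=.d)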

I used to use libtool and libltdl in FreeRADIUS. They gradually became more pain than they were worth.

libtool is slow and disgusting. Pass "/foo/bar/libbaz.a", and it sometimes magically turns that to "-L/foo/bar -lbaz". Pass "-lbaz", and it sometimes magically turns it into linking against "/foo/bar/libbaz.a".

No, libtool. I know what I'm doing. It shouldn't mangle my build rules!

Couple that with the sheer idiocy of a tool to build C programs which is written in shell script. Really? You couldn't have "configure" assemble "libtool.c" from templates? It would only be 10x faster.

And libltdl was just retarded. Depressingly retarded.

I took the effort a few years ago to replace them both. I picked up jlibtool and fixed it. I dumped libltdl for just dlopen(). The build for 100K LoC and ~200 files takes about 1/4 the time, and most of that is running "configure". Subsequent partial builds are ~2s.
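
For reference, a hedged sketch of the plain-dlopen() path that replaces libltdl (POSIX API; the module path and entry-point name here are hypothetical):

    /* Load a plugin with the plain POSIX dynamic loader instead of libltdl.
     * Link with -ldl on glibc-based systems. */
    #include <dlfcn.h>
    #include <stdio.h>

    int load_module(const char *path)
    {
        void *handle = dlopen(path, RTLD_NOW | RTLD_LOCAL);
        if (!handle) {
            fprintf(stderr, "dlopen: %s\n", dlerror());
            return -1;
        }

        /* Look up the module's (hypothetical) entry point. */
        int (*module_init)(void) = (int (*)(void)) dlsym(handle, "module_init");
        if (!module_init) {
            fprintf(stderr, "dlsym: %s\n", dlerror());
            dlclose(handle);
            return -1;
        }
        return module_init();   /* handle stays open while the module is in use */
    }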

If I ever get enough time, I'll replace "configure", too. Many of its checks are simply unnecessary in 2016. Many of the rest can be templated with simple scripts and GNU makefile rules.

Once that's done, I expect the build to be ~15s start to finish.

The whole debacle around configure / libtool / libltdl shows that terrible software practices aren't new. The whole NPM / left-pad issue is just "configure" writ large.


Actually, the configure debacle doesn't have anything to do with terrible software practices. All of Sun, AT&T, HP, SGI, IBM, DEC, the BSD guys, the semi-embedded guys, and everyone else had the best architects they could get. They were (and are) brilliant and did brilliant things. Kamp is one of them, for example. Heck, you can complain about Microsoft and Apple, but you cannot say they're incompetent.

Unfortunately, there are two problems.

1. They were all operating under different requirements.

2. They were all absolutely convinced that they were the best in the business and that they were right.

As a direct result, those of us who got to deal with more than one of the resulting systems want to beat them all to death with a baseball bat with nails driven into the end.


> Actually, the configure debacle doesn't have anything to do with terrible software practices.

I don't mean that the reason to use configure is bad. There are many different systems, and being compatible with them all requires some kind of check / wrapper system.

I mean that the design of "autoconf" and the resulting "configure" script is terrible. Tens of thousands of lines of auto-generated shell scripts is (IMHO) objectively worse than a collection of simple tools.

See nginx for a different configure system. It has a few scripts like "look for library", and "look for header file". It then uses those scripts multiple times, with different input data.

In contrast, configure uses the design pattern of "cut & paste & modify". Over and over and over and over again. :(


Here's an autoconf-compatible and much, much shorter configure system that isn't expanded from macros and remembers that bash has functions. Look, you can actually maintain it!

http://git.libav.org/?p=libav.git;a=blob;f=configure;h=9f836...

http://git.libav.org/?p=libav.git;a=blob;f=Makefile;h=4e026b...

The new thing seems to be generating configure from CMake which requires you to get a Ph.D to figure out how to override $CC.


Is that a 5400 line shell script?

    echo "This configure script requires a POSIX-compatible shell"
    echo "such as bash or ksh."
    echo "THIS IS NOT A BUG IN LIBAV, DO NOT REPORT IT AS SUCH."
That's always a good sign.


To compare with php (nearest autoconf I've got on hand):

    # wc -l ac* configure.in configure
        3033 acinclude.m4
        9398 aclocal.m4
        1682 configure.in
      117433 configure
      131546 total
Most of libav's configure script is dependency trees for all the codec flags you can turn on/off, so it's quite compact without that. x264 reuses it at 1500 lines:

http://git.videolan.org/?p=x264.git;a=blob;f=configure;h=9d1...

Apparently the bash requirement isn't too bad since it works on Windows and Plan 9.


Mostly, he's lamenting the need for it. The fact that after 3 decades this thing that every modern OS needs isn't a standard.

It actually is kind of silly that you can't depend on this stuff being abstracted, but must individually test for it instead of asking a reference on a given system.


And yet when it comes to browser compatibility, it's encouraged, nigh necessary, to test for capabilities rather than check version strings and assume that the version string is saying something meaningful about the environment.

"Mozilla/5.0 (Windows NT 6.1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/41.0.2228.0 Safari/537.36", indeed.


Insane complexity in build systems just makes your life sad. Trust me, I know -- I write Java for a living. We have ivy, maven, ant, gradle, sbt, leiningen, and I'm sure a few more.


I write Scala for a living. Once you embrace the straitjacket of Maven you never have to worry about your build again. You write a simple declarative definition, where you just list your project's name/version/etc. and your dependencies. Then you build it. When you want to do something, e.g. releasing, you find the standard plugin for doing that, you go with it, and you deal with its defaults. When you decide you absolutely have to pass different compiler arguments depending on the phase of the moon, slap yourself until it passes. When you actually want to write a "reasonable" custom build step that makes sense, writing it as a plugin is not hard and makes it much easier to use good practices.


Hey, I write Java for a living and my build script is just a Python program that marshals a javac call.


I write scheme not for a living. My build script is a scheme file or a makefile, depending on the project.

The build complexity is one of the reasons I stay away from Java: If these people think they need XML for builds, what other needlessly complex horrors have they perpetrated? And that sort of thing.

Do you have your buildscript template on github?


The Maven POM syntax might look bloaty to modern post-XML eyes, but it's not actually complicated: a typical POM only specifies the name of the project and its dependencies. The rest is typically all done by convention. It's also purely declarative so ordering doesn't matter.

M4 and sh are very concise languages. Nonetheless autotools is orders of magnitude more complex than Maven. You really can't compare at all.

At any rate, if you want a more concise syntax there is gradle (but it's a bit slower as it's actually executing a real scripting language) and, perhaps a nice middle ground, a thing called Polyglot Maven which is the same build engine but with a variety of non-XML syntaxes. The YAML one is quite nice:

https://github.com/takari/polyglot-maven/blob/master/poms/po...

That way you get simple syntax and a simple model, but still with lots of features.


To this effect, I use leiningen (which is mostly a frontend atop Maven's dependency management) even when I'm building a Jython or a Java project. If there is something funky I need to do as a build/deploy step, I'd rather be writing it in Clojure than Java, and mostly I just want to paste [com.example/foo "1.5.0"] into my :dependencies.


Alright...


I'm beginning to think that build scripts should be written in the language they are building, or a more expressive language. So many seem to go in the other direction.


I disagree. The worst problems I see with build scripts are that they are trying to do too much, create branches/tags of themselves, be environmentally aware, etc.

Limited build tools are a good thing.


I really do agree that complexity is the enemy, but people keep shoving complexity into build scripts against better judgment. I've given up and would rather deal with a complex ruby script than a makefile of equal intrinsic complexity.


I've written build scripts in Rust. It takes flippin' forever (relative to shell or make). But at least it has proper error checking.

One of the guys working on Redox OS made a library called cake which provides macros for a Makefile-style script: https://github.com/ticki/cake


That looks nice. It combines strong, static analysis with a simple DSL and the underlying power of the language if necessary. Looks like The Right Thing approach in action.


I disagree. We need declarative build systems instead so building is auditable and doesn't require arbitrary code execution.


You actually NEED arbitrary code execution during builds. Allow me to explain...

Let's say we have a make format called dmake. It invokes $CC with the specified arguments for each file, and links them together into a binary/so/whatever, putting it into the build directory and cleaning artifacts. Okay.

Now say that you start a new project in rust. Well, crap, dmake doesn't work. You have to use rdmake, which is built by different people, and uses a more elegant syntax - which you don't know.

Then you write Haskell, and have to use hdmake - which of course is written as a haskell program, using a fancy monad you don't know, and python has to use pydmake, and ruby has to use rbdmake, and scheme has to use sdmake, and lisp has to use ldmake, and asm has to use 60 different dmakes, depending on which asm you're using.

Instead, we all use make. Make allows for arbitrary code to be executed, so no matter what programming environment you use, you can use a familiar build tool that everybody knows. Sure, java has Ant, Jelly, Gradle and god knows what else, and node has $NODE_BUILD_SYSTEM_OF_THE_WEEK, but even there, you can still use make.

That's the power of generic tools.


You haven't countered the parent's point at all. You could've just as easily said the common subset of SQL could be implemented extremely differently in SQL Server, Oracle, Postgres, etc. Therefore, declarative SQL has no advantages over imperative C APIs for database engines. Funny stuff.

Let's try it then. The declarative, build system has a formal spec with types, files, modules, ways of describing their connections, platform-specific definitions, and so on. Enough to cover whatever systems while also being decidable during analysis. There's also a defined ordering of operations on these things kind of like how Prolog has unification or old expert systems had RETE. This spec could even be implemented in a reference implementation in a high-level language & test suite. Then, each implementation you mention, from rdmake to hdmake, is coded and tested against that specification for functional equivalence. We now have a simple DSL for builds that checks them for many errors and automagically handles them on any platform. Might even include versioning with rollback in case anything breaks due to inevitable problems. A higher-assurance version of something like this:

https://nixos.org/nixos/about.html
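
To make the idea concrete, here's a tiny sketch in Python (everything in it is invented): the build is pure data with no code in it, and one vetted engine owns the recipes that turn declarations into commands, so the whole thing stays auditable.

  # tiny sketch of "declarative spec + generic engine": the spec is pure data,
  # the engine owns the recipes.  Names and fields here are invented.
  SPEC = {
      "name": "hello",
      "language": "c",            # the engine, not the spec, knows what "c" means
      "sources": ["hello.c"],
      "output": "hello",
  }

  RECIPES = {
      "c": lambda s: ["cc", "-o", s["output"]] + s["sources"],
  }

  def realize(spec):
      return RECIPES[spec["language"]](spec)   # no shell, no eval

  print(realize(SPEC))   # ['cc', '-o', 'hello', 'hello.c']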

Instead, we all use make. Make allows for arbitrary code and configurations to be executed, so no matter what configuration problems you have, we can all use a familiar build tool that everybody knows. That's the power of generic, unsafe tools following Worse is Better approach. Gives us great threads like this. :)


From the perspective of security, make is not great, but there's always a more complicated build, requiring either generic tooling, or very complex specific tooling. This is why the JS ecosystem is always re-inventing the wheel. If you design your build tool around one abstraction, there will always be something that doesn't fit. What will happen if we build a tool akin to the one I described is that it will grow feature upon feature, until it's a nightmarish mess that nobody completely understands.

>You could've just as easily said the common, subset of SQL could be implemented extremely different in SQL Server, Oracle, Postgres, etc. Therefore, declarative SQL has no advantages over imperative, C API's for database engines. Funny stuff.

No, that's not my point, my point is that a build tool that meets parent's requirements would necessarily be non-generic, and that such a tool would suffer as a result.

>Instead, we all use make. Make allows for arbitrary code and configurations to be executed, so no matter what configuration problems you have, we can all use a familiar build tool that everybody knows. That's the power of generic, unsafe tools following Worse is Better approach. Gives us great threads like this. :)

Worse is Better has nothing to do with this. Really. Make is very Worse is Better in its implementation, but the idea of generic vs. non-generic build systems, which is what we're discussing, is entirely orthogonal to Worse is Better. If you disagree, I'd recommend rereading Gabriel's paper ("Lisp: Good News, Bad News, How to Win Big", for the uninitiated). I'll never say that I'm 100% sure that I'm right, but I just reread it, and I'm pretty sure.


"No, that's not my point, my point is that a build tool that meets parent's requirements would necessarily be non-generic, and that such a tool would suffer as a result."

A build system is essentially supposed to take a list of things, check dependencies, do any platform-specific substitutions, build them in a certain order with specific tools, and output the result. Declarative languages handle more complicated things than that. Here are some examples:

https://cs.nyu.edu/~soule/DQE_pt1.pdf

I also already listed one (Nix) that handles a Linux distro. So, it's not theory so much as how much more remains to be solved/improved and if methods like in the link can cover it. What specific problems building applications do you think an imperative approach can handle that something like Nix or stuff in PDF can't?


...Nix actually uses SHELL for builds. Just like make. It's fully generic.

http://nixos.org/nix/manual/#sec-build-script


Didn't know that. Interesting. It looks like an execution detail. Something you could do with any imperative function but why not use what's there for this simple action. Nix also manages the executions of those to integrate it with their overall approach. Makes practical sense.

"It's fully generic."

It might help if you define what you mean by "generic." You keep using that word. I believe declarative models handle... generic... builds, given you can describe just about any of them with a suitable language. I think imperative models also handle them. To me, it's irrelevant: the issue is that declarative has benefits & can work to replace existing build systems.

So, what's your definition of generic here? Why do declarative models not have it in this domain? And what else do declarative models w/ imperative plugins/IO-functions not have for building apps that a full, imperative model (incl make) does better? Get to specific objections so I can decide whether to drop the declarative model for build systems or find answers/improvements to the stated deficiencies.


That wasn't what the original post by ashitlerferad was calling for. I have no problem with generic declarative-model build systems that can be used for anything. However, the original call was for build systems which don't require arbitrary code execution. A generic build system must deal with many different tools and compilers, and thus REQUIRES arbitrary code execution: somewhere, there's got to be a piece of code telling the system how to build each file. And if you don't build that into the build system proper, you wind up either integrating everything into core, or adding an unwieldy plugin architecture and winding up like grunt/gulp and all the other node build systems. Or you could just allow for arbitrary code execution, and dodge the problem altogether. This is possible in a declarative system, but it's a lot harder to do, and it means at least part of your system is no longer declarative.


It seems some kind of arbitrary execution is necessary. I decided to come back to the problem out of curiosity to see if I could push that toward declarative or logic to gain its benefits. This isn't another argument so to speak so much as a brainstorm pushing the envelope here. Could speculate all day, but I came up with a cheat: just check whether anyone has already replaced make or other imperative/arbitrary pieces with Prolog/HOL equivalents. The vast majority of the effort outside I/O calls & the runtime itself would be declarative. Found these:

http://www.cs.vu.nl//~kielmann/papers/THD-SP-1991-04.pdf

https://github.com/cmungall/plmake

Add to that Myreen et al's work extracting provers, machine code and hardware from HOL specs + the FLINT team doing formal verification of OS-stuff (incl interrupts & I/O) + seL4/Verisoft doing kernels/OS's, and you find the declarative, logic part could go from a Nix-style tool down to a logic-style make down to a reactive kernel, drivers, machine code, and the CPU itself. The only thing doing arbitrary execution, as opposed to arbitrary specs/logic, in such a model is whatever runs the first tool extracting the CPU handed off to the fab (ignoring non-digital components or the PCB). Everything else is done in logic with checks done automatically, configs/actions/code generated deterministically from declarative input, and final values extracted to checked data/code/transistors.

How's that? Am I getting closer to replacing arbitrary makes? ;)


...I'm not sure I totally understand. Here's how I'd solve the problem:

Each filetype is accepted by a program. That program is what we'll want to use to compile or otherwise munge that file. So, in a file somewhere in the build, we put:

  *.c:$CC %f %a:-Wall
  *.o:$CC %f %a:-Wall
And so on. The first field is a glob to match on filetype, %f is the filename, %a is the args, and the third field is default args, added to every call.

The actual DMakefile looks like this:

  foo:foo.c:-o foo
  bar.o:bar.c:-c
  baz.o:baz.c:-c
  quux:bar.o baz.o:-o quux
  all:foo quux
Target all is run if no target is specified. The first field is the target name. The second field is list of files/targets of the same type, to be provided to compiler on run. It is assumed the target and its resultant file have the same name. The last field is a list of additional args to pass to the compiler.

This is something I came up with on the spot, and there are certainly holes in it, but something like that could declarativise the build process. However, this doesn't cover things like cleaning the build environment, although that could be achieved by removing the resultant files of all targets, which could be determined automatically...
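
For what it's worth, here's a rough Python sketch of an interpreter for that format, just to show it's mechanically straightforward; the rules-file name and every detail of the syntax are, of course, invented on the spot as well.

  # rough sketch of an interpreter for the DMakefile format above; the
  # syntax, file names and semantics are all invented for illustration.
  import fnmatch, os, subprocess

  def load_rules(path):
      # "*.c:$CC %f %a:-Wall" -> (glob, command template, default args)
      return [tuple(line.strip().split(":", 2))
              for line in open(path) if line.strip()]

  def load_targets(path):
      # "foo:foo.c:-o foo" -> {target: (inputs, extra args)}
      targets = {}
      for line in open(path):
          if line.strip():
              parts = line.strip().split(":")
              targets[parts[0]] = (parts[1].split(),
                                   parts[2] if len(parts) > 2 else "")
      return targets

  def build(name, targets, rules, done=None):
      done = set() if done is None else done
      if name in done or name not in targets:
          return                        # plain source file, or already built
      inputs, extra = targets[name]
      for dep in inputs:
          build(dep, targets, rules, done)
      for glob, template, defaults in rules:
          if any(fnmatch.fnmatch(i, glob) for i in inputs):
              cmd = template.replace("$CC", os.environ.get("CC", "cc"))
              cmd = cmd.replace("%f", " ".join(inputs))
              cmd = cmd.replace("%a", (defaults + " " + extra).strip())
              subprocess.run(cmd, shell=True, check=True)
              break
      done.add(name)

  # e.g. build("all", load_targets("DMakefile"), load_rules("DRules"))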


There you go! Nice thought experiment. Looks straightforward. Also, individual pieces, as far as parsing goes, could be auto-generated.

As far as what I was doing, I was just showing they'd done logical, correct-by-construction, generated code for everything in the stack up to the OS, plus someone had a Prolog make. That meant just about the whole thing could be done declaratively and/or c-by-c, with the result extracted with basically no handwritten or arbitrary code. That's the theory based on worked examples. A clean integration obviously doesn't exist. The Prolog make looked relatively easy, though. The Mercury language would make it even easier/safer.


Well, thanks for clarifying. That makes a lot more sense.


Something like plmake then? (plmake is a build system written in prolog)

https://github.com/cmungall/plmake


plmake allows arbitrary code execution during the build.


No we don't! GNU Make is exactly what we need; that's pah-lenty declarative in the right hands.

All you have to do now is make sure your hands are the right hands.

Like buddha said: right mind.


GNU Make allows arbitrary code execution during the build.


What's the problem with that, exactly?


You bet it does, and that's exactly what's needed! What's the problem with that, again?


Yep. If you're running it for a build, you're running unprivileged - chrooted, jailed, or zoned if you want to be really safe - and if you're running it for install, then you trust the software in any case. And because makefiles are fairly transparent, you can check what the install is doing beforehand.


I would much rather deal with an xml file than a makefile.


XML files cannot be easily processed with standard UNIX tools like grep, sed, and AWK. XML requires specialized libraries and tools to process correctly, making it an extremely poor choice for... well, just about anything. It's a markup format for text, not a programming language.

Building software is a programmatic process. No XML please! We're decidedly not on Windows, and since I have the misfortune of fitting such square pegs into round holes, please don't use XML for applications which must run on UNIX. It's a nightmare. It's horrible. No!!!


There is no particular relationship between Windows and XML. And just to play devil's advocate, is the lack of XML support in grep, sed, and awk a problem with the data format or with the tools? Why can't we have new standard tools that operate on hierarchical formats such as XML / JSON / YAML? Current standard Unix tools have plenty of flaws and as forward thinking developers we shouldn't be afraid to replace them with something better.


I have noticed a particular relationship between Windows, Java, and XML: all Java programmers nowadays seem to come from Windows (and then I end up with ^M CR characters in all the text files, even shell scripts!), use Java, and write configuration in XML.

YAML doesn't need any special tools - it's ASCII and can easily be processed with AWK, for example.

I don't know about you, but the last thing I want is to have to have a whole new set of specialized tools, just so somebody could masturbate in XML and JSON.

XML is a markup language. That means it's for documents, possibly for documents with pictures, perhaps even with audio. It's not and never was meant for storing configuration or data inside of it. XML is designed to be used in tandem with XSLT, and XSLT's purpose is to transform the source XML document into (multiple) target(s): ASCII, ISO 9660, audio, image, PDF, HTML, whatever one writes as the transformation rules in the XSLT file. XML was never meant to be used standalone.

If you really want to put the configuration into an XML file, fine, but then write an XSLT stylesheet which generates a plain ASCII .cf or .conf file, so its processing and parsing can be simple afterwards. XML goes against the core UNIX tenet: keep it simple.

Do you like complex things? I do not, and life is too short.


If you must have structured data, use a Lisp program. Congratulations on using a format that was designed to be executable. And if it's a build tool, you better believe it's executable. I suspect that Annatar is a Murray Hill purist (I don't know for sure), so he may disagree with me.

Of course, like any real programming language, it's hard to process with regex, but then again, I don't want to process makefiles with regex. And you might have some luck coaxing AWK or the SNOBOL family to parse it, and it would be far easier than doing the same with XML.

>please don't use XML for applications which must run on UNIX. It's a nightmare. It's horrible. No!!!

I'd disagree with you there. DocBook, HTML, and friends, are all good applications of XML (or near XML), doing what XML was designed for: Document Markup.

Seriously people, when you're writing a program in a language that has "Markup Language" in the name, does that not ring any alarm bells?


Are you seriously suggesting that you can awk a makefile and get anything useful out?


Why would I need to AWK a Makefile, when make will take macro definitions as arguments on the command line?


You were the one complaining that you couldn't awk an xml file in the context of "xml versus makefile".


No, I wrote that XML for use in applications is bad, as it cannot be easily processed with standard UNIX tools. And it's most definitely bad for building software, as it is limited by what the programmer of the build software thought should be supported. A really good example of that is ANT/NANT. make, on the other hand, doesn't limit one to what the author(s) thought should be supported. Need to run programs in order to get from A to B? No problem, put whatever you need in, and have it build you the desired target.


Yes. Don't use XML as an exchange format. Use JSON or DSV instead.

Yes, I said JSON. JSON is very easy to parse, and you can grab unique key/values, which are most of them, with this regex:

  /(,|\{)\w*\"<key>\"\w*:\w*(.*?)\w*(,|\})/


PCRE. So now you have to use Perl? And what happens when your single JSON record spans multiple lines, and has recursive structures?


First off, I simply used some of PCRE for the syntax, as it's what I'm familiar with. \w could be easily replaced, and non-greedy matching is a relatively common extension.

As for when your record spans multiple lines, with recursive structures, the previous regex is for extracting simple atomic data from a JSON file, which is usually what you want in these cases anyway. If not, the json(1) utility can, I believe, extract arbitrary fields, and it composes well with awk, grep, etc.


Yes, the json utility can process a JSON file into key:value pairs. Now ask yourself: if you end up with key:value pairs on stdout, why couldn't that have been the format in the first place? Why artificially impose not one, but two layers of complications (one as JSON, the other as the specialized json tool to process JSON)? Why not just keep it simple and go directly to a flat file ASCII format to begin with?


Well, it means not rolling your own parser. But that's not hard. The real advantage is when you actually ARE dealing with structured data, with nested objects. Most standard UNIX formats are bad at this, and sometimes you find it necessary.

Also, because JSON is so common, you get really good tooling for handling structured data by default, instead of kinda-okay tooling for 50 different slightly-incompatible formats. 10 operations on 10 data structures vs 100 operations on 1, and all that.

But for unstructured data, or for one-level key/value data, JSON is overkill. You can use DSV, like this:

  key1:value1
  key2:value2
  and so:on
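
A couple of lines of Python show the trade-off (the data below is made up): the stdlib json module handles arbitrary nesting, while flat key:value DSV needs nothing but a split.

  # stdlib json handles nesting; flat DSV needs only a split.  Data is made up.
  import json

  nested = json.loads('{"build": {"cc": "clang", "flags": ["-Wall", "-O2"]}}')
  print(nested["build"]["flags"])           # ['-Wall', '-O2']

  dsv = "key1:value1\nkey2:value2\nand so:on"
  flat = dict(line.split(":", 1) for line in dsv.splitlines())
  print(flat["and so"])                     # on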


But I'd rather deal with a porcupine dipped in curare than XML.


Yeah, I mean, you CAN at least parse a porcupine dipped in curare with Regexes.


> writing a configure module each time

No, he suggested writing the code to be cross-platform so that configuration at compile time is unnecessary.


>who cares for some extra 5-10 seconds of the "configure" command

Building FreeBSD 7 ports on my Athlon felt like "forever and 2 more days" back then. If it is not possible to remove autoconf/configure with all the obsolete options, can we at least PLEASE stop doing the same thing again and again 220 times for each small package in an enormous dependency list? Caching, anyone?


I'm not sure why you wouldn't pack the config chains like a fractal of dependencies and only probe (for instance) for Fortran if it's required.


Didn't libtool 2.0's autoconf macros do away with the need to check for CXX and F77 on totally irrelevant projects?

