
Architected software would be profitable in fewer scenarios if we (software developers) didn't maintain such a high barrier to entry. We design our tools for ourselves and other professional programmers so that we don't have to compete with non-professionals with a better sense of the requirements. We force people to come to us, to defer to our understanding of how much architecture is necessary. We make our codebases just easy enough for a pro developer and no easier, refactoring and simplifying only when it becomes painful for a pro developer, and no sooner. That way they have to keep us as gatekeepers to all code.

It's as if we only made building tools for cathedrals, and no tools for thatched stick homes, and created a culture around always building a cathedral, so that no one really knows how to build a thatched home. Essentially, we create a market irregularity through cultural expectations about how serious software has to be and who is going to be writing it.

(The complaint people always make when I say this is that making software isn't easy, it's hard, and novices couldn't possibly build useful software. I would half agree: some software problems are hard, and require a grizzled developer and some hard planning. But much of software involves no difficult computer science problems, and is more about understanding requirements well enough to be able to assign them to the right basic programming primitive, or a handful of common libraries. This is the kind of code that we use cultural friction to keep inside Engineering, and build cathedral-style, even though it could be done in thatched-home style by novices, if we structured our codebases for that.)




I almost spit out my drink. High barrier to entry? The barrier is lower than it ever has been. Any nut-job with a few weeks of training can make a shitty website with PHP or Node.js and have it instantly accessible to most of the English-speaking world.

Most of the tools I see for software development seem to be organized around the needs of the bazaar (or thatched huts) not the needs of the cathedral. A million toy languages which might solve your problem well, but don't scale to a million users. Websites like GitLab and GitHub so you can share last week's 1 kloc project with collaborators. Libraries that do that one weird thing you need for your project, and nothing else.

By comparison, the cathedral builders (Google, Facebook, Apple, Microsoft, etc.) seem to be building a lot of their own tools. This includes programming languages, frameworks, build systems, version control, operating systems, and so many other things. They build their own stuff because the tools of the bazaar don't work quite well enough for cathedrals.


> Most of the tools I see for software development seem to be organized around the needs of the bazaar (or thatched huts) not the needs of the cathedral. A million toy languages which might solve your problem well, but don't scale to a million users

Not the OP, but when I see comments like this one I realize that sometimes HN is a very strong echo chamber. The world doesn't need more than 100 (give or take; maybe 1,000, maybe 10,000) apps/websites which need to scale to "millions of users", a situation which doesn't keep that many workers occupied (Google & FB and the like employ far fewer people than the industrial giants of the early 20th century).

But the world does need millions of apps for the 10-100-1000 users, built, if need be, using the "toy languages" you decry. If we make it easy enough for people to build these apps, the world would be in a much better place (we'd have higher productivity).

I'll give you my example from the company I used to work for in the early 2000s (when "The Cathedral and the Bazaar" was written). I was doing some office work, along with my 20 or so colleagues, which involved having to check that two separate folders on our computers had the same files. This took each of us about an hour, so there were 20 man-hours spent each working day on this mundane task. Luckily, I was an (already close-to-dropout) CS student, and I had heard about Python and about how easy it was to do stuff with it, and lo and behold, it really was. Just:

    import os

    l1 = os.listdir('first folder')
    l2 = os.listdir('second folder')

    # which was probably quadratic, but it didn't matter
    a_call_to_a_custom_function_which_was_comparing_l1_to_l2()

then I used py2exe to package it all into an .exe file which could also be run on my colleagues' computers (along with some inputs and the like), and that was about it.
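(The comparison function itself is elided above; a hypothetical reconstruction, assuming all it needed was to report names present in one folder but not the other, could be as simple as two set differences:)

    # hypothetical reconstruction of the comparison function,
    # not the original code
    def compare_listings(l1, l2):
        for name in sorted(set(l1) - set(l2)):
            print('only in first folder:', name)
        for name in sorted(set(l2) - set(l1)):
            print('only in second folder:', name)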

A task that used to take an hour each day now required only running a script. I fail to see how this program would have required a grown-up language scaled to "millions of users", even though it proved to be pretty useful. And there are countless examples like my anecdotal one above; you just need to go into any institution or company office, look at how people work on their computers, and realize that the world needs millions of small programs like mine that would substantially increase productivity. The problem is, like the OP said, that we "programmers" like to keep the playing field to ourselves.


I think there's a huge unmet need there that's actually being blocked by big businesses for business (not engineering) reasons: the ability of end users to quickly and simply tune their devices to their needs and automate stuff.

The very concept flies in the face of today's accepted UX "best practices", i.e. making software trivial, engaging, and masterable in 5 seconds. That comes about naturally by removing anything there is to master.

The task you performed with Python should be easily scriptable at the OS level. It shouldn't require one to know complex programming languages and toolkits. Similarly, I think that a tool like Tasker[0], maybe with a somewhat better interface, should be available by default in vanilla Android. We're vastly underutilizing the power of computing devices by restricting end users' ability to work with them.

[0] - https://play.google.com/store/apps/details?id=net.dinglisch....


> And there are countless examples like my anecdotal one above; you just need to go into any institution or company office, look at how people work on their computers, and realize that the world needs millions of small programs like mine that would substantially increase productivity. The problem is, like the OP said, that we "programmers" like to keep the playing field to ourselves.

You're going to need a better example than that. This program already exists; it's called diff(1), md5sum(1), or cmp(1). You could wrap its use in a shell script to make it even easier, or the companies/people could spend some money/time to learn how to use the tools already at their disposal. In a lot of cases, lack of training is the issue that should be addressed. I've said before: "Those who don't learn /bin are doomed to reinvent it, poorly."
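(For what it's worth, Python's standard library also ships this particular wheel; a minimal sketch of the same two-folder check using filecmp, with the folder names assumed from the comment above:)

    # compare two directories with the standard library's filecmp
    import filecmp

    cmp = filecmp.dircmp('first folder', 'second folder')
    print('only in first:', cmp.left_only)
    print('only in second:', cmp.right_only)
    print('same name, different contents:', cmp.diff_files)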

This isn't to say you're wrong about a lot more little, customized programs that could be written. The focus needs to be on the specific processes of a particular company, because every company's processes are unique (maybe not on all axes, but on more than one). Enterprise software often errs toward one of two extremes: either it's overly customizable and doesn't fit anyone's needs completely, or it's highly specific and tries to force its own way of working. And this is done out of a desire, by the software vendors, to capture market share. Customized software is highly expensive, and people expect something tangible for that purchase. What they should be concentrating on (at least from an efficiency standpoint) is empowering their own people to automate the processes they perform: after all, they are the experts in those processes.


In the business software world, the barrier to entry for end-users seems a lot higher than it used to be. Learning PHP, standing up a database, and getting it all hosted is a lot more difficult than taking an application suite like MS Access, creating some tables, and then putting forms up on top of them.

Microsoft created a tool a few years ago called LightSwitch that allowed end-users to throw together CRUD apps quickly, and it seems to have been met with deafening silence. I wonder if managers and CIOs in BigCorps would tolerate their end-users throwing together little apps that solved their problems in today's equivalent of VB6 or MS Access (the ultimate agile experience, since users are solving their own problems). Experience suggests not, and although those apps could have become unmaintainable, there seems to be little effort by vendors to address that market and to provide ease of use along with better maintainability and scalability.


In the business world the barrier to entry is often imposed (for many good and bad reasons) by the IT/Ops/Security teams.

Going to the PHP example, you could pick one of a number of deploy and hosting providers and have your code running and world visible in minutes for less than a Starbucks coffee a week (specific example Laravel Forge + Digital Ocean).

The problem is that even mediocre software developers with a couple years of experience can miss critical things in any language with any framework that can leave them incredibly vulnerable to attack.

For homegrown internal systems, the barrier to entry isn't the code, it's putting it somewhere people can access. In ye olde days you could slap together some VB6, throw it in an Excel template, and have a workable product - but have you ever inherited something like that? I have, multiple times. It's AWFUL - but I've also made a lot of money making it not awful.

As an engineer, my rapid prototype basically means I eschew some things like a cache layer or performance optimization for just getting the concept out- but at an organization with no real devs, I can see the value in someone who can hack together anything with whatever they have to prove the idea, then calling in the mercenaries like myself to make the concept a real thing. The problem (and expense) usually lies in the fact that they wait until the concept is completely untenable in its current state and everyone is in a panic.


I agree with you largely. I've inherited plenty of unmaintainable code, including a few user-created abominations. But that doesn't necessarily mean that the idea of end-users writing applications is bad per se, rather, that more effort should go into making it harder for them to shoot themselves in the foot.


I think yes and no. For example, for public web application development the barrier is already low enough that you can put your entire company at risk pretty easily. I think there is little reverence for what it actually means to craft a proper web-based application, and that it's not even all about the code; and then there's the never-ending maintenance and administration of the server(s).

Now, if you're a spreadsheet jockey and you just need to gather and display your data in a non-trivial way, there are quite a number of things already out there. Business Objects (or whatever it's called now) and Tableau have basically built large companies on this idea, and there are open source options like Jasper Reports.

I think the days of being able to slap some VB together and write a desktop application are just about completely dead in most situations, which means you really do need a vast breadth of knowledge that a weekend warrior developer didn't need to have a number of years ago.


> Microsoft created a tool a few years ago called LightSwitch that allowed end-users to throw together CRUD apps quickly,

LightSwitch relied on Silverlight and Visual Studio, which made it useless for almost everyone.

The problem with bazaar culture is its obsession with tools and systems, and its lack of interest in users. When you get a product that inverts that - like WordPress - it's often incredibly successful, in spite of its many technical shortcomings.

The hierarchy of value in bazaar-land is:

1. New tool/framework/language/OS (that looks good on my CV)

2. Elegant, powerful product for customers

3. Fully productised, reliable, scalable, and easy-to-maintain combination of 1 and 2.

2 and 3 are more or less on equal levels. 1 is far, far ahead.

Because the culture is so tool-obsessed, a whole lot of makework and work-around fixing is needed just to get things to build, never mind work well for customers.

Basically there are dumb tools, dumb products, and occasionally elegant commercial products fall out of the combination - but usually only when they're designed by someone who cares about the user experience.

Hacking culture massively undervalues the user experience, and massively overvalues tinkering and tool-making as ends in themselves.

There's a basic disconnect between the talent needed to write code that works, and the talent needed to design a user experience that's powerful but elegant - whether the user is a non-technical user, or another developer.

The cathedral/bazaar metaphor is utterly unhelpful here, because neither really captures the true dynamic.


manyxcxi summed it up nicely. I want to highlight part of what (s)he wrote.

I've watched this play out for 25 years with dBase, Paradox, Access, and countless other tools intended to empower end-users. Typically only one person in a User Area (UA) has the gumption to want to develop an application. It's wildly successful at first. As time goes on, the person develops the app against new requirements, as is true with any app. At some point, the complexity exceeds the user's skill and time. Often it's when they want the app to support multiple concurrent users.

I saw that one play out around 1995 with an app built on Access 2.0. The department had a copy installed on each of 20 desktops. The manager came to realize it needed to be a shared app. The power user didn't know how. My colleague spent the better part of a year doing it.

Whatever the reason, IT gets called in. Then we have to salvage a good-for-an-amateur app. Usually the app has become critical to that department so the developer resource has to be pulled from other priorities to salvage the situation.

The problem isn't the lack of tools or CIO's protecting their turf. It's IT being left with messes when a power user gets into trouble. Whether it's Oracle Glue, Access, Gupta SqlWindows, Crystal Reports, or Frontpage, the scenario consistently plays out the same way.


I don't see a problem here. Basically, the amateur built the MVP and validated the use case. And when it was shown that the software actually served a need (maybe one people couldn't even articulate before, but when they saw the app they knew "that's fine, I just need this feature too") the app got used more. At some point the app will have to be replaced. Software ages and rots. My software, your software, everyone's software.

So, now we are at the point where the app is breaking down under its own weight. What do we have now?

- Clear specification: The users already know what they want from the app, something very rare in our business

- Proven value: The app is not something someone designed by looking at people from the outside and saying "I think that can be done better ..." but something which stems from their own daily needs and pains.

- Experience with likely extension points: From the history of the app and where new features had to be bolted on, you can already see where new feature requests will likely come in, so a new design can accommodate that

And last but not least: a working app, so you have less pressure to finish something, and can instead iterate on your new version until it really is better than the current one, without anyone pestering you with "when is it finished? when is it finished? We need that yesterday. When is it finished?!"


...plus a long, long list of new requirements such as, "it has to be blue" and "it must send email, which must be received, but only on Thursday when the stars are right."

And it must work exactly like the existing semi-manual system, including the ability to make random edits on legal records.

I've done these a few times before, and usually pulled it off, but there are solid reasons why they say, "don't rewrite software".

In particular, the "clear specification" usually has to be thrown out immediately and previous extensions are no guide to extensions for a new system.

And no one wants to do a serious job of it until the absolute last possible moment, so "when is it finished?" is the most important question.


If the organization is set up such that empowered super-users develop apps to the extent of their knowledge and then have a scheduled handoff to a developer in IT, what you're describing can work quite well. I haven't seen it work that way in any organization. Usually a department decides to let their super-user develop something without informing IT, or they inform us in the vein of "We're doing this one on our own because we're tired of waiting for project approval."

The Access example, from my previous comment, was the “we’re tired of waiting” vein. The app was a critical part of their work day: they used it while on the phone with customers. We had to get involved when the app had become unusable. The developer had to be drawn from another project to “throw it on a server” so it could be shared. Unfortunately, Access 2.0 had a primitive locking scheme that prevented it from being shared between 20 or so people. To compound the lunacy, they fought recommendations, like migrating to a relational database, every step. We had a developer unavailable for the better part of a year while she had to make the desktop app into a department-level app. She had to make the changes while the app was in active use. This example is not one of a partnership for a planned MVP handoff to IT. It was, probably unintentionally, a way to jump the queue to have their project done.

I’m all for a partnership like you described. But, it has to be a partnership with the parties involved agreeing on some kind of a schedule so resources can be available without hurting other projects/UA’s.


Maybe this is an argument for the inherent complexity of the solution being (at least) an order of magnitude more than the tools themselves?

Honestly, I'm very unimpressed with how well tools these days solve actually useful problems, BECAUSE they're so dependent on their assumptions about the simplicity of the problem space.

I don't think we're disagreeing, necessarily. Just speculating on how to put a conclusion on the end of your thought.


> High barrier to entry? The barrier is lower than it ever has been.

I agree with you, but everything is relative. It's expensive to produce a custom microprocessor, but it's cheaper than it's ever been.

> Any nut-job with a few weeks of training can make a shitty website with PHP or Node.js and have it instantly accessible to most of the English-speaking world.

The barrier to entry can be much lower than that. Someone without any programming experience could fork and deploy a Node service in 60 seconds if the tools were designed for that. I think you and I are just putting our parameters for "low" and "high" in different places. You are comparing Google (cathedral) to entry-level programmers (bazaar). I am comparing a random engineer in your company (cathedral) to one of your customer support staff who is requesting a copy change (bazaar).

Two totally separate conversations.


> It's expensive to produce a custom microprocessor, but it's cheaper than it's ever been.

My impression is that it's more expensive these days, which is why we don't see as many startups like MOS or Acorn, and see instead partnerships between larger companies. It also seems less likely for anyone producing an ASIC to get funded in the first place these days. I couldn't find good data to settle the cost issue, though.

> I am comparing a random engineer in your company (cathedral) to one of your customer support staff who is requesting a copy change (bazaar).

I don't understand this argument. I'm not sure what "copy change" means in context, and I don't know how customer support relates to the discussion.

I guess the main point I was trying to make was that the tooling for bazaar-style development is at your fingertips from the moment you sit down at a computer, but the cathedral is harder to make and the publicly available tools aren't as good.


Customer support are the people who know which words in the software should change to confuse customers less. When I say "copy change" I mean changing some words in the software. The barrier to entry I'm talking about is the one preventing that support person from making that change, instead of having to ask their boss to ask one of the engineering bosses to ask one of the engineers to do it.


Okay, but if you lift that barrier there is still a major fundamental problem: the people in customer support don't know how to code. The few people in customer support I've known who knew how to code changed jobs in fairly short order.

The fact is, even in the bazaar model where the barrier is low, when does customer support make code changes? I'm talking here about instances where customer support for open-source projects exists.


> We design our tools for ourselves and other professional programmers so that we don't have to compete with non-professionals with a better sense of the requirements.

In all my decades of programming, I have never met a non-professional with a good, let alone better, sense of requirements. A layman does not think in terms of details; they think in terms of abstractions, often in terms of castles in the sky. The problem is that computers are the exact opposite of abstractions and castles in the sky: exact, unforgiving, and dumb.

In fact, in all my decades of programming and working with computers, in my journeys across two continents, the number of professionals with a good sense of requirements I have met can be counted on the fingers of one hand. If that is not disheartening, I do not know what is. It's emotionally and psychologically devastating to me personally. It's extremely depressing to even think about it. What does it say about our profession?

As for writing tools for ourselves, learn UNIX, and then you'll learn of the UNIX programming model:

write programs which work with other programs; write programs with the notion that the output of your program could very well become another program's input. Write programs which accept ASCII input from other programs, for that is a universal interface. Be liberal in what you accept, and conservative in what you send.
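(As a minimal sketch of that model, in Python for continuity with the example upthread: a hypothetical filter that reads text from other programs on stdin and writes text for other programs on stdout:)

    # a hypothetical filter: reads lines on stdin, writes lines on stdout,
    # so it can sit anywhere in a pipeline
    import sys

    for line in sys.stdin:
        word = line.strip()          # be liberal in what you accept
        if word:
            print(word.lower())      # be conservative in what you send

It can then be composed with sort(1), uniq(1), grep(1), and friends without knowing anything about them.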


I mostly agree, though:

> As for writing tools for ourselves, learn UNIX, and then you'll learn of the UNIX programming model

And then learn some history, and understand how UNIX actually was a huge step backwards for computing and how we utterly fucked up the industry. Modularization is fine, and programs that work with other programs are great (for many definitions of "programs", not just "UNIX process"). However, unstructured text communication is a waste of resources and a cesspool of bugs, and we knew better in the past. We're regaining some modicum of sanity with the lightweight structured text formats of today, but it's sad we had to take a decades-long detour to rediscover that.


If you're referring to structured records, I saw the mainframe, I used the mainframe, and I was unimpressed.

As for unstructured text communication, say what?!? Every good UNIX engineer knows: build in a -m switch for versioned machine readable output, and if possible, make that output a stable interface. That's clear, at least to me. That isn't clear to you?
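(A minimal sketch of that convention, as a hypothetical tool rather than any specific utility: human-readable output by default, versioned machine-readable output behind -m:)

    # hypothetical tool: human-readable by default,
    # versioned machine-readable output behind -m
    import argparse

    parser = argparse.ArgumentParser()
    parser.add_argument('-m', action='store_true',
                        help='versioned, machine-readable output')
    args = parser.parse_args()

    records = [('alpha', 3), ('beta', 7)]
    if args.m:
        print('format-version\t1')   # stable, versioned interface
        for name, count in records:
            print('%s\t%d' % (name, count))
    else:
        for name, count in records:
            print('%s: %d items' % (name, count))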

And I hope by structured text, you don't mean garbage like JSON, one of the most inconsistent and idiotic formats I have ever seen?

Hopefully you also don't mean XML, which is terrible to parse with standard UNIX tools like grep, sed, and AWK. More complications for negligible gains.


>Hopefully you also don't mean XML, which is terrible to parse with standard UNIX tools like grep, sed, and AWK. More complications for negligible gains.

I'd prefer a type system so I can use these tools like a library. Most of them only work on piped data or files.

A recent example is that I needed to diff files. There are existing programs and I didn't want to reinvent the wheel, I just needed that particular wheel to build something else.

To use the existing programs I had to write to a file, which is too slow for my use case. It would be much easier if I could hand these tools a pointer to my in memory data structures and get the diff back in another structure.

This is one reason why we often see libraries that replicate /bin. PowerShell did a good job of solving this (but was too flawed in other ways).
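(For instance, in Python, difflib already exposes diffing as a library call over in-memory sequences, no temp files involved; a minimal sketch:)

    # diff two in-memory lists of lines, no files involved
    import difflib

    old = ['alpha\n', 'beta\n', 'gamma\n']
    new = ['alpha\n', 'beta!\n', 'gamma\n', 'delta\n']

    for line in difflib.unified_diff(old, new, fromfile='old', tofile='new'):
        print(line, end='')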


Textual interfaces enforce decoupling. In a lispy system, with richer interfaces, you can couple your apps and functions as tightly as you want. In Unix, the textual interchange limits you.

However, if you have more complex data to send, text may be problematic. And if you're going to send structured data via text, you need a standard, easily parsable format so that people can easily parse your data without having to roll their own, incredibly buggy, parser. JSON and DSV are both easy to parse, and so those are the formats people use, like it or not. And no, it's not inconsistent. It wouldn't be so easy to parse if it was.

Also, I have never seen a tool with -m. Maybe it's because I'm running Linux.


"If you're referring to structured records, I saw the mainframe, I used the mainframe, and I was unimpressed."

You saw a mainframe. I saw a number that were quite different from each other. The parent said "a step back," though, not mainframes or a specific mainframe. There were many architectures that came before or after UNIX with better attributes as I list here:

https://news.ycombinator.com/item?id=10957020

If we're talking minimal hardware, let's look at two other approaches. One was Wirth's. They do an idealized assembly language to smooth over hardware or portability issues. It's very fast due to being close to bare-metal. Simple so amateurs can implement it. They design a safer, system language that's consistent, easy to compile, type-checks interfaces, can insert eg bounds-checks, and compiles to fast code. They write whole system in that. Various functions are modules that directly call other modules. High-level language, rapid compilation, and low debugging means that two people crank out whole system & tooling in about 2 years. Undergrads repeatedly extend or improve it, including ISA ports, in 6mo-2yr per person. A2 Bluebottle runs insanely fast on my 8-year-old hardware despite little optimization and OS running in a garbage-collected language. Brinch Hansen et al did something similar in parallel on Solo OS except he eliminated data races at compile time with his Concurrent Pascal. Later did a Wirth-style system on PDP-11 with similar benefits called Edison.

On functional end, various parties created the ultimate, hacker language in LISP. Important properties were easy DSL creation, incremental compilation of individual functions, live updates, ability to simulate any development paradigm, memory safety, and higher-level in general. The LISP machines implemented most of their OS's and IDE's in these languages. Imagine REPL-style coding of an application that would run very fast whose exceptions, even at IDE or OS level, could be caught, analyzed at source form, and patched while it was running. Holy. Shit. They targeted large machines but Chez Scheme (8-bit) and PreScheme (C competitor) showed many benefits could be had by small machines. Jonathan Rees even made a capability-secure version of Scheme which, combined with language safety benefits, made it one of most powerful for reliability or security via isolation. A project to combine the three concepts could have amazing potential.

So, yeah, UNIX/C was a huge step back in compiler speed/consistency, speed/safety tradeoffs in production, flexibility for maintenance, integration, debugging, reliability, security, and so on. Tons of architectures or languages were better on each of these, some with easier programming models. That Thompson and Pike's perfect set of language features for a C replacement was collectively an Oberon-2 clone (Go) is also an implicit endorsement of the competing system. Plenty of nails in the coffin. Sociology, economics, and luck are the reasons driving it. The tech is horrible.


UNIX was the best thing at the time. It had good interfaces for IPC, could run on most systems, not just big, expensive ones, and was relatively portable. And sometimes, Worse really is Better. Wirth's architecture was late, and more expensive computationally. Lisp was VERY expensive computationally, and was often highly unportable, being written in asm, with lispms all implementing their own version of the language: more elegant, less practical.

Unix was and is successful because it was good enough, and far more platform-, language-, and technique-agnostic than the competition. Unix recommends a lot, but ultimately prescribes little.


"Wirth's architecture was late"

You're missing the point: abstracting some machine differences behind a system module then building on it in a safer, easy-to-compile language with optional efficiency/flexbility tradeoffs. Thompson and Ritchie could've done that given prior art but they wanted a trimmed-down MULTICS with that BCPL language Thompson had a preference for. Around 5 years later, Wirth et al had a weak system to work on and did what I described with much better results in technical aspects. His prior work, Pascal/P, got ported to around 70 architectures ranging from 8-bit to mainframes in about 2 years by amateurs. Imagine if UNIX had been done the Wirth way then spread like wildfire. Portability, safety, compiles, modifications, integrations... all would've been better. Safety stuff off initially where necessary due to huge impact on performance but gradually enabled as a compiler option as hardware improved. As Wirth et al did. I included Edison System reference because Hansen did Wirth style on PDP-11, proving it could've been done by UNIX authors.

"Lisp was VERY expensive computationally, and was often highly unportable, being written in asm, and lispms all implementing their own version of the language: more elegant, less practical."

Choices of the authors. Similar to the above, they could've done what the PreScheme and Chez people did in making an efficient variant of LISP, with or without GC. Glorified, high-level assembly if nothing else. PreScheme could even piggy-back on C compilers, given they were prevalent at the time it was written. It took till the 90's before someone was wise enough to do that, although I may have missed one in LISP's long history. They also formally verified it for correctness down to x86, PPC, and ARM. It would've benefited any app or OS written in it later. Pulling that off for C took a few decades... using Coq and ML languages. :)

"Unix reccomends a lot, but ultimately perscribes little."

My recommendations do that by virtue of being simple, functional or imperative languages with modules. Many academics and professionals were able to easily modify those compilers or systems to bring in cutting-edge results due to tractable analysis. UNIX is the opposite. It prescribes a specific architecture, style, and often language that made high-security or high-integrity improvements hard to impossible in many projects. The likes of UCLA Secure UNIX failed to achieve their objective even on simple UNIX. Most of the field just gave up, with the result being some emulation layer or VM running on top of something better to get the apps in there. That's also the current approach in most cloud models leveraging UNIX stacks. It wasn't until relatively recently that groups like CompCert, Astree, SVA-OS or Cambridge's CHERI started coming up with believable ways to get that mess to work reliably & securely. It's so hard that people are getting PhD's for pulling it off, vs undergrads or Masters students for the alternatives.

So, yeah, definitely something wrong with that approach given alternatives can do the same thing with less labor. Hell, ATS, OCaml, and Scheme have all been implemented on 8-bit CPU's with their advantages. You can run OpenVMS, MCP, Genera LISP, or MINIX 3 (self-healing) on a desktop now, directly or emulated. You can get the advantages I mentioned today with reasonable performance. Just gotta ditch UNIX and pool FOSS/commercial labor into better models. Also improve UNIX & others for interim benefits.


You can't run Genera in anything like a sane manner. I've tried.


You can run it, though, which is the point. It doesn't require a supercomputer or mainframe. It can be cloned with a combo of dynamic LISP (flexibility/safety) and static LISP (low-level/performance), where the latter might use Rust-style safety as in Carp. You can still isolate drivers and/or app domains in various ways for reliability, as in JX OS. The necessary components are there for a modern, fast, desktop LISP machine with its old benefits.

People just use monoliths in C instead & call it good design/architecture despite the limitations. Saying "it's good enough for my needs" is a reasonable justification for inferior technology. Just not good to pretend it's something it isn't. When you don't pretend, you get amazing things like the BeOS or QNX desktop demos that did what UNIX/Linux desktop users might have thought impossible at the time. Since UNIX/Linux were "better." ;)


Who said writing monoliths was a good idea? Because that wasn't me. Monoliths are bad. And yeah, you shouldn't write your app in C.


I agree. But I think the reason that laypeople don't have a good sense of what is required is that we wall them off from the software so that they have little sense for how the existing stuff is constructed.

I do think that other people in the organization usually have a better sense of needs. And so if they could have a better understanding of the materials, they could do a better job of managing requirements than an engineer who looks at code all day, and is typically not observing the customers.

Your advice about UNIX is good. I try not to write modules larger than a few hundred lines of code. Anything bigger than that gets split into fully isolated modules with well defined interfaces.

Also: I'm sad you're sad. And I'm sad because I feel my tools isolate me from the people I would like to be working closely with. But I'm very optimistic about solving this problem. I think all of the building blocks are there to solve it, we just haven't made a concerted effort as a community because we're mostly under the impression that it's impossible for non-coders to understand code.


> I feel my tools isolate me from the people I would like to be working closely with.

I don't understand this tools argument. It seems akin to saying the reason I find myself isolated from collaborating with particle physicists is due to the fact that I don't know how to operate a large hadron collider, while completely ignoring the fact that I can't even read a Feynman diagram.


> Write programs which accept ASCII input from other programs, for that is a universal interface

Can we maybe update that to UTF-8?


You make it sound like we deliberately make our code hard to work with. That's absurd. Code being hard to work with is the natural state, unless you work really hard to avoid it.


My thoughts exactly. I'll be damned if I'm going to put additional time and effort into meeting some "just hard enough" spec in order to secure my job for the future.

1) I'm lazy

2) I am not worried about job security

Does anyone actually, honestly, incorporate "how can I keep the application of my skills here the right amount of inaccessible to others?" into their time spent on a project? Shame on you.


Unless you are taking extra time beyond the requirements to make your software easier for other people in the organization to access, then you are doing exactly what I'm saying: making it just easy enough for you to manage.


I think you have a blind spot for just how much shantytown-style software development goes on in tools like MS Excel, MS Access, and Oracle APEX.


I'm aware of it, but it's a ghetto, in the sense that it is largely kept separate from professionally maintained codebases.

For me, I don't like working in Excel, and I want to have better working relationships with designers, customer support people, business folks, etc. I want us to be able to work on the same projects together, which means not Excel because Excel is extremely limited and difficult to work with. Not difficult for a random person to use for some calculation. Difficult for me to get the things I want to get done in Excel when I'm trying to build arbitrary web apps.


Software is probably one of the most accessible disciplines, constantly trying to simplify its tools for less experienced practitioners. The information is mostly free and easily searchable. The thing is, no company DIYs important use cases. The cost of waiting longer for an inferior solution is rarely cheaper than the price of a qualified professional.

What businesses want is not for software to be easier to build, that's what developers want. Businesses want software that can be more quickly used to solve their use cases. This is not an easy problem. That's why they hire engineers to solve it.

To be clearer: software is quite easy to build, but easy does not mean quick. All the hard problems are solved through a library or a framework. The computer science problems left are too hard for even the professional developer to solve.

Software engineers should specialise in knowing a lot of already-written quality software, and they should be good at figuring out quick ways to reuse, combine, and adapt it to the business's use cases.





