In the stated examples there are no benefits to the additional complexity. No one would argue that complexity for the sake of it is a good idea. That'd be insane. If Alice's restaurant could handle 5,000,000 covers a night with only 1 member of staff while Zola's restaurant could only handle 10,000, then you'd have a more realistic scenario to compare with the SaaS industry. The benefit of "complexity" is that you are able to do more things with less work.
The ideal is to build powerful systems from small, simple processes - if any single stage is trivial then anyone can understand it, fix it, modify it, and so on. With many little processes working together you can do amazing things. A good example in software is a build process - a good system can lint code, test it, uglify it, minify it, push it to version control, watch for changes, reload an output mechanism, clean up a distribution, and push it to a live server if it's working all from a single command. That's very 'complex', but really it's just a set of very simple linked processes.
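To make that concrete, here is a minimal sketch of such a pipeline as one shell script (a sketch only: eslint and uglifyjs stand in for whatever your project actually uses, and deploy.sh is a hypothetical script):

    #!/bin/sh
    # build.sh - every step is trivial on its own; the power is in the chain
    set -e                                          # stop at the first failing step
    eslint src/                                     # lint
    npm test                                        # test
    uglifyjs src/app.js -c -m -o dist/app.min.js    # uglify + minify
    git push origin master                          # push to version control
    ./deploy.sh dist/                               # hypothetical: push to the live server

Any one line is understandable in isolation, which is exactly the property described above.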
I think the restaurants were a poor analogy. Much better for me was the rule right at the end:
> Innovate on your core product, not on your plumbing (this rule is extremely tempting for developers to break - see next rule)
I was first exposed to this during my final year university project in 2004. We had a sponsor who wanted to build a marketplace for tourism operators and B&Bs. I was the only real developer on the team of 4. I worked really hard with little sleep, producing a really great caching layer and a very impressive WYSIWYG editor. But when the project deadline arrived you couldn't even book a room on the site.
I still struggle with staying focused; the best technique I've discovered to deal with this is to log ALL work around a project, prioritise the tasks and stick to only working on logged tasks with the highest priorities assigned rather than do whatever I feel like.
(It took me even longer to realise what a business opportunity I had lost.)
He should have used the analogy of Alice making an automated wait staff (e.g. waiter drones) for the restaurant. Not an amazing electrical system. I wonder how that kind of story would fare in his scenario.
> The ideal is to build powerful systems from small, simple processes
That's only half of the story, because now the complexity lies in orchestrating those processes.
Let's stay with the example of build systems. These are usually a serial execution of simple processes, so they make up a "best case" scenario. Yet build systems quickly reach an uncomfortable complexity. See Autotools, CMake, and so on. It is a kind of ongoing research: every few years we find better ways to orchestrate a build system. As of today, there is no build system which is 1) simple and easily comprehensible, 2) reliable and rock stable and 3) still able to build complex applications with all their (seemingly) quirky needs.
It's not only about having simple processes, but also about splitting the complex problem in an intelligent way, so the simple processes have simple interfaces and simple (and few!) interactions. Otherwise, the orchestration itself becomes the main application, and may easily become even more complex than a monolithic approach would have been.
> As of today, there is no build system which is 1) simple and easily comprehensible, 2) reliable and rock stable and 3) still able to build complex applications with all their (seemingly) quirky needs.
Redo solves some problems with reliability, which is good [1], but regarding complexity reduction and simplicity it seems to be no better than a plain Makefile. [2]
Also, I don't think it is a good idea to implement this in Go, because you are limiting your userbase to those willing to install Go and to compile and build your stuff. From another perspective: redo is not a tough task, so why not use a ubiquitous language such as Perl or Python? That way, it would run out of the box on almost every machine. Heck, you could even implement it in portable shell script with acceptable effort. If you ever want to establish a new build system, the entry barrier should be as low as possible.
[1] But honestly, redo doesn't address any issue that "make clean; make" wouldn't solve. So the practical advantage is rather limited.
[2] Nothing wrong with plain Makefiles, though, I use that approach successfully for many small projects.
> regarding complexity reduction and simplicity it seems to be no better than a plain Makefile.
Makefiles work great most of the time, but become difficult when you need to do things that don't fit well with the make model. I do a lot of multi-level code generation, for instance, and make requires a lot of incantations to get right. Whereas redo works exactly the same way regardless of the complexity. I used make for many, many years and got very good at using it before I decided to implement something new.
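For anyone who hasn't used redo: each target's recipe is a shell script named target.do, which redo runs with the target as $1, the target minus its extension as $2, and a temporary output file as $3; dependencies are declared by calling redo-ifchange. A two-level code-generation sketch (gen.sh and the file names here are hypothetical):

    # parser.c.do - level 1: generate C source from a grammar file
    redo-ifchange parser.grammar gen.sh
    ./gen.sh parser.grammar > $3

    # parser.o.do - level 2: compile the generated source; redo chains
    # the levels itself, so this looks like any other target
    redo-ifchange parser.c
    cc -c -o $3 parser.c

Each extra generation level is just one more .do file, which is what "works exactly the same way regardless of the complexity" means in practice.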
> Also, I don't think it is a good idea to implement this in Go...
I chose Go because I would not have enjoyed it as much in C. I did not use Perl or Python because redo is used recursively, and Perl and Python's startup times were too slow. I actually wrote a shell implementation that served me well for a while, but it was too slow.
Likely there are those who won't use it because it's written in Go, and that's fine. I've solved my problem and made the solution available to anyone else to whom it might be useful.
>But honestly, redo doesn't address any issue that "make clean; make" wouldn't solve.
Fixing make's reliability issue with "make clean; make" is like rebooting Windows when it hangs.
Yeah, you can do that, but it doesn't actually solve the underlying problem. With redo, you don't need to do that.
The redo-inspired build tool I wrote abstracts the task of composing a build system by replacing the idea of writing a build description file with command-line primitives which customize production rules from a library. So cleanly compiling a C file into an executable looks something like this:

Find and delete the standard list of files which credo generates, and derived objects which are targets of *.do scripts:

    cre/rm std

Customize a library template shell script to become the file hello.do, which defines what to do to make hello from hello.c:

    cre/libdo (c cc c '') hello

Run the current build graph to create hello:

    cre/do hello
Obviously this particular translation is already baked into make, so isn't anything new, but the approach of pulling templated transitions from a library by name scales well to very custom transitions created by one person or team and consumed at build-construction time by another.
I think this approach reduces the complexity of the build system by separating the definition of the file translations from the construction of a custom build system. These primitives abstract constructing the dependency graph and production rules, so I think it's also simpler to use. Driving the build system construction from the shell also enables all the variability in that build system that you want without generating build-description files, which I think is new, and also simpler to use than current build-tool approaches. Whether all-DSL (eg make), document-driven (eg ant), or embedded DSL (eg scons), build tools usually force you to write or generate complicated build description files which do not scale well.
Credo is also inspired by redo, but runs in Inferno, which is even more infrequently used than Go (and developed by some of the same people). I used Inferno because I work in it daily, and wanted to take advantage of some of the features of the OS that Linux and bash don't have. Just today I ran into a potential user who was turned off by the Inferno requirement, so I'll probably have to port it to Linux/bash, and lose some of those features (eg, /env), to validate its usability in a context other than my own.
EDIT: Replaced the old way (calling a script to find and delete standard derived objects) with the newer command.
There's not much in the way of DJB's documentation other than a conceptual sketch, so there's much room for interpretation.
There are many differences between the two implementations, some quite fundamental. redux uses sha1 checksums instead of timestamps. Timestamps cause all sorts of problems as we know from make.
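The difference is easy to show in shell terms: make asks "is the source newer than the target?", while a checksum-based tool asks "did the content actually change?". Roughly (a sketch only, with a hypothetical .redo/ state directory, not redux's actual layout):

    # timestamp approach (make): a mere touch(1) forces a rebuild
    [ hello.c -nt hello.o ] && echo rebuild

    # checksum approach (redux): rebuild only on a real content change
    old=$(cat .redo/hello.c.sha1 2>/dev/null)
    new=$(sha1sum < hello.c)
    [ "$old" != "$new" ] && echo rebuild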
apenwarr redo has all sorts of extra utilities to ameliorate the problems.
redux has the minimum functionality needed for the task (init, redo, redo-ifchange, redo-ifcreate), and I don't think it needs any others.
Redux passes most apenwarr tests. The few it does not pass are apenwarr-specific.
I've not tried the converse. Might be interesting.
I actually have a task list to add an apenwarr compatibility mode to redux so users can switch easily. At that point, it should pass all the apenwarr tests.
> No one would argue that complexity for the sake of it is a good idea.
Yet somehow we've convinced ourselves that frontend development now requires package managers, compilers, CSS test cases, CSS frameworks, etc.
Sure there are web-based apps with a rich functionality that require some of these things, but the vast majority of pages that I see out there are just as simple as they were five years ago.
In the case of some front-end tools, they are an absolute must.
When your CSS compiler takes care of all the stupid -webkit-/-moz-/... prefixes, allows you to define variables and mixins, and lets you use loops, you gain a great amount of power and your code is a lot cleaner.
When your build system integrates with livereload and makes your changes appear instantly as you save, you gain speed and comfort.
Those are major benefits for little to no inconvenience.
I don't know about the necessity of CSS frameworks, styleguides or package managers, but some tools are just too good.
It's not about the end result, it's about using better tools to get to it.
After gaining a fair amount of experience with various technologies over the years (by which I mean, getting burned (especially by things that seem wonderful and then cease to exist for a variety of reasons)), that has become the single most important question I ask about a new technology. If it's been around for ten years, it'll probably still be around in ten years. If it hasn't then it may not, especially in a form recognizable as related to today's.
C and Java are, well, not horrible, but bad. However, C has been C for my entire career. And I'm fairly sure that if I write something in simple, plain Java, in five or ten years when it needs some significant work then that work won't start with a complete rewrite.
I don't understand this: with open-source, it doesn't matter that your CSS compiler has been abandoned: as long as you are satisfied with the current feature set, the code isn't going to disappear.
Stylus is only about 3.5 years old. But if all development were to stop now, I'll still be able to use it 5 years down the road. It probably helps that npm dependencies are pinned to a specific version.
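For example, npm can record the exact version instead of a semver range (a sketch; the version number is just illustrative, and --save-exact assumes a reasonably recent npm):

    # pin stylus to one exact version in package.json,
    # rather than a "^" or "~" range that can drift over time
    npm install --save-exact stylus@0.42.0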
C has been around for dozens of years, but you can consider its feature-set pretty much frozen. That's about the same thing as being satisfied with (CSS compiler X)'s feature set and sticking with it.
DSLs like the ones for CSS compilers are feature-complete fast enough. You don't need to wait 10 years for it.
I was merely using the CSS compiler as an example. For something like that, just put a copy in your source repository and forget about it---it'll be fine for a couple of generations of browsers, until the "best current practices" have gone beyond deprecation.
I'm sitting here looking at a JRuby on Rails application using JRuby 1.1.4 and a suitably ancient version of Rails (2.2, maybe?). It's been in production for roughly five years and has received essentially no love during that period; it Just Worked[TM]. The poor sod who was responsible for it and our one other Rails application, our one and only Rails developer, finally managed to move on to other things.
At this point, security issues and (most likely) the inability to slather on new features have percolated up the chain of command and it has been agreed that Something Must Be Done. Since the upgrade path seems to recapitulate the phylogeny of Rails, our options seem to be to rewrite the applications in a modern Rails and kick the can down the street, hoping it'll work better this time, or rewrite it using our common tools around here (and because I've got something to do with it, in the simplest manner possible). (And against the usual "never rewrite anything, ever" meme---which isn't an option---it's going reasonably well.)
As for C, yes, its feature set has been more-or-less frozen, which is good. Its environment is not. I was reasonably happy with the first version of gcc I used; call it 1.37 or so. Do you think you could use gcc 1.37 today?
You don't need a build system during development. I work on a 500,000-line web app, and it has no dev-time build step. You check out the source into your web root, and run the app. Edit + F5 is fast enough; no need for smart build systems. Build steps are of course inevitable for the jump into production.
If you're satisfied with pure CSS and Javascript, the only thing you really need is a build tool for concatenation, minification and gzip. And livereload. And probably normalize.css.
-> I use a Makefile, because it's the simplest to modify and adapt. When I have several targets of the same type (e.g. build.min.js and debug.min.js), I use a yaml file with a custom python script for concatenation. See gulp.js and a thousand others for alternatives, YMMV (my Makefile nowadays: https://gist.github.com/idlewan/11012492). Magical bit for running make every time a file changes:
# rerun "make -s" on every save/move, ignoring editor swap/backup files
# ("read _" rather than bare "read" so the recipe also works under plain POSIX sh)
watch:
	@inotifywait -m --exclude ".*\.swp|.*~" -e moved_to,close_write -r . | \
	while read _; do make -s; done
To get more from just CSS, you need a CSS precompiler.
-> I use Stylus with the nib library (https://github.com/visionmedia/nib). Alternatives: LESS, SASS, and some others. You might want to add some grid and 'responsive shortcut' mixins on top.
For webapp needs, you need a template library (some people prefer using a fat framework that does everything for them, e.g. Angular).
-> I use Jade (https://github.com/visionmedia/jade), because it runs on the server for Python, Node.js and soon Nimrod, and on the client with precompiled templates. Alternatives: Mustache and a thousand others, YMMV.
I can easily see how this post could be misinterpreted, so I'll add my personal experience:
I had the occasion of building the same system in two different companies: one was a startup, the other a huge company.
For the startup I could choose the tech I wanted, and decided to go for Python + App Engine + Backbone (new at that time). Those technologies were "hype" yet not absolutely brand new. I took some risks by choosing them but thought it could be worth it.
For the big company I had to go with Java Spring MVC + Sencha; they didn't want to hear of any new tech that would be different from what they were used to. They deployed it on their own infra.
Now, the startup project took 3 man-months; the big company took more than 7, and a year total before being deployed.
The startup only paid an intern to maintain the software, and almost nothing in infrastructure fees. The big company outsourced maintenance to a software service company that proved unable to do even the most basic troubleshooting.
I designed and coded the two systems, and I wasn't a guru of any specific tech, so it's not an issue with the people. Sometimes, under the right circumstances, new technologies are way better.
If you had been able to choose Python/Backbone etc. in the big company, chances are that it would have taken even longer. In big companies, everything is different, and they have to take other things into account than mere development time. They are most likely prepared for one or two kinds of tech stacks, and everything is in place for that. If they suddenly have to deploy new technology, they for example have to:
* build custom monitoring for the components in the new stack
* look at support contracts and patching procedures for these components
* figure out how to be able to respond to security incidents with these new components
* train and _keep_ training people specifically for this new special snowflake you just created for the years to come
* look into how various operational concerns are implemented with this technology: scaling, separation of concerns (can I deploy the admin part separately?), logging and log rotation, and so forth
In startups, developer cost and time to market may be the most important things. In bigger companies, a lot of other things start to matter.
I can fully understand why a big company does not want to tie itself to App Engine or other Google services, especially if they are able to keep their own infrastructure. If they are a European company, having customer data on App Engine might not even be legal. Google was not able to guarantee that they would not move data across borders, and there was regulation about that. Not sure about the current status.
A big company has to maintain many projects. Managing too many technologies and contracts with too many suppliers is a huge hassle.
The outsourced maintenance company was probably simply incapable; there is a huge number of parasitic service companies living off typical big companies' decision-making faults. It probably had nothing to do with the technology. The long time before deployment is also probably a product of big-company processes, other projects competing for administrators' time, etc.
That being said, Python + App Engine + Backbone is a much easier technology to start with and learn than Java Spring MVC + Sencha. When you do a small project in isolation, the former seems like a much better choice. Those big frameworks usually pay off only if you do something big.
I don't think bsaul was saying he/she was going to use Google App Engine, just that the big company would not consider anything other than their Java/Spring stack.
Here is the quote from bsaul:
> For the big company I had to go with Java Spring MVC + Sencha; they didn't want to hear of any new tech that would be different from what they were used to.
Managing too many technologies with too many suppliers is a huge hassle even if you take contracts out of the question. A one-man, 7-month project is not worth adding a new technology to the stack.
They use their own infrastructure, so anything that cannot run on it is probably a no-go. If you do not use Java, then they have to add a Python or whatever application server into the mix and then maintain it forever. On the other hand, another small Java app is just that; they probably deploy it with one script.
Various contractors have various favorite technologies, and various small projects would be done two months faster with various tools. It can add up pretty fast. The last thing they want is to run three different Java application servers, two Pythons and three Rubies.
It should be easier to switch Sencha for something else, or maybe even Spring MVC for another Spring-compatible Java MVC. However, it still is way easier to maintain many projects in Spring MVC than five in Spring MVC, six in Wicket, three in JSF, etc.
Maintenance often costs much more time and money than the initial release, even if all the people involved are super capable. Having one technology for many projects reduces the time needed to learn all the possible quirks of all the technologies they maintain.
Plus, one man for 7 months is super small and super cheap in the world of big companies. Even if whoever is leading it wanted it and had enough power to allow it, it probably would not be worth the negotiations involved.
The arguments of most of these types of posts can be hand-waved with "use the right tool for the job."
But honestly, plumbing and electrical infrastructure is a terrible metaphor for a development framework. There is no situation in which a handyman would be praised for advocating complex systems. They complicate every single layer of the build.
A sizable chunk of the "hype" tech facilitates faster, more efficient development at fairly low cost (maybe sacrificing some scalability or stability).
It would be remiss of me not to mention how much the deluge of JavaScript frameworks pisses me off. It is overwhelming trying to keep up.
Development in big companies is always slower, even using identical technologies. Once you get to a certain size even the most basic tasks become bureaucratic.
I am currently taking some risks with new technologies building my product, so I'm definitely in favour of it (there will be no progress if everyone stays away from the unknown). But I think the author wanted to compare two startups using new vs. proven.
Yet, this is your experience, and I, for one, was glad to read a success story with a new tech. :)
So, let's drastically reduce our productivity so that they'll hire more of us?
Sounds great! Except now you are a drop in an ocean of developers, which are all working on crappy tech with low productivity, each providing very little value to the business.
Well said. I can't tell you how nice it is to have software in production on a boring stack. It gives you freedom to do other things.
I can (and often do) go entire months without touching the codebase of my main rent-paying products. It means I can, among other things, pick up a full-time development gig to sock away some extra runway, take off and go backpacking around the world, or better still, build yet another rent-paying product without having to spend a significant amount of time keeping the old stuff alive.
It seems like on a lot of stacks, keeping the server alive, patched and serving webpages is a part-time job in itself. In my world, that's Windows Update's job. Big New Releases come and go, but they're all 100% backwards compatible, so when you get around to upgrading it's just a few minutes of point and clicking with nothing broken.
I see it as analogous to Compound Interest, but to productivity. The less effort you need to spend on maintenance, the more pace you can keep going forward.
This is why Java is used widely. It works. It works well. And this is also why Java is great for huge systems (not in terms of users, disk space or bandwidth, but in terms of code size). The same can be said about a lot of "old" technologies, and certainly about almost every industry standard out there.
On the other hand, once in a while the Alice/Albert bet happens to win; be it because the new system is really better (as in: easier to maintain, or really capable of managing higher amounts of workload), for non-technological reasons (Alice/Albert just happen to have a great idea), or just because of luck. Over time their technology may even become the new industry standard. The problem here is that it's the Alices/Alberts of the world who make it progress by trying new things (and failing often), but we're afraid of failure.
So, yes, it's completely natural that corps resort to Java or C#, while startups use Scala or Ruby.
For all of you doing startups in shiny new technologies: this means that even failure has a bright side, since even in that case you've put your grain of sand to make the technology more mature.
It's not just a question of language or platform but also culture.
Java is probably the #1 culprit for building huge systems to solve simple problems. The goal is small, simple, fast, cheap, and 'enterprise' solutions are rarely a good fit for new companies. Now, if you're United Airlines and you have complex problems involving hundreds of systems, yeah, you're in a world of pain and Java is a great option for dealing with complexity.
That said, there are plenty of rock-solid Java programs that avoid the bloat; they're just rarely websites.
> Java is probably the #1 culprit for building huge systems to solve simple problems.
Don't blame the language for the enterprise culture.
Before Java existed I saw similar complex systems done in C, C++, Clipper, VB and quite a few other languages.
Just to cite a concrete example, before J2EE existed, we had CORBA and DCOM systems, plugged with distributed transactions across multiple OS systems. Great debugging experiences.
JDBC is perhaps a better example. It's short and simple but designed around raw SQL, which ends up creating a huge mess vs something like LINQ. Sure, there are hundreds of 3rd-party solutions to this problem, just pick one, except external libraries might pick something else...
This example can be quite easily explained: it's generated by a designer tool, as it is part of the Nimbus LaF. It's not really part of the JRE API, even though the class may be public.
Btw., the longest name for .NET is not much shorter: UpDownBase+UpDownButtons+UpDownButtonsAccessibleObject+DirectionButtonAccessibleObject
Eh, I really strongly disagree with this common criticism -- "huge, overcomplex, etc. etc.". Basically, if you use a tool suite properly designed for your development area, you need to know how to set up your project/IDE integration with server runtimes etc. (sometimes not easy to do if you don't know how, but once you do it's 15 minutes, once per project... consult a senior dev for the first 1 or 2 project creations).
Then you can usually plug in external systems that obscure mounds & mounds of complexity (CMS, persistence, etc.) just by making a few minimal calls to their APIs. Then extra features (caching, asset compilation) are a bonus -- the project is so rock solid to begin with that often you don't need to optimize every aspect of the code.
EDIT: I've also found this to be why corporate devs are not miserable. The tools are so reliable that the companies don't mind paying devs to use this targeted, minimal approach. And for the devs it means they can try their hand at pet projects with novel code without worrying about ever getting caught in a bad timecrunch. Corps with standard needs don't seem to require overtime work to keep the wheels spinning.
You are placing a lot of merit on Java. Most of it should be directed to the JVM.
There's nothing inherently great about Java, other than the fact that it constrains programmers so much, that the damage a bad programmer can inflict is greatly reduced. At the same time, so is productivity. This can be mitigated up to a point by having more bodies, which is a thing that bigcorp can afford but startups cannot.
Replace Java with anything else running on the JVM and you'll reap most of the benefits (libraries, application servers, deployment, etc), without many of the constraints.
C# is closer to Scala than to Java these days. The type system might not be that advanced but there are value types, nice generics, proper annotations etc., and there is quite a bit of syntax sugar with more coming:
We have part of a system at work that was written in Java and it is horrible because the team that built it has a bad case of NIH. They ended up implementing a very poorly designed file-system-based database. Now that part of the system has horrible I/O performance because it is opening, reading, and writing literally tens of millions of files per day. The development team wanted to be clever and innovative on something that should have been a dartboard throw at Oracle, MySQL, MS SQL, or PostgreSQL.
The analogy to plumbing and electricians is funny because typically those types of contractors are really skeptical of new-fangled products and they usually want to stick with tried and true solutions. But those are industries that have been around for a couple hundred years, right?
The point of the article is to focus on your core product and leave the plumbing alone by picking boring stable stuff. You can build a business with all boring stable stuff too, but that may not attract "rock star" developers.
I am seeing the same crap on a project for the opposite reasons. They want to use NoSQL for the main queries, and are actively avoiding using a relational database. Means they are going to use the filesystem as a database, whereas a relational database on top of it would be a far better system.
The fun part is that you can be either Alice or Zola with nothing but Perl.
The moral of the story might just be "stop trying to be clever and start trying to be done", with all of the usual yaddayadda about preoptimized yak razors.
This is pretty much my point of view. How new or buzz-wordy a bit of technology is, isn't really the issue. It's whether or not the thing in question will allow you to end up with a simpler solution.
Simplicity and supportability come hand in hand for me. I'd much rather debug/fix a really simple app in a language I don't know, than a really complex one in a language I do.
A simple rule: you should always remove complexity from a project and never add it. Complexity builds up on its own, so any tech you add must remove some complexity from the current project.
Warning signs for tech that brings more complexity than it is usually worth: extensive XML configs, hiding of executable code, stack traces more than 240 levels deep.
Current favorite offender: GWT. I just love when something blows up in the JavaScript and it just tells you, well, signature object object object is not what I expect in JavaScript apply. And you have no idea where exactly it was generated.
So it is KISS: the project must be of the least possible complexity to solve the problem.
Boring systems themselves do not ALWAYS build badass businesses. It's knowing when to stick to boring systems vs. taking the chance on something new. A lot of systems start off as someone's side project. It's a calculated risk when you pick something that brings different things to the table.
This post highlights something that can be a problem with the contracting of workers. Namely, that Albert will be more in demand than Zip, despite having built an inferior system. The failure of the business, in real life, is probably not attributable to him - there are many other variables that one could point the finger at (low demand, location, infrastructure, sourcing prices, etc..). And, the manager will often not understand what truly constitutes a "best-practice", maintainable solution. So, by default, he probably ends up being paid more, and is seen to be more important and accomplished as well.
So, where is the incentive for the handyman to act like Albert? And how do you identify these people?
Jane came to us with a great reputation. We thought she was going to be as brilliant as Fred. But she hasn't really proved herself yet. We've given her a few problems that we thought were going to be really tough, but when she finished it turned out they weren't really difficult at all. Most of them turned out pretty simple.
The maintenance aspect is huge. I've been able to observe how fancy, complex systems fare over a long period of time (as opposed to simple systems): in almost every case the "cool" complex system required way more maintenance. There are just more things that can break.
Unfortunately, we don't normally record the "long tail" cost of a feature. We build and deploy, but don't keep an eye on how much time it takes to maintain that feature.
Unproven innovative technologies are not necessarily bad. They can give you an edge over the competition. The real problem is to restrain yourself when applying them, and to have a fallback plan.
I liked the usage of the word restrain. I try to introduce one or two new things on projects. Add too many new technologies and you will increase the risk of the project, and you will spend more time learning than doing.
"There are many ways to achieve developer happiness, but making your core business products a playground for developers seeking novelty is the path to hell."
Excellent point. This also applies to programmers who'd like to write everything themselves so that they can learn more in the process. My current job involves maintaining an application that has everything written in-house - logging, HTML templating, URL mapping, validation, form bean binding, scheduling and what not! - all this is possible just using slf4j, Freemarker, Spring and a bunch of other lightweight libs. Some of the stuff is good, so it makes me think the only reason would have been to become more proficient in OOP and Java.
The problem is, simplicity is not an objective measure. Take monads, for example. To most developers these are a foreign and possibly scary concept. Once you understand them, however, they seem ridiculously simple. This is one of the problems with monad tutorials -- they are so simple there is almost nothing there. I know I spent a long time trying to find a "deep" concept when learning monads, before I realised there isn't one.
Building a system with monads, if you understand them, is simple. You can wire together components easily, and have concurrency, logging, error handling and more all nicely abstracted away. But is this a simple system? It depends entirely on your background.
This is an argument for buy vs build. As others have stated, the question is whether complexity is worth it. My bias is we tend to underestimate the complexity of small additions, and overestimate the benefit of having control over a system. The implication is too much complexity in things we build ourselves. Sometimes the industry standard solutions aren't appropriate, but it all depends on what a company wants to focus on.
And I'm not sure of the reference for Zola's restaurant, but I like the Guthrie-inspired complexity of Alice's restaurant.
Of course if you are working on a boring problem it is a mistake to try to make it more interesting by incorporating interesting tools. This is a common problem in web development for a lot of people. On the other hand if the problem you want to solve is interesting and hard then probably you will not go far with the boring standard solutions. (see: Oculus Rift).
Summary: if you don't want to be bored, choose interesting problems, not just interesting tools.
Even though it's still hip, this is why Go works in capable hands. As a new programming language / platform, it's academically quite boring. In fact, that's the number one criticism of its detractors.
It's not the boring that bothers me, it's the terrible type system. Even Java, the boringest of boring languages, eventually (1.5) figured out that it needed something more powerful.
That's what I love about it. No surprises and rather straightforward. Plus it's the first language I feel comfortable using a plain text editor to work with. I use Sublime Text.
The cases presented in this article are contrived.
1) There is room for both boring and cutting-edge technology in any business. Albert didn't drop the ball by choosing cutting-edge tech; Albert exhibited poor risk management skills. Alice wouldn't be complaining if Albert had taken a controlled risk and installed a next-generation flash fryer that gave a clear competitive advantage over Zola in terms of personnel and order-to-delivery time.
2) Good ideas require both Albert and Zip. Zip keeps the lights on and the costs down for all the mundane BS required to run a business. Albert is the disrupter. He is the reason starting the business was a good idea. He is an iconoclast who looks at the state of the world and says "I can do this better".
The title of this article should be Boring Systems Build Benign Businesses
Albert doesn't start the business. He is the plumber on payroll and should behave as such. Alice and Zola start the business and innovate in the realm of cooking, not plumbing.
I didn't say Albert started the business. I said that Albert was the reason starting the business was a good idea. What I did say is that either Alice or Zola would be better served by having elements of both Albert and Zip.
I would consider it hard to find a decent software engineer who would introduce 20 unproven tools for something allegorically related to "plumbing", like, say, your web server.
Additionally, a lot of this is based on perspective. If I suggested standing up a web infrastructure on "nginx", am I an Albert or a Zip? I mean, "Apache" has a much longer track record... right? By this article's logic we should write everything in FORTRAN and use single-tasking operating systems. Those have been around since the '50s... rock solid.
The net out is that "proven track records" and "best practices" are convenient stopgaps for ignorance. They are a way for people who don't know any better to manage risk. "I don't get it... but it worked for her". Being competent and knowledgeable, having good risk management skills, and understanding the problem will serve an organization much better than being conservative in your choice of tools.
Why add the complexity of a car when you can simply walk?! The car will break down, you will have to waste days taking it to the service station, you will have to get a driver's license, you will get tickets, you may kill someone and get in trouble, you can't drink if you will be driving. Why add so much complexity to your life?
The idea that complexity in itself is bad is flawed. Sometimes innovation does require complexity. Complexity for the sake of complexity is bad.
>"But [some new unproven system] is really cool! Even [some big company] uses it!"
A company I know uses a big/buggy oil pipeline leak detection software. It is very complex and very buggy. Tech support has to be called in every few months. But they still use it. Why? Because it will detect oil leaks much faster. Potentially saving them millions in case of something bad. Should we stop innovation because we are scared of 'complexity'? I wouldn't suggest using a system because it is 'really cool' and a 'big company uses it'. But why do they use it and why is it 'cool'? Can it make you more money? Those are the questions worth asking.
>"Innovate on your core product, not on your plumbing "
Every bit of complexity deployed to make more money is good. Can you tell and prove how it will make money?
Every bit of complexity added because it is 'cool' is flawed! If plumbing can make me more money, then hell yeah it requires some investment. Every situation is different.
I wouldn't say that boring solutions are always the best, unless they satisfy the conditions below.
Here's what is REALLY important:
A) How fast can you get your first product in front of customers?
B) How often can you measure and iterate on that, and get the new version in front of customers?
You should pick the best solution that optimizes for A & B. Both are really important because they will help you discover the actual thing you need to be building.
I am involved in an ongoing discussion at work. The guys implementing the project want to go with a NoSQL / Angular / Node solution. I see a relational database as being a better fit.
"Its so fast, even running on my laptop"
"you only have 3Gb of data just now"
"Its so quick to develop, just plug Angular straight into elsaticsearch, no schema,..."
The thing is, I just see lots of problems stored up for the future. Sure you have to define a schema up front if you go for a relational database, but changing it isn't such a big deal, not until you are in production with live data. Sure you might save time up front, but you are essentially cutting corners to achieve this, and storing work for the future.
I run/work in a small shop supporting a real estate brokerage with custom software. For every new core project, I ask if the problem can be solved in Java. Not because it is the best language, or the one I am most familiar with, but because it is the language that almost all the other systems run in. Ditto for databases: projects should use MySQL unless there is a very compelling reason not to. And we use one data processing tool: Pentaho Kettle.
Now, we've actually had some other languages 'sneak in': Perl, bash, Python, JavaScript. But they were for closely scoped projects and, in a couple of cases, I asked for a prototype in Java and the other language first.
It is hard to do this, because I read hacker news and am interested in keeping my skills up to date with the latest and greatest interesting technologies. I and my team have other alternatives to explore (hackfests, not-work side projects), which makes it easier. I'm doing the right thing for this company, and that's the right thing to do.
I get the rather bluntly hammered home moral to use safe reliable tech but that misses a lot of subtlety.
I think the moral of the story is to load test before you dump a bunch of customers onto your system. Regardless of the tech you use you can easily fail in this regard. And secondly not to value people for the effort they put in but the results they achieve.
This post spoke to me because I'm in the middle of an interesting decision - for our public/marketing website, should we go with Wordpress (something a lot of people know, etc) or a static site generator (fill in your favorite - Harp, Docpad, etc.)? The argument for going with the static site is that we'll have a much faster site (it will be static) that will likely be easier to customize (we don't need a lot of what Wordpress offers) but the potential downside is that most developers don't know the static tools so if I hire someone new, I'm likely training them. Now, I don't think training will be that hard if you get someone with a decent background, but you get the idea.
What would you choose? Safe and stable Wordpress with more customization effort, or the static site generator idea with a smaller installed base of developers?
Go with WordPress. All the advantages you listed, plus you can always use a caching plugin to generate static files. I think WP SuperCache is the standard. If you get to the point where that combo is too slow to handle all the traffic you're getting, you've officially found a Good Problem To Have™.
Static sites are much more robust under stress than dynamic ones. To get that robustness back for a dynamic site, you have to add at least one layer of complexity, namely caching. A dynamic site generator like WordPress also exposes a much larger attack surface for security exploits. So from a sysadmin perspective, a static site is undeniably better.
However, expect some resistance from your content authors. "You mean I have to use Git and the command line? I can't just paste into a text box on a web page, like in every well-designed system since about 2000?" Luckily, there's at least one solution for that: http://prose.io/
I think static site generators aren't really "new" technology. Originally, the whole web was based on serving static pages. In a sense, it's the oldest web tech there is. And you're simplifying your stack on the deployment platform.
But Wordpress does add a whole bunch of features that you might need.
I can't see how learning to use a static site generator could be difficult. Wouldn't people mostly just be editing the content and re-generating the pages, etc?
I don't think it's difficult, but there aren't as many people who know them as know Wordpress. For content editors, I don't think it matters - but for programmers, I can find people that know Wordpress. Finding someone that knows a given static site generator is much harder.
Any good static site generator can be learned in an hour.
You shouldn't search only for people who already know the static site generator you plan to use: any good developer that uses the CLI will have no problem using it.
Well, I mean, it's all about understanding both the tools and the needs and selecting the tool that fits the need. Some "restaurants" have exotic needs that good ol' Zip might not be able to satisfy using his system. Or Albert might have ways to do things that are way cheaper -- require fewer resources, less time, etc -- but have the drawback of using newer tools that might become abandoned, have low developer numbers, etc.
But, as a developer, this is why you have a conversation with your client and understand what their needs are. So you can understand these trade-offs and make the best possible recommendation. Neither Alice's way nor Zola's way is the Way Things Should Work 100% of the Time.
I think this is the precise reason why every industry and function has a process framework. Working around processes/functions and their value maps while creating a solution is the best way to not just meet customer expectations but also ensure that your products play well with other products a buyer might have.
Processes and sub process maps like Procure 2 Pay, Order 2 Cash, etc, are there for a reason. They tend to make life simpler for buyers making a choice and also help ensure that your product doesn't have process blind spots that will kill it.
The big guys in the ERP space have perfected this approach and it's something a lot of business oriented startups don't seem to consider.
A better analogy would be Alice doing a bunch of research up-front, deciding she needs all kinds of cool features before opening, and then hiring the plumber to enable them, rather than just hiring him because he knows how to build complex systems. The latter really isn't too realistic, while the former happens every day in many, many companies.
This overbuilding is usually born of a lack of understanding of what is really needed; in lieu of learning through doing and reacting, people try to build up-front in anticipation, putting a lot of importance on things that ultimately turn out to be completely trivial.
Although there has to be a distinction made between using the latest technology and how you choose to architect and design a product with that technology.
I've seen super complicated web architectures with something as widely used as PHP. So it's not so much about what tools you use but rather how you use them. Choose the right tool for the job while keeping the business constraints in mind.
From the title I was expecting this to be about the area of the businesses, perhaps suggesting to solve boring and real problems for real people who pay real money instead of, as is fashionable today, building "businesses" for sharing your crap to another crap and liking and commenting and following this and that whilst being fed advertisements.
There might be another way to look at it in terms of overall risk appetite:
- when maintaining a traditional venture, innovate more on tech to attract more talent and interest. (Microsoft Azure folks, Basecamp (formerly 37signals))
- when innovating in something game changing, don't take huge risks on plumbing. (Facebook, Craigslist)
Simply put: Choose your battles wisely, and the ground you do it on even more so.
Though one can gain serious competitive advantage by using something new to compete against established players or use something well tested in an innovative and new way.
What was up with the GitHub and 37signals references? They both use Ruby on Rails, which could be considered a niche technology. 37signals even admits to using the most bleeding-edge version live in production before they even put out a beta to the public, so if anything, referencing them actually hurts the article.