The sad state of web app deployment (eev.ee)
267 points by cespare on Sept 18, 2015 | 160 comments



As another old-school type, I really enjoy this rant. And my understanding is that the piece of software in question is a nest of snakes, so I can well believe that there is no good way to install it, only bad ways and worse ways.

But having tried out Docker for some production deployments, I think that it (or some less goofy successor) is the way forward. You get a sealed box with all the necessary dependencies, and the box has just enough holes poked in it that you can connect up the bits you care about. It turns apps into appliances, not things you need expert craftspeople to lovingly install.
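
Concretely, the "holes poked in the box" are just published ports and mounted volumes; a minimal sketch, where the image name, port, and host path are all made up:

    # A container as an appliance: everything sealed inside, with just
    # two holes poked in it for the bits you care about.
    #   -p  publishes the app's internal port to the host (hole 1)
    #   -v  mounts a host directory for persistent data   (hole 2)
    docker run -d --name forum \
        -p 80:3000 \
        -v /srv/forum/data:/data \
        example/forum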

As much as I have enjoyed 25 years of doing everything on an OS whose conceptual model is a 70s university department minicomputer, this era is coming to a close. We already know it's a poor match for existing hardware and use cases because we now mostly run it in virtual servers. But "virtual server" is the new "horseless carriage". It's a phrase that tells us we are living in a future we don't yet fully understand.


I love Docker; it's solved a lot of problems for me. But this article highlights the laziness that it can enable: there has to be a middle ground between traditional package management, with all of the curation and QA that goes into getting stable releases out to multiple distributions, and chucking shit over the wall into a git repo with a Dockerfile and calling it done.

It's a better, more isolated mess. But for anyone trying to enforce configuration policy across all of the running services in their environment, untangling grotesquely basic shit, like not granting superuser privs on a database to your webapps (which would never fly in a traditional distro package), becomes even more work than the old source tarballs with INSTALL.txt.


> the laziness that it can enable

This is the entire value proposition of all successful technology. Dubious business practices notwithstanding, all successful yet imperfect technology is successful because more people found it let them be lazier than the competition. But in reality of course it's not laziness, it's efficiency. They can get on with other tasks sooner than before.


> But in reality of course it's not laziness, it's efficiency. They can get on with other tasks sooner than before.

Indeed, and I use Docker myself to improve my own efficiency, and I've seen great stuff built with Docker that has been architected well.

However, it is painfully obvious that without the pressures that used to force most developers to keep their shit sane, there's now more work for the serious users who actually need to untangle this mess in order to support, secure, and deploy it.


Very few of the problems described are actually Docker issues. The one security concern that is Docker's: if you're a) already on the host and b) have permission to start Docker containers, then c) you can root the host.

The other security issue mentioned, piping curl into sh, isn't really that big a deal. As long as you're a) using HTTPS and b) you reasonably trust the source, you were probably never going to read the script anyway.
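
And if you do care, the mitigation is one extra step (the URL here is just a placeholder):

    # Instead of piping straight to sh:
    #   curl -fsSL https://example.com/install.sh | sh
    # download, read, then run:
    curl -fsSL -o install.sh https://example.com/install.sh
    less install.sh     # actually read what it's about to do
    sh install.sh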


Well yes, the fact that they chose to require the webapp to have a superuser database account is not Docker-specific. But the manner (and, it should be said, the ease) with which Docker is used to neatly package an opaque ball of string creates a trend of ever more difficult-to-untangle software, making it way harder than it should be to properly deploy apps that conform to your environment's security and config-management standards.

I mean, when I first evaluated this app, it didn't even have a sane launcher script. Instead, ~15 lines of ruby config and a ruby script that I had to follow just to discover it was a weird, idiosyncratic way of executing "docker run".

Edit: I am not saying that Docker is somehow inherently flawed in this respect. It's solved a lot of problems for me and you can build great, well-architected stuff with it. But there is a trend to use it to wallpaper over poor development and deployment practices.


All that stuff could probably be done with apt, yum, dnf or any other package manager, though. The reason they're better is that you have some fairly skilled contributors acting as gatekeepers of the projects.

Now the nice thing about docker is you could have spent about 5 minutes to trivially docker run the app, decide if it's actually worth the effort, then untangle the mess beneath.
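
For what it's worth, that "5 minutes to evaluate, then untangle" workflow is just a handful of commands (image name assumed):

    # Try it first...
    docker run -d --name trial example/app
    # ...then start untangling what's actually inside:
    docker history example/app   # the commands that built each image layer
    docker inspect trial         # entrypoint, env vars, volumes, ports
    docker exec -it trial sh     # poke around the running container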

That's a net win in my eyes, though by the time I actually run something in production, I generally try to untangle the mess before I do. I also find it's easier to help developers deliver something legible, as sadly many of them don't even know which packages they installed to get something running.

Listen, I'm not saying it's perfect. I specifically really don't like that Docker is basically root without SELinux (which is often turned off). I'd be happy to see Docker replaced. I am, however, not really interested in returning to Puppet modules to handle dependencies when a container is, in my opinion, so much better.


Could you say more about this? I agree the app described is a mess. If a database is a shared resource, I think it's important for a single app not to have massive privileges in the database. But if the database is also containerized, then I'd rather the app just have its own isolated database. So it seems to me that problem is that the app has been only partly containerized, not fully. Would that approach solve your concerns just as well?


I cannot agree more, and this is coming from a young whipper-snapper that tries to stay on the leading edge of technology.

There's a vast tectonic shift occurring and it couldn't come sooner -- "tinkering" with an increasingly complex environment of microservices, message queues, RPC servers, databases, caches, and so forth is headache-inducing and everybody knows it, as it detracts from actual application development.

I like the nest of snakes analogy, it's quite accurate. The twelve-factor application architecture[1] is here and many are moving towards it in one way or another, be it simply starting out writing unit tests or refactoring large applications into something easier to grok.

http://12factor.net/


> As much as I have enjoyed 25 years of doing everything on an OS whose conceptual model is a 70s university department minicomputer, this era is coming to a close. We already know it's a poor match for existing hardware and use cases because we now mostly run it in virtual servers. But "virtual server" is the new "horseless carriage". It's a phrase that tells us we are living in a future we don't yet fully understand.

https://mirage.io/

You'll need to learn OCaml (though in my mind, that's actually a major plus).

Caveat emptor: AFAIK, MirageOS is still undergoing heavy development. That said, I believe it is essentially functional at this point (pun not intended).


OSv seems like the much more practical alternative. That being said, I suspect we will just be using more and more stripped-down Linuxes and containers for quite some time.


I honestly think the "gnu/linux operating system" model is in its final phase of life.

Kernel ABI stability has brought forth options like running Docker containers on FreeBSD and Solaris/Illumos/SmartOS. Standardised workflows around "containers" are breaking down the need to care what your container is on the inside. Imagine running statically compiled Go binaries, built against a CloudABI toolchain, deployed to CloudABI-native hosting with maximum flexibility.
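
The static-binary half of that vision already works today with plain Go tooling (this is ordinary Go, not CloudABI):

    # Build a fully static Linux binary: no libc, no distro userspace
    # needed on the host -- just a kernel (or kernel-compatible ABI).
    CGO_ENABLED=0 GOOS=linux go build -a -o app .
    file app    # should report "statically linked"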

I've got a notebook full of details about "Linux compatible" system designs. Throwing away the Linux kernel for fun and profit! It's closer than you think.


Interesting; I had not heard of OSv, thanks.

I am still hoping that something as radical as MirageOS will eventually take off, since we really seem to need it--at the very least, from a security perspective, if not a complexity one. Just by dumping C and Unix, and using a memory-safe language (such as OCaml) instead, we eliminate an entire class of exploits.

I will readily admit, though, that trying to force the entire world to use OCaml would surely be the fastest way to fail at this goal. Perhaps aggressive sandboxing is an adequate alternative; it would allow people to write code in environments sitting on stacks containing code from unsafe languages, without a single exploit leading to entire system compromise.


Yup, yet another band aid to cover for a wrong turn in the late 70's.


The problem with Docker is that it's Yet Another Tool you have to learn how to install, configure and use. Once you grok what it's doing, how to install it, how to run it on OS X, etc., it's powerful, but being forced to learn a new tool just to install another is a bit of a yak shave, and quite frustrating.


Yeah, from the point of view of a 70s university department minicomputer, Docker is another tool. A rapidly evolving one, one that has focused on developer adoption over other concerns. If you're using it to install just one thing on a server where you've installed many other things the old way, it's a pain.

When it gets interesting is when you invert the relationship, where you have multiple apps in Docker. Then the main server starts to look quite simple and bare, much easier to maintain. And then things like ECS start to make sense:

https://aws.amazon.com/ecs/

At that point, the OS becomes irrelevant except as a host for containers.


The OS should already be an irrelevant host for processes, if developers would stop with this dynamic linking and spreading files and configuration all over the OS, and instead use a simple contained directory holding all of an app's dependencies and configuration, everything except the kernel. Like we did in the good old DOS days.


I heartily agree with a successor to Docker being the way forward - Docker is large and embroiled in lots of "ooh cool look at this shiny thing".


What are the main gripes against Docker? Security is one that I hear a lot, but is a bunch of Docker containers somehow inherently more insecure than running the same apps on the same host?

Docker seems to be moving forward really fast and gets a lot of hype, which tends to make my old-school sysadmin self a bit suspicious, but I'd love to hear critique that is based on actual usage. (N.B. I'm just about to start making my first Docker deployment.)


I really love Docker, but I think there a few areas where I'm not a fan. For one, if I'm using Docker on a production machine, I make sure that machine is only for Docker containers. There have been a bunch of ways to escalate privileges to the broader machine, so it doesn't seem (to me) to be a great idea to run Docker alongside other more traditionally deployed apps on the same machine. Part of that is philosophical though and I think the use cases where Docker shines dictate that it owns the box.

Many people aren't ready to further abstract away from bare metal by adding a container layer for process management on top of the operating system which may be running inside a hypervisor. And there are situations where that's problematic. So it's not a great fit there either.


Your last two sentences there are superb.


Huh. Why the downvotes?


It was a really good comment, I assume there are just a lot of haters around these days.


No idea. After I wrote the comment it dropped pretty much instantly to -1, which is why I asked. Coming back this morning, the main comment is way up, but asking about the downvotes is at -4. Odd.


Wow. This is ridiculous.

"I've got a weird set up OS for a reason, but this software should work even though no one else would ever run this."

"Whatever- this database should work, even though I'm using a 4 year old version of the free software(that the docs specifically say needs to be a newer version)."

"Also, I'm extremely inexperienced installing Rails and Rails apps, and despite the fact that it's a language and server that we literally teach the newest of the new, it's just impossible to do anything with."

This isn't a story about the state of web app deployment, this is about the state of the server from hell. This sounds like the kind of thing you hear out of megacorps with 20 year old Mainframes.

Should open source be supporting such convoluted, needlessly minor cases of awful environments?


Yes, my OS is slightly "weird" for reasons I did not choose. Funny story: while I was trying to figure out what the hell RVM was doing, I mostly ran across people running OS X who had the same setup: 32-bit OS on a 64-bit chip. (And for what it's worth, Python native extensions build for the architecture of the Python executable — you know, the thing that has to load them — rather than the architecture of the machine.)

I never complained that it didn't work with an old database. Ubuntu's versioning was just a fun surprise. I don't see why it wouldn't work with Postgres 9.1; I just figured I'd do the upgrade while I already had my foot in the door.

Maybe if installing Rails apps is still a huge headache after the fourth or fifth time I've done it, something is wrong with Rails. Maybe.

I have a fairly mundane server running the latest Ubuntu LTS with all stock vendor packages. If your app is such brittle crap that this is the "server from hell", well, it's no wonder everyone is using Docker.


I'm sorry it was difficult to get Discourse up and running!

If it helps, I'd be glad to throw in a free Digital Ocean $20/month droplet -- I can set it up following our guide at http://blog.discourse.org/2014/04/install-discourse-in-under... and then hand it over to you for everything else. We also have a special Mandrill reseller code so we can give you a Mandrill account with 50k emails/month as part of the package. Just email us at team@discourse.org and I'll make it happen.

(Obviously I am a fan, because PHP fractals, man.)

Yes, this will require Docker, which means a 64-bit OS and a modern Linux kernel. We initially tried supporting arbitrary Discourse installs but it quickly became a support nightmare for our small 7 person team. We adopted Docker because we saw it as the only way forward to have sane support both internally (for hosting) and externally. We are all-in on Docker, for what it's worth.

I'd argue Rails has historically had very little incentive to support super easy server installs; how many large open source Rails projects can you even name? Certainly 37signals nee Basecamp isn't too concerned about how hard it is to install their webapps on a server...

Long term, the only real solution is VPS and Docker. I think that has a lot of other benefits for the whole hosting ecosystem, too -- it opens the door to not just Rails deployments but all kinds of alternatives.


Installation of discourse isn't that hard. But it's overshadowed by your behavior banning all TDWTF users from your support forum, hiding bug threads, etc.


Just because it doesn't work the way you want it to work doesn't mean it doesn't work. If it literally took 4 days to install the forum software in question, wouldn't it be fair to say that your situation is clearly an outlier? Is the 30-minute "simple" installation claim really just an ol' bait and switch? Is it really taking everyone 4 days?

I'm sorry about your frustration -- I've been there many times, but taking your hellish experience and parlaying that into "web deployments suck these days because my development environment didn't meet the system requirements and therefore their methods suck" is a bit nonsensical.

I would direct you to the system requirements for the installation in question:

Hardware Requirements

- Dual core CPU recommended
- 1 GB RAM minimum (with swap)
- 64 bit Linux compatible with Docker

Software Requirements

- Postgres 9.3+
- Redis 2.6+
- Ruby 2.0+ (we recommend 2.0.0-p353 or higher)

I would say that 32bit != 64 bit. Blaming the system seems like blaming a toaster that won't make tea. We could certainly hack the toaster into making tea, but your inability to quickly make tea with a toaster is not the fault of the toaster, toast or people that make toasters. Dismissing toast as a silly breakfast food and lamenting why more tea isn't made is pretty much the point of your rant.

You are certainly welcome to install the forum or do whatever it is you want; it is open source. However, it's rather uncouth to blame a process when the problem was your situation. Rather than ranting about how badly Docker sucks, perhaps you could contribute back to the forum software in question and create your own Awesome Deployment System. Nothing is stopping you.


You lost me right around the point where you compared x86 vs x86_64 to a toaster and a teapot.

Nothing about Ruby forum software necessitates a 64-bit kernel. (Which I have, incidentally.) According to Docker, nothing about Docker necessitates a 64-bit kernel either! It's an entirely arbitrary requirement, inherited by the forum software for support reasons rather than technical reasons, and for some reason you are blaming me for not meeting it.

I certainly believe that people manage the 30 minute installation. I would've been much happier if I managed it as well.

And Web deployments must suck, or we wouldn't need Docker. Right? I mean, the whole appeal is that we can take all this arduous ad-hoc crap and just shove it in a box. Unfortunately, it's still arduous ad-hoc crap; we just don't have to look at it as often now.


Docker largely doesn't support 32-bit, as Docker Hub doesn't yet, and even if it did there are no prebuilt images for 32-bit, so you would not be able to install any software. This is unlikely to change, as no one is going to build 32-bit images for all software given that 32-bit x86 is extremely rare (guessing under 1% of installs, perhaps less).


32-bit x86 setups are very rare now under Linux, and you are really on your own - people no longer test with them and some software no longer supports 32-bit (e.g. Docker; I gather it will go multi-architecture soon, but I doubt much stuff will be built or tested on 32-bit x86 anyway). As your hardware can run 64-bit, you have no reason to run 32-bit.


I hope you can imagine why I might not want to reinstall the OS and have to redeploy half a dozen other things, just so this particular one gets easier.


Because you are managing your servers as pets, not cattle? Because you did the installs manually and can't replicate them? That's what you need to fix; with automated deployments this is a non-issue.


So why wasn't that at least documented?


  Hardware Requirements
  [...]
   - 64 bit Linux compatible with Docker [1]

[1]: https://github.com/discourse/discourse/blob/master/docs/INST...


PEBKAC.


I agree... Docker's 64-bit preference is called out ALL OVER the place... it's true that some people are running on ARM or in 32-bit environments but that is definitely not a normal setup for 99% of Docker users.

RVM itself is kind of a hack. Of course, if you have a working Docker environment, no reason to worry about RVM, as each container can have a full Ruby stack with whatever gems you'd like.

In an alternate universe where the author had a 64-bit virtual machine (takes about 55 seconds to set one up on Digital Ocean, with Docker pre-installed) I can subtract at least 8 hours from this story, as a prebuilt Ruby image and a prebuilt PostgreSQL image are both available.
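
Something like this, using the official library images of the day (tags are from memory, so treat them as assumptions):

    # Prebuilt dependencies in one command each:
    docker run -d --name db -e POSTGRES_PASSWORD=secret postgres:9.3
    docker run -d --name cache redis:2.8
    # ...and a hypothetical app image linked against them:
    docker run -d --link db:postgres --link cache:redis example/forum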

Docker, like most kool-aid, is best if you buy in completely.

* Docker on 32-bit: weird, non-standard deployment
* RVM by itself: not the best to begin with
* RVM on 32-bit Linux: not tested

Although I agree with the lack of dependency management on modern stacks and a few other points from this essay -- it seems like the core idea which led to this whole situation:

"I have a non-standard environment and it was tough to roll out things to it"


I tuned out at

    I actually have a 64-bit kernel, but a 32-bit userspace
Eevee right there decided to swim upstream. If you play computers on hard mode, it's gonna be hard.


Why should that cause any issues? I run 32-bit applications (built against a 32-bit libc) on my 64-bit machine (with 64-bit kernel) all the time.


This has never caused any problems before. I'd actually completely forgotten about it until RVM or Ruby or RubyGems (I haven't decided yet who to blame) choked.

And it wasn't my choice — I started out 32-bit to save RAM on what used to be a fairly dinky plan. Some years later, Linode said I oughta switch to a 64-bit kernel for some obscure reason I can't remember.


It's a cloud server: just install a new one, copy your data over, and you're done. It is not a long-term investment.


I will not stop repeating this: the fact that we can have multiple "brands" of the same software, possibly forked from each other, is both the strength and weakness of open source and we just cannot cover our eyes and pretend only mod_php exists. Open source is a community first. Quoting the same eevee:

> If two features exist, someday, someone will find a reason to use them together.

I strongly believe that if you find a valid reason to require one specific brand of software, this has to be both clearly documented and explained. From my own experience, Ruby apps tend to be written with rather hard dependencies and are actually hell to deploy if any of the components you have preinstalled (be that the process supervisor, message/job queue, or web server) differs from the recommended ones. Last time I needed to deploy a Rails app, I went down the route of "bring up a new VM, secure it and deploy on that", and that was a breeze in comparison with attempts to integrate with already-installed software.


I won't disagree that it could be better, but doesn't this ring of pie-in-the-sky thinking? If you're the only person using a thing and you want it supported, you'll probably have to do it yourself.

Now, if there are simply two features, that if used together, fail for whatever reason? That's clearly a user pitfall, and should be fixed. But to endlessly try to support the Nth scenario is unsustainable.

Edit: Also, taking a look at what the author was installing (here: https://github.com/discourse/discourse/blob/master/docs/INST...), no wonder they want to use a system like Docker - it's an incredibly complex, full-featured setup. This isn't one-purpose software, this is a monolithic solution, and if you want to modify it, you've gotta dedicate real time and effort.

This means that if you don't want to use defaults, you'll be constantly getting further and further away from the master project, which makes updating and maintenance so much more difficult. In which case- aren't you already signing up for the long haul, here?


We have already learnt that tightly coupling components inside an application is a huge code-smell/design issue. Why is it suddenly different for multiple applications?


Rails may be a mess, but it's popular and finding a way to control the deployment is in ways a more sensible reaction than attempting to fix open source fragmentation. It's definitely a tragedy of the commons issue, but I can't blame a single team for wanting to make their product work for their users.


Using rvm makes running multiple Rails apps on one machine way easier. I only do this for development, though; for production, it's VMs or containers.


My experience (which could be outdated; the last time I tried RVM was at least a year ago) is that setting up a VM is much less painful than convincing different applications to load the same environment. Your mileage may vary.


In the old days things were supposed to just work. `./configure && make && make install` should always have worked around any local peculiarities.


Looking back, open source teams could support those varied environments partly because a) the tools were often small and composable, and b) huge numbers of people worked to get them nicely packaged in distro package managers. I'm sure this oversimplifies a complex issue, but with the huge proliferation of open source projects that are quite complex, and the fact that packaging for a bunch of distros is a huge chore, I can totally understand why projects have only one or two supported installation methods like Docker.

That said, it represents a significant departure from being able to

  apt-get install foobar
for almost anything you'd want. I can understand being nostalgic for that, but the ecosystem was generally smaller then (at least that's how it seems to me).


I've tried to install the exact software the author of this article was trying to install, and he/she/they didn't even encounter the insanity that turned me off.

There's giving up on traditional package management, and then there's throwing yourself off the cliff of decent release/config/dependency management that used to be a side-effect of traditional package management.

"Works for me on my computer" is now "Works for me with my Dockerfile [ and all the configuration assumptions I couldn't be bothered untangling from my app ]"


I think that's pretty fair. But I still think that using docker is probably better than having the user do that manually. In a more ideal world, the dependency, version and configuration management would be better, but using something to wrap around those defects and get something shipped isn't inherently bad.


> But I still think that using docker is probably better than having the user do that manually

When it comes to deploying, supporting and securing complex web application software, it is very rare that an out-of-the-box reference image from the project or vendor will have much relevance to your own environment's security and configuration standards.

Unless you're paying for support, and the vendor insists you'll have to suck it up and accept whatever madness their support agreement entails.

I'm all for convenient docker images, it just frustrates me to see the total and utter lack of imagination demonstrated by some projects that can't imagine their users ever locking down their database access, or running with SELinux, or separately patching the software/service components they depend on, etc.

And again, I don't expect the reference docker image for a given project to allow all of that flexibility and configurability, it's more the fact that these possibilities seem to be actively sabotaged with reckless abandon.


Speaking of the ideal world, one should be able to start any app, point it at any existing database and possibly a folder for storage, and be done with it :D


"Also, I'm extremely inexperienced installing Rails and Rails apps, and despite the fact that it's a language and server that we literally teach the newest of the new, it's just impossible to do anything with."

That's why nearly every newbie tutorial recommends Heroku.


The tone of this article is very upsetting. It begins with "I like to think I’m pretty alright at computers", but frankly I don't really agree that this is the case. Good at programming, maybe, but clearly not very good at ops.

Rather than accept that there are skills and experience they don't have, the author instead chooses to blame the entire world for their ignorance. The frustration on display is understandable and real and I totally get it, but the attitude is less forgivable.

Faced with a problem, instead of stopping for a hot second to read some documentation, the author instead concludes that Docker is garbage, does some insane shit and then claims that the software industry has failed them. What incredible arrogance.

This article says way more about the character of this person than it does about the state of web app deployment.


Sorry, I forgot to painstakingly document the innumerable hours I spent reading documentation (or trying to find critical documentation that didn't exist). I didn't think that would be a compelling read.


Indeed, not to make this a distro-vs-distro comment, but with a modern Arch install a few AUR scripts would eliminate (and automate) 50% of the problems here. The other 50% could easily be solved by investing time in becoming familiar with the technology he is working with, or by not choosing newish, unstable tech when his apparent preference is better suited to established, stable software that has been built (or evolved via usage) to run cleanly on a broad spectrum of systems.


But AUR scripts are limited to Arch and so pretty much a minority interest. Dockerfiles run pretty much anywhere.


Your reaction to "server software deployment could be better" is to scoff at the problems and ad-hom the author.

http://i.imgur.com/SMPYGEU.jpg


The author's reaction to "we've decided to try and simplify deployment of our hideously complex app" is to ignore the supported deployment, bury themselves neck deep in the guts of the complex deployment the devs worked to abstract away, and then rage about it. I'm not really seeing "server software deployment could be better" as a fair read of that.


If you're lucky enough to run in a production environment that has zero configuration policy or minimum security requirements, then yes, granting your webapp superuser privs to the database mightn't be a deal-breaker.

For everyone else, it's a giant shit sandwich, and a constant reminder that the project couldn't be arsed to untangle their own weird idiosyncrasies from their distributed app, that this crap wouldn't fly in a traditional distro package, and that it wouldn't have mattered with the more traditional tarball/REQUIREMENTS/INSTALL.txt that made way fewer assumptions about the end-user's environment.

I love docker, but it's letting people get away with murder.


His point is not "I don't know how to deploy a Rails app"; it's "there is no standard way for a server program to tell the OS what services it needs configured, or to query the OS for what services are available. Instead of fixing this, we wrap a pre-configured OS image and run it inside our OS. This is daft." As well as "the instructions are incorrect and incomplete, and that's normal."

e.g. Set aside the oddball tool breakage and consider that if you follow the instructions to the letter, this web forum requires: cloning (not installing!) the software’s source code and modifying it in-place. Copy-pasting hundreds of lines of configuration into nginx, as root, and hoping it doesn’t change when you upgrade. Installing non-default Postgres extensions, as root. Running someone else’s arbitrary database commands as a superuser.

and

You’d think that a web app could just have some metadata saying “I need Postgres and, optionally, Redis”, but this doesn’t exist. And the other side, where the system can enumerate the services it has available for a user, similarly doesn’t exist. So there’s no chance of discovery. If you’re missing a service the app needs but failed to document, or you set it up wrong, you’ll just find out on the first request.

and

"What we should have by now" section.

and

And I’m really not picking on Ruby, or Rails, or this particular app. I hate deploying my own web software, because there are so many parts all over the system that only barely know about each other, but if any of them fail then the whole shebang stops working.


Service discovery, orchestration, and on-the-fly config/environment changes are things that are solved by tools like consul and mesos (to scratch the surface). The ops world is so much more than just dockerfiles. There is absolutely a "chance of discovery". Using consul has helped us move to zero-configuration, discoverable microservices. Then for the more hobbyist stuff, there's docker-compose. Not looking into the options and ranting is not the same as there not being any options.
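
To make "chance of discovery" concrete, registering and then finding a service with consul is roughly this (the service name and port are made up):

    # Register a service with the local consul agent (HTTP API, port 8500):
    curl -X PUT -d '{"Name": "web", "Port": 8080}' \
        http://localhost:8500/v1/agent/service/register
    # Any node in the cluster can now discover it via consul's DNS interface:
    dig @127.0.0.1 -p 8600 web.service.consul SRV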


There is no ad hominem fallacy in that comment. You are probably misusing the term to refer to any negative comment about a person.

The ad hominem fallacy is a fallacy of irrelevance, of the form "the arguer's argument is invalid because the arguer has this negative quality," when the negative quality is irrelevant to the validity of the argument.


> See, I actually have a 64-bit kernel, but a 32-bit userspace. (There’s a perfectly good reason for this.)

Well, maybe there is, but a lot of this article seems like "the sad state of nobody has this web app set up to be installed on my frankenlinux."


This is a weak article. The author had never worked with Docker before, so they got tripped up by a few gotchas (but seriously, installing a third-party PPA isn't that hard or weird), and it doesn't take that much googling or trial and error to figure out how to do it right.

And the point of that is, once you know Docker, now installing just about any set of dependencies becomes a skill you know. I've never worked with Discourse. I've never worked with Rails. But because I've done Docker-based deployments, I bet I could indeed get it up and working in 30 minutes.


I've used docker before and my install was to a clean ubuntu 14 64bit - discourse installed + up and running in about 3.5 minutes. I have never worked with Rails or Ruby, or had to touch rvm for this case.

Incidentally, my hosting machine is basically clean - I don't use it as a personal or dev box, it does not have a ton of random userspace stuff in 32bit, it just hosts things.


Ok, I get it. I've felt like writing this article many times. I didn't, but I give the author props for letting off some steam in a constructive way. However... this statement drove me a little bonkers...

>> The 30-minute claim came because the software only officially supports being installed via Docker, the shiny new container gizmo that everyone loves because it’s shiny and new, which already set off some red flags. I managed to install an entire interoperating desktop environment without needing Docker for any of it, but a web forum is so complex that it needs its own quasivirtualized OS? Hmm.

I understand the reflexive dismissal of things that become popular topics. I'm guilty of it myself repeatedly, most recently with "microservices." But sometimes technologies become popular because they are good and useful.

I haven't run into Docker's 32-bit install problem. They should fix that. But to dismiss the whole technology as some sort of obviously useless quasi-virtualized OS mumbo jumbo is taking the rant too far. Dependency isolation is a good thing. Deploying server applications with one command is a good thing. Knowing that your runtime environment is always the same is a good thing. Having a simple source controlled script that completely describes that runtime environment is a good thing. Some people try to use containers for unreasonably complicated things, or to hide unreasonably complicated software, but that is not an indictment of the technology. It's popular for a reason.


As a happy Docker user with a lot of respect for what Docker has achieved, what concerns me is that Docker is clearly an enabler for people to totally abandon proper release and dependency management along with sane sysadmin friendly configuration.

I can't tell you the number of times I've hoped to use a public docker image or just the Dockerfile, only to spend hours futzing with it because I was unhappy with the grotesquely insecure configuration or because I needed to work around a bunch of assumptions that are invalid once I've tuned the Dockerfile for my environment.


Fix the insecurities upstream or don't use the software. Otherwise upgrades will be a nightmare too.


I agree, and I do (if I end up actually keeping a piece of software running).

It's also quite likely that a pre-built docker image isn't going to satisfy everyone's deployment requirements.

Starting out with a philosophy that you'll make all the configuration and security choices for every user of your project isn't a great start, and that's how I read all these hard-coded assumptions.


Also, the author explicitly makes the assumption that Docker is only appropriate once an application surpasses a certain level of complexity. I don't think that's a very good assumption. I think it's perfectly reasonable for any open source web app with any dependencies to support or even encourage development and installation via Docker. I don't see why the choice to use Docker has any relevance to the complexity of the application.


The operative word is needs, since Docker is the only supported way to install this particular software. Which is, I cannot stress enough, a forum.


Sorry, I just can't buy into the dripping sarcasm of emphasizing that it's only a forum. Yeah, it's a forum. Forums are big software applications. It's not unreasonable to distribute the forum backend with Docker.


It's only the most advanced, most thought-out web forum. It's leaps and bounds ahead of phpBB and Invision because the lead developer knows that details matter. That includes testing, automatic updates, spam protection, user experience, and all that stuff that old PHP boards suck at.


Agreed in general, but I'm not so sure Docker should be expected to support installing a complex container manager into a 32-bit userspace on a 64-bit OS. They've done a lot of good work (doing the things that you've called out above), and I personally think it's fine to say that the tool is mostly for the kind of commodity just-imaged OS that typically runs in the cloud. Putting Docker on a snowflake server doesn't seem like a great idea even if it works.


GitLab is the other large open source Rails project. We chose to package below the container level, with Omnibus packages. These run without all the Docker dependencies and install in 2 minutes. We did this because many of our users could not run a recent/custom kernel with Docker. We're very happy with the choice, and we are able to run the Omnibus packages on top of our images: https://gitlab.com/gitlab-org/gitlab-ce/tree/master/docker

That being said, it is still a bit silly how hard a Rails app is to install.


There actually is a point. I'm not a Rails developer, but as a Python developer I can definitely say deploying applications is far more difficult than it should be. To deploy a simple Flask app as FastCGI I had to download some nasty .fcgi file from MIT's 6.170 course website to detect code changes and reload the app. To deploy with nginx+uwsgi, I spent ages configuring it, with poor documentation.

Seriously, things like uwsgi should have an "auto-configure" feature where when an error is encountered with a dependency, it searches the hell out of your /usr filesystem and caches the resulting configuration. If a module is missing, automatically try to apt-get and pip install it. Nginx should portscan the hell out of localhost and detect uwsgi servers. Install uwsgi if a wsgi app is detected but uwsgi isn't installed. Write the .ini file automatically. This process is so automatable that there really should be a "magic deploy" feature where I can just drop a Flask app as /var/www/html/some_app/index.py and it should be instantly up and running at http://localhost/some_app/ with zero questions asked.
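
To be fair, the non-FastCGI path is shorter than it sounds; a minimal sketch, assuming app.py defines a Flask object named app:

    # Serve a Flask app with uwsgi's built-in HTTP server:
    pip install flask uwsgi
    uwsgi --http :8080 --wsgi-file app.py --callable app \
        --master --processes 2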


Nice article; I can empathize, having recently inherited a PHP project which included a puphpet (Vagrant/Puppet) VM. It hadn't been touched for some time, so the configuration was a few months old.

After installing the required dependencies (downgrade VirtualBox etc.) all that was left was: vagrant up. My lord, what had I gotten myself into. Cue problems with the configuration, the Ruby version during provisioning, paths; puphpet had gone through multiple breaking changes in the meantime, and the documentation was unhelpful. The only inspiration was GitHub issues on tangentially related projects... all I know is it was 3am, having started 6 hours earlier.

Then, in a moment of madness, I deleted everything and created a completely new/fresh puphpet configuration using the puphpet site. Again during provisioning I was met with a problem: there were non-ASCII characters in the php-fpm upstart configuration file (the author's name!). Luckily, this was an issue already discovered a couple of hours earlier, so a small change to that and the app was up and running. It was 4am.

Needless to say, I was very frustrated with this state of affairs with these supposed aids to setting up the dev environment. Granted, my mistake was going in the wrong direction, trying to fix an outdated configuration and all the problems it generated rather than just generating the config again. But I had never used this tool before and hadn't looked at the PHP ecosystem for the past few years; it seemed crazy at first that a config which had worked in the past, presumably, would not work on my very mundane Ubuntu dev environment.


Oh gosh, puphpet and Vagrant. Had a similar architecture with a recently inherited SilverStripe project; it was well documented and all, but due to new versions everything was broken.

I ended up doing deployment with git pull.


I've been doing Rails now for a decade, and I love ruby, but Rails is not suitable for installable open source software. It is one of the worst languages for that use case. The whole mentality of the Rails community runs counter to the goals of providing easily deployable packages. There is certainly a lot of low-hanging fruit to work on these issues, but I don't see it becoming a priority anytime soon because if you value these things you're probably already using some other language than ruby.

The sweet spot for Rails is custom apps that are continuously maintained over a long period, or discardable prototypes. It is not a good choice for deploy-and-forget, or for organizations without any in-house programmers.


I was going to come comment that there must be a good way to solve this, since the Ruby community has such a rich ecosystem of tools.

Heroku, for example, has a nice one-click deploy button for Rails (and many more languages/frameworks). It works straight from the source code, such as with this open-source rails app, and it's really quite impressive:

https://github.com/heroku/starboard#deploy-the-app

The author also calls out error reporting as being terrible. And there are also great tools for managing that, such as newrelic and airbrake.

So surely this author was just unused to the tool ecosystem, I thought. What a perfect opportunity for a constructive yet snarky comment! But lo and behold, discourse has deprecated all non-docker installs:

https://github.com/discourse/discourse/blob/master/docs/inst...

I was, and am, completely baffled at that decision. And I learned a valuable lesson about trying to out-snark snarky blog posts.


Supporting multiple deployment options introduces a lot of overhead, and Docker (in particular) seems really good at reducing deployment overhead for both developers and admins. I'm not sure why it's a bad thing when it means that the team can focus resources on a single canonical deployment method that just works. Docker containers can also be run on a variety of public cloud services, so while Heroku may not be available, pointing your Dockerfile at EC2 or GCE shouldn't be too hard?


> The author also calls out error reporting as being terrible.

Actually, he never got far enough to see the error reporting tool, Logster.


If you use 32bit Linux in 2015 you deserve the pain you just endured.

The problem is not the state of deployment (which is still admittedly quite bad), it's the state of your system.

Install a fully 64bit version of 14.04 and watch all your problems just disappear.


Two things come to mind

1. Tinfoil hat time! "Pisshorse" makes money off of hosting their own software. So they are financially incentivized to make it as difficult as possible to install yourself, while at the same time they get to go "woooo, open source" as much as they like.

I'm convinced Oracle did the same sort of thing by making their DB product impossible for normal people to understand, in order to charge outrageously expensive consulting fees to companies. Or so I hear; I've never used it so I could be wrong.

2. Your users aren't always who you think they are. I learned this one switching from back-end/n-tier work over to front-end CMS based web dev. Yes, the users are the people browsing the website, but the users are also the people trying to use the damn CMS, so the website UX and design extends to those people as well.

Meaning, instead of forcing your hapless marketers to use some crazy admin panel with thousands of options and checkboxes, or even try and edit XML configuration (I've actually seen this), any time one extends the functionality of a CMS, creating some custom front-end UI to control it is a basic necessity.

In the same vein, any sort of software (and hardware! printers, tools, cars, whatever) has to consider the installation and maintenance of itself as a UX/design concern and the fact that it has multiple domains of users.


Well, it is possible to install Discourse in 30 minutes:

http://blog.discourse.org/2014/04/install-discourse-in-under...

This does require Docker, though, which means 64-bit and a modern kernel.

It's absolutely true that Rails is a very tough stack to install (and let me put my tinfoil hat on: do you think 37signals nee basecamp was ever incentivized to make it easy for people to install Rails apps?), but it needs to be competitive with PHP in the long term if we want things to change.

Docker was the only sane way we saw to move forward and support the community with our small (currently 7 person) team. We're all-in on Docker, all our internal hosting uses Docker, and it's the only officially supported install method for Discourse.

So never let it be said that we don't eat our own dogfood. Because we do, and it's goddamn delicious.


Regarding tinfoil hats, I don't even think it requires thoughtful greed on the part of Discourse and Oracle. Making something really simple to use takes a great deal of work. It's not that they have chosen to make a simple product complicated; they only have to fail to make a complex product simple.


I observe that they'll happily charge you $99 to install it on your machine for you.


Nope, that's on a 64-bit Digital Ocean machine that we provision for you. The only secret sauce in the $99 install is the Mandrill reseller code.


Really, 90% of this article is an argument for docker. His docker installation annoyance was nothing compared to the pain ensuing from his decision to chuck docker out of the window.


This. He says "We need to find a solution to install those apps with a dozen dependencies, config files, crontabs and daemons". Well, although it may not be mature security-wise, this solution is still Docker.


I had the same trouble setting up Discourse, so I set up a template on Nitrous using Docker that you can definitely use to get Discourse up and running in 30 seconds. (I just confirmed: I went from no environment to running Discourse in less than 30 seconds.) Just `cd code/discourse && ./start-app`.

This, IMO, is where Docker shines. It shouldn't matter if it's set up with a microservice 12-factor architecture or if everything is set up in a monolithic VM-like container. I don't have the patience for ops -- I just want something that works. That's the point of having isolated, replicable containers.

In any case, I encourage you to try out the discourse container on Nitrous. I was actually surprised it happens to be the least popular container for us. I assumed because it's such a pain in the ass to get started, that it would be more popular =p


So I actually remember running into the "Unable to locate package docker-engine" issue a while ago, and it seemed like an issue on Docker's end because for me, the issue actually only lasted about half a day before it started working again without any changes on my part.

So I think in the end this was just a case of really unfortunate timing, because if it weren't for the Docker installation issue, the only real complaint left in this post would have been the fact that the official method of installing Docker was to curl a script and pipe into sh[1]. And the rest of the post would have been singing praises of how amazing Docker is to be able to take setting up a Rails app along with all its dependencies and turn it into such a simple, painless process.

[1] Which is a perfectly valid criticism, by the way. They should really document the much saner method of installing through their official repos:

https://blog.docker.com/2015/07/new-apt-and-yum-repos/
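
From memory, the repo-based install at the time boiled down to something like this on Ubuntu (key ID and repo line as I recall them, so verify against the post):

    # Add Docker's apt key and repo, then install from it:
    sudo apt-key adv --keyserver hkp://pgp.mit.edu:80 \
        --recv-keys 58118E89F3A912897C070ADBF76221572C52609D
    echo "deb https://apt.dockerproject.org/repo ubuntu-trusty main" | \
        sudo tee /etc/apt/sources.list.d/docker.list
    sudo apt-get update && sudo apt-get install docker-engine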


Yeah, the really weird thing about this article is that it's all snarky about Docker, and then it goes off into weird manual-install land (which makes about as much sense as saying "I couldn't get the .msi installer to run, so I started copying files around and registering DLLs by hand" -- maybe you'll get that to work eventually, but it's never going to be pleasant), and then makes a call for something that... solves the problems Docker solves.

The whole article is basically a "there has to be an easier way!" infomercial for Docker, only it doesn't realize it.


It's funny because the project actually provides a Vagrantfile for creating a regular VM image.


Vagrant's only intended as a development solution, though, not for production deployment.


I have to say that Docker used to be very annoying to work with on Mac OS X but with the latest release of Docker Toolbox it has a much better "works out of the box" experience.

The article is pretty ranty but I can agree with the author that many things nowadays are way more complex than they have to be. As a plug, I'm going to say that this is one of the reasons why we started the CloudMonkey.io project. It lets you deploy a docker container with no fuss to production. It's up to you, however, to ensure that you don't over complicate your system unnecessarily.

I've made the mistake in the past where, very early on in the project, I started using a web server, Redis, ElasticSearch, MySQL, memcached, RabbitMQ, etc.

In most cases, more than three moving pieces only bring you headaches. Now I always try very hard to keep things simple to at most a web server and a database, plus maybe a memcached caching layer. If you need to have a queue or full text search functionality, I'd try to bring it in as an outside service.


What's most worrying with all that Docker bullshit is updating. We're going to end up with physical hosts with dozens of fire-and-forget VMs on them and each one filled with security holes.


I agree with most of this rant, but not with this all-too-common myth: "Only one thing can bind to port 80 and it has to run as root". I generally use the following command:

    setcap 'cap_net_bind_service=+ep' /usr/bin/nodejs

Just learned the trick to become root when you belong to the docker group. Awesome.
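
For anyone who hasn't seen it, the classic demonstration is a one-liner (any stock image will do):

    # Membership in the docker group is effectively root on the host:
    # mount the host's root filesystem into a container and chroot into it.
    docker run --rm -it -v /:/host ubuntu chroot /host /bin/bash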


>"Let’s just say it rhymes with “piss horse”

I think he's talking about Discourse [1]. I tried Discourse a few years back when it was released, but it was too bare-bones at that time. Haven't tried it recently.

[1]: https://github.com/discourse/discourse


I really don't understand why a forum needs 2GB of RAM.


It doesn't; 1GB is the minimum. We have to fit Redis, Postgres, Ruby, Sidekiq, and 2 Unicorns in there. There are a bunch of fairly active instances on 1GB RAM, which works fine. If your community grows a lot you'll need more resources, of course.


It also warns you if you try to use gmail to send mail, because gmail throttles you to only sending 2000 emails per 24 hours.


Some other sad things, from the top of my head:

- Besides a web app, we must also create mobile apps, and they usually don't share much code. It would be much better if apps could just run HTML and Javascript instead, and still be integrated like native apps are now.

- Every web app that is deployed now is not guaranteed to work X days into the future. In fact, it often breaks. We need better guarantees for this.

- Browsers are incompatible in subtle and not-so-subtle ways.

- It is way too complicated to write a web-browser. The number of abstraction-levels combined is too large. Security problems are therefore unavoidable.

- On the front-end, React is a suboptimal solution because it requires O(N) time and energy for simple O(1) updates. We need smarter incremental computation techniques.

- The primitives for the web (HTML+Javascript) were created for novices, instead of for developers. However, even novices can't read the W3C specs, so it doesn't really help anybody.


I deploy the basics of the app as a debian package. Debian is very good at dropping the files and folders where they belong.

There is also a very usable "postinst" installer shell script that can do additional fixups. It can handle things like "if we are on openvz do this, otherwise do that" or "if this is debian 7 (wheezy) fix this; if it is debian 8 (jessie) fix that".
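
A minimal sketch of what I mean, with the package name, paths, and checks invented for illustration:

    #!/bin/sh
    # postinst -- dpkg runs this after unpacking the package's files
    set -e
    if [ "$1" = "configure" ]; then
        # Fixup example: pick a config variant per Debian release
        . /etc/os-release
        case "$VERSION_ID" in
            7) cp /usr/share/myapp/wheezy.conf /etc/myapp/app.conf ;;
            8) cp /usr/share/myapp/jessie.conf /etc/myapp/app.conf ;;
        esac
        # Restart the service if it is already installed and running
        service myapp restart 2>/dev/null || true
    fi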

If that is still not enough -- which it sometimes isn't for really complex platforms -- I write an additional shell script that will do the remainder of the work.

But then again, that probably means the image has too many moving parts anyway. That means it is time to split the image and stuff a network protocol between the parts. You can use virtual machines for that. The front-end goes left, the back-end right, and other things go top and bottom. Now we are back to simpler images to install and to test.


Where did you learn how to build .deb files? Every time I try it ends up being horribly complex and I either go back to a shell script or something like OpenSuse Build Service. Docker is great - when it works, but there's something very wrong with deploying a whole new OS just because we can't work out packaging and deployment properly.


Jumping in here, but I always suggest just packaging something pretty vanilla outside of a web app, due to all the aforementioned specifics and impedance mismatches of deploying web apps.

Learning some of the innards of the package management system of your machines is surely a good idea in itself, if only because you are relying on them so much. Even if you just say "screw it" and use shell scripts to deploy anyway.


My experience installing Discourse was a breeze (AWS instance, using external Postgres and Redis servers). This included setting up a custom auth endpoint in my app to allow federated login. I'm amazed that so much effort was made to simplify this process given that their business model revolves around hosting.


Yeah, everyone's up in arms at Maven's complexity, but then you start using things built by others and suddenly it's 'if only there were a better way'.

There is, but the influx of hacker types into what's an engineering discipline just reset what the field collectively knew to do (when was autoconf introduced again?).


I agree. I've made an effort over the past little while to learn Makefiles (after tinkering with waf, ninja, etc) and they can be really good if used properly.

Hell, even Docker can work well if used properly. The secret here is to keep your development machine clean and standardised - this is what Vagrant is for, right?


This article badly wants a thing that already exists -- sandstorm.io.

Done.


Wow, this is a prime example of a web site not explaining what the hell the software really does.

"Sandstorm is an open source platform for personal servers."

What is it?


I'm a web developer, but I can't really use my skills to provide an open source web app the way I'd like to. I'd like to build a small server-side budgeting app that people can use from their computers or phones to record expenses, but there's no way I can ask people to find a web host that lets them run rails, or set up a heroku account or whatever.

So my only alternative would be to run the service myself, but then I'm storing other people's data, I have to worry about scaling if lots of people use it, and user accounts, and all this stuff.

The idea of sandstorm is folks run this platform on their personal servers, and then it lets you browse an app store like interface and one-click install these server side apps. So I'd bundle up my budgeting rails app as a sandstorm package, and if someone wants to track their expenses from a variety of devices, they install the app. Now they're running it so the data is theirs, there's no scaling issues, and user authentication is provided by sandstorm.

Sandstorm is still in its infancy so there's not a lot of apps available and the development APIs are being worked on, but I hope it's the future. It would lead to a more decentralized web with better privacy and users owning their own data.

I'm hopeful, if not optimistic, for a future where every family is expected to have their own little server running somewhere. And they access that server through the sandstorm web interface, and can easily add little apps to it. My budgeting app, a webmail app, some future federated profile app to replace facebook, etc.


You know, a picture might really explain it better. If it's something that sits on top of Linux and manages software installation, it's easy to illustrate.


The sentence you quote doesn't appear on the page. I think it might have said that at one time, but that was long ago, so I wonder if you're somehow seeing some ancient cache or something? Or are you looking at some other page?

With that said, it's true we've had a tough time coming up with a two-line summary that fully explains what Sandstorm is. If you could suggest what would have made more sense to you, that would be helpful. Otherwise our strategy has been to try to push people towards trying the demo, which I think illustrates what it is much more quickly than words can.


I'm looking at the first page of the documentation, trying to learn what it is that the thing does. It's the first sentence:

https://docs.sandstorm.io/en/latest/


Last time I complained about this, a few people said exactly the same thing, but naming a different product.


I'm going to guess Heroku. They pioneered PaaSes, which are intended to solve this problem. Spoiler: they solve this problem.

There's also my fave, Cloud Foundry -- try Pivotal Web Services or BlueMix.

These PaaSes have a buildpack model and basically, it Just Works.

Or OpenShift, though maybe its shift to a pure docker-y model will put you back where you started.

Disclaimer: I work for Pivotal Labs, part of the company which donates the most engineering to Cloud Foundry.


A dev learns that deploying apps is hard, wonders why no one solved this already, suggests writing more code will fix the problem.


Actually I suggest writing less code, assuming that Docker is a whole lot of code and "what if we write down dependencies" is more interface than code.


Your hypothetical vision of the future sounded like you were suggesting writing a tool/system to manage the complexity of dependencies/deployment. I'd recommend changing the thing-to-be-deployed itself, so that it's simpler and has fewer dependencies (if we're talking hypothetical code changes).

Aside: complex software is hard to deploy because it's complex. Software gets complex because devs are very comfortable managing large amounts of complexity. Docker lets you hide that complexity so someone other than the dev can do the build/test/deploy steps, but it does nothing to address the high level of complexity in the underlying software. Let's not mince words here: Discourse is hard to deploy because Rails wasn't designed to make its projects easy to deploy by someone other than the devs primarily working on them, and no amount of tooling around it will change that.


What I think is kinda odd about this rant, and about half of the comments here, is the implicit assumption that web forum software must be simple and dumb. "How could it possibly need X, it's just a forum dammit!" Why? Says who? Do we have some "shit's easy" syndrome going on here?

Meanwhile, the users of the forum that I've seen, mostly at TDWTF, seem mostly to complain about how short on features and functionality it is/was compared to other BBS solutions.

So which is it? Is it too complex, or not complex enough? It seems to be doing all right at the current level of complexity. If you're suggesting a different one, what makes you think that would be better? Better for who, exactly?

Also note that Discourse is trying to take business from a pile of well-established competitors. To do that, you generally have to be not just as good, but much better, and with flashier features too, for both the users and the mods and admins.


People at TDWTF don't complain about Discourse lacking features, they complain about it being overwhelmingly full of bugs and brain-dead behaviors.

Particularly the mobile experience, which is supposed to be one of those things Discourse was designed to do well.


Did you know that in ASP.NET one can deploy apps with just xcopy? I mean, this is what the whole rant is all about, right?
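
Something like this, assuming the site is already compiled (paths invented):

  xcopy /E /I /Y C:\build\MyApp \\webserver\c$\inetpub\wwwroot\MyApp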


But you can't mandate a particular version of .NET, or configure SQL Server with SQL Reporting, or configure add-on services, or request or enumerate firewall rules, with just xcopy.

Which is what the whole rant is about.


The rant is about basic web apps, like a web forum in his case. Your example about the .NET version is a no-brainer in shared hosting, and SQL Reporting is not for basic apps.


I can deploy JBoss applications by `mv`-ing .ear files around and I can deploy PHP applications by FTPing some files into a directory somewhere, but it's not the full story.

You can't deploy an ASP.NET app that has a database and other external dependencies onto a clean install by just `xcopy`ing some files around.


I feel like the real problem here is with the vendor. Had they built, tested, and packaged their product for Ubuntu 14 (which is a very common distribution), the user would have had a good experience.

Demanding that your users manually install prerequisite software -- especially dependencies that are already available from their OS distribution, and when it's not clear that it's necessary -- is not good practice. And if newer versions of such software are needed, that's what "vendoring" is for: you package all the dependencies along with the software. Sure, it means you might have a duplicate copy of Postgres or whatever, but at least it works out of the box.
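
In the Rails case specifically, vendoring the gem dependencies is essentially one Bundler command - a sketch:

  bundle package            # copy every gem the app needs into vendor/cache
  bundle install --local    # on the target box: install without touching the network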


Maven is fantastic at handling this complexity. It's a pain to configure initially, but once you have it set up you can pretty much forget about it. Similar to the author, my experiences with Gem and Bundler have been rather painful due to a combination of things like Cygwin/Windows issues, incompatibilities between Ruby versions, and 32/64-bit problems. Maven isn't quite perfect: I'd really like the ability for dependencies to have their sub-dependencies sandboxed in their own namespaces, and I've run into some weird quirks with load order and the API-only JavaEE packages. But once you get a POM written, it's totally reliable.
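
For instance, once the POM exists, the day-to-day surface is tiny:

  mvn dependency:tree   # inspect the resolved graph, sub-dependencies included
  mvn package           # repeatable build, the same on any machine with a JVM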

I bitch just as much as the next guy about the mistakes that have long since been baked into the Java environment, but that dedication to backwards compatibility does pay off. Similarly, the fact that it's running on a JVM rather than bare metal has its performance costs, but it does allow you to trust that the actual computation will Just Work wherever you run it (although of course the stuff that touches the surrounding environment will not).

In the comments the author states that he wants something "less invasive" than Docker. I'm not sure what that actually means - for me, having to actually install something and deal with its versioning on my real system is more invasive. Containerized applications that build their own runtime environments are a step in the right direction there, just like the JVM. Docker has some really dumb defaults right now: running as root has been known to be a terrible idea since forever, and you should not encourage users to do it. Running as root is also problematic for lots of other reasons, and creates the potential for jailbreak in other kinds of jails too (e.g. chroot). FreeBSD jails seem to have some good rules on what you're allowed to do inside a jail.

If you really want "minimal invasiveness" and your software really, really requires superuser privileges, then you should probably be running a full-fat VM with an OS inside, with the network sandboxing happening at the host. Even then, even Xen has had the occasional jailbreak. The real problem is that the program itself should never require running as root in the first place - but that's a problem with Discourse, not a problem with jails.
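
To be fair, you can at least drop root inside the container if you ask for it - a minimal sketch (image name invented):

  # run the container process as an unprivileged uid:gid instead of the default root
  docker run -d --user 1000:1000 myapp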


Here's how I deploy:

    cf push
It requires zero configuration on my part.

Actually, I'm lying. Here's how I deploy:

    git push
CI then verifies my work before running this arcane command:

    cf push
It took literally minutes to set up. It'd be nice if it were psychic, but I guess I'll just have to settle for an imperfect world.


Or you could have just installed a Bitnami image until you're willing to jump through all the hoops necessary to get multiple moving pieces to work together happily.

Ops is hard dude. Sorry about that.

https://wiki.bitnami.com/Applications/BitNami_Discourse


I find it interesting that no one has mentioned http://nixos.org even here on HN...

Isn't it supposed to address most of the grievances the author was complaining about (and which I have also experienced - so much so that I gave up on Ruby years ago and did some nodejs, but that's not really different either)?


We are trying to solve this problem with Flockport [1], so users can at least see and try apps without needing to install and configure tons of stuff. We use LXC containers, which behave like lightweight VMs and provide an OS environment similar to a VM or a bare-metal server that users are more familiar with. This also allows us to package the stack and app in a single container.

With Docker, apart from the app, you have to deal with the additional complexity of a single-app environment that most users are not familiar with: you need to launch all your apps in "non-daemon mode" (how many users know how to do this, and why you would need it?), figure out how to deal with storage persistence, networking and "linking containers", and handle things like logging, cron and ssh (there is no place for these in single-app containers). Installing something as simple as MySQL with Docker is non-trivial because of storage. So this adds a whole new layer of complexity on top of the app, which makes it completely unsuitable for packaging apps if simplicity and accessibility are the objective. You need to be a real expert to deal with this, much more so than with LXC containers or VMs.
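
To illustrate the storage point: merely keeping MySQL's data across container restarts means knowing to bind-mount the data directory out of the container - a sketch, with an arbitrary host path:

  docker run -d --name mysql \
    -e MYSQL_ROOT_PASSWORD=secret \
    -v /srv/mysql:/var/lib/mysql \
    mysql:5.6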

We have a guide on building a simple Wordpress stack from scratch with Docker [2] that gives readers an understanding of the level of complexity involved with Docker and what it seems to be designed for. It's not for end users, and certainly not for making things simple. It would take a tenth of the effort to install Wordpress in a VM or LXC container. But it's pointless to compare VMs or LXC containers to Docker, as Docker is intended for a completely different use case, and that's what its users and advocates should highlight and push.

We have packaged over 60 apps in LXC containers at Flockport, and Ruby apps are by far the most complex to set up, usually requiring a complete build environment. PHP is the simplest both to install and to troubleshoot, and usually just works. Ruby apps like Redmine and Gitlab are more complex, and Discourse is the toughest to get going. You need to be an expert to install it and an even bigger expert to run it - so much so that unless you are a Rails developer, I think the only reliable way to use Discourse is to have the Discourse team manage it.

[1] https://www.flockport.com

[2] https://www.flockport.com/a-beginners-guide-to-docker


Corrected title: the sad state of one developer's experience with one web app deployment.


I think that the vast majority of web apps could be deployed via a shell script that calls out to the commands of the particular system. I'm not too optimistic about Docker, since it is vastly more complex than the alternative (VM + shell).
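
A sketch of what I mean, with every app-specific name invented:

  #!/bin/sh
  set -e
  apt-get install -y nginx postgresql        # system deps from the distro
  adduser --system myapp 2>/dev/null || true # unprivileged service user
  git clone https://example.com/myapp.git /opt/myapp
  cd /opt/myapp && ./build.sh                # whatever the app's build step is
  cp deploy/myapp.init /etc/init.d/myapp && service myapp start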


If a web app is huge, it should have a plugin system, so you only need a basic configuration for basic features (without complicated monitoring or full-text search).

Then you install more and configure more if you want something more advanced.


A lot of people are criticizing the author, but I know much less about installing things on Linux and quite quickly managed to install another, more conventional forum without any of the problems he mentioned. I just copied the files into a folder, created a database, ran an install script (not as root) and everything worked. Still not ideal, but much easier.

Why should installing software require some kind of expert? Why didn't the developers solve that problem themselves so every user can just use it easily? This isn't limited to web apps; it's most open source software. The authors don't put effort into making it easy, even though they're the ones best positioned both to understand it and to provide the greatest benefit, by saving everybody else from repeating the same struggle.


From the developers' INSTALL file:

"Hosting Rails applications is complicated. Even if you already have Postgres, Redis and Ruby installed on your server, you still need to worry about running and monitoring your Sidekiq and Rails processes. Additionally, our Docker install comes bundled with a web-based GUI that makes upgrading to new versions of Discourse as easy as clicking a button."

The basic install guide is a bit lengthy, but amounts to running these commands:

  ssh root@machine.address.of.choice
  wget -qO- https://get.docker.com/ | sh
  mkdir /var/discourse
  git clone https://github.com/discourse/discourse_docker.git /var/discourse
  cd /var/discourse
  cp samples/standalone.yml containers/app.yml
  nano containers/app.yml
  ./launcher bootstrap app
  ./launcher start app
There's definitely some extra configuration that has to take place between these steps, but it seems to me that they've gone through a lot of trouble to make it easy for their users. I'm not sure where you're getting the idea that it requires an expert to install Discourse -- it only requires one when you have a very unconventional build toolchain on your Linux box and refuse to use Docker (the officially supported deployment method).


OK, I understand now.


Ok, I'll bite. This is a good reason to use PHP. In fact, I first learned PHP because I had a specific web application I wanted to write that needed to be dead simple for relatively tech-phobic users to deploy, and PHP won.

For small things it works well and I've never had a single installation as painful as any of the stuff described in this article. For big things, well there's Facebook. It may be ugly, but it's definitely doable.
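
For the small stuff, the entire "deployment" is usually just copying files and loading a schema - a sketch with made-up paths:

  scp -r myapp/ user@host:/var/www/html/myapp
  mysql -u myapp -p myapp_db < schema.sql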

It's not the new shiny, it's full of warts, and it's unfashionable. (If you tell a fellow programmer you code in PHP you immediately need to rebuild credibility somehow.) I personally prefer working with JavaScript and nodejs but I don't think the deployment situation for nodejs is much better/different from what's described here.


Thinking these high-level problems can be solved by switching to a different language is quite a common fallacy in the tech world. Almost always it comes down to being familiar with the platform/OS/languages you're working with, rather than a few attributes of a particular technology choice. The question of technology choice in that context is then primarily about using something with reasonable popularity and an adequate community supporting it - but usually not much more than that.


> I don't think the deployment situation for nodejs is much better/different from what's described here.

Yeah, it's really not. npm is designed by monkeys. It tries to be too general and do too much, so they thought a completely open format would be kosher. Turns out, it just allows everyone outside npm to define their own pet solutions for interoperability (a word that zero developers under the age of 30 have heard today). There is no standard way to do the simplest of things. Say, bundle an ES6 file or a CoffeeScript file or Sass, or whatever. "Preprocessor? What's that?" says npm. Oh lordy.

What's worse are the standards for documentation today. I'm finding the answer to about 99% of my problems in a fucking github issue thread. Our industry is completely doomed if this continues. Software we depend on is abandoned within months.


Nix and NixOS as a whole solve this entire problem in a really nice way. It's still not ready for prime time, in my personal experience, but the ideas and foundation are there... it just needs more polishing.


Another title: Coding Sucks: Why a Job in Programming Is Absolute Hell


Side note: LD_LIBRARY_PATH is bad; do not use it. Among other things, it will crash your 64-bit app if a 32-bit lib matches by name (or vice versa). Bad, bad, bad. Use the linker and ldconfig instead.
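
That is, something along these lines (paths invented):

  # register a private library directory system-wide instead of using LD_LIBRARY_PATH
  echo /opt/myapp/lib > /etc/ld.so.conf.d/myapp.conf
  ldconfig
  # or bake the search path in at link time
  gcc -o myapp main.o -L/opt/myapp/lib -lfoo -Wl,-rpath,/opt/myapp/lib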


I think we've all felt like this before.

Is this one of the reasons people use Java? Is it way wrong to say Docker is like a JVM for non-Java?


Haha, Docker. A few months ago that word started being peppered throughout conversations in the office, everybody keen to jump on the bandwagon. Guess what: now we've hired a "Docker expert".

If this thing is so simple a toothless toddler could use it, how come we now need experts in it, huh?


What rhymes with 'piss horse'? I'm really curious now.


Discourse?


This is a story of somebody not really knowing what he's doing.


I quite like NodeBB...


I think Discourse is an over-engineered catastrophe, a prime example of highly skilled technical people unable to make business decisions. As the author says, this is web-forum software; all it is doing is displaying some text on a page and maybe sending an email. This was a solved problem 15 years ago with PHP. But they wanted to re-implement everything in a technically purist manner, and they end up with the absurd situation where you need to install a virtual container to perform one of the most basic computing functions there is: displaying text on a page. And that's just the server side; the client rendering is so over-engineered that it only works on the very latest browsers. TO DISPLAY TEXT! Completely bonkers. They needed a "user-experience evangelist" or whatever you call them on that team.


> As the author says, this is web-forum software; all it is doing is displaying some text on a page and maybe sending an email. This was a solved problem 15 years ago with PHP

This reminds me of what Jeff Atwood had to say about people oversimplifying the nature of Stack Overflow (and, by extension, claiming it to be overvalued).

He eventually responded that the little details mattered - from auto-searching for existing similar questions to little prompts on framing the question.

Similarly, Discourse is being built with principles in mind that go beyond a threaded discussion. It's attempting to answer the question of "can we use software to encourage a civilized discourse?". It's easy to ignore this and look at it as bloated. But by that reasoning, Stack Overflow is just a bloated Yahoo Answers, and we all know it isn't. Essentially, it's more than just "web-forum software". That distinction really matters.

Relevant links:

1: http://blog.codinghorror.com/this-is-all-your-app-is-a-colle...

2: http://blog.codinghorror.com/code-its-trivial/

3: https://news.ycombinator.com/item?id=678501


> "can we use software to encourage a civilized discourse?"

After carpet bombing this thread with my own criticism, this was a good reminder of why that was probably stupid. Thanks :).


The old adage about "never start over from scratch" doesn't really apply when there's no feasible way to refactor your way from A to B.

I'm not gonna rehash the full deal; everyone knows the "PHP is a fractal of bad design" article. Whether you think that's an exaggeration of the problem's impact or not, it's hard not to agree that PHP has a lot of features that are counterintuitive, encourage bad coding practices, and make modularity/extensibility of the codebase a real challenge. PHP is fine if you do it right, but it's really a challenge to ensure that a bunch of volunteers are doing it right 100% of the time. It's not a coincidence that the people using mega-scale PHP codebases like Facebook aren't actually running PHP - they've gone through and cleaned up some of the awfulness and run a language based on PHP. Those are the design decisions that the actual PHP standards group isn't competent to make, because anyone who's sane has run for the hills.

Discourse is indeed extremely overengineered, but I don't think the urge to start over on a modern language that has better design features is wrong per se. If you had a web forum written on ANSI C or COBOL, wouldn't you say that at some point the costs of an inappropriate platform or a limited pool of engineering talent outweigh the cost of just rewriting it? You don't have Facebook-level resources to refactor the whole platform, you don't have billions of dollars of sales riding on your software. It's a comment forum. To throw your comment back - if it's just a simple app to display text on a page, why not fix the technical debt and put yourself on the right long-term track?


Funnily enough, this is the same guy who wrote "PHP: a fractal of bad design".


Anything can be flippantly simplified as "displaying text" on a page. Google also does the same thing. After all, you're just making some "simple" database queries, right?


The author seems to mistakenly believe that Linux is a multi-user system where not everyone has root privileges.

The reality is that almost nobody uses it that way; it's not tested for that, and due to the huge API surface area the chance of some kind of exploit is so large that it could not be secured even if people tried.

The only reason "root" exists currently is to make it less likely to accidentally change or destroy the operating system.

There is no way to provide a multi-user experience on a physically accessible system (cannot defend against physical attacks), and the way to provide it on a remotely accessible system is to assign a virtual machine to each user.



