The Joy of Getting Hacked (waxy.org)
86 points by kawera on Dec 12, 2015 | 30 comments



Perhaps there are good reasons why this hasn't happened yet, but I'm surprised that the bulk of WordPress-based websites don't split the editing of content from the display of content.

What do I mean by this? Imagine you have two servers. Server 1 has WordPress installed, and is where you manage your website content. Server 2 does not have WordPress installed, but instead displays a static copy of the WordPress content, which is updated by a script on Server 1 whenever new content is published.

What are the advantages? First of all, Server 1 can be very heavily locked down. You can keep it accessible only over a VPN or SSH tunnel, keep it out of search engine listings, and you don't even need to register a domain name for it. Secondly, due to the static nature of the content served on Server 2, site performance and scalability are going to be excellent, and it'd be easy to manage a cluster of "Server 2's" if a single server isn't fast enough.

What are the disadvantages? You have to rely on a third-party solution like Disqus if you want to have comments on the content you create. Same for shopping cart functionality. Also, you may not be able to have a wiki. I can't think of any other disadvantages.
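As a rough sketch of the publish step on Server 1, assuming the script simply crawls the private WordPress instance into plain HTML and pushes it to Server 2 (hostnames and paths below are placeholders, not anything WordPress ships with):

    #!/bin/sh
    # Hypothetical publish script run on Server 1 whenever content changes.
    # wordpress.internal and server2.example.com are placeholder hostnames.
    wget --mirror --convert-links --page-requisites --no-parent \
         --directory-prefix=/tmp/static-copy http://wordpress.internal/
    rsync -az --delete /tmp/static-copy/wordpress.internal/ \
         deploy@server2.example.com:/var/www/html/

Hooking something like that into a publish hook or a cron job is about the only integration needed.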


I've done that and it works well, with the limitations you noted - it is static, so there can be no dynamic content (e.g. no comments - a Good Thing^TM - random users' comments are nothing but trouble). Blogs are not a problem, but you have to (manually) refresh the static site after each blog entry. For a business site (what I set up) with essentially static content, that is fine.

I no longer have access to the site, so I'm writing this from memory... I used a standard plugin that generated a static site out of the dynamic content. The plugin was for supporting static site serving on the local server, not a separate server. Being the more paranoid type, I used a WordPress "authoring" server that was not publicly accessible and a separate publicly accessible "static" site server. I either had the plugin suboptimally/mis-configured or it had limitations, so I had to run a sed script on the output to change the web server host name from the internal authoring host to the static publicly accessible host name (the link paths were fixed up properly). I then rsync'ed the result to the public host (with --delete to clean up obsolete files) using ssh keys for authentication, not passwords. I put this in a shell script, which made it simple and convenient for a sysadmin (me) to use but it was not user friendly for the WordPress (not really computer literate) users. Not a problem, perhaps even a Good Thing^TM.
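Reconstructed from memory, the publish script was roughly the following (hostnames and paths are placeholders; the sed rewrite and the rsync --delete over ssh keys are the important parts):

    #!/bin/sh
    # Publish the static export to the public host. All names are placeholders.
    SRC=/var/www/static-export         # output directory of the static-site plugin
    AUTHOR_HOST=wp-author.internal     # private authoring hostname
    PUBLIC_HOST=www.example.com        # public static hostname

    # Rewrite absolute URLs that still point at the authoring host.
    find "$SRC" -name '*.html' -exec sed -i "s/$AUTHOR_HOST/$PUBLIC_HOST/g" {} +

    # Push to the public server, deleting files removed on the authoring side.
    # Authentication is via an ssh key, not a password.
    rsync -az --delete -e ssh "$SRC"/ deploy@"$PUBLIC_HOST":/var/www/html/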


That's the idea behind Buster for the Ghost blog framework (run Ghost locally or on a highly restricted web server then generate a static site from it you upload somewhere else) but it doesn't look maintained unfortunately: https://github.com/axitkhurana/buster

I'd be very interested to know about more solutions like the above as it doesn't seem well explored.

I've had some success running WordPress on Heroku (uploads go to AWS S3 and the database is hosted on AWS RDS) which I much prefer to the way typical VPSs are administered. The Heroku filesystem doesn't allow the WordPress PHP files to be altered, you can't SSH into the live server, and you can recreate and roll back the whole server state in seconds if there are any problems.
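Most of that setup is environment config rather than files on disk; a minimal sketch of how it's wired (the exact variable names depend on the buildpack and S3 plugin you use, so treat them as assumptions):

    # Keep all state off the Heroku filesystem: external DB and S3 uploads.
    heroku config:set DATABASE_URL="mysql://user:pass@your-rds-host:3306/wordpress"
    heroku config:set AWS_ACCESS_KEY_ID=your-key AWS_SECRET_ACCESS_KEY=your-secret S3_UPLOADS_BUCKET=your-bucket

    # Roll the whole app back to a previous release if a deploy goes wrong.
    heroku releases
    heroku rollback v42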

Either way, the more static the better in terms of minimising the attack surface and admin time in my opinion.


This is how Octopress/Jekyll/Ghost/other platforms work.

You have a backend which, when you create a post, just regenerates static HTML files, and then you just serve the static HTML files.
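For example, with Jekyll that regenerate-and-serve step is just:

    bundle exec jekyll build   # regenerates the static HTML into ./_site
    # ./_site can then be copied to any dumb static host (S3, nginx, GitHub Pages, etc.)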

There are obvious drawbacks like you mention.

Now that WordPress is moving to Node, there is probably more opportunity for these types of security optimizations.


Don't hold your breath while waiting for WordPress to be based on Node. That PHP code base will be around for most of our lifetime. :)


You can get the performance benefits with cache servers, and such a workflow isn't really integrated into WordPress (e.g. can you easily generate a list of all pages affected by a change? Or do you want to crawl the entire site for each update?). Projects that are built with this in mind are probably a better choice.


Yes, you can get the performance benefits with caching solutions like Varnish, but what about the security benefits I outlined above, how do you get those?


You don't. Still, I'm not aware of any tooling around WordPress that makes this a viable strategy for the average WordPress customer, which I guess is the answer why nobody does it.


There is tooling: take a look at roots.cx. It has a workflow for using WordPress as a backend to manage a static site (generated by roots): https://github.com/carrot/roots-wordpress

In addition, it's not that hard to come up with a home-grown workflow using WordPress's REST API (now a plugin but soon to become part of the core application). Middleman has native support for generating static pages from a JSON API (dynamic "proxy pages") and it's not hard to make a Jekyll plugin to do the same.
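The home-grown version only needs to pull JSON and hand it to the generator; a sketch, using the endpoint path from the REST API v2 plugin and a placeholder hostname:

    # Fetch the latest posts from the private WordPress install as JSON;
    # Middleman proxy pages or a Jekyll plugin can then turn each object into a page.
    curl -s "http://wp-author.internal/wp-json/wp/v2/posts?per_page=100" -o data/posts.json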


Here's one easy (and cheap) way to do it.

Server 1 is based on the free tier on OpenShift.

Server 2 is based on the free tier on Google App Engine (or Amazon EC2).

On server 1 you install Apache, WordPress, Varnish and a file sync solution like syncthing.

On server 2 you set up the DNS settings for your site plus the sync client.

On server 1, you have WordPress use the Varnish plugin to automatically populate the Varnish cache with all pages on your site. Syncthing (or something similar, if syncthing doesn't work with GAE or EC2) looks at the Varnish cache folders and pushes all changes to Server 2.

That's about all you need to do. It basically gives you a fast and hack-resistant WordPress website, and you only pay for the site domain name.
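If syncthing doesn't pan out on the second host, the push in that last step can be as dumb as a cron job on Server 1 (the path and hostname below are placeholders for wherever the exported pages actually live):

    # /etc/cron.d/push-static -- push the exported pages every five minutes
    */5 * * * * www-data rsync -az --delete /var/cache/site-export/ deploy@server2.example.com:/var/www/html/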


I suspect there will be quite a lot more stories like this now that businesses are seeing "devops" as meaning a developer that must do operations/sysadmin activities.

Not everyone can be versed in everything. And not every company can afford to learn from this mistake themselves; they must learn from others.


"devops" really terrifies me. I think the idea is that you have less friction if you have an operations person that also understands development, or vice versa. But! You're still asking one person to do two jobs. And they really are two very distinct jobs. Are they being paid double? To me, it just seems like cheaping out on having a proper operations person, which really is a full time job. It's understandable if you're a cash strapped startup.. but lets not pretend it isn't what it is.


Admittedly, I develop desktop software, so my experience with 'devops' has been in managing build infrastructure, but I found that the tools and overall philosophy of the movement have empowered my team to control our own infrastructure and actually get things done.

Security was also improved when I started documenting how our VMs were created (and automated it via packer). In the process, I switched them from unlicensed, never-been-updated RHEL boxes to CentOS.


It's unfortunate but predictable that a movement that started off as an attempt to find a way out of the swamp has been coopted into the service of the standard ideology of extracting as much work from labor as possible.

Fixing the issues with both development and operations that result in bad, insecure software and poor user experience all the way around is a worthwhile goal.

Putting the team focus on delivering complete software systems that support traceability, manageability and testability is the whole point of "devops" if the term means anything at all.

It's not about running Ruby scripts as root, or containerising every last script in your environment. It's about building better systems in a manner that's more humane to all concerned.


Except now recruiters and hiring managers do not see it this way; they see it as developers who can do operations tasks. Which is what I was saying, and what the parent was mentioning.

DevOps by itself is a noble cause, putting developers with operations in order to smooth a pipeline, but invariably it just means one person with both skills/disciplines.

And like they say, a jack of all trades is a master of none.

(but I suppose better than a master of one?)


It's like every other broad movement in IT.

It starts with very smart people taking a look at the processes and outcomes they are using and deciding they are broken.

They come up with some solutions and some tools to support their new processes and start sharing them with their peers.

The new techniques have some notable successes and several members of the original group find themselves drawn into teaching/evangelizing the new methods. To make communication easier catchphrases and buzzwords intended to be a shorthand for a suite of methods become popular.

Buzzword compliance becomes a checkbox feature for groups, companies and individuals each of whom have varying levels of skill and understanding and the buzzwords become diluted and more closely associated with specific tooling.

At this point the original group is crowded out by people who are serial evangelists, and enterprise sales become more important than sharing knowledge with peers. Job descriptions start to lose contact with reality.

The movement becomes mainstream as a grotesque caricature of itself driven mostly by the greed-fueled hype train.

At which point someone looks around and declares that the processes are broken... and the whole cycle repeats.

See: object-orientation, agile, scrum, devops, etc.


This is one of 20 reasons I just left my job. There were about 4 major points of failure that will eventually hit the IT dept. The owner doesn't give a shit (just work more hours). So I told them what they were lacking and the resources they needed, and they don't care.


I believe the contrary. I work in higher education, where people are valued as investments. I have been instructed extensively in security, allowing me to create more secure applications for college/devops purposes. If I lacked the knowledge and they had simply brought on a security-focused developer, it would have cost more and could have resulted in a poorer suite of apps and standards, as I could overlook security flaws even a novice would spot.


A bit surprised by the move to Digital Ocean as a magic silver bullet that solves all the problems.

No, you just moved the problem "away" to "oh, it's a virtual instance, so if anything goes wrong I can restore from backup".

I don't see how this protects against being hacked?

If you run a server, not maintaining it is what makes it hackable.

So yeah, reading the Digital Ocean tutorials can be a good start, like reading the Ubuntu server guide https://help.ubuntu.com/lts/serverguide/ , but it will never replace the time you invest in your server, e.g. doing sysadmin work.

It does not have to be hard; it just has to be done, and on a regular basis.
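On Ubuntu, most of that regular basis can be automated with the stock tools; a minimal sketch:

    # Apply pending updates now...
    sudo apt-get update && sudo apt-get upgrade
    # ...and have security updates install themselves from then on.
    sudo apt-get install unattended-upgrades
    sudo dpkg-reconfigure -plow unattended-upgrades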

It's like a car, motorcycle, bicycle, etc.: you need to spend the time to change the oil, check the tire pressure, and all those little things that are simple but necessary ... otherwise it gets rotten with time.


> It's like a car, motorcycle, bicycle, etc.: you need to spend the time to change the oil, check the tire pressure, and all those little things that are simple but necessary ... otherwise it gets rotten with time.

Agreed. Or else pay significantly more for a managed WordPress server, but even those aren't immune to security issues, as we saw recently with WP Engine.

The most secure option by far if you don't serve dynamic content (that would require a login, for example) is to use a static site generator and serve it via GitHub, S3, or Netlify. Or even your own Nginx (only slightly less secure, as long as you understand Nginx and SSH and how to mitigate any potential security issues).
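The deploy step for that route is a one-liner; for example with the AWS CLI (the bucket name and output directory are placeholders):

    # Mirror the generated site to S3, deleting anything that no longer exists locally.
    aws s3 sync ./public s3://your-static-site-bucket --delete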


For me the decision to move to Digital Ocean was not expected to solve any of the underlying security issues. I chose them because they encourage better security practices through documentation and make isolating instances cheaper and easier.

The previous host did not give very good advice and kept band-aiding the side effect of bad infrastructure decisions.


What I like about Digital Ocean is that I can separate out my "risky" servers (you know, when a client wants to use WordPress, or when they mandate an out-of-date version of a library because it works with some of their existing codebase) onto $5 instances. I never put full GitHub keys on them and, with regular backups, the damage is mitigated.
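A cheap way to avoid full GitHub keys on those boxes is a dedicated read-only deploy key per risky server (the filename below is arbitrary):

    # Generate a key used only by this one droplet...
    ssh-keygen -t ed25519 -f ~/.ssh/client-site-deploy -C "deploy key for client-site droplet"
    # ...then add the .pub file as a read-only deploy key on that single
    # repository in GitHub, rather than to your user account.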

When you go with a single server any one hack can take out your entire database, like what happened here.


Same for me. The overhead of managing several instances is largely compensated for by the peace of mind (thanks, Ansible!).


Also the Digital Ocean API is pretty awesome, although I wish they would return the public key when you make a new droplet. I've had to resort to stuffing one onto the server during a server creation script.
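Concretely, the workaround is to pass your own key in the create call; a sketch against API v2 (all values are illustrative):

    # Create a droplet with an SSH key already authorized for root.
    curl -X POST "https://api.digitalocean.com/v2/droplets" \
         -H "Authorization: Bearer $DO_TOKEN" \
         -H "Content-Type: application/json" \
         -d '{"name":"blog-1","region":"nyc3","size":"512mb",
              "image":"ubuntu-14-04-x64","ssh_keys":[123456]}'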


Libcloud is great and will solve your public key issues if I understood you correctly.

https://libcloud.apache.org/

https://github.com/apache/libcloud


>I don't see how this protects against being hacked?

It's easy. If you can easily restore from a backup, then you don't really have to care whether you get hacked or not -- especially if what you run is just a blog.


He also moved his site over to a non-EOL Ubuntu 14.04 instance, which is probably the more relevant aspect here.


I get less and less comfortable using my Fever RSS reader (www.feedafever.com), which was made before FireSheep and HTTPS were part of the public conversation.

I don't know of any services that come close, and although I don't have anything particularly incriminating in my feed, it's still a shitty feeling to know that you're a bored script kiddie away from all of that getting owned.

It also reminds me of the self-hosted fad a while back (around the time Docker came around) where people still didn't want to fork over the money to get SSL for all their personal health data and whatnot.


There have been free SSL options for a long time. Now there is Let's Encrypt, so there is really no excuse.
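Getting a certificate for a simple site is down to a couple of commands these days (the client has since been renamed certbot; the domain and webroot below are placeholders):

    # Prove control of the domain via the webroot and fetch a certificate.
    sudo certbot certonly --webroot -w /var/www/example -d example.com -d www.example.com
    # Renewal can run unattended from cron.
    sudo certbot renew --quiet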


The same thing happened to me last week:

- Giant shared server full of old PHP projects* gets owned.

- Logs show an automated tool was brute-forcing a vulnerable SQL injection vector for weeks before getting through.

- Backed up server for analysis and removed malicious content.

- Set up new ModSecurity rules and fail2ban, and fixed the vulnerable code (a minimal fail2ban sketch is below).

- Started moving stuff to Digital Ocean.

* Not my server or projects, but recently my responsibility.
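The fail2ban piece of that is only a few commands on a Debian/Ubuntu box (the ssh jail is named [ssh] on fail2ban 0.8.x and [sshd] on 0.9+, so check your version):

    sudo apt-get install fail2ban
    # Local overrides go in jail.local so package upgrades don't clobber them.
    sudo cp /etc/fail2ban/jail.conf /etc/fail2ban/jail.local
    # Enable the ssh jail in jail.local, then reload.
    sudo service fail2ban restart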



