DigitalOcean Sucks. Use DigitalOcean (raymii.org)
208 points by mdewinter on Nov 19, 2013 | 186 comments



We at https://commando.io use, love, and are sponsored by DigitalOcean. They've been awesome and amazing. The SliceHost of 2012/2013.

One feature we really want to see: when upgrading a droplet, the ability for the disk to increase in size as well. Right now, you have to image it, delete it, create a new droplet from the image, and pray that you keep the same IP address. Not a suitable solution for companies relying on uptime and stability.

Second, when creating a droplet, the ability to check a box: "ensure this droplet is provisioned on a different hypervisor than the rest of your droplets." Again, when building a highly available cluster, it does absolutely no good if they are all on the same physical machine.

Finally, the ability to attach multiple IP addresses to a single droplet is a must-have.

With that said, thanks DO, you guys rock!


Commando.io rocks, man! I actually use Commando for server management on DigitalOcean :)

The IP thing is definitely annoying, especially since you're never guaranteed an IP (say you want to switch VMs) even in the same region. That, and hard drive size, which has definitely made me pay more than I should (when I don't need that much CPU/RAM but need the disk space). Otherwise, a great experience with them.


I've still had absolutely no issues with my running droplets, though space is getting very tight in AMS1 at times when creating new ones. Their support is excellent, even if you have a very specific question about their network; there's none of the fluffing around I've seen with other providers. It's cost effective even for my non-existent budget. Put simply, I'm pleased as punch.

I'm not usually vocally supportive of companies, but they're doing quite a good job: this article is a little undeserved.


"with my running droplets"

Personally, I hate new names like this for old things. [1]

Stop making up names for things. My brain doesn't want to learn a new language for something that can easily be called by the legacy name. It's wasted time and slows things down. (I'm exaggerating a bit of course but why do I have to learn something new of no value to me? And I'm reading this and someone is commenting calling things droplets and now I have to do a search to find out what a droplet is...)

Reminds me of some of Spike Lee's first films where he was more concerned with coming up with some creative oddity that could be linked to him. (If you've ever taken a film class you probably remember the prof talking about something that Hitchcock was doing in the film that had never been done before.)

[1] From DO website: "DigitalOcean calls its virtual servers, droplets; each droplet that you spin up is a new virtual server for your personal use. "


I agree with this in many cases, but "Droplets" is really not difficult. Some providers invent whole new languages for perfectly mundane things. Digital Ocean just brands their virtual servers in a very straightforward way.


If they were simply differentiating between different levels of servers then I would agree. (Just like people call things "gold" "bronze" "silver" etc.)

But while it is an easy word (and it's "cute") it is different and strictly for branding over something that already is well understood and known. Why do it?


But the flip side is that if you call it according to a convention, then you have to live up to that convention otherwise people will be saying: "Just call it something else because X means Y and you are Z, stop confusing people".

Plus, it's just one word, not a whole new language, so not really that much of an issue, especially compared to some of the heavier offenders out there.


This is no different than Amazon EC2 naming servers "instances".


You are in a maze of twisty little rooms, all alike.

"Instance" has a very well defined meaning, which leads to some standard implications.

It means one particular <thing> out of several, which are all alike. It implies that your <thing>, and in fact all <thing>s, has (have) no physical reality. It implies that <thing>s can be created and destroyed at will.

It is also completely generic. You can have EC2 instances, ${APPLICATION} instances, object instances (in absolutely any object-oriented language, and several that aren't), database instances, etc.

A virtual private server is a cheap knock-off of a real physical server. A server instance is something you get from an API instead of having to hire a guy to plug things in.


In the context of servers, I have never seen other web hosting platforms use the term "instance" in the sense of an on-demand server instance vs. just "give me a server." While it seems commonplace now, I don't recall reading about other hosting platforms referring to their services as instances. Can you point to literature that does? For instance, from http://aws.amazon.com/ec2/instance-types/ : "You can think of instances as virtual servers that can run applications." That's the AWS context of instances.


Branding. If someone says to load up a droplet, I know they mean DO. If they say VPS, I ask, where?


Well, I think it would be more apropos to ask "Why not do it?" The objection to the general practice is that it's confusing. This instance is not confusing, so I can't think of a reason why they shouldn't if they feel like it.


> apropos

Many people seem to think that's a smarter-sounding alternative to "appropriate", but it's really a completely different word. Just use "appropriate".


Apropos is defined by Oxford as "very appropriate to a particular situation." That is the sense in which I meant it — in this situation, it makes a lot more sense to ask about reasons against rather than reasons for, since "I felt like it" is a perfectly reasonable explanation for the decision in the absence of reasons not to.

It is true that "appropriate" could have been used instead, but I chose a different word and I believe I used it correctly, thank you very much.


No, you used it incorrectly. It's like you're saying "appropriate to the asking situation to ask.." or something. It doesn't make sense. Check:

http://english.stackexchange.com/questions/21101/does-apropo...

for examples. Using "appropriate" would have been better. There's no need to thank me ;)


Please, people, stop trying to score points. That link isn't even relevant. That Stack Exchange question is about the prepositional usage of apropos — I was using it as an adjective.

Even if I had been wrong, this kind of smart aleck, nit-picky comment would not be welcome. But based on the fact that you apparently just linked the first result for "apropos" on english.stackexchange.com even though it's about a completely different part of speech, I don't think you actually know enough about this topic to comment on it. So you are both needlessly hounding people about irrelevant trivia and spreading bad information. Come on, dude.


I'm not people; I'm me. And, actually, I do know about it. Your use as an adjective was wrong as was explained in that link.

I have no interest in your "points". I don't care how you thought you were using it. I don't care that you think you were right. I really don't care about the whole thing at all. I simply told you where you were going wrong, so feel free to ignore it if you wish to remain wrong, but you were still wrong. Go learn something.


> And, actually, I do know about it. Your use as an adjective was wrong as was explained in that link.

That link does not address the word's use as an adjective at all except to note that it can be used as an adjective.

> I have no interest in your "points". I don't care how you thought you were using it. I don't care that you think you were right. I really don't care about the whole thing at all. I simply told you where you were going wrong, so feel free to ignore it if you wish to remain wrong, but you were still wrong. Go learn something.

I quoted OED to support my usage. You linked to a Stack Exchange question asking whether the correct prepositional form of apropos is "apropos" or "apropos of" — when I wasn't even using the prepositional form. If you were in my shoes, would you listen to you?


If I were in your shoes, I'd have checked the usage before using the word. I've no idea why you need my approval. Your usage was wrong. If you don't like that, go find another opinion.


> I've no idea why you need my approval.

I never sought your approval. You just came out of the blue and told me I was wrong.

> Your usage was wrong.

You are wrong. I don't know why this is so hard for you to accept. Unlike you, I have presented actual academically acceptable evidence for my claim — you might take that as a clue for who's right in this situation.

> If you don't like that, go find another opinion.

I did get another opinion — I looked the word up in the Oxford English Dictionary and it confirmed that I used the word correctly.


No, it didn't. It said you used it incorrectly. You just failed to understand the usage. Try searching for it.


Yeah, personally I like saying "droplets" more than "vee-pee-ess-es".


Yes but they're droplets in a Digital Ocean. In this specific case I think it works very well with their branding.

If they'd called it a Virtual Private Dedicated Serverlet or invented yet another useless acronym (YAUA, of course) I'd be agreeing, but "droplet" is simple and easy, and as a bonus helps avoid the immediate "This is too techy for me!" attitude you can get when trying to convince a shared hosting user that the reason their site keeps going offline is that they need to upgrade to something better.


I agree wholeheartedly. Especially in the domain of cloud providers, the terms are very unclear. I suppose that's because "cloud" is already such an extremely vague term.

I've written a short blog post about the overuse of buzzwords in software here: http://timbenke.de/?p=881


We're going to be making an announcement about AMS late this week / early next.


Excited, thanks!


IPv6 is something we get asked about a lot, and it's on the way. Unfortunately it's not something we can just "turn on" - we have to make sure it's built out properly and also fits in with some of our other roadmap items that our community will love in the coming year.

So yeah, we love the feedback, and we're actively working every day to bring any suck to zero.


IPv6 is a must for my use case. Currently, I have to use an IPv6 tunnel which is OK, but definitely less than ideal.

Can you disclose what kind of allocation each instance or each customer will get? Hopefully, you guys will be much less stingy with them than Linode, and give each customer a /48 to allocate freely between the individual nodes.

I am a very happy customer currently and am looking to use DO for my next project which involves heavy use of your API and lots of concurrent instances. As a customer I can testify that your product is a fantastic value for the price. Keep up the good work!


Totally get the IPv6 tunnel being a pain.

I think our allowances will have to be sorted out a little down the road, but a /48 is easy to acquire (unlike IPv4, where we have to wait until we're at 80% capacity before we're allocated more IPs). IPv6 gets a little sticky when you start talking about the internet of things; there are a lot of new concepts introduced that we have to take into account when we're managing what is closing in on a million of the world's public hosts. I don't see ANY reason why we won't allow at least one /48 per droplet. Part of our slowness on things like CDN, load balancing, failover, etc. is because we want the system to be really well built out to accommodate both IPv6 and the future of internet protocol addressing. While I can't speak for other providers, for us it's something we take seriously, but we need to test all our systems together to make sure cloud 2.0, if you will, works really well.

As I'm sure you know, we've scaled massively in the last 8 months, and unfortunately the "let's just pop in a feature" approach becomes a lot more complex as the complexity and stability needs of our customers increase.

What I can say with 100% confidence is that there is a group of really amazing engineers working on this stuff and it's all in a roadmap, we just have to make sure everything is simple, stable and safe before we roll out.

I hope you can appreciate our measured approach in this matter. :)


That's great to hear. I actually envision a situation where the allocation of the /48 would be per-customer or per-customer-per-datacenter. That way I could do things like firewall off hosts easily. For example, I could tell my DB hosts to only allow incoming connections from my /48 and nothing else. Having no common prefix for all my droplets would make this harder, since I would have to list every allowed host individually.
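To make that concrete, a rough sketch of the kind of rule I have in mind (the prefix 2001:db8:1234::/48 and the port are placeholders, not a real allocation):

  # Accept PostgreSQL traffic only from droplets inside the customer's /48,
  # and drop everything else hitting that port.
  ip6tables -A INPUT -p tcp --dport 5432 -s 2001:db8:1234::/48 -j ACCEPT
  ip6tables -A INPUT -p tcp --dport 5432 -j DROP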

I definitely appreciate that DO is doing things the right way and that this isn't going to be a rollout of "well, this host on this router supports IPv6, but that load balancer upstream does not".


This sounds like the way to do it - being able to get a /48 for every data centre and then being able to use addresses and subnets out of that, or a separate /64 for each droplet.

I wonder if we could eventually get virtual routers between subnets without having to use droplets (like the virtual networking that VMWare/OpenStack are doing)


You mentioned Linode's IPv6 allocation, but they gave me a /48 routed to my small $20/mo server by filing a ticket, no questions asked.


I did not realize this was a possibility. In either case, I am no longer a Linode user mostly because of the relative cost difference with DO.


So when will the bullshit about custom kernels be cut out?


I have a stupid question: Why do people need IPv6? What do they use it for?


To build the next generation of the Internet of course! :)

On a practical level, IPv6 is easier to administer since in a lot of ways it's more straightforward. For home use it's great since every device gets its own globally unique address (I can `ssh home.example.com` from any IPv6 enabled host).

Firewall rules become simpler if you have a common prefix vs lots of individual IPv4 addresses.

There are already lots of users who have IPv6 enabled: http://www.google.com/ipv6/statistics.html: TL;DR: over 2% of Google's traffic is over IPv6. Running dual stack servers lets you reach more customers.

I guess to an extent asking what people use IPv6 for is similar to asking what people use IPv4 for. You don't normally use IP. You use applications that run on top of TCP/IP or UDP/IP. Having a public IP address makes writing and managing those applications easier.

Edit: Also IPv6 addresses are cheap. An additional IPv4 address will run you about $1/month with a lot of providers. A billion IPv6 addresses is one billionth of the minimum allocation you will get for free. So in the long run, the cheapest DO droplet might cost as little as $4 instead of $5 :).


Even as an avid IPv6 fan (at home, tunneled, and awaiting my new modem to take part in Comcast Business trials), I have to question this:

"Running dual stack servers lets you reach more customers."

Is there a single ISP in the world that is IPv6 -ONLY- to the end device?


There aren't any real ISPs with no IPv4, but there are a few that support IPv6-only "on the wire", where IPv4 is either translated to IPv6 using NAT64 (e.g. T-Mobile US) or tunneled over IPv6 using Dual Stack Lite (e.g. Unitymedia in Germany).

The main advantage to running v6 on your servers (aside from being a good citizen) is that it allows you to distinguish individual customers for abuse detection, who'd otherwise be NAT'd to the same IPv4 address by their ISP. And ISP NATs are going to become more prevalent, because there just aren't enough IPv4 addresses to go around.

What IPv6 provides is an escape hatch; a path that avoids the NATs and tunnels.


You got me. Currently, I am not aware of any. However, from what I understand, IPv6-first will become a thing pretty soon, especially in the regions where RIRs are getting close to exhausting IPv4 addresses.


Since you seem to know what's going on, my question is: how can I seamlessly use IPv6? And how can I do that with Windows and Linux?


What a good looking question!

The idea is that you want a dual stack environment where IPv4 and IPv6 coexist. Mac, Linux, and Windows all support this and do the right thing: if IPv6 is available on the client and server, they try that first; if not, they use IPv4.

Since you are asking, I am assuming your ISP does not provide native IPv6. If you have a public IPv4 address that does not change more frequently than every several hours, the best solution is to set up a 6in4 tunnel (ignore any docs that refer to 6to4, which is a different beast). Sign up for an account at TunnelBroker.net and create a regular /64 tunnel. What this does is have your router encapsulate your IPv6 packets inside IPv4 packets and send them to a Tunnel Broker router. That router will then strip the IPv4 header and send the packet off as a pure IPv6 packet. In most cases latency is measured in milliseconds, and you might actually see lower latency since IPv6 servers and routes are currently less utilized. YouTube is certainly faster.

After you have your account and tunnel set up, it is best to set up your tunnel endpoint on a home router that supports TunnelBroker.net. Anything that runs OpenWRT does this, as does some commercial router firmware. OpenWRT is great for many reasons, so you should use it anyway; this is just a bonus.

After your router is able to reach IPv6 hosts (google.com), you would set up and enable a service called radvd (router advertisement daemon) on your router. This will automatically give all the hosts on your LAN instant IPv6 connectivity.

Lastly, you would set up an HTTP ping to Tunnel Broker to let it know if your IPv4 address changes. Once again, OpenWRT does this out of the box.
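For anyone not running OpenWRT, here is a minimal sketch of bringing the same kind of 6in4 tunnel up by hand on a Linux box (all addresses below are illustrative placeholders; the tunnel broker's configuration page shows the real values for your tunnel):

  # 203.0.113.1 = broker's IPv4 endpoint, 198.51.100.2 = your public IPv4,
  # 2001:db8:aaaa::2/64 = your side of the tunnel's point-to-point /64.
  ip tunnel add he-ipv6 mode sit remote 203.0.113.1 local 198.51.100.2 ttl 255
  ip link set he-ipv6 up
  ip addr add 2001:db8:aaaa::2/64 dev he-ipv6
  ip route add ::/0 dev he-ipv6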

I am writing this from my phone, so I cannot provide much specific advice, but email me at igor <at< igorpartola.com and I will answer any specific questions you may have.


Wow, this is really useful information, thank you!

Since I wrote my original comment, I went out and started doing more research into this. At this point, I have my main home server[0] now accessible via IPv6 using freenet6. However, I did apply for SixXS, so I can hopefully put my entire house on it (my router's an Asus WL-520GU, which currently has DD-WRT on it, but is compatible with OpenWRT). Though I'll be looking more into the Hurricane Electric service.

I'm really liking this, since now I can access my devices directly, without having to try to negotiate firewalls and weird subnets!

Thanks very much for the advice!


No problem! Very happy to convince someone new to try IPv6. Please feel free to get in touch if you run into trouble.

Re: DD-WRT, last I checked it did not come with an IPv6 firewall, ip6tables. I ended up recompiling it using what seemed to be Voodoo magic (match the arch, release, kernel version, etc.) OpenWRT comes with the firewall on by default and a web GUI to control it.

As for the different tunnel brokers (of which TunnelBroker.net is one), I should not assume you are in the US. Freenet6 is great. So is SixXS, though watch out to not lose credits and have your tunnel cut off. TunnelBroker.net is run by Hurricane Electric, has been rock solid for me and their support (for this free product) is fantastic.

Also, check out dns.he.net to set up reverse DNS, or just use them as your DNS provider. This service is also very nice and free.


Next-generation P2P apps are one great addition. Though we've yet to see it in the wild, the theory is that you don't need to port forward any more (just make allowances in the firewall). It will come with its own set of security issues. Someone here probably knows a lot more about it than I do.


I came across an article [1] that mentions load balancing as an upcoming feature. Is there a chance you could elaborate on the feature spec or provide a rough ETA?

[1] http://www.enterprisenetworkingplanet.com/datacenter/digital...


Hey Stanley, see above. :)


Treat DigitalOcean as any other provider - something you can't trust, so always have backups of your own data in a place you trust (not a DigitalOcean snapshot) so you can restore when needed.

Personally I use Ansible http://www.ansibleworks.com/ and rdiff-backup http://rdiff-backup.nongnu.org/, along with Vagrant http://www.vagrantup.com/ for testing. So the day something happens with my droplet on DigitalOcean - I'll just run Ansible on a fresh server and restore the remaining data with rdiff-backup.
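As a rough sketch of what that recovery looks like in practice (the hostnames, paths, and playbook name are placeholders, not my actual setup):

  # Re-provision a fresh droplet from the playbook, then push the latest
  # rdiff-backup increment back onto it.
  ansible-playbook -i 'fresh-droplet.example.com,' site.yml
  rdiff-backup -r now /srv/backups/droplet root@fresh-droplet.example.com::/var/www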

Yes, the lack of IPv6, being unable to use a virtual machine's bootloader, the lack of a decent rescue image, and no private networking apart from one location all suck. However, for the price, it's a good deal.


Please accept our invitation to use rsync.net as the "backups of your own data in a place you trust".

The "HN Discount" is still granted to new customers that know to ask about it - and we have always supported rdiff-backup, which you mentioned.

We support IPv6 in our Denver and Zurich locations...


Definitely a happy customer! +1 to rsync.net. I've only ever had 2 issues there, both of which were resolved quickly and painlessly

The first was when my bank account suddenly got drained by a subscription plan to rsync.net. This later turned out to be an issue with the PayPal subscription; rsync.net refunded everything within an hour or so of me reporting the issue despite being in quite a different timezone. I was surprised someone was awake!

The second was that I was on somewhat of a very old contract (Going by emails I've been using them on this account since Sep 2010 and via nearlyfreespeech since March 2009, wow time flies!) and my storage space vs price was a bit off compared to their most recent pricing/HN offering. Quick ticket and within a couple of days everything was updated and I was on a much better pricing plan

More important than all of that, I've never lost data there. The only times I've been lacking a backup were when the backup was never pushed out to their servers in the first place, due to my offsite script failing to run.

As you can probably tell I'm a little bit of an rsync.net fanboy.. ahem :--)


Yep - I'm also using Ansible and Vagrant, in conjunction with machines on my home network, reverse ssh tunnels, and API manageable DNS. I'm working towards having services able to be dynamically/automatically moved between various VPS providers (Digital Ocean/AWS/Hetzner/NineFold) without me worrying (or even being aware) which VPS provider is currently being used.


How would you do it in such a way that you are not even aware?


The plan is a mail server where the hardware/storage is in my home, opening reverse ssh tunnels for ports 25 and 465 to inexpensive VPSes regularly created and destroyed via APIs, with DNS MX records updated automatically. That way my world-visible MX endpoint will regularly change IP address, and move between US-, European-, and Australian-based datacenters. The VPSes are configured to not store or log anything, and to always attempt to initiate an SSL/TLS connection (with STARTTLS SMTP messages).

I'll know which VPS providers I have accounts with (and anybody curious could also find out by watching my zonefile updates), but at any time I won't care where the remote end of the ssh tunnels is or where the MX records are currently pointing.
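A minimal sketch of one of those tunnels (the relay hostname is a placeholder; binding a public privileged port also needs GatewayPorts enabled in the relay's sshd_config and a root or suitably privileged login):

  # Expose the home mail server's port 25 on whichever VPS currently holds the MX.
  ssh -N -R 0.0.0.0:25:localhost:25 root@relay.example.com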


OK, so you are using these systems as data relays, not to store data. This makes sense. I think it would be much harder to do this if you wanted to switch between VPSs that were storing your data.


Sure.

I've done some thinking - but not (yet) experimenting - with EncFS combined with S3FS to store encrypted mountable data on Amazon S3 (I'm currently using EncFS to store data on Dropbox & GDrive and with BTSync). No good if you need fast local access to the data (you wouldn't want to run your database this way), but it would solve _some_ of those problems. For me right now - the answer is to store my own data at home, and relay access to that data when needed.


Would it be possible for you to describe your rdiff process a bit more? Do you dump the database periodically and back up that data, or do you run rdiff on the binary data directory of your DB? Also, where is the rdiff backup data stored? I am guessing you would need another server for this and it cannot be just Amazon S3? Thanks!


On a local server at home, it's a low powered ARM box (QNAP TS-219 running Debian) with RAID1. The same thing could probably be done with a Raspberry Pi for lower costs if I was doing it now.

For database backups, you need a consistent snapshot of the data - something which might not happen if you attempt to access the data directly. Add a script to your daily crontab directory, such as:

  mysqldump --skip-extended-insert --all-databases --single-transaction --master-data=2 --flush-logs | gzip -9 --rsyncable > backup.sql.gz

Or:

  sudo -u postgres pg_dumpall | gzip -9 --rsyncable > backup.sql.gz

Into a directory which rdiff-backup will download.

I always prefer to pull my backups from a local server I can trust instead of running a process on the server to push backups - if someone gains access to the server they could potentially destroy the backups if all the credentials are left on the server.
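The pull itself is a one-liner run from the local box (the hostname and paths here are illustrative, not my actual layout):

  # Run from the trusted home server: mirror the droplet's dump directory into
  # a local rdiff-backup repository that keeps incremental history.
  rdiff-backup root@droplet.example.com::/var/backups /srv/backups/droplet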


That's very helpful. Thank you so much.


"TL;DR: DigitalOcean is a good VPS provider with minor issues. I like them and have been using them for over a year." -- Thus a dishonest title.


I think the title of the original post is trying to be creative. The world would be a boring place if everything was stated as a matter of fact. You might not like the author's creativity, but I wouldn't call it dishonest.


It's a link-bait title. Got me to read it. The word "minor" doesn't get me interested.

Other suggestions:

"Why John Kennedy would have loved Digital Ocean" "What Digital Ocean and the weather have in common".

Sorry I hate to joke and I don't want HN to end up just being people joking. But I had to point out that the title that gets attention is the title that gets your attention.


I am pleased with DO but they overpromise way too often, or at least they used to. We were supposed to be able to install instances from ISO in 2012 as promised by their support tickets:

http://digitalocean.uservoice.com/forums/136585-digital-ocea...

It was last updated ONE YEAR AGO. Come on, that is outrageous.


I do wish there was something akin to S3 to allow for larger amounts of storage, without it necessarily being bundled in a complete package that gets upgraded together (and with no SSD required). Obviously using S3 is still an option, but it would be nice to have the network locality and maybe even the general value proposition that DigitalOcean provides.


Yeah, it makes it hard for those of us with a lot of audio and video to deal with. We still need to be on Amazon for those assets, though would very much prefer to have them available through a locally mounted filesystem on our Digital Ocean instances.


Yesterday I went to pay my bill and it said "Automated Abuse Detection - Account Verification". Luckily I was able to ftp in and do a back-up.

After some back and forth my account was reinstated. It wasn't a huge deal but a shitty way to start my day. And after asking multiple times I was never told the reason why I had to go through this.

I had been happy until now. This just left a bad taste in my mouth.

edit: What pissed me off was having to send in a copy of my government issued ID.


Hey John,

For sure it's a pain in the ass, and we've been discussing other verification methods. As I'm sure you can imagine, with adding hundreds of accounts a day, we need to make sure the public internet and our internal networks are relatively protected from someone spinning up VMs for DDoS, but also from compromised boxes and accounts.

If we publicly revealed how to trigger our verification process (that genuinely does a good job of protecting our network and customers) I'd imagine people would work to figure out ways to circumvent it. Occasionally we trigger a false positive, but I really believe that it's important to have this system in place.

I appreciate your feedback, and I'll make sure your comments get discussed at our next product meeting.


See, here are my concerns.

1: I wasn't notified that there was any problem with my account.

2: I have no idea what the repercussions of being suspected of fraud entail. Will my sites be shut down, and if so, when?

3: I was only informed that something was wrong when I logged in and tried to give you money.

So what would have happened if I had paid for six months of hosting in advance (like I have done in the past)?

For fuck's sake. Send off an email so I am not frantically backing up databases while waiting for my ticket to get replied to.


Legit complaint and I'll look into what happened. Sorry about it, can you let me know the ticket number so I can check it? Thanks.


#109125


I had the exact same issue. I launched a new droplet and installed Tomcat. Then got busy with other stuff. The next day I get an email that my droplet has been used for DDoSing. And I'm clueless, as it was a fresh droplet. I ask them for more details about the attack, but they do not reveal anything. I don't even know which files were responsible or where they directed the traffic. They disable the droplet completely. The password for the server was their default generated one, so I don't think a security breach really happened. In over 7 years of working with plenty of hosts, this is the first time this has happened.


I recently set up my first droplet, and I have negligible experience with Unix administration. This guide was extremely useful:

https://www.digitalocean.com/community/articles/initial-serv...

Don't miss "Step Five— Configure SSH (OPTIONAL)".

My assumption is that my droplet is considerably safer than it was to start with.

Next step: https://www.digitalocean.com/community/articles/how-to-prote...


How would you like them to handle it? Fraud and abuse is a big problem, and government ID is a standard way to sort out who is who.


Telling me why I was suspected of fraud and abuse would be a good start.

Another edit: I got a helpful reply.

There has been a response to your ticket:

Greetings,

Unfortunately, we are unable to provide further information with regard to our backend abuse filters.

If we may be of any further assistance, please do let us know.

Regards, * Mitchell | Support Team


You really can't think of any possible reason why they wouldn't want to tell you what triggered their fraud checks?

Because it's pretty obvious to me that people would create throwaway accounts to probe for all the fraud checks, then start creating abusive accounts.


Well. I have paid for around a year. I would like to know what happened so I can avoid the same in the future. Like I said there was no communication and I worry that my sites will be shut down for no reason.


A government ID is useless without a way to verify its authenticity. It's pretty easy to photoshop a fake one.


Most fraudsters won't make the effort. Those that do are usually hilariously obvious.

A lot of people just use webhosting companies as a testing ground before magging a real card and going on a spending spree.


A few weeks ago it was still possible to sniff traffic from other instances.

http://seclists.org/fulldisclosure/2013/Aug/53


We addressed this as soon as it was brought to our attention, and it is no longer a vulnerability. :)


I was recently trying to estimate write-ahead-log performance for my 2GB droplet, and for random-sized (2-8k, 4.3k avg) sequential writes, my droplet is able to throw out 230MB/sec, or ~60k 4k writes/second. I haven't actually done database benchmarks yet, but it looks pretty promising.
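For anyone who wants to reproduce something similar, a sketch of an fio job approximating an fsync-heavy sequential write workload (the parameters are illustrative, not the exact test I ran):

  # Sequential 4k writes with an fsync after each write, roughly what a WAL does.
  fio --name=wal-test --rw=write --bs=4k --size=1G \
      --ioengine=sync --fsync=1 --filename=/tmp/fio-wal-test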


The title is very stupid, especially since he is actually giving them a recommendation.

Anyway, it comes down to the fact that it's cheaper than any other large provider out there and almost always works very well. No one can beat that price with an actual usable service.

If I was going to complain about anything, it would be the current lack of 2GB/4GB etc. droplets in San Francisco and the fact that launching a droplet from a smaller snapshot takes several minutes instead of one minute.


I run my web, email, and some other stuff off a bottom-end DO droplet for $5/month. Can't say I've had any problems other than availability of droplets in Amsterdam. I'm in the UK but have a VM in NY2. To be fair, the latency feels about the same as our production kit, which is 7 miles from my house, not 3460 miles. DO is fewer hops as well.


What are you using for email? Did you find a good install guide?


I don't know about the GP, but I used the "How To Install iRedMail On Ubuntu 12.04 x64"[1] guide on Digital Ocean's community site and was pretty successful in setting up my mail server, complete with Postfix, Dovecot, SpamAssassin, greylisting, SSL/TLS, webmail, etc.

1. https://www.digitalocean.com/community/articles/how-to-insta...


I wrote it up the other day conveniently. Here you go:

https://news.ycombinator.com/item?id=6750630

Basically, mutt, postfix, procmail.


The font on that site is what really sucks... :-)


There must be a reason the Chromium/Chrome team decides to display custom fonts like that on Windows. They almost universally look terrible, or at least worse than in every other browser and on every other platform. No other browser renders them so thin that the lines of the glyphs become disconnected.

I opened a bug report about font rendering and text color issues in 2011. Not only is narrow text rendered oddly as you can see, but if you give it any color other than black, the rendered color doesn't match the color you specify in CSS. It can be way off, like blues rendering as purple. Last I checked, the bug was still open with no work done on it.


Chrome still uses the old Win32 API (GDI) to display fonts, whereas IE and FF are now using DirectWrite.


It's been a problem since Chrome's early days, on all platforms: http://lee-phillips.org/google-chromeBadKerning/


I had to go in and remove the custom font CSS in Chrome to get through that article. That font is absolutely terrible!


I'm glad I wasn't the only one that noticed the terrible typography.


Yup. It renders improperly on Chrome and Firefox, but properly on IE (at least, on IE10).


I'd really like to see additional storage available for a node. I'd like to add an additional 100GB to my $20-per-month node.


I second that. I understand SSD space is expensive, but sometimes you have 0.05 load average and full disk. At least it would be nice to get some kind of Amazon EBS analogue.


Agree, I am also running out of disk space now, don't need additional RAM or CPU power.


Has anyone considered colocation yet? For DO's 160GB/16GB $160 packages, I figure I can easily put a 1U in a colo center with a few TBs for $99/month in the Bay Area.

Any pro/con with that approach?


I read somewhere on their forums that this is planned. I too, am eagerly anticipating this.


Me too!



You still can't boot your own kernel...


What are some use cases that you have for running your own kernel? I just can't think of much need of running a custom kernel. There's a lot of different kernel versions to choose from. For subsystems like process scheduling, you can just load your own kernel module to change that.


Their kernels are all months out of date. And besides, I want to install updates as they are released by my distribution, rather than have to wait around for my host to get their arse in gear.


"yum install kernel" then modify grub.conf?


Nope.

DigitalOcean has been ignoring customer requests about this for 18 months:

https://digitalocean.uservoice.com/forums/136585-digital-oce...

It's really hard to take them seriously when you have to wait patiently for kernel updates that include critical security fixes.


OMG!


I like DigitalOcean, but these problems are annoying. I'd really like to run CoreOS. DO claimed they were near deploying a custom image feature months ago, but it has never materialized.

The lack of IPv6 is a pain but I can deal with it. And the screw up with the NY2 network going down for a day kind of soured things as well.

What I'd really like is for DO to be a bit faster with feature development and more transparent on their progress.


We're working on a few new ways to deploy custom stuff; we want it to be really fly, work really really well, and be really really simple, so we are refining it. I can't give a timeline for this, but it's in our short-term roadmap for sure.


It doesn't need to be perfect. I can deal with a more complex but functional feature tomorrow far better than a perfect feature in a year.


Totally, however a lot of our install base tend not to be as technical. The disparity between devs is getting massive.


Would it be sufficient to just slap a big "Warning: Advanced Users Only!" label on the custom image option? I understand not wanting not-so-technical users to underestimate the difficulty of setting up a custom install and having a bad experience, but I think it's safe to assume that the not-so-technical users will stick to the default images, while the hard-core OS geeks will be willing to put up with, say, importing a QEMU or ISO image.


Why not provide early access to new features to technical people who don't mind being on the bleeding edge? That would be a sword that cuts on two edges, since you'll get high-quality feedback at an early stage.


Then everybody will assume he's technical enough and still flood the trackers with issues that are more PEBKAC than real problems. It would probably end up like the dev builds of iOS 7: people were downvoting apps because they crashed, even though it was impossible for the app authors to rectify the problems.


> Why not provide early access to new features to technical people who don't mind being on the bleeding edge? That would be a sword that cuts on two edges, since you'll get high-quality feedback at an early stage.

I'd hazard a guess and say that people who often think they are on the bleeding edge aren't, and therefore get themselves into trouble.


Because that would create an endless stream of negative blog posts as the bleeding edge changes and breaks these people's software. It doesn't matter if they should have known better, it's still negative press, I wouldn't do it if I were DO.


I am kind of interested in this disparity. What are you seeing? Are you getting a whole lot of people who are very inexperienced but trying to keep up? Or is it more people who are technically lazy?


I use DigitalOcean; my application is not very large, but response time is important. 20 GB is more than enough for me; even 10 GB would be enough. Being SSD-backed, it runs the short-lived disk-based tasks I give it (compilation and copying/moving files) very fast compared to AWS and others. This is my use case and I am happy with it.


I won't use DO for everything, but I won't use Linode for everything either, and that's ok. I also use dedicated servers with VMs and they all have their place and purpose quite nicely.

With tools like docker, it's less about where you're hosting and more about being able to deploy to any infrastructure, as the only guarantee you'll have for the rest of your life is you'll be redeploying somewhere, be it with the same host or another one.

I think for what you pay, what you get, and the level of flexibility, DO is a great value and service, in that it's not a random, small, no-name VPS provider that may disappear due to not managing their resources along the fine line between over- and under-subscribing.

Where I wouldn't get someone a Linode and would recommend shared hosting instead, I can start them on DO and let them grow there, so I quite like the segment they've let me introduce dedicated VPS resources to.


Does Docker run on Digital Ocean?

I tried to set it up a few months ago, without success. Maybe things have changed since.

Something to do with DO using LXC with a setup that wouldn't work with Docker.



Docker in Ubuntu 13.x has been working great for me at DO. Just have to use a recent enough kernel.

DO itself is a full VM virtualization and afaik is not using kernel-level isolation/LXC.


Correct, we run KVM vs some type of PID isolation or LXC. :)


Ah you are right, I must have been confused with something else. Thanks for the info!


Digital Ocean has a Ubuntu 13.04 x64 droplet preconfigured with Docker.


Digital Ocean is great. The only thing keeping us from using it is lack of private networking in Amsterdam DC.


Their private networking is not private to you; it's shared among all customers. Meaning I could access your server via your internal IP.

What advantage are you looking for?


I'm guessing it has something to do with EU data privacy laws.


It's because DO only offers internal/private networking in their NYC2 datacenter, so it's not available in the AMS datacenter.


I'm looking for a free, speedy network so my data doesn't go over the public internet. The data should be encrypted anyway.


Can you just establish an SSH/SSL VPN tunnel for that?


I'm afraid that an ElasticSearch cluster may be too slow to work on this setup.


For just needing an off-site VM for backup-relay-host/testing/debugging purposes, you just can't beat $5/month. Been very happy with them, and the fact that I can just throw $50 into my account and not have to worry about it for almost a year.


Been a big fan since their first deploy... grandfathering is nice for unlimited bandwidth... Use them for most test setups now, proof of concepts, etc... heck I've even got a domain pointed at them, and I've run my own DNS servers since the early 90s ;)


What I'm really looking forward to is better IP management. So far it looks like most cloud server providers try to ignore the issue, which I'm really disappointed about. It looks like DO's answer to droplet failures is: set up a load balancer. OK, but what happens if that droplet disappears? What protection can I have against going completely offline for the duration of the DNS TTL in that case? (And the time it takes to manually change it.)

I see that as an unacceptable risk really - one thing that keeps me from using DO. Other services also lack reserved IPs sometimes, but some of them at least provide an integrated solution (LBaaS style).


Been using them for a few weeks now for a private Minecraft server (<= 6 people, whitelisted). Additionally it serves a statically generated map over nginx and I've been trying to get OpenVPN to work. It's a $10 one, 1 gig of RAM in Amsterdam.

Generally I'm quite happy so far. The setup process and the management panel are awesome, while the CPU performance is lacking a bit. I/O performance (both disk and network) is great.


I use the smallest DigitalOcean package (Amsterdam) to run a Play 2.x Scala app. It is not a much-visited site (yet, heh) but I am very happy with it. And they gave me a coupon when I couldn't tinker with the project and cancelled, so I am back again :)

The "sucks" parts fortunately haven't affected me yet. I could use a little more RAM, but when I get there it can be changed with a click (and a reboot, if I remember correctly).


I'm a big fan of Digital Ocean; great value and good quality.

If I had any complaint, it's that occasionally I get a droplet with a wonky IP address that seems to be shared with another host (like, the SSH host key wobbles between one and the other between invocations of ssh).

There was also some flakiness with NYC2 a couple of weeks ago, which caused some sadness and grief.

All in all, they're saving me multiple thousands of dollars.


Not seeing anyone here talk about it so maybe no one has had it happen to them, but how is everyone dealing with the disk failures? Seems like your one to two man startup could have a fun day ahead of them. Not to mention if it happened on your launch day. Coming from someone looking to use DO for a future project.


Use something like Ansible (Puppet, Salt, Chef, whatever) to provision your servers, along with backups. You can also use DO disk images but they're not as good a solution; my Ansible playbooks will let me configure and deploy a fully functioning app or db server instance at the drop of a hat, on almost any infrastructure; a DO image is only good for deploying onto DO infrastructure.

If you're "in the cloud" (or hell, even if you're using your own bare iron servers), you need to be accepting that your server could just go away at any time (and Murphy's Law being what it is, probably at the worst possible time).

The real trick is figuring out how to handle backups and user data properly. It's easy to say "you need to plan for your main DB server going poof"; it's harder to actually make sure you can handle that without downtime and loss of user data. :)


> The real trick is figuring out how to handle backups and user data properly. It's easy to say "you need to plan for your main DB server going poof"; it's harder to actually make sure you can handle that without downtime and loss of user data. :)

Exactly this.

My little side project is fairly minimal, but users are paying for services. If the site goes down before my next backup, I'm going to have to reverse some charges (and look bad in the process!). One option might/would be to run failover on a $5 droplet, but then I also need to cluster Redis (server-side sessions), etc, etc... and it starts to become an "operations" side project and not a "product" side project. I'm using Puppet to automate the build process (and will build a "hot spare" image to get things up faster), but backups and failover are still tricky problems to solve.


They do provide a backup service but personally I can rebuild everything from a github repo if it does go down.


I use ksplice to run my own OS & kernel on Digital Ocean; you don't have to stick with their templates.


Do you ksplice-patch your kernels on every boot?


Recently moved a few sites to a single, 1GB droplet and am very, very happy with it. Also have been using a 512MB droplet for the last 3 months or so to run a tor relay. Their "one-click" installs are nice too, and WordPress runs really well on a 1GB droplet.


  1073741824 bytes (1.1 GB) copied, 3.84315 s, 279 MB/s
I was expecting it to be faster, actually. I get over 300 MB/s on a VPS with HDD at the company I work at. [1]

[1] https://true.nl


The dd command is copying from /dev/zero

I wouldn't entirely trust any disk benchmark that writes empty files; you might be running into some FS optimisations there.

And 300MB/s on a single spinning disk? No chance!
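If you want a quick check that zero-detection or compression can't inflate, one rough approach (sizes and paths are just examples) is to write pre-generated random data and force a flush at the end:

  # 1GB of incompressible data, written back out with an fdatasync at the end
  # so the page cache doesn't flatter the number.
  head -c 1G /dev/urandom > /tmp/randfile
  dd if=/tmp/randfile of=/tmp/ddtest bs=1M conv=fdatasync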


Could someone who has used both DigitalOcean and RamNode say how they compare?


No experience with DO, but RamNode is very good. I've been a customer for over a year, and I rarely had any downtime. Disk speed is between 300MB/s and over 1GB/s (on SSD); network speed can go over 70MB/s.


I have both RamNode and Digital Ocean servers in production.

In terms of performance and cost, both are great. I haven't ever had any issues with Digital Ocean. With RamNode, they've had a couple of issues where there's been minimal downtime. And they had a breach back in June, but responded well.

I think I trust Digital Ocean more. The setup is easier and performance maybe slightly better.


I used RamNode for a bit last year, but left them for DO after dealing with RamNode's servers getting DDoS'd constantly.

While I'm sure it happens with DO as well, at least there I don't feel like they're active participants in the various VPS provider feuds that you see all the time over at LowEndBox.


I really dislike DigitalOcean. Terrible customer service. And do not give them a way to bill you automatically; use something like PayPal, where YOU are in control and can just not buy the credit and let it run out.


I'm currently using Ubiquity SSD Cloud Servers and I'm happy with them, but I still find it weird that they're not that popular. Anyone else using them? Any difference between them and DO?


It is not popular because the site lists all plans sold out, i.e. no new interest in the site.


I just noticed that and sent them a message about it. That's definitely weird and not inviting at all. You can actually get more cloud servers once you sign up.


Thanks for the opinion. I have been considering moving my server to DigitalOcean for a while and this assured me I probably won't regret it


It would be nice if implementing something like Amazon Marketplace was in their roadmap but it isn't right now.


I wouldn't say it's not on our radar. ;)


Are there any (decent) similar per-hour priced providers based in the EU? Any experiences or suggestions?


I plan on setting up a vps at Gandi [0] shortly. Based on what I've heard they seem decent (not only as a domain registrar).

[0] https://www.gandi.net/hosting/iaas/buy


I had an awful experience with Gandi several years ago: SSH was broken on a newly created server (wrong permissions on some SSH folder), and CPU was limited in a fun way: processes periodically stopped doing CPU work for 200-300ms (once each second or so). We moved to Linode (they have servers in London), and for the same price got literally 20x faster performance (measured as rps), good support, and a flawless setup. Of course, things might have changed since then.


There are no decent ones. I've tried them all now. Moved to DO and have had no problems so far.

TBH, ALL the VPS providers are better than the colo/cage suppliers, who in my experience are shit even at the high end.


Feels like a perfect place to host your hobby projects but nothing for serious business


You know Disqus has taken over English when OP asks us to disquss on Hacker News.


What's with all the outdated OS images on DigitalOcean?


I'm still waiting for a data center in Asia


Me too. Curious, have you tried using the US data centers from Asia? If so, what is the speed like?


Ping results show that I'm getting around 200ms (from Malaysia).


It's about 250ms from India


Even with the first 3 points of the cons, there's no doubt I'd use DO rather than another cloud service for small/personal stuff.


Unresponsive droplets here


I want to say that I hate Digital Ocean as well. I am a University of Waterloo Computer Science student taking CS 458 (computer security) this term. We had an assignment last month asking us to get all of a web app's users' permissions without knowing their passwords. So I wrote a script. The script used curl once per second (maybe longer, due to connection issues) to guess all the different combinations of the password. Of course, I was running that script on Digital Ocean!

After running it for two days, I received an email saying their router had found that I was doing a DDoS and asking me to stop. I stopped the script immediately and replied with my reason, telling them that I was doing it for a university assignment and didn't know that I was not allowed to do this. However, my account got suspended anyway. No matter how many emails I sent begging them to give back the files on the server (I use the server with emacs and tmux to write code for school projects), they just told me that they're sorry, but my account is suspended. No matter how I begged, all I received back was at most two lines saying that my account is suspended. They did tell me that it is suspended forever. After several days they deleted my account with all my files in it. Now I have no way to get any of my files back! I feel... so angry, and I hate Digital Ocean so much.

Is this how a normal American company behaves? Would a normal American company suspend a customer's account because that customer used it to do university work? I can understand suspending me if I violated any laws, but I was just doing an assignment and not violating anything. They also ignored the apology from the customer. Any evidence the customer provides is ignored as well. And after a couple of days, Digital Ocean deleted that customer's account with all his files permanently!

This is my story. This is the reason why I hate Digital Ocean!!!!!!


As another Digital Ocean user - I'll just say I'm happy to hear that other people running DDOS and/or dictionary attacks from the same IP pool I'm relying on get slapped down _hard_, and that excuses like "but I'm doing this for a Uni assignment!" don't give you a free run to be a bad actor in their (and "my") netblock.

Do you _really_ think you have a "right" to run a dictionary attack from someone else's network? _Seriously?_

Personally - from looking at my fail2ban logs, I wish Amazon - and even more usefully, large residential cable/adsl providers - would implement this sort of pro-active monitoring of user/customer behavior.


I think you misunderstood. The target of that dictionary attack was the UW-hosted virtual machine!


Sorry that I didn't read your post carefully. I did not know that I could not run a dictionary attack from their network. I admit that I made a mistake. I did tell them that I was wrong and stopped the script immediately after I received the notice. It's just that I do not think I deserve a penalty of having my account and all my files destroyed.


So what penalty do you think you deserve?

What penalty do the people attacking my clients' WordPress sites or SSH ports deserve? Would that penalty change if they mailed Digital Ocean saying "No, it's OK - me attacking that website is part of my Uni assignment!"? Who'd be responsible for checking that claim, and how much time would it take? And remind me again how much you'd spent with Digital Ocean?

You fucked up big time - deal with it and learn from it. There's clearly a whole bunch of things you didn't even think about before doing this (and that you're still whining about and failing to accept responsibility for). Be glad it bit you on the ass for something as unimportant as a uni assignment - imagine how much worse this could be if instead of a few assignment files, you'd lost 6 months of your startup's code - because you hadn't bothered reading the TOS you agreed to and didn't bother keeping off-site copies of important files.

Sorry this is harsh - but seriously, think about this from anybody else but your perspective. You behaved like a jerk, then tried the "But I didn't know! Sorry, I'll stop now." justification. And you're _still_ whining that you're being treated unfairly.


So, you violated their ToS, specifically that you're not allowed to do things that are abusive. Attempting to bruteforce a login is abusive. You admitted that you were in fact doing something against their ToS to them in a ticket. They don't know it's for a university assignment and chasing down your professor to confirm it is an assignment isn't something that's reasonable for you to expect them to do.

You violated their Terms, plain and simple. They don't owe you a second chance or your data.

As a side note, surely a CS major knows they should keep backups of everything. Better yet, shove your text/code files into git and have backups AND versioning.


I did tell them several times and apologized 5 times. I also provided my student ID and the assignment page.

They told me at first that my files were safe. However, they still deleted my account with all my files in it.


I do have version control but no backups. I am just angry with their attitude and how they talk to a customer.


Who ran the webapp and its server? Did that person know that there was a class assignment to brute force passwords?

If so, who complained to digitalocean?

If not, your professor should be fired, because the assignment encouraged students to break the law. Forget DigitalOcean's TOS. Attacking a server without permission is a crime.


The University of Waterloo hosted the virtual server for assignment use. The web app made for the assignment is hosted on that virtual UML server. That server is the one I brute-forced.

No one complained to Digital Ocean. Digital Ocean told me that they detected the attack with their router.

I did not attack any other servers. What I attacked was the specific virtual server hosted by the University of Waterloo itself.


This is absolute rubbish. Let's have a look at the Waterloo CS 458 assignments: <http://www.math.uwaterloo.ca/~dstinson/CS_458/F13/F13-slides.... Every one of those assignments that has you do practical work has you do it against a Waterloo-hosted virtual machine (as obviously anything else would be potentially illegal).


I think there is some kind of misunderstanding here. What I attacked was the Waterloo-hosted virtual machine! I was brute-forcing the passwords of users of a web app hosted on that machine. I asked the professor before doing that, and the professor said I was allowed to do it.


Wow, now I wonder what would happen if I used their droplet as a crawl server for my vertical search engine... hmm.


You're right, I did misunderstand. Sorry.


Well, this should be another case of Digital Ocean sucking: bad customer service, plus not returning important files after a suspension.


I'd like to see DO put a process in place to resolve this kind of issue by requiring the user to have a recorded Skype/Google Hangout session to verify ID, credit card, etc. Once the ID is verified, the face recorded, and the user understands and agrees to fix the DDoS-type errors, the account should be restored.


Good point, I think this user deserves a second chance in this situation.

Another thing is they should not say that the files are safe in the first place and then destroy them afterward. Maybe this user trusted them and was waiting for the files to submit his assignment.


Exactly. When I received the ticket, it said all my files were safe. Since I stopped the script immediately after I received the notice and had apologized to them in the first place, I was pretty sure that they would restore my account. However, I didn't know they would eventually destroy my account.



