DigitalOcean Raises $83M in Series B Funding (digitalocean.com)
328 points by beigeotter on July 8, 2015 | 185 comments



DO has really benefitted from Linode stalling over the past few years. I admire Linode for remaining a bootstrapped business but it feels as though the owners lost their fighting spirit and energy... Perhaps because the small pool of Linode owners felt they made enough money already.

DO's announcement talks about a storage product, which is strategically important and, crucially, something Linode has sorely needed for a long time. And yet the biggest development at Linode in recent years has been a proprietary stats and monitoring system built as an upsell, which doesn't really do anything distinctive that Nagios or another package couldn't provide.

Instead Linode is now switching their entire platform from Xen to KVM, a curious move that creates risk and costs velocity that could have been spent on product development.

I have been a huge supporter of Linode over the years, and the startup I co-founded is one of their biggest customers, but at this point DO seems like the winning horse to back.


I'm not saying you're wrong but I disagree from my own POV. It's a common play in the hosting business to grow big quickly then sell out to a more enterprisey company (as happened with EV1Servers, Softlayer, Heroku) and if DO is taking such large amounts of funding, something will have to happen (IPO, acquisition, etc) and some companies don't make the transition well (I'm not saying DO wouldn't, but you never know).

Linode, on the other hand, can remain a company focused on just being a long term business forever. They might move a little more conservatively, but they have owners with skin in the game and customers to keep happy. I use both DO and Linode, but Linode for my most critical stuff simply because I "feel" they're more likely to remain basically the same in 5 years' time and I value that consistency as a business.

I think DO is superb, I have great admiration for them and I recommend them a lot, but I also feel they're the riskier horse to back even if the potential upside is so much greater.


>> It's a common play in the hosting business to grow big quickly then sell out to a more enterprisey company (as happened with EV1Servers, Softlayer, Heroku)

And that's exactly what Slicehost did, years ago.

I wonder if Rackspace ever courted Linode. Slicehost always seemed to be the inferior one in price and performance.


Customer opinion too. I have old data on SliceHost (http://reviewsignal.com/webhosting/company/25/slicehost/) versus Linode (http://reviewsignal.com/webhosting/company/24/linode). Linode seemed to have the better rep, and then SliceHost got dismantled. The tables seem to have turned a bit now, though, with Digital Ocean (http://reviewsignal.com/webhosting/company/101/digitalocean) holding a marginal lead over Linode. But opinion of DO has trended slightly downward over the years and is getting closer to Linode's.


Before it was sold, Slicehost tools and documentation were better, and their customer support was incredible. You would get answers in minutes, even if you only rented a $20 VPS.


For all the money Linode has on hand, and the size of their technical team, it's absolutely insulting they haven't made any tangible improvement to their management front-end in at least five years, if not more.

The way it's behaving, it's as if it was acquired and kept on life support.


For what it's worth, I've been a Linode customer for a few years and never really felt their management front-end was in urgent need of an update. It's solid and does the job for us. My only complaints, to be honest, are 1) the lack of storage-dedicated nodes, as the OP points out, and 2) the price; it gets very expensive very fast as you scale and spin up more nodes.

We are currently considering moving to dedicated machines but only because of 2), otherwise we would happily be their customer for life.


> never really felt their management front-end was in urgent need of an update

Even when it was hacked multiple times and customer VPSs compromised e.g.

http://arstechnica.com/business/2012/03/bitcoins-worth-22800...

Nothing is worse than spending weeks securing every aspect of your VPS only to have incidents like this appear. And worst of all? To this day Linode has never clearly said what happened or what they did to prevent it happening again.


Not sure why this is being downvoted but the whole Linode hack incident was a huge factor for me to switch all my VPS's over to digital ocean.


I went to Vultr. DO's interface is just as bad if not worse than Linode's. :(


It's not that bad, and it has been updated frequently. It seems to be getting better.

I do remember it being quite lacking back in the day.


I haven't used DO since 2013 and I probably won't since I'm extremely happy with Vultr, but it's good to know that it's been improved!


I chose vultr over DO because DO used so many external web services to run their site. Every new page at their site required me to turn on 5 more domains in NoScript. Vultr only required me to enable their site. The performance at vultr was also very good for what I needed: test servers.


Why Vultr and not something like Scaleway (https://www.scaleway.com/)?

I've recently switched from DO to Scaleway and would never go back.


Actually, I did try Scaleway when I saw them on HN like 9 months ago! I recompiled a Go program I was working on [1] for ARM and benched it against Linode.

It was literally 10x slower than Linode even after playing around with Go's concurrency level to find the fastest runtime, and even with the dataset in memory. :( ARM just wasn't the right arch for what I was doing.

I ended up going with Vultr because it has the $5 pricepoint for hosting tiny websites and tons of datacenter locations. Their CPU performance and network speed were great in my tests.

[1] https://github.com/robinsonstrategy/go_backtesting_simulator...


It "does the job" but it's not as pleasant to use as it could be. A lot of the navigation is pointless and confusing, information is often buried several levels deep, and there's missing information on some of the detail screens you need to go back to the main listing to find: Instance type is only shown on the main list, not the individual instance tab, for example.

If GitHub had never improved their site since they launched it would be awful. Every time they move things forward I'm happier to be a paying customer. With Linode I reluctantly use them, but for new projects I'm using other services that work better.


Same here, moving to a dedicated server from a Linode this week. For slightly under 2x the monthly cost I'm paying at Linode, I can get a dedicated server with 8x the memory, infinitely more transfer on the same size pipe (unmetered 100Mbps connection), 20x more disk space, and 2x the CPU, all dedicated, so no worries about phantom performance problems related to other tenants. Lish &c are a big plus for virtualized nodes, but I think I'll live without it.


Perhaps users don't have any complaints about their management interface? Do you?

Perhaps users mostly want performance and value which have been greatly improved by their substantial infrastructure upgrades?


Yes.

Their IP Failover is really badly documented and hard to work with.

Their NodeBalancers are noticeably slow; the interface is alright, but you are better off setting up your own load balancer.

I feel that static networking could be made easier, and more automatic when I am adding a new node.

Their StackScripts should be able to receive parameters at run time, and the error reporting should be better. I had issues where my scripts wouldn't run and I had no idea why.

Other than that, I am a happy customer.


I agree that networking is a pain point in terms of front-end interface. It's not horrid, but it could be improved.

I'd love to see them offer a storage solution as other commenters have mentioned.



Oh that was hidden away, I have never seen that interface before, thanks!


When was the last time you added a node? New nodes default to 'automatic' networking now, which configures /etc/network/interfaces automatically.
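For anyone still doing it by hand, the static stanza it replaces is just the usual Debian-style config, roughly like this (addresses here are placeholders, not Linode-specific values):

    # /etc/network/interfaces
    auto eth0
    iface eth0 inet static
        address 203.0.113.10
        netmask 255.255.255.0
        gateway 203.0.113.1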


I scale up and down weekly. I didn't know about that; I'll look into it. Do I just need to add static networking to the node?


Not doing regular improvements is how you end up being years behind your competitors, without even noticing it, and having to do a huge rewrite just to achieve parity.


Perhaps those complaints just get ignored.

I know Digital Ocean's control panel has improved several times since their launch, and their ability to launch instances with a complete stack is extremely useful. Linode has done nothing here. They point to their badly documented StackScripts system and shrug.

There's a hundred things Linode could do to make users' lives easier and they've done maybe two or three of them.


Linode is good and simple if you don't have too many nodes or a complex need for extra bits, and I appreciate them for that.

I didn't really need anything shinier. Is it super-cloudy? Not so much, but also nice that you don't have to think about it.


Why do people think "well designed" translates to superficial and pointless? Yes, sometimes this is the case, but when you have pride in your work you'll want to present it in the best possible light.

Would you rather eat in a restaurant where all the chairs are creaky, the tables are wobbly, and the wait staff is doing their best to get by with broken equipment, or would you instead go down the street to a place where everything may not be new but is well maintained? If the food quality were the same, why would you insist on going to the place with crappy, broken stuff?

Linode just doesn't seem to care about their site at all. If they did they'd listen to user feedback and improve things once in a while. You know, like at least once every six years.


I'm not sure if I am meant to be "people", but I certainly don't think that design is superficial; rather, I think Linode is designed fine, just works, and doesn't really need changes.


I never felt the need for a more polished interface. It just works.


If we talk about DO in comparison, then DO's interface is just ugly and almost unusable. Every time I use it, I want to leave their site as soon as possible.


If you think Digital Ocean is ugly and almost unusable I have to wonder what you think is better.

Their one-page instance creator where you pick size, location, and distribution is extremely convenient. This is three separate steps with Linode that happen over the course of six screens, plus two more if you want to enable private networking.

Eight steps vs. one.

They also don't offer the ability to install a system with an SSH key pre-installed, avoiding the need for password authentication when bootstrapping your system. Even on a technical level Linode is way behind here.


I wouldn't call DO's interface ugly but I hugely prefer Linode's. Much better visual feedback and it feels like a solid management console rather than a pretty toy.


Why does Linode look "serious" and Digital Ocean look "toy"? It's a phenomenon I see happening primarily in the tech industry where ugly trumps functional and clean. People like the complexity aesthetic, the rugged, rough edges.


Try to open this link https://cloud.digitalocean.com/droplets and find the current balance of your account. I can't understand why it's under "Settings". When I'm in the dashboard, I want to see as many controls as possible, not as minimal an interface as possible.


I suppose a lot of engineers prefer function over form rather than the other way around. If anything I'm suspicious when a tech/engineer focussed product prioritises aesthetics over function. Might be irrational but it's my first reaction.


I disagree; Linode is much superior to DO. I can trust DO only with fault-tolerant services, whereas I can trust Linode to run any kind of service.


Could not disagree more. I would NEVER run anything serious on Linode.

They have a long history of withholding information from customers in the face of security incidents and outages. The last time they were hacked I found out from Reddit that it happened, and even when they bothered to tell me they failed to say (a) what actually happened or (b) what steps they took to prevent it occurring again. And for many, many outages, information was communicated on IRC hours before the website was updated.

It is the culture and professionalism that differentiates one VPS provider from another. Linode gets a massive thumbs down for me.


Their blog answers most of your questions.

https://blog.linode.com/2013/04/16/security-incident-update/

You can't prevent 0-days, and the information that was taken was encrypted.


Remember, that was the second or, I believe, third time the same management UI was hacked. And that post was published days after the incident occurred, e.g.

http://www.webhostingtalk.com/showthread.php?p=8646073

The issue here is not that 0-days occurred but how you deal with them and what systems you have in place to prevent them. Linode has consistently been sloppy at notifying customers, and their auditing systems are/were clearly inadequate since their position changed over the few days. Sure, the data was encrypted, but if you are sloppy about the process you're likely pretty sloppy about the implementation. It's trivial to decrypt data if you haven't encrypted it properly.


Clearly you haven't tried using their Fremont data centre.

It's been terrible for years. And it's not like you have a lot of choices when it comes to Linode data centre.

I'm sure your experience has been good, but there's a huge swathe of ex-Linode customers with pretty negative memories.


I use two of their datacenters, not Fremont; the two I use are fantastic. Linode is exceptional.


Yes, I saw the stats for their Fremont DC (on the Linode forum) and decided not to use it. The London DC is the most stable in my experience.


Why?


A reasonable question

For me :-

* Excellent reliability (no unscheduled downtime in 6 years, running from 3 to, at one point, more than a dozen nodes)

* Excellent hardware - when I've benchmarked real production systems, Linode beats DO hands down on a $/perf ratio

* Excellent support - tickets are generally answered in under 30 minutes and staff are knowledgeable

* It just works(TM)

* Straightforward interface and pricing

* Rock solid network, like bulletproof

* I trust them (after 6 years of the above they have earned it)


For me:

a) Average reliability - had numerous incidents at their London and Fremont DCs. And the complete inability to tell me what was happening in a timely manner was pretty unacceptable.

b) Average support - they will answer the simple questions very quickly. But for anything reasonably complex they often don't bother responding at all.


It's been sweet sailing for me for years, maybe you should try one of their other datacenters instead of complaining about Fremont.

Edit: Their support is top of the line.


"maybe you should try one of their other datacenters instead of complaining about Fremont"

Wow, just wow.


Funny... I remember when Linode first became really popular, it was right around the time Slicehost got bought (and subsequently shut down) by Rackspace. A lot of people moved over to Linode because Slicehost was no more (or when the writing was on the wall that it would soon be no more).

So perhaps you're correct that lack of funding is making Linode lazy now, but that doesn't mean that getting bought or accepting a bunch of money would solve the problem.

Side note: the original slicehost founders grew to regret their decision of selling to Rackspace, see: http://37signals.com/founderstories/slicehost


Also, look what happened to Slicehost after acquisition. Rackspace woke up, realized Slicehost customers are cheap, and drove them off by raising prices.


https://angel.co/digitalocean

Jason sits on the board of DigitalOcean. :)


When people ask, "Why does HN/the tech industry pay so much attention to the companies that are raising money?" I'll point them to this.

There's absolutely nothing wrong with bootstrapping and making a good living, but the stuff that goes big and wants to scale quickly usually has to raise a bunch of money to get there.


That's true, but IMHO hosting is not a service that benefits significantly from economies of scale beyond a certain (though reasonably high) point. Would Linode be any better for me as a customer if it were immediately 3x the size? I'm not sure. As a customer, would I prefer a Linode that were 10x the size and having to head for an "exit" of some kind? No way.


I'm not sure; Amazon putting data centers in glaciers or Google trying to float a ship out into the ocean to water-cool it certainly seem like they could provide either performance or prices an order of magnitude better than what is currently available; the tricky part is building up enough steam that you can be trusted with enough cash to make that happen.

If nothing else, we all benefit from the prices. AWS is so cheap I barely have to think about it.


I mean when I think of economies of scale, I would name datacenters pretty high up on the list!

Think of the Googles and Facebooks making their own servers instead of buying from the off-the-shelf guys...


OTOH, taking money puts pressure on a bigger exit/revenue scaling plan. To me, that means more like GoDaddy and less like prgmr, which is not what I want from my service providers.

The pressure to expand is a negative indicator when I'm a user of a service.


Fun fact about Linode: if it says automatic backups are on in your control panel, they can still be off on Linode's end. I went to recover from a recent snapshot, only to find our latest backup was from 11 months ago. That it would be "on" on our end, and be broken for 11 months, for a service I'm not supposed to have to think about, really blew me away. But, to their credit, DO does not offer the same level of backups.


Xen to KVM was nothing but smooth and my node is now flying.


Same experience here, Linode did their homework and the transition was extremely smooth.


>> Linode stalling over the past few years

>> And yet the biggest development in recent years at Linode has been a proprietary stats and monitoring system

>> Linode is now switching their entire platform from Xen to KVM, a curious move which will create risk

>> I have been a huge supporter of Linode over the years

Uh huh, right.


I used Digital Ocean for a while. My experience was bad reliability and random technical issues. I had various very experienced ops people verify with me that it wasn't an issue I had introduced into the running systems.

I went back to dedicated servers at a smallish provider and was reminded how nice it can be to not have all the cloud virtualization stuff get in the way. It's just too fragmented among providers in the way they're set up for me to use the service without fear of lock-in. Does it take me 3 or 4 days to get new boxes? Yes. Is it causing a massive headache for me? No, because I plan things and order them ahead of time.

Just my 2 cents, I know others who use DO and love it.


Same experience here. A few of my developers migrated our servers to DO 2 years ago to "save" a few hundred dollars. Turns out DO has planned downtime every other month and it's cost us 100x more in staff time and headache dealing with them. You can't run a SaaS or anything that requires uptime reliability (i.e., any sizable business). I've transferred our main site back to AWS. Also, AWS dedicated pricing is now almost the same as DO, and much more reliable.

But congrats to the DO team. They will only get better.


So, Digital Ocean doesn't have great reliable up time then? I've been using OpenShift's free hosting tier to host a Ghost blog, and it goes down CONSTANTLY, making it unusable. I was about to start paying the $5 a month plan with Digital Ocean, but I'm not going to if it also goes down frequently.


    hrrsn@lillith:~$ uptime
     15:13:16 up 233 days, 20:35,  1 user,  load average: 0.00, 0.01, 0.05

YMMV, but I've had zero issues with DigitalOcean's reliability. This is a VM that has been running since I spun it up.


I agree. Our site, http://www.gurufoo.com, has had ZERO issues with reliability. Continuous uptime, no outages. Very happy.


Have you thought about Heroku free tier?


Heh. Nearly the same thing here but on a personal projects level, no company stuff. I really wanted it to be something I did or for there to be a solution because migration is never fun, but I finally had to bite the bullet.

EDIT: had no idea about the planned downtime. Maybe that explains the random chaos issues I had.


Why not just use AWS? Seems like you went from bad to slightly better.

There's no reason to fear virtualization, and the automation is definitely one of the best aspects of running in a major cloud.


Because I prefer a set of simple and basic tools used in combination to accomplish something instead of a "push this button and it does everything" approach. It's the *nix pipe mentality of combining simple well built tools to build solid and reliable things that you can adapt any way you please.

I also build things that are heavy on the network side, and virtualization drops networking performance a great deal (this may have changed these days, not sure).

Don't get me wrong. I don't fear virtualization. I use virtualbox and vmware heavily for development, and yes they do have a purpose, but it's a bad fit for the type of projects I build.

I just find that anything in life that just continues to add features for the sake of adding features, and create more and more "magic" push buttons to solve your problems eventually goes to crap and becomes a fragmented dependency hell that I would rather not deal with. And yes, I do love Golang.


What? AWS is great because it has a nice console, but it also gives you direct access to all the VMs, and there is an AWS command line you can install to program/script everything. What's missing here?


By your response, I can tell you haven't used AWS much. Using EC2 by itself is as simple as DO.


DO is much cheaper than AWS.


Who do you use for a dedicated provider?


joesdatacenter.com

I do not work for joes or have any affiliation. It's a very "non-automated" setup. You file a ticket with a rep to get new servers and describe what you want (I need 5 more servers just like X), no forms online to automate it or any of that. The plus side of that is you never get a canned response or ignored. Very quick and professional.


Anyone else reading this, they're not a bad budget provider at all. A good analogy to make would be they're like a small pizza place in your hometown, versus Limestone or FDC.

If you don't mind me piggy-backing, I have a non-affiliated love affair with ReliableSite.net. Yes - weird name, but I've had nothing but amazing, amazing experiences with them. The quality is 5x what you get for paying 1/5th the price.

Before anyone thinks about getting into hardware on this kind of level, please do your research @ webhostingtalk.com. Yes, $5 can go a long way with DigitalOcean because they're funded (and kind of got lucky); but that other $5 VM? Good luck.


I wonder if this pushes their valuation over $1B; if so, I think that means Techstars is the first accelerator outside of YC to produce a unicorn.

Personally I hope so. Digital Ocean is a great product, and I think one of the really smart things they did was be generous with their free credits, as it was at least a great way for me to get on their platform and later on drop a fair amount into hosting with them.


Is unicorn just a billion dollar non-public startup? Isn't there some other magic sauce required like supernormal margins or superfew employees or exploiting toothless regulations?

DO seems to have almost too straightforward a business model (buy servers, rent servers in sub-units) to be considered part of the modern startup pool of wishful thinking. I mean, it's not like they're an iPhone app for renting other people's idle server space on demand. Now that would be a game changer.


DO's UX is a game changer; I personally think it's why they have been so successful in what seemed like a crowded market with the "same" business model as the others.


Yeah, the simplicity is great. AWS has 80 products and it takes months to figure out how they all work together. DO's click-n-go for $5/month helps just from a mental clarity point of view.

At this point AWS is so complicated they should be offering an official Cisco-like series of certifications.


I'm not sure they're at the level of Cisco, but they do have certifications.

http://aws.amazon.com/certification/


Oh, great to know. They are really cheap too ($150, $300, $75 re-up).

Amusingly, that page demonstrates the need for an AWS certification in the first place: the page has over 100 things trying to grab your attention with no focus or clear direction at all. Amazon as a corporate entity seems to go for "maximum information + maximum confusion" in their UX at every turn.

Always worth a re-read: https://gist.github.com/chitchcock/1281611


AWS is an inherently complex product. It has far and away more features than any other competitor, so there is obviously going to be more confusion. I think the AWS dashboard is getting better and better and relatively easy to use if you keep each product isolated. There being 40+ products to choose from is overwhelming if you don't know what you are looking at, but that is the nature of the beast.


Why fear a complicated cloud? I tried DO. It was cute, then I kept using AWS. DO is very simple, for simple things. But where do I get my GPU-optimized Droplet, or one with a 10Gb network connection and 244GB of RAM to compute that graph problem with 3B edges? DO only has relatively small, simple instances.

Further, DO is MORE expensive than AWS. What, you say? Well, right now I can grab a spot request for a box with 8 cores and 30GB RAM for $0.066/hr, or $48/mo, compared to $160/mo on DO. That's ~1/4 the cost. Also, most "AWS sucks" benchmarks are naive at best, failing to use the local disk rather than the NAS, and this only requires 3 bash lines to mount.
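Those "3 bash lines" are roughly this, assuming the instance-store disk shows up as /dev/xvdb (the device name varies by instance type):

    # format and mount the local (ephemeral) disk instead of hammering EBS
    mkfs.ext4 /dev/xvdb
    mkdir -p /mnt/local
    mount /dev/xvdb /mnt/local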


Awesome, thanks for the pointer.


Well said! Most of the AWS products have only minor points of difference and just overcomplicate things. I feel they want to keep it that way.


Serious question - what part of their UX do you think is a game changer? I use DO and Linode. They both look similar to me. DO does have better UX than AWS, but I think DO is comparable to Linode rather than AWS.

On the other hand, I think DO's content guides as a marketing tool are a great asset for them. I haven't seen any other provider do that. Linode has a few guides, but nothing at DO's scale.


You said it yourself when you mentioned the marketing and the community and the customer guides. User Experience is not just web interfaces, it's the whole user experience (man....) and that includes onboarding, self-service, support, learning and all the other small touches that make it a satisfying experience to be a DO customer.


Yes. Moved from AWS to DO and it's like night and day. Sample of great UX that spring to mind:

1. No noise on the dashboard and intuitive process flow.

2. Very good forum and documentation. Site:digitalocean.com whatever-your-problem-is and you'll most likely get it resolved.

3. Little features like auto-populating the Gmail MX record values.


It's a lot of little details.

Just looking at the pricing pages for a minute, not because I think the pricing page is the differentiator, but because I think the attention to detail DO put there is evidenced across the experience:

- One green sign-up button instead of 8
- One option emphasised more than the others
- A toggle to see hourly pricing, instead of small-print monthly
- 4 stats instead of 6

In general, in DO, I find myself not distracted and finding what I need. Less information to process, the right things emphasized.

Linode has improved greatly from the last time I reviewed it ... but still it seems to be a cargo cult of what DO did as opposed to really understanding the value of those details.


+1: insightful. Honestly though, despite the sarcasm, that's not a bad idea.


Probably not. Their previous round was $37.2m on a $153m post, so about 25% equity. Hosting is very low margin, so valuations trend lower than other businesses.

I would say $500m valuation, tops. Most likely <$400m.


I've got two droplets now, one for email/owncloud and another for personal projects with automated backups. It's pretty easy to use, but I worry I don't have the sysadmin chops to keep it secure.

Edit: I followed tutorials on auto-updating packages through cron, securing ssh, and setting up ufw for only services needed when I set it up. It's been about 2 years now so maybe I shouldn't worry.
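For reference, the basics I set up were roughly the following (package and profile names assume Ubuntu; adjust to taste):

    # firewall: allow only the services you actually run
    sudo ufw allow OpenSSH
    sudo ufw allow 80/tcp
    sudo ufw enable

    # unattended security updates
    sudo apt-get install unattended-upgrades
    sudo dpkg-reconfigure -plow unattended-upgrades

    # in /etc/ssh/sshd_config: PermitRootLogin no, PasswordAuthentication no
    sudo service ssh restart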



Would anybody be interested in esoteric hosting?

Think Linode, but specifically for FreeBSD/OpenBSD/Plan9/TempleOS/MenuetOS/etc.?


I want one where I can run DragonFly. I'm using Vultr at the moment but I had to do some manual stuff; I just want to choose DragonFly like you choose Ubuntu at Digital Ocean.


Digital Ocean supports FreeBSD ;)



TempleOS has no networking :)


Which makes it that much cheaper to operate ;)


Presumably it has serial drivers though, yes?


Well, that's because an omniscient god is the lowest network layer!


I've been thinking about using DFlyBSD as a host platform for such things, given some of its awesome networking/virtualization features.


It'd be great for a webhost or something where you choose the OS, but I don't think you could run FreeBSD or OpenBSD on top of it.


Honest question. How much would you pay to have something that manages the updates and setup for you and gives you sysadmin help when you need it?


That's what Rackspace does. I haven't used them in a while, but it looks like it's ALL they do now. (moved to DO, since it was half the cost and I /do/ have enough sysadmin chops to keep two linux servers patched).

http://www.rackspace.com/cloud/compare-service-levels


Rackspace doesn't actively maintain the servers by applying updates, etc. You can go to them if 3rd party software breaks on your servers and they will help you out, but I know for a fact that they don't actively update the servers, and that's a good thing considering you'd want to probably roll that into doing a release where you take that server out of rotation.


Yea but what if it was vendor agnostic? Rackspace has always been super expensive compared to the rest.


Judging by the general implosion of the PaaS industry, not enough to make it worthwhile.

It's a much more complex problem than most people think.


what makes you say the PaaS industry is imploding?


You should check out some of the documentation on Linode's site. They have some great tips about doing some very basic security such as securing SSH, keeping things updated, etc. Worth a peek.


I've been with many VPS providers: KnownHost, RackSpace Cloud, OVH, Linode, etc., and DO has been a pleasure to work with because of all the integrations and tooling it has due to its growing popularity/community.

I think this is a great step for a transition from a "developers cloud" to a "production cloud". I hope they continue to go in the same direction and soon offer multi-container blueprints as easy to deploy as their pre-built images.

0.02


My only question: are investment rounds the new form of private equity bubble fixing? How diversified are these investments, and how do the internal operations of a company change to turn the revenue influx into ROI? I never really got the gist of this, or how culture DOES change with these rounds. The pitches must be damn near "printing money" kind of stuff, made of Magic Mike XXL and pixie dust, to stick.


> The $83 million is going directly into growing our team and expanding our product offerings with networking and storage features.

Great to hear. Real private networking, object/shared storage and most importantly HA (IP failover/load balancing) is all DO is missing to start really competing with AWS for "big business".


I don't think HA is really on the needed list, seeing as you can roll your own load balancer in 5 minutes using provided tutorials. I mean I guess they could make an image for it to make it a little easier.. The AWS elastic load balancer is really nothing fancy.

Real private networking and object shared storage are both huge for sure though.


How is their current private networking offering not 'real'?


Any other digital ocean server in the same datacenter can hit your private IP.


Would you consider that to be a big enough risk to not deploy production apps in their environment? I.e. having your app on 1 droplet and a dedicated db on another. I'm new to ops and trying to learn all that I can :)


I do this in prod; you just need to take extra steps to protect it, i.e. make a firewall rule on the database box to only allow access to the database port on your private network card from your specific web IPs (and make sure the traffic is encrypted).
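Something along these lines on the database box (interface name, port and addresses are just examples, not DO specifics):

    # allow MySQL only from the web droplets' private IPs, over the private NIC
    iptables -A INPUT -i eth1 -p tcp --dport 3306 -s 10.128.0.5 -j ACCEPT
    iptables -A INPUT -i eth1 -p tcp --dport 3306 -s 10.128.0.6 -j ACCEPT
    iptables -A INPUT -i eth1 -p tcp --dport 3306 -j DROP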


I'd probably create static ARP entries as well.
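e.g. something like this (IP and MAC made up):

    ip neigh replace 10.128.0.5 lladdr 04:01:ab:cd:ef:01 dev eth1 nud permanent
    # or with the older tool: arp -s 10.128.0.5 04:01:ab:cd:ef:01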


Thanks for the info!


It's private as in only for customers (not exposed to the internet), not private as in only for you.


Have they implemented IP failover already? I haven't heard anything. Having your own LB without being able to fail over the IP is not HA. If the LB goes down, so does your business.


What?

You use DNS failover and multiple load balancers.

FOO.COM A record -> 1.2.3.4, 1.2.3.5, 1.2.3.6

Then at 1.2.3.4, 1.2.3.5, and 1.2.3.6 you put a load balancer that splits the load across all of your backend servers.

Any LB goes down, and DNS client retries will deal with it. If any backend server goes down, your LB will deal with it.

Using this pretty successfully at digital ocean right now. What is the downside? I guess client DNS retries takes a few seconds, but for a rare case of a load balancer dying, seems not a deal breaker.
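In zone-file terms it's nothing fancier than multiple A records for the same name (IPs here are placeholders):

    foo.com.    300    IN    A    1.2.3.4
    foo.com.    300    IN    A    1.2.3.5
    foo.com.    300    IN    A    1.2.3.6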


That is not how things work. Once your system resolves the DNS record it will keep using that record for a while (depending on the TTL of the record and other factors).

Your browser will also cache the result of the DNS lookup, and if that server goes down it will not try to do another DNS lookup for another host and your service will be unavailable.

It will also be unavailable for any new customer that gets the "faulty" IP address.

Specifying multiple DNS records will just cause your DNS server to use one of those, usually in a round robin fashion.


TTL does not matter because I am not adding or removing systems from my DNS record. Even during an outage, a request to my domain name will return both the broken and the working load balancers.

I am simply giving a list of servers that can answer a request.. clients know to keep trying till one works. (Which they all do. Try it!)


Basecamp/Signal v. Noise/37signals had an article up on how they used Dyn.com's DNS service to achieve something like this, but I can't seem to find it. They had some nice graphs from when they tested it out.

Edit: My Google skills are poor, but I found it here: https://signalvnoise.com/posts/3857-when-disaster-strikes


Thanks for the -1 on a true statement about my own hosting setup!


"Why is DNS failover not recommended?" http://serverfault.com/questions/60553/why-is-dns-failover-n...

Among other things, IE7 will pin IPs for 30 minutes, non-browser clients may have serious issues, etc.


Yah for sure, it is not perfect but it is pretty good.

In my use case, I don't support IE7 (won't work at all on my SAAS app), and I only support browser clients.

I have simulated LB failures by killing nginx, and watched traffic flow over to the other LB without a big delay (in 30 seconds everyone was over).

Fancier IP failover is nice for sure, and would let some more enterprisey people in.. but for a lot of apps out there, DNS failover works great. Surprised from above how many people don't realize it exists or works so well (for so little effort).


Killing nginx is good for testing the "load balancer application crashed" case, but insufficient for testing "load balancer host mysteriously vanished"; for that I would set your firewall to drop incoming SYNs on the load-balanced port. You'll have a much bigger client-side delay when there's no response than when there's a quick port-closed response.
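e.g. something like this on the LB you want to "kill" (the port is just an example):

    # drop new connections silently so clients see a timeout rather than an instant "connection refused"
    iptables -I INPUT -p tcp --dport 80 --syn -j DROP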


For sure, thanks for the tip!

Like I said in previous posts, I don't think it's the end-all, rock-solid load balancer answer. But I like to sleep through the night, and if there's a short pause the one night a year a load balancer crash happens, my uptime is still way higher than most of the internet.


> Among other things, IE7 will pin IPs for 30 minutes

What, regardless of TTL? That's gross.


> I guess client DNS retries takes a few seconds

Have you tested this in all the browsers? According to this ServerFault post[1], it could take minutes for an IP address to be considered "down" in Chromium before it cycles to the next one; Firefox apparently waits 20 seconds[2]. Those posts are dated 2011 but I can't imagine the behavior would've changed a whole lot since then. A user is not going to wait multiple minutes or even 20 seconds for a web page to render - it's effectively down.

IP failover with heartbeat or keepalived seems like a much better solution to me when feasible.

[1]: http://serverfault.com/a/328321/85897

[2]: https://bugzilla.mozilla.org/show_bug.cgi?id=641937
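For what it's worth, the core of a keepalived setup is just a small VRRP block on each load balancer. A minimal sketch, assuming the provider actually lets the floating IP move between hosts (interface and addresses are placeholders):

    # /etc/keepalived/keepalived.conf
    vrrp_instance VI_1 {
        state MASTER            # BACKUP on the standby box
        interface eth0
        virtual_router_id 51
        priority 100            # lower priority on the standby
        virtual_ipaddress {
            203.0.113.50        # the floating IP clients hit
        }
    }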


I have tested it in production, and seen traffic move! I am honestly not sure I ever had a customer access my site in Chromium, so that is not a deal breaker either way (assuming it wasn't also a Chrome bug).

Hacker news takes > 20 seconds to load all the time. You mash reload and go on with your life.

I think people get too hung up on "I must have the most optimal HA setup in the history of the world" and end up having no HA, or spending thousands upon thousands of dollars to build some elaborate AWS Rube Goldberg device that lets them check off a bunch of HA boxes. I know a lot of people who did that, and their fancy AWS HA contraption totally failed in the real world because the entire US-EAST region went down and their operation depended on at least one availability zone of it working to stay up. Look how much effort Netflix puts into HA, and how many hours a year they are totally broken.

For each application, you have many competing desires. You can have an HA website or web application without using IP failover. IP failover is cool, but not without its own problems. Every solution has pros and cons. DNS round robin is not a bad solution for many classes of apps that want dead-simple failover.


> Any LB goes down, and DNS client retries will deal with it.

How? How does the DNS client know that the IP no longer works? Do browsers today have this mechanism?

I'm not a network guy so perhaps I'm wrong but it's my understanding the problem with DNS load balancing is that you can not invalidate the TTL on the client.


It is up to the client. But all of the clients (browsers) out there do more or less the same thing.. they try the first DNS record.. if no response in ~30 seconds, try the second, and so on - going down the list.

TTL does not matter here because I am not yanking or adding to my DNS record. I am simply saying "Here are 3 servers.. try them in order until you find one that works".

In practice, two helpful behaviours are:

a) Most clients try them in order from top to bottom.

b) Most DNS servers (including DigitalOcean's) randomize the return order.

So if you do 2 dns requests, the first will return 1.2.3.4, 1.2.3.5, 1.2.3.6, and the second will return 1.2.3.5, 1.2.3.6, 1.2.3.4

This has the double benefit of splitting traffic more or less evenly between my load balancers, and dealing with things with one or more is dead.


I'm not sure all clients will behave as you are experiencing. But in any case:

> if no response in ~30 seconds, try the second

That is not HA. Most people will not wait 30 seconds for a page to load. If your business loses money with every minute of downtime, this is certainly not adequate. It's certainly not recommended: https://en.wikipedia.org/wiki/Round-robin_DNS#Drawbacks


Name a business that has not had a 30 second outage in the past year?

How about services you use a lot? How many hours has Hacker News been down in 2015 (yet you are still on it right now)? How many hours has Netflix been down in 2015? How many hours have entire chunks of AWS been down in 2015?

Every business is a spectrum. A HFT trading shop may decide that 1 second downtime per day is their max outage. A webpage advertising a pet adoption event may decide that 6 hours of downtime per day is the most they can tolerate. You have to make this decision for each product, and even better -each part of each product.

The entire point of this post thread was the idea behind "I can not use DO for serious stuff until they implement load balancing"... which is silly for most businesses. And even those businesses that need high uptime, I offered (and still believe) that DNS round robin is a decent way to get HA for almost no money.

You link to an article about it, but miss the boat. What other solution can I implement in a few minutes to provide available load balancing between any two servers in the world (same or different host provider, same or different datacenter, same or different continent).

Sometimes the relatively simple solution is "good enough". Sure you can find a wikipedia page saying where it is not perfect. I would not DNS round robin a HFT trading app. I have no problems on it for 99.99% of the web though. So much of the web has NO failover of any kind, stupid simple DNS round robin would be a vast improvement for most websites.


> Name a business that has not had a 30 second outage in the past year?

It's not a 30-second outage! Your domain will keep resolving the bad IP. Even with an extremely low TTL (also not recommendable), ISPs' DNS servers will cache it, and some will even ignore your TTL. A big portion of all new users will keep hitting the bad IP.

Anyway, I won't try to convince you to change your setup if you are happy with it, but it's obvious from the comments that I'm not the only one thinking it's a suboptimal solution, so at least some of us won't be considering DO for HA systems given the circumstances.


> It's not a 30 second outage! Your domain will keep resolving the bad IP. Even with an extremely low TTL (also not recomendable) ISP's DNS will cache it and even some will ignore your TTL. A big portion of all new users will keep hitting the bad IP.

So with 5 load balancers, 1/5 of customers see a one time hit of 30 seconds (after which they return to full speed).

What better solution for the same price do you propose to get HA on a budget cloud provider?


> What better solution for the same price do you propose to get HA on a budget cloud provider?

Nothing; your solution is obviously better than having none and it's enough for your needs. But the original discussion was about what's needed for DO to become a competitor for big business, not low budget; there they are already king.

Normally you want at least IP failover, meaning that you get an IP that can be rerouted to a different server with a simple API call. At work we use Hetzner, which is not exactly a high-end provider but offers it: http://wiki.hetzner.de/index.php/Failover/en

Then, can be even better if the provider offers this HA-load balancer as a service, so you don't have to setup anything.

You might still need DNS failover to recover from a full datacenter going offline.


Your view is an accurate view. It takes the end user -- be it some sort of client, browser, or manual user retry -- to hit the other, alive IP(s). There's also the TTL of a bad record being dropped to consider.

You can simulate IP failover with something like Elastic Network Interfaces / Elastic IPs in AWS... it's just not going to be on the same level of speed as doing it in, say, your own rack in a datacenter. It's also subject to weirdness where you could have some sort of split brain, nodes trying to take over interfaces in a loop. The health checked "multiple load balancers behind a single DNS record" approach has flaws but also simplifies a lot of things.


> expanding our product offerings with networking and storage features

I'm so excited for this. I'd previously commented about how the lack of non-SSD storage meant I had to screw around with S3 when I really just wanted to keep everything on DO.

Great company. Been with them for two years now, and couldn't be happier. Combined with Cloud66 I worry less about deployments and servers and backups, and more about just getting the code out.


Has anyone used Vultr.com?

I ask because they have all the same features as DO + way more (e.g. dedicated hosting w/ same great panel, BYO ISO, etc).


I am also curious about this. The fact that Vultr accepts Bitcoin as payment further piques my interest. Can anyone shed some light on their performance and convenience vs. DigitalOcean?

Edit: Found this http://blog.due.io/2014/linode-digitalocean-and-vultr-compar..., which seems to portray it quite favorably.


This doesn't compare against DO but it does compare performance against Rackspace and AWS.

https://www.vultr.com/benchmarks/


Slightly objective performance comparison if you ask me. Obviously Vultr is going to say they're better than the competition.


I have, with absolutely zero problems. I recommend them often. Not sure I'd trust them with PCI compliance requirements or something that heavy but I'd certainly use them for commodity VPSes over many others. They seem to offer a lot better performance/cost than DO.

Vultr is a brand of this company -- https://www.choopa.com -- and they've been around for a while.


I've used them and love them. I did some benchmarks a while ago (standard unixbench and such) and they outperformed DO on every metric. The extra features they have (particularly custom ISOs) are also really nice.

Definitely recommend them, although I use DO exclusively for the Github student credit.

Benchmarks:

Vultr: https://gist.github.com/bobobo1618/0972fc51f49d90fb37af

DigitalOcean: https://gist.github.com/bobobo1618/81aa3f413b99aaab1f0d


I'm a heavy Linode, DO and Vultr user. All three are great (with Linode being my preferred choice if cost isn't the overriding factor).

I've got Vultr instances in all of their EU locations and haven't had any issues (connectivity wise or uptime wise). One note for Vultr: new accounts are limited to max 5 VPS instances by default until you open a support ticket and request the limit be raised (which they were happy to do, at least for me).


I've switched from DO to Vultr back when DO did not offer FreeBSD and DO's SFO network was pretty bad. I'm back on DO now because I've got my hands on some credits, but I would switch back to Vultr when I run out. DO's SFO network has improved I think, but for the $5 plan you get 50% more memory at Vultr (512mb vs 768mb).


I actually dislike DO's new control panel.


I've used them. Their control panel is not as pretty, and their community is not as large. But the product is arguably better; they offer things that DO doesn't, like BYO ISO, daily backups for the same cost as DO's weekly (seriously DO, only weekly backups???), and a ton more.


So if you had to choose between using DO or Vultr, which would it be and why?


I use DO and silently complain about the lack of daily backups. I don't really have a need for the other stuff, and Vultr is cheaper (when I last looked). But for some reason I always end up just using DO.


Their ANTISPAM policy is draconian, but I like their interface. I like that root passwords are in the web interface instead of going through email. Finally, their presence in more datacenters is useful for someone in the middle of the country.


I have, I found their Sydney instances to lag for seconds at a time, completely randomly, while SSH'd in (I also live in Sydney, so it's not distance). I now have an instance with Digital Ocean in Singapore and I much prefer it.


Quick question: How's your ping from Sydney to Singapore?


I have used them for a side project, no issues. The big reason for me moving from DO to them that I didn't see mentioned is because they have actual private networking. And also the option for servers with HDD instead of SSD.


I had been using Rackspace for a couple of years before moving to DigitalOcean. I had a good experience with Rackspace when I started, but my bills kept growing and the server started to have constant issues every now and then, so I decided to move to DigitalOcean a couple of years ago. My traffic since then has grown quite a lot, from 100K/month to around 1 million visitors/month, and my bills from DigitalOcean are still not much higher than they were in the later stages on Rackspace, and performance is much better for me with DigitalOcean.

The only thing I don't like about DigitalOcean droplets is the requirement to shut off the server before resizing; Rackspace allowed me to do it without needing to shut it off.


Slightly offtopic, but I am curious if anyone has any insights into the legal side of hosting profit seeking services on top of VPS's in general. Is the boilerplate contract(s)/eula/tos good enough generally or do you seek to actually make changes to a custom one?

What about hosting websites vs reselling access for some other purpose (eg. similar to game hosting services that allow full customer control of the instance?)

It seems to me like there is a lot of room for a tool that can spin up an instance over multiple VPS providers, because sometimes one will have a colo close to where you want and sometimes another will.

Anyone aware of comprehensive location based benchmarking of all the VPS's?


There are some libraries you can use to abstract away differences between VPS providers:

https://jclouds.apache.org/

https://libcloud.apache.org/

https://developer.rackspace.com/blog/gophercloud/

http://www.openstack4j.com/

For playing with JVM stuff I found openstack4j easier to use from Scala and Clojure than jclouds.

I didn't downvote you, but I figure someone thought you were too off topic.


I love DO.

I'm a novice/intermediate programmer, and when I knew nothing about what VPS even was I started using Linode(due to many great recommendations).

Linode is a great service, but recently I've switched to DO and I like it so much more. As a person who just needs a simple and straightforward way to put several django projects online - DO offers me a simple and beautiful interface, cheaper prices, and a lot of great and extremely useful tutorials.

It is much nicer to use and a droplet price starts from $5/mo, which is freakin' awesome, and all I need from VPS service at this point.

Thank you guys, you are great, keep it up!


Great work Ben and team! I've been a customer for 2 years now, absolutely love the service and see no reason to leave it.


I'd like a harddrive option. To get a large amount of storage is way too expensive.


hm. Interesting. From what I know of the industry, their size and pricing, I would have thought they would be profitable enough that raising this sort of money wouldn't be particularly interesting.

Does this mean that they are operating at a loss?


There are many reasons to raise money. They may be operating at a loss but with growth in recurring revenue such that they expect to become profitable in a few years. But I wouldn't be surprised if they are raising this money to fuel an already growing fire, either by enhanced sales teams or even building out new capital projects that will open new revenue channels or create barriers to future competition.


I would be surprised if they didn't have a positive cash flow by now. It's probably more about growing faster than about having money per se.


I love DO, they are doing great work. The only thing I regret is the relatively poor choice of platforms they support.

For example, there has been really big demand for NixOS for two years now, but still no announcement whatsoever.


Really? Do you have some stats on NixOS growth to support that?


https://digitalocean.uservoice.com/forums/136585-digitalocea...

One of the highest voted customer feedback on their official forum.


Well, to be honest, DO is implementing the features most people actually care about. It works like a charm, but it still has a long way to go :)


I'm pleased for DO. Seems like a decent company doing things well. I've never had a complaint with their services.


It's pretty amazing what they've managed to do: in what was essentially an oversaturated market, they've managed to pull ahead of incumbents like Linode.


how much do you think giving away huge amounts of free credits helped?


If all goes well, looks like they might be competing directly with AWS soon.


They aren't even in the same ballpark. Hell, Google isn't even in the same ballpark as AWS and Google made $83million in the time it took me to read the article. DO is great, but doesn't belong in the same sentence as AWS.


Azure is closest I think. Enterprisey cloud hosting.


my goodness, I was reading through this entire thread and all I was thinking about is "How come no one is mentioning Azure".

I have tried Azure, AWS, and DO, and by far the most stable and usable one is Azure. I thought I was the only one who thought Azure was good...


Most people here (including me) are not windows people, so Azure is obviously not going to be the first place we look to.

They have linux VMs which is cool, but still not sure I would pick them.


MS has a strong reputation as a company with a closed-source, vendor-locked stack, so many of us can't feel like "first-class citizens" in the MS world. But they are working on their reputation; VS Code is a very good example, and I hope soon we will not be afraid to trust them.


AWS doesn't target the same audience or have the same product line. They're related, but not at all the same product market. To illustrate this, AWS is infrastructure management tools, and DO is infrastructure management. The fact that AWS has infrastructure management as well (built on top of their own tools) is only relevant for the people that build out the rest of their infrastructure. DO is for folks that don't want to have to learn and understand infrastructure much at all. That's very doable these days for a lot of aspects of development, but it's also not at all what AWS is going for.

DO is closer to competing with just Elastic Beanstalk and maybe RDS (from the standpoint of a managed RDBM service, not the feature set).


> DO is closer to competing with just Elastic Beanstalk

Hu? DO is like EC2, a box on the net with an IP address. Elastic Beanstalk is a PaaS that will auto-scale for you.


AWS is basically software-defined enterprise/software-defined business at this point. Their products and services are amazing, and they will certainly dominate the Fortune 500 along with GOOG and M$, etc. I'd imagine the push down into the SMB will be a bit more difficult; the push up into the SMB with a more b2c(dev) product offering and community approach seems to be the better game here. Think Microsoft and Apple in the early days. These are still multi-billion-dollar addressable markets; there is a lot of room to play in cloud.


But do they want to? AWS is going for a high price for a super super deep stack of 600 little services. I kind of like that DO focuses on their core VM thing.

Obviously you can't fake some things with just VMs (rolling your own VPC for example is kind of hard to do), but a lot of people don't need Redshift or SQS or any of the amazon SAAS things...




