DO has really benefitted from Linode stalling over the past few years. I admire Linode for remaining a bootstrapped business but it feels as though the owners lost their fighting spirit and energy... Perhaps because the small pool of Linode owners felt they made enough money already.
DO's announcement talks about a storage product, which is strategically important and, crucially, something Linode has sorely needed for a long time. And yet the biggest development in recent years at Linode has been a proprietary stats and monitoring system built as an upsell, which doesn't really do anything distinctive that Nagios or another package couldn't provide.
Instead, Linode is now switching their entire platform from Xen to KVM, a curious move that creates risk and consumes engineering effort that could have been spent on product development.
I have been a huge supporter of Linode over the years, and the startup I co-founded is one of their biggest customers, but at this point DO seems like the winning horse to back.
I'm not saying you're wrong but I disagree from my own POV. It's a common play in the hosting business to grow big quickly then sell out to a more enterprisey company (as happened with EV1Servers, Softlayer, Heroku) and if DO is taking such large amounts of funding, something will have to happen (IPO, acquisition, etc) and some companies don't make the transition well (I'm not saying DO wouldn't, but you never know).
Linode, on the other hand, can remain a company focused on just being a long term business forever. They might move a little more conservatively, but they have owners with skin in the game and customers to keep happy. I use both DO and Linode, but Linode for my most critical stuff simply because I "feel" they're more likely to remain basically the same in 5 years' time and I value that consistency as a business.
I think DO is superb, I have great admiration for them and I recommend them a lot, but I also feel they're the riskier horse to back even if the potential upside is so much greater.
>> It's a common play in the hosting business to grow big quickly then sell out to a more enterprisey company (as happened with EV1Servers, Softlayer, Heroku)
And that's exactly what Slicehost did, years ago.
I wonder if Rackspace ever courted Linode. Slicehost always seemed to be the inferior one in price and performance.
Before it was sold, Slicehost's tools and documentation were better, and their customer support was incredible. You would get answers in minutes, even if you only rented a $20 VPS.
For all the money Linode has on hand, and the size of their technical team, it's absolutely insulting that they haven't made any tangible improvement to their management front-end in at least five years, if not more.
The way it's behaving, it's as if it was acquired and kept on life support.
For what it's worth, I've been a Linode customer for a few years and never really felt their management front-end was in urgent need of an update. It's solid and does the job for us. My only complaints, to be honest, are 1) the lack of storage-dedicated nodes, as OP points out, and 2) the price; it gets very expensive very fast as you scale and spin up more nodes.
We are currently considering moving to dedicated machines but only because of 2), otherwise we would happily be their customer for life.
Nothing is worse than spending weeks securing every aspect of your VPS only to have incidents like this appear. And worst of all? To this day Linode has never clearly said what happened or what they did to prevent it happening again.
I chose vultr over DO because DO used so many external web services to run their site. Every new page at their site required me to turn on 5 more domains in NoScript. Vultr only required me to enable their site. The performance at vultr was also very good for what I needed: test servers.
Actually, I did try Scaleway when I saw them on HN like 9 months ago! I recompiled a Go program I was working on [1] for ARM and benched it against Linode.
It was literally 10x slower than Linode even after playing around with Go's concurrency level to find the fastest runtime, and even with the dataset in memory. :( ARM just wasn't the right arch for what I was doing.
I ended up going with Vultr because it has the $5 pricepoint for hosting tiny websites and tons of datacenter locations. Their CPU performance and network speed were great in my tests.
It "does the job" but it's not as pleasant to use as it could be. A lot of the navigation is pointless and confusing, information is often buried several levels deep, and some information is missing from the detail screens entirely, so you have to go back to the main listing to find it: instance type, for example, is only shown on the main list, not on the individual instance tab.
If GitHub had never improved their site since they launched it would be awful. Every time they move things forward I'm happier to be a paying customer. With Linode I reluctantly use them, but for new projects I'm using other services that work better.
Same here, moving to a dedicated server from a Linode this week. For slightly under 2x the monthly cost I'm paying at Linode, I can get a dedicated server with 8x the memory, infinity more transfer on the same size pipe (unmetered 100Mbps connection), 20x more disk space, and 2x the CPU, and all dedicated so no worries about phantom performance problems related to other tenants. Lish &c are a big plus for virtualized nodes, but I think I'll live without it.
Their IP Failover is really badly documented and hard to work with.
Their Nodebalancers are noticeably slow, the interface is alright, but you are better off setting up your own loadbalancer.
I feel that static networking could be made easier, and more automatic when I am adding a new node.
Their stackscripts should be able to receive parameters for when it is running, and the error-reporting should be better. I had issues where my scripts wouldn't run and I had no idea why.
Not doing regular improvements is how you end up being years behind your competitors, without even noticing it, and having to do a huge rewrite just to achieve parity.
I know Digital Ocean's control panel has improved several times since their launch, and their ability to launch instances with a complete stack is extremely useful. Linode has done nothing here. They point to their badly documented StackScripts system and shrug.
There's a hundred things Linode could do to make user's lives easier and they've done maybe two or three of them.
Why do people think "well designed" translates to superficial and pointless? Yes, sometimes this is the case, but when you have pride in your work you'll want to present it in the best possible light.
Would you rather eat in a restaurant where all the chairs are creaky, where the tables are wobbly, and where the wait staff is doing their best to get by with broken equipment, or would you instead go down the street to a place where everything may not be new but is well maintained? If the food quality was the same, why would you insist on going to the place with crappy, broken stuff?
Linode just doesn't seem to care about their site at all. If they did they'd listen to user feedback and improve things once in a while. You know, like at least once every six years.
I'm not sure if I am meant to be "people", but I certainly don't think that design is superficial; rather, I think Linode is designed fine, just works, and doesn't really need changes.
If we talk about DO in comparison, then DO's interface is just ugly and almost unusable. Every time I use it, I want to leave their site as soon as possible.
If you think Digital Ocean is ugly and almost unusable I have to wonder what you think is better.
Their one page instance creator where you pick size, location, and distribution is extremely convenient. This is three separate steps with Linode that happens over the course of six screens, plus two more if you want to enable private networking.
Eight steps vs. one.
They also don't offer the ability to install a system with an SSH key pre-installed, avoiding the need for password authentication when bootstrapping your system. Even on a technical level Linode is way behind here.
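For anyone stuck on a provider without key injection, the usual workaround after first boot looks roughly like this (hypothetical host address; assumes a stock Debian/Ubuntu image):

```shell
# Copy your public key over once, using the root password set at creation:
ssh-copy-id root@203.0.113.20

# Then disable password logins in /etc/ssh/sshd_config:
#   PasswordAuthentication no
#   PermitRootLogin without-password
# and reload sshd so the change takes effect:
ssh root@203.0.113.20 'service ssh reload'
```

It works, but it's an extra manual step per node that key injection at creation time would eliminate.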
I wouldn't call DO's interface ugly but I hugely prefer Linode's. Much better visual feedback and it feels like a solid management console rather than a pretty toy.
Why does Linode look "serious" and Digital Ocean look "toy"? It's a phenomenon I see happening primarily in the tech industry where ugly trumps functional and clean. People like the complexity aesthetic, the rugged, rough edges.
Try to open this link https://cloud.digitalocean.com/droplets and find the current balance of your account. I can't understand why it's under "Settings".
When I'm in the dashboard, I want to see as many controls as possible, not as minimal an interface as possible.
I suppose a lot of engineers prefer function over form rather than the other way around. If anything I'm suspicious when a tech/engineer focussed product prioritises aesthetics over function. Might be irrational but it's my first reaction.
Could not disagree more. I would NEVER run anything serious on Linode.
They have a long history of withholding information from customers in the face of security incidents and outages. The last time they were hacked I found out from Reddit, and even when they bothered to tell me they failed to say (a) what actually happened or (b) what steps they took to prevent it occurring again. And many, many outages had information communicated on IRC hours before the website was updated.
It is the culture and professionalism that differentiates one VPS provider from another. Linode gets a massive thumbs down for me.
The issue here is not that the 0-days occurred but how you deal with them and what systems you have in place to catch them. Linode has consistently been sloppy about notifying customers, and their auditing systems are/were clearly inadequate, since their position changed over those few days. Sure, the data may be encrypted, but if you are sloppy about the process you're likely pretty sloppy about the implementation. It's trivial to decrypt data if it hasn't been encrypted properly.
a) Average reliability - had numerous incidents at their London and Fremont DCs. And the complete inability to tell me what was happening in a timely manner was pretty unacceptable.
b) Average support - they will answer the simple questions very quickly. But anything reasonably complex they will actually not bother responding at all.
Funny... I remember when Linode first became really popular, it was right around the time Slicehost got bought (and subsequently shut down by) Rackspace. A lot of people moved over to Linode because Slicehost was no more (or when the writing was on the wall that it would soon become no more).
So perhaps you're correct that lack of funding is making Linode lazy now, but that doesn't mean that getting bought or accepting a bunch of money would solve the problem.
Also, look what happened to Slicehost after acquisition. Rackspace woke up, realized Slicehost customers are cheap, and drove them off by raising prices.
When people ask, "Why does HN/the tech industry pay so much attention to the companies that are raising money?" I'll point them to this.
There's absolutely nothing wrong with bootstrapping and making a good living, but the stuff that goes big and wants to scale quickly usually has to raise a bunch of money to get there.
That's true, but IMHO hosting is not a service that benefits significantly from economies of scale beyond a certain (though reasonably high) point. Would Linode be any better for me as a customer if it were immediately 3x the size? I'm not sure. As a customer, would I prefer a Linode that were 10x the size and having to head for an "exit" of some kind? No way.
I'm not sure; Amazon putting data centers in glaciers or Google trying to float a ship out into the ocean to water-cool it certainly seem like they could provide an order-of-magnitude improvement in performance or price over what is currently available; the tricky part is building up enough steam that you can be trusted with enough cash to make that happen.
If nothing else, we all benefit from the prices. AWS is so cheap I barely have to think about it.
OTOH, taking money puts pressure on a bigger exit/revenue scaling plan. To me, that means more like GoDaddy and less like prgmr, which is not what I want from my service providers.
The pressure to expand is a negative indicator when I'm a user of a service.
Fun fact about Linode: if it says automatic backups are on in your control panel, they can still be off on Linode's end. I went to recover from a recent snapshot, only to find our latest backup was 11 months old. That it would be "on" on our end, and broken for 11 months, for a service I'm not supposed to have to think about, really blew me away. But, to Linode's credit, DO does not offer the same level of backups.
I used Digital Ocean for a while. My experience was bad reliability and random technical issues. I had various very experienced ops people verify with me that it wasn't an issue I introduced into the running systems.
I went back to dedicated servers at a smallish provider and was reminded how nice it can be to not have all the cloud virtualization stuff get in the way. The cloud is just too fragmented among providers in how they're set up for me to use the service and not fear lock-in. Does it take me 3 or 4 days to get new boxes? Yes. Is it causing a massive headache for me? No, because I plan things and order them ahead of time.
Just my 2 cents, I know others who use DO and love it.
Same experience here. A few of my developers migrated our servers to DO 2 years ago to "save" a few hundred dollars. Turns out DO has planned downtime every other month, and it's cost us 100x more in staff time and headache dealing with them. You can't run a SaaS or anything that requires uptime reliability (i.e., any sizable business) on it. I've transferred our main site back to AWS. Also, AWS dedicated pricing is now almost the same as DO's, and much more reliable.
But congrats to the DO team. They will only get better.
So, Digital Ocean doesn't have great reliable up time then? I've been using OpenShift's free hosting tier to host a Ghost blog, and it goes down CONSTANTLY, making it unusable. I was about to start paying the $5 a month plan with Digital Ocean, but I'm not going to if it also goes down frequently.
Heh. Nearly the same thing here but on a personal projects level, no company stuff. I really wanted it to be something I did or for there to be a solution because migration is never fun, but I finally had to bite the bullet.
EDIT: had no idea about the planned downtime. Maybe that explains the random chaos issues I had.
Because I prefer a set of simple and basic tools used in combination to accomplish something instead of a "push this button and it does everything" approach. It's the *nix pipe mentality of combining simple well built tools to build solid and reliable things that you can adapt any way you please.
I also build things that are heavy on the network side, and virtualization drops networking performance a great deal (this may be changed these days, not sure).
Don't get me wrong. I don't fear virtualization. I use virtualbox and vmware heavily for development, and yes they do have a purpose, but it's a bad fit for the type of projects I build.
I just find that anything in life that just continues to add features for the sake of adding features, and create more and more "magic" push buttons to solve your problems eventually goes to crap and becomes a fragmented dependency hell that I would rather not deal with. And yes, I do love Golang.
What? AWS is great because it has a nice console, but it also gives you direct access to all the VMs, and there is an AWS command line you can install to program/script everything. What's missing here?
I do not work for Joe's or have any affiliation. It's a very "non-automated" setup. You file a ticket with a rep to get new servers and describe what you want ("I need 5 more servers just like X"); there are no online forms to automate it or any of that. The plus side is you never get a canned response or get ignored. Very quick and professional.
Anyone else reading this: they're not a bad budget provider at all. A good analogy would be that they're like a small pizza place in your hometown, versus Limestone or FDC.
If you don't mind me piggy-backing, I have a non-affiliated love affair with ReliableSite.net. Yes - weird name, but I've had nothing but amazing, amazing experiences with them. The quality is 5x what you get for paying 1/5th the price.
Before anyone thinks about getting into hardware on this kind of level, please do your research @ webhostingtalk.com. Yes, $5 can go a long way with DigitalOcean because they're funded (and kind of got lucky); but that other $5 VM? Good luck.
I wonder if this pushes their valuation over $1B; if so, I think that means Techstars is the first accelerator outside of YC to produce a unicorn.
Personally I hope so. Digital Ocean is a great product, and I think one of the really smart things they did was be generous with their free credits; it was a great way for me to get onto their platform and later drop a fair amount into hosting with them.
Is unicorn just a billion dollar non-public startup? Isn't there some other magic sauce required like supernormal margins or superfew employees or exploiting toothless regulations?
DO seems to have almost too straightforward a business model (buy servers, rent servers in sub-units) to be considered part of the modern startup pool of wishful thinking. I mean, it's not like they're an iPhone app for renting other people's idle server space on demand. Now that would be a game changer.
DO's UX is a game changer; I personally think it's why they have been so successful in what seemed like a crowded market with the "same" business model as the others.
Yeah, the simplicity is great. AWS has 80 products and it takes months to figure out how they all work together. DO's click-and-go for $5/month helps just from a mental-clarity point of view.
At this point AWS is so complicated they should be offering an official Cisco-like series of certifications.
Oh, great to know. They are really cheap too ($150, $300, $75 re-up).
Amusingly, that page demonstrates the need for an AWS certification in the first place: the page has over 100 things trying to grab your attention with no focus or clear direction at all. Amazon as a corporate entity seems to go for "maximum information + maximum confusion" in their UX at every turn.
AWS is an inherently complex product. It has far and away more features than any other competitor, so there is obviously going to be more confusion. I think the AWS dashboard is getting better and better and relatively easy to use if you keep each product isolated. There being 40+ products to choose from is overwhelming if you don't know what you are looking at, but that is the nature of the beast.
Why fear a complicated cloud? I tried DO. It was cute, then I kept using AWS. DO is very simple, for simple things. But where do I get my GPU-optimized Droplet, or one with a 10Gb network connection and 244GB of RAM to compute that graph problem with 3B edges? DO only has relatively small, simple instances.
Further, DO is MORE expensive than AWS. What's that, you say? Well, right now I can grab a spot request for a box with 8 cores and 30GB RAM for $0.066/hr, or $48/mo, compared to $160/mo on DO. That's ~1/4 the cost. Also, most "AWS sucks" benchmarks are naive at best, failing to use the local disk rather than the NAS, which only requires 3 bash lines to mount.
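For reference, mounting the instance store really is about three lines; the device name below is a guess, so check `lsblk` on your instance first:

```shell
sudo mkfs.ext4 /dev/xvdb          # format the ephemeral disk (destroys its contents)
sudo mkdir -p /mnt/local
sudo mount /dev/xvdb /mnt/local   # local disk, not EBS; gone if the instance stops
```

The catch, of course, is that instance-store data doesn't survive a stop/terminate, so it's only suitable for scratch space or data you can rebuild.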
Serious question - What part of their UX do you think is game changer? I use DO and Linode. They both look similar to me. DO does have better UX than AWS, but I think DO is comparable to Linode rather than AWS.
On the other hand, I think DO's content guides as a marketing tool are a great asset for them. I haven't seen any other provider do that. Linode has a few guides, but nothing at DO's scale.
You said it yourself when you mentioned the marketing and the community and the customer guides. User Experience is not just web interfaces, it's the whole user experience (man....) and that includes onboarding, self-service, support, learning and all the other small touches that make it a satisfying experience to be a DO customer.
Just looking at the pricing pages for a minute (not because I think the pricing page is the differentiator, but because the attention to detail DO put there is evident across the whole experience):
- One green sign up button instead of 8
- One option emphasised more than the others
- A toggle to see hourly pricing, instead of small print monthly
- 4 stats instead of 6
In general, in DO, I find myself not distracted and finding what I need. Less information to process, the right things emphasized.
Linode has improved greatly from the last time I reviewed it ... but still it seems to be a cargo cult of what DO did as opposed to really understanding the value of those details.
Probably not. Their previous round was $37.2m on a $153m post, so about 25% equity. Hosting is very low margin, so valuations trend lower than other businesses.
I would say $500m valuation, tops. Most likely <$400m.
I've got two droplets now, one for email/owncloud and another for personal projects with automated backups. It's pretty easy to use, but I worry I don't have the sysadmin chops to keep it secure.
Edit: I followed tutorials on auto-updating packages through cron, securing ssh, and setting up ufw for only services needed when I set it up. It's been about 2 years now so maybe I shouldn't worry.
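For anyone in the same boat, those tutorials mostly boil down to a handful of ufw rules; the ports here are examples for a typical web host, so adjust to the services you actually run:

```shell
sudo ufw default deny incoming    # drop everything inbound by default
sudo ufw default allow outgoing
sudo ufw allow 22/tcp             # SSH (consider limiting to your own IP)
sudo ufw allow 80/tcp             # HTTP
sudo ufw allow 443/tcp            # HTTPS
sudo ufw enable
```

Combined with key-only SSH and unattended package updates, that covers most of what the common hardening guides recommend for a small VPS.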
I want one where I can run DragonFly. I'm using Vultr at the moment but I had to do some manual stuff; I just want to choose DragonFly like you choose Ubuntu at Digital Ocean.
That's what Rackspace does. I haven't used them in a while, but it looks like it's ALL they do now. (moved to DO, since it was half the cost and I /do/ have enough sysadmin chops to keep two linux servers patched).
Rackspace doesn't actively maintain the servers by applying updates, etc. You can go to them if 3rd party software breaks on your servers and they will help you out, but I know for a fact that they don't actively update the servers, and that's a good thing considering you'd want to probably roll that into doing a release where you take that server out of rotation.
You should check out some of the documentation on Linode's site. They have some great tips about doing some very basic security such as securing SSH, keeping things updated, etc. Worth a peek.
I've been with many VPS providers: KnownHost, Rackspace Cloud, OVH, Linode, etc., and DO has been a pleasure to work with because of all the integrations and tooling it has thanks to its growing popularity/community.
I think this is a great step for a transition from a "developers cloud" to a "production cloud". I hope they continue to go in the same direction and soon offer multi-container blueprints as easy to deploy as their pre-built images.
My only question: are investment rounds the new form of private-equity bubble fixing? How diversified are these investments, and how do a company's internal operations change to turn the revenue influx into ROI? I never really got the gist of this, or how culture DOES change with these rounds. The pitches must be damn near printing-money stuff, made of Magic Mike XXL and pixie dust, to stick.
> The $83 million is going directly into growing our team and expanding our product offerings with networking and storage features.
Great to hear. Real private networking, object/shared storage and most importantly HA (IP failover/load balancing) is all DO is missing to start really competing with AWS for "big business".
I don't think HA is really on the needed list, seeing as you can roll your own load balancer in 5 minutes using provided tutorials. I mean I guess they could make an image for it to make it a little easier.. The AWS elastic load balancer is really nothing fancy.
Real private networking and object shared storage are both huge for sure though.
Would you consider that to be a big enough risk to not deploy production apps in their environment? I.e. having your app on 1 droplet and a dedicated db on another. I'm new to ops and trying to learn all that I can :)
I do this in prod; you just need to take extra steps to protect it, i.e., add a firewall rule on the database server to only allow access to the database port on your private network card, from your specific web servers' IPs (and make sure the traffic is encrypted).
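As a sketch of that rule (hypothetical addresses: 10.128.0.5/.6 stand in for your web nodes' private IPs, eth1 for the private interface, and 3306 for a MySQL-style database port):

```shell
# Allow the DB port only from the web servers, and only on the private NIC:
sudo ufw allow in on eth1 from 10.128.0.5 to any port 3306 proto tcp
sudo ufw allow in on eth1 from 10.128.0.6 to any port 3306 proto tcp
# Everything else inbound gets dropped by the default policy:
sudo ufw default deny incoming
sudo ufw enable
```

This matters doubly on providers where the "private" network is shared with other tenants in the same datacenter rather than isolated per customer.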
Have they implemented IP failover already? I haven't heard anything. Having your own LB without being able to fail over the IP is not HA. If the LB goes down, so does your business.
Then at 1.2.3.4, 1.2.3.5, and 1.2.3.6 you put load balancers that split the load between all of your backends.
Any LB goes down, and DNS client retries will deal with it. If any backend server goes down, your LB will deal with it.
Using this pretty successfully at digital ocean right now. What is the downside? I guess client DNS retries takes a few seconds, but for a rare case of a load balancer dying, seems not a deal breaker.
That is not how things work. Once your system resolves the DNS record it will keep using that record for a while (depending on the TTL of the record and other factors).
Your browser will also cache the result of the DNS lookup, and if that server goes down it will not try to do another DNS lookup for another host and your service will be unavailable.
It will also be unavailable for any new customer that gets the "faulty" IP address.
Specifying multiple DNS records will just cause your DNS server to use one of those, usually in a round robin fashion.
TTL does not matter because I am not adding or removing systems from my DNS record. Even during an outage, a request to my domain name will return both the broken and the working load balancers.
I am simply giving a list of servers that can answer a request.. clients know to keep trying till one works. (Which they all do. Try it!)
Basecamp/Signal vs. Noise/37signals had an article up on how they used Dyn.com's DNS service to achieve something like this, but I can't seem to find it. They had some nice graphs from when they tested it out.
Yah for sure, it is not perfect but it is pretty good.
In my use case, I don't support IE7 (won't work at all on my SAAS app), and I only support browser clients.
I have simulated LB failures by killing nginx, and watched traffic flow over to the other LB without a big delay (in 30 seconds everyone was over).
Fancier IP failover is nice for sure, and would let some more enterprisey people in, but for a lot of apps out there, DNS failover works great. I'm surprised, from the comments above, how many people don't realize it exists or works so well (for so little effort).
Killing nginx is good for testing the "load balancer application crashed" case, but insufficient for testing "load balancer host mysteriously vanished"; for that, I would set your firewall to drop incoming SYNs on the load-balanced port. You'll have a much bigger client-side delay when there's no response at all than when there's a quick connection-refused response.
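Something like this on the load balancer simulates the vanished-host case (port 80 assumed; run your failover test between the two commands):

```shell
# Silently drop new connections instead of refusing them:
sudo iptables -A INPUT -p tcp --dport 80 --syn -j DROP
# ...observe how clients behave, then clean up by deleting the same rule:
sudo iptables -D INPUT -p tcp --dport 80 --syn -j DROP
```

Existing established connections keep working, which is also realistic for a partial network failure.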
Like I said in previous posts, I don't think it's the end-all, rock-solid load balancer answer. But I like to sleep through the night, and if there's a short pause the one night a year a load balancer crash happens, my uptime is still way higher than most of the internet's.
Have you tested this in all the browsers? According to this ServerFault post[1], it could take minutes for an IP address to be considered "down" in Chromium before it cycles to the next one; Firefox apparently waits 20 seconds[2]. Those posts are dated 2011 but I can't imagine the behavior would've changed a whole lot since then. A user is not going to wait multiple minutes or even 20 seconds for a web page to render - it's effectively down.
IP failover with heartbeat or keepalived seems like a much better solution to me when feasible.
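For reference, a minimal keepalived VRRP pair looks something like this (interface name and VIP are placeholders; note this only works where the network honors gratuitous ARP, which many cloud providers' networks don't):

```
# /etc/keepalived/keepalived.conf on the MASTER
# (the BACKUP node is identical except "state BACKUP" and a lower priority)
vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 51
    priority 100
    advert_int 1
    virtual_ipaddress {
        203.0.113.10
    }
}
```

When the master stops sending VRRP advertisements, the backup claims the virtual IP within a few seconds, which is the sub-30-second failover the DNS approach can't give you.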
I have tested it in production, and seen traffic move! I am honestly not sure I ever had a customer access my site in Chromium, so that is not a deal breaker either way (assuming it wasn't also a Chrome bug).
Hacker News takes > 20 seconds to load all the time. You mash reload and go on with your life.
I think people get so hung up on "I must have the most optimal HA setup in the history of the world" that they end up having no HA, or spending thousands upon thousands of dollars to build some elaborate AWS Rube Goldberg device that lets them check off a bunch of HA boxes. I know a lot of people who did that, and their fancy AWS HA contraption totally failed in the real world because the entire US-EAST region went down and their operation depended on at least one availability zone of it working to stay up. Look how much effort Netflix puts into HA, and how many hours a year they are totally broken.
For each application, you have many competing desires. You can have an HA website or web application without using IP failover. IP failover is cool, but not without its own problems. Every solution has pros and cons. DNS round robin is not a bad solution for many classes of apps that want dead-simple failover.
> Any LB goes down, and DNS client retries will deal with it.
How? How does the DNS client know that the IP no longer works? Do browsers today have this mechanism?
I'm not a network guy, so perhaps I'm wrong, but it's my understanding the problem with DNS load balancing is that you cannot invalidate the TTL on the client.
It is up to the client. But all of the clients (browsers) out there do more or less the same thing.. they try the first DNS record.. if no response in ~30 seconds, try the second, and so on - going down the list.
TTL does not matter here because I am not yanking or adding to my DNS record. I am simply saying "Here are 3 servers.. try them in order until you find one that works".
In practice, two behaviors help:
a) Most clients try them in order from top to bottom
b) Most DNS servers (including Digital Ocean's) randomize the return order.
So if you do 2 dns requests, the first will return 1.2.3.4, 1.2.3.5, 1.2.3.6, and the second will return 1.2.3.5, 1.2.3.6, 1.2.3.4
This has the double benefit of splitting traffic more or less evenly between my load balancers, and dealing with the case where one or more is dead.
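The client-side fallback described above can be sketched as a small shell helper; `http_probe` is just an example check, and the IPs are the placeholder addresses from this thread:

```shell
#!/bin/sh
# try_in_order: run "probe ip" for each address in order, stop at the
# first one that succeeds, and print the address that worked. This is
# roughly what browsers do with multiple A records, minus the timeouts.
try_in_order() {
    probe="$1"; shift
    for ip in "$@"; do
        if "$probe" "$ip"; then
            echo "$ip"
            return 0
        fi
    done
    return 1    # every address failed
}

# Example probe: an HTTP check with a short connect timeout.
http_probe() {
    curl -fsS --connect-timeout 5 -o /dev/null "http://$1/"
}

# Usage: try_in_order http_probe 1.2.3.4 1.2.3.5 1.2.3.6
```

The real-world caveat, as others note below, is that the per-address timeout before a browser moves on can be tens of seconds, which is the whole trade-off of this approach.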
I'm not sure all clients will behave as you are experiencing. But in any case:
> if no response in ~30 seconds, try the second
That is not HA. Most people will not wait 30 seconds for a page to load. If your business loses money with every minute of downtime, this is certainly not adequate. It's certainly not recommended: https://en.wikipedia.org/wiki/Round-robin_DNS#Drawbacks
Name a business that has not had a 30 second outage in the past year?
How about services you use a lot? How many hours has Hacker News been down in 2015 (yet you are still on it right now)? How many hours has Netflix been down in 2015? How many hours have entire chunks of AWS been down in 2015?
Every business is a spectrum. A HFT trading shop may decide that 1 second downtime per day is their max outage. A webpage advertising a pet adoption event may decide that 6 hours of downtime per day is the most they can tolerate. You have to make this decision for each product, and even better -each part of each product.
The entire point of this post thread was the idea behind "I can not use DO for serious stuff until they implement load balancing"... which is silly for most businesses. And even those businesses that need high uptime, I offered (and still believe) that DNS round robin is a decent way to get HA for almost no money.
You link to an article about it, but miss the boat. What other solution can I implement in a few minutes that provides available load balancing between any two servers in the world (same or different hosting provider, same or different datacenter, same or different continent)?
Sometimes the relatively simple solution is "good enough". Sure you can find a wikipedia page saying where it is not perfect. I would not DNS round robin a HFT trading app. I have no problems on it for 99.99% of the web though. So much of the web has NO failover of any kind, stupid simple DNS round robin would be a vast improvement for most websites.
> Name a business that has not had a 30 second outage in the past year?
It's not a 30-second outage! Your domain will keep resolving to the bad IP. Even with an extremely low TTL (also not recommended), ISPs' DNS resolvers will cache it, and some will even ignore your TTL. A big portion of all new users will keep hitting the bad IP.
Anyway, I won't try to convince you to change your setup if you are happy with it, but it's obvious from the comments that I'm not the only one thinking it's a suboptimal solution, so at least some of us won't be considering DO for HA systems given the circumstances.
> It's not a 30-second outage! Your domain will keep resolving to the bad IP. Even with an extremely low TTL (also not recommended), ISPs' DNS resolvers will cache it, and some will even ignore your TTL. A big portion of all new users will keep hitting the bad IP.
So with 5 load balancers, 1/5 of customers see a one-time hit of 30 seconds (after which they return to full speed).
What better solution for the same price do you propose to get HA on a budget cloud provider?
> What better solution for the same price do you propose to get HA on a budget cloud provider?
Nothing. Your solution is obviously better than having none, and it's enough for your needs. But the original discussion was about what DO needs in order to become a competitor for big business, not for low budgets; there they are already king.
Normally you want at least IP failover, meaning that you get an IP that can be rerouted to a different server with a simple API call. At work we use Hetzner, which is not exactly a high-end provider but offers it: http://wiki.hetzner.de/index.php/Failover/en
It's even better if the provider offers an HA load balancer as a service, so you don't have to set up anything.
You might still need DNS failover to recover from a full datacenter going offline.
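A minimal sketch of that IP-failover pattern, run from a third machine on a timer: probe the primary's health endpoint and, if it's down, call the provider's failover API to point the shared IP at a standby. The `reroute_failover_ip` function here is a placeholder for whatever API your provider actually exposes (e.g. Hetzner's failover endpoint); the URLs and server names are hypothetical.

```python
import urllib.request

def is_healthy(url, timeout=3):
    # Consider the primary up if its health endpoint answers 200.
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except OSError:  # covers timeouts, refused connections, DNS errors
        return False

def reroute_failover_ip(target_server):
    # Placeholder: call your provider's failover-IP API here to
    # point the shared IP at `target_server`.
    print(f"failover IP now routes to {target_server}")

def check_and_failover(primary_health_url, standby_server):
    # Returns True if a failover was triggered.
    if is_healthy(primary_health_url):
        return False
    reroute_failover_ip(standby_server)
    return True
```

In production you'd want a few consecutive failed probes before rerouting (to avoid flapping), and some locking so two watchdogs don't fight over the IP, which is exactly the split-brain risk mentioned elsewhere in this thread.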
Your view is accurate. It takes the end user (be it some sort of client, browser, or manual user retry) to hit the other, live IP(s). There's also the time for a bad record's TTL to expire and the record to drop out of caches.
You can simulate IP failover with something like Elastic Network Interfaces / Elastic IPs in AWS; it's just not going to be on the same level of speed as doing it in, say, your own rack in a datacenter. It's also subject to weirdness: you could end up with some sort of split brain, nodes trying to take over interfaces in a loop. The health-checked "multiple load balancers behind a single DNS record" approach has flaws but also simplifies a lot of things.
> expanding our product offerings with networking and storage features
I'm so excited for this. I'd previously commented about how the lack of non-SSD storage meant I had to screw around with S3 when I really just wanted to keep everything on DO.
Great company. Been with them for two years now, and couldn't be happier. Combined with Cloud66 I worry less about deployments and servers and backups, and more about just getting the code out.
I am also curious about this. The fact that Vultr accepts Bitcoin as payment further piques my interest. Can anyone shed some light on their performance and convenience vs. DigitalOcean?
I have, with absolutely zero problems. I recommend them often. Not sure I'd trust them with PCI compliance requirements or something that heavy but I'd certainly use them for commodity VPSes over many others. They seem to offer a lot better performance/cost than DO.
Vultr is a brand of this company -- https://www.choopa.com -- and they've been around for a while.
I've used them and love them. I did some benchmarks a while ago (standard unixbench and such) and they outperformed DO on every metric. The extra features they have (particularly custom ISOs) are also really nice.
Definitely recommend them, although I use DO exclusively for the Github student credit.
I'm a heavy Linode, DO and Vultr user. All three are great (with Linode being my preferred choice if cost isn't the overriding factor).
I've got Vultr instances in all of their EU locations and haven't had any issues (connectivity wise or uptime wise). One note for Vultr: new accounts are limited to max 5 VPS instances by default until you open a support ticket and request the limit be raised (which they were happy to do, at least for me).
I switched from DO to Vultr back when DO did not offer FreeBSD and DO's SFO network was pretty bad. I'm back on DO now because I got my hands on some credits, but I would switch back to Vultr when I run out. DO's SFO network has improved, I think, but on the $5 plan you get 50% more memory at Vultr (768MB vs DO's 512MB).
I've used them. Their control panel is not as pretty, and their community is not as large. But the product is arguably better: they offer things that DO doesn't, like custom ISOs, daily backups for the same cost as DO's weekly (seriously DO, only weekly backups???), and a ton more.
I use DO and silently complain about the lack of daily backups. I don't really have a need for the other stuff, and Vultr is cheaper (when I last looked). But for some reason I always end up just using DO.
Their ANTISPAM policy is draconian, but I like their interface. I like that root passwords are in the web interface instead of going through email. Finally, their presence in more datacenters is useful for someone in the middle of the country.
I have, I found their Sydney instances to lag for seconds at a time, completely randomly, while SSH'd in (I also live in Sydney, so it's not distance). I now have an instance with Digital Ocean in Singapore and I much prefer it.
I have used them for a side project, no issues.
The big reason for me moving from DO to them that I didn't see mentioned is because they have actual private networking.
And also the option for servers with HDD instead of SSD.
I used Rackspace for a couple of years before moving to DigitalOcean. I had a good experience with Rackspace when I started, but my bills kept growing and the server started having constant issues, so I decided to move to DigitalOcean a couple of years ago. My traffic since then has grown quite a lot, from 100K to around 1 million visitors/month, and my DigitalOcean bills are still not much higher than they were in the later stages at Rackspace, while performance is much better for me with DigitalOcean.
The only thing I don't like about DigitalOcean droplets is the requirement to shut down the server before resizing; Rackspace allowed me to resize without shutting down.
Slightly off topic, but I am curious whether anyone has insights into the legal side of hosting profit-seeking services on top of VPSes in general. Is the boilerplate contract/EULA/ToS generally good enough, or do you actually seek changes for a custom one?
What about hosting websites vs. reselling access for some other purpose (e.g. similar to game hosting services that give customers full control of the instance)?
It seems to me like there is a lot of room for a tool that can spin up an instance across multiple VPS providers, because sometimes one will have a colo close to where you want and sometimes another will.
Anyone aware of comprehensive location-based benchmarking of all the VPS providers?
I'm a novice/intermediate programmer, and when I knew nothing about what a VPS even was I started using Linode (due to many great recommendations).
Linode is a great service, but recently I've switched to DO and I like it so much more. As a person who just needs a simple and straightforward way to put several django projects online - DO offers me a simple and beautiful interface, cheaper prices, and a lot of great and extremely useful tutorials.
It is much nicer to use and a droplet price starts from $5/mo, which is freakin' awesome, and all I need from VPS service at this point.
Hm, interesting. From what I know of the industry and of their size and pricing, I would have thought they were profitable enough that raising this sort of money wouldn't be particularly interesting.
There are many reasons to raise money. They may be operating at a loss but with growth in recurring revenue such that they expect to become profitable in a few years. But I wouldn't be surprised if they are raising this money to fuel an already growing fire, either by enhanced sales teams or even building out new capital projects that will open new revenue channels or create barriers to future competition.
They aren't even in the same ballpark. Hell, Google isn't even in the same ballpark as AWS, and Google made $83 million in the time it took me to read the article. DO is great, but doesn't belong in the same sentence as AWS.
MS has a strong reputation as a closed-source, vendor-locked company, so many of us can't feel like "first-class citizens" in the MS world. But they are working on their reputation (VS Code is a very good example), and I hope that soon we will not be afraid to trust them.
AWS doesn't target the same audience or have the same product line. They're related, but not at all the same product market. To illustrate this, AWS is infrastructure management tools, and DO is infrastructure management. The fact that AWS has infrastructure management as well (built on top of their own tools) is only relevant for the people that build out the rest of their infrastructure. DO is for folks that don't want to have to learn and understand infrastructure much at all. That's very doable these days for a lot of aspects of development, but it's also not at all what AWS is going for.
DO is closer to competing with just Elastic Beanstalk and maybe RDS (from the standpoint of a managed RDBMS service, not the feature set).
AWS is basically software-defined enterprise/software-defined business at this point. Their products and services are amazing, and they will certainly dominate the Fortune 500 along with GOOG and M$, etc. I'd imagine the push down into the SMB market will be a bit more difficult; the push up from the SMB market with a more b2c(dev) product offering and community approach seems to be the better game here (think Microsoft and Apple in the early days). These are still multi-billion-dollar addressable markets; there is a lot of room to play in cloud.
But do they want to? AWS is going for a high price for a super super deep stack of 600 little services. I kind of like that DO focuses on their core VM thing.
Obviously you can't fake some things with just VMs (rolling your own VPC, for example, is kind of hard to do), but a lot of people don't need Redshift or SQS or any of the Amazon SaaS things...