Pinterest Cut Costs from $54 to $20 Per Hour With Automatic Shutdowns (highscalability.com)
197 points by autotravis on Jan 1, 2013 | 71 comments



I'm Ryan Park -- I'm the Pinterest engineer quoted in the article. I'm happy to answer any questions about our setup.

Just to clarify, the auto-scaling was specifically for a pool of web application servers. At the time I gathered the numbers, there were 80 servers in that pool. In the last few months we've been moving toward a service-oriented architecture, and we've been able to use the same code to auto-scale the internal services. Of course it's not possible to auto-scale stateful servers like databases, but it's still saving us a considerable amount of money.

We implemented the auto-scaling in early 2012, so it's been in use for almost a year now. It only took about 2 weeks of engineering to build the system. It does need occasional maintenance, but it's still worth the effort given how much money it saves us.


> Of course it's not possible to auto-scale stateful servers like databases

That's not entirely true. :) It's just a lot harder to scale them elastically. At Netflix we've done some proof of concept work on elastically scaling Cassandra, although we don't have it in production yet. I think that is one of our goals for 2013.


Theoretically, there's always going to be a state of affairs where it's not possible -- that's physics. Maybe it works if you're a tenant of multiple hosts, so that you're not relying on AWS networking / control plane screwing with what is going to be the most important part of that dance.

But this is about saving money; not being a tenant of multiple providers; not spending more than "2 weeks."


Even on a boring old DB: spin up a few read slaves, give them a few minutes to sync up to current, and turn them loose. Sure, it's only read scalability, but every bit counts.
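A minimal sketch of the "wait until synced" step, assuming MySQL-style replication and the pymysql library; the hostname, credentials and lag threshold are all placeholders:

    import time
    import pymysql.cursors

    # Hypothetical replica endpoint; use your own host and credentials.
    replica = pymysql.connect(host="replica-1.internal", user="monitor",
                              password="secret",
                              cursorclass=pymysql.cursors.DictCursor)

    def replication_lag(conn):
        """Return Seconds_Behind_Master, or None if replication isn't running."""
        with conn.cursor() as cur:
            cur.execute("SHOW SLAVE STATUS")
            row = cur.fetchone()
        return row["Seconds_Behind_Master"] if row else None

    # Poll until the new read slave has caught up, then hand it to the pool.
    while True:
        lag = replication_lag(replica)
        if lag is not None and lag < 5:      # arbitrary "close enough" threshold
            break
        time.sleep(10)
    # ...at this point, register the replica with your load balancer / app config.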


I think you're missing the point: I'm not going after the DB layer or your own software states (e.g. read-only). Most AWS outages are related to the network layers; you have to already be sync'd, and if you want to do writes, sync'd enough to still guarantee a quorum for your writes AND reads. If the AWS network(s) are fractured enough, you could theoretically be left with no way to bring up a stable "WAN" for yourself. And then you may simply be hamstrung by ELB outages and be left without hope altogether.

If we're talking about read-only, I don't see why a DB, in the traditional meaning, has much to do with the issue.

These are not necessarily AWS faults; they are certainly trying to help. It might be the year for being a tenant of more than one PaaS though, but that is only going to up your costs and still not give infinite 9s.


I'd be happy enough if you could just keep Netflix running more often.


Why would you not choose to go colo and save the money? Even with hiring an extra full-time person, going colo should drop your costs by at least half vs. Amazon.


Our #1 requirement has been to keep up with the growth in traffic on the site. We've been growing so fast that there's literally no way we could have ordered and racked equipment fast enough. We were also a very small team -- a year ago there were only about a dozen people in the whole company. At this point we're much larger, which gives us room to consider more options like colo or multiple cloud providers.

AWS certainly feels pretty costly when you compare colo prices to the list price for on-demand instances. But one of the reasons I wanted to present our work is to show that you can use the cloud for a lot less than the list price. It takes work to buy reserved instances or run spot instances, but that does make it much more cost competitive.


I am not sure what you mean exactly.

By "going colo" they would need to layout all the upfront hardware costs. These would not be insignificant. They would then have all the operational overhead of maintaining all that gear.

You may have a higher monthly cost for AWS services - but without needing to buy any physical hardware, you have way less to worry about. Further - if you need to scale, it can be done in seconds rather than weeks/months given lead times for procurement, design time, implementation time (i.e. scheduling the install in the colo, coordinating the need for more space etc...)

This is just scratching the surface...


> By "going colo" they would need to layout all the upfront hardware costs.

Leasing still ends up substantially cheaper than EC2, and so does managed hosting at a number of providers; I've never paid upfront for any colocated hardware I've been responsible for. Last time I priced this out, EC2 ended up 2-3 times as expensive as managed hosting (with no upfront costs for the managed hosting either), and the gap to leasing servers and putting them in a colo was even larger (though there you do need some scale before you cover the extra ops costs).

> They would then have all the operational overhead of maintaining all that gear.

If you're small enough, sure, your savings won't pay for extra ops people. But you don't need to be very large before the savings outweigh the cost of more ops people. And with managed hosting this is a non-issue - at that point you don't have any more ops issues than you have with EC2.

> Further - if you need to scale, it can be done in seconds rather than weeks/months given lead times for procurement, design time, implementation time (i.e. scheduling the install in the colo, coordinating the need for more space etc...)

It's not either/or. In fact, being prepared to use EC2 to handle peaks means the cost difference between self/colo hosted (+occasional EC2 use for peaks) and EC2 gets even larger, as you can run your own servers at far closer to full capacity without the risk you'd take if you didn't have that ability. Handling occasional peaks with EC2 is a great use of it, and definitively cost effective.

> Further - if you need to scale, it can be done in seconds rather than weeks/months given lead times for procurement, design time, implementation time (i.e. scheduling the install in the colo, coordinating the need for more space etc...)

See above. But also consider that, compared against the managed hosting option, a number of providers will auto-provision in minutes to a couple of hours once an order is placed. And many providers now also offer a mix of colo, managed hosting and EC2-like cloud solutions, so if you want to deal with a single provider you can put your base load in a rented rack, scale in the mid term via managed hosting, and spin up cloud instances as needed.

EC2 is great for "quick and dirty" temporary solutions, batch jobs or handling peaks that last less than about 6-8 hours a day, and I use it now and again for that reason. But the moment your instances are up more than about 8 hours a day, and you have more than a few of them, it will quickly start costing you more than the alternatives.


>EC2 is great for "quick and dirty" temporary solutions, batch jobs or handling peaks that last less than about 6-8 hours a day, and I use it now and again for that reason. But the moment your instances are up more than about 8 hours a day, and you have more than a few of them, it will quickly start costing you more than the alternatives.

I think Adrian Cockcroft & Jedberg may disagree with this statement.

Netflix has made a point (and a business model) of pushing all their infrastructure costs for their streaming service to AWS for many reasons.

They clearly have a HUGE amount of traffic across their service, and they are very successful in keeping a lean team on staff that has a focused skillset while not needing all the IT ops folks on staff. The HW costs to support their service would be very large as well as the distribution of that HW across the [nation|globe] to support their userbase.

Also, I do not think you're properly accounting for all the design and support considerations.

In a large infrastructure implementation you're going to need quite a few ops specialties (in smaller orgs these roles can be collapsed; in very large orgs they are discrete, and your ops costs get high fast in large infrastructure deployments):

Architect

Network

Server

Support (deployment, ops, maintenance etc..)

With the need for 24/7/365 ops coverage - especially if you have multiple regions/internationally deployed infrastructure... you can see how this can get expensive.

So, I think there are a few sweet spots that can be looked at.

Finally, there is also the hybrid model, where you have your own base-line infrastructure which scales out to AWS to support larger load (CDN model)


> I think Adrian Cockcroft & Jedberg may disagree with this statement.

They might. But either they haven't priced it out, or they have decided it's worth paying several times as much for some reason. Given that the high price of EC2 gets brought up and how I've never seen them actually address the pricing issue, I'm not going to speculate why they've decided to make that tradeoff. I find it quite baffling, though, and I'd be very interested in it if they have done a serious assessment of it somewhere.

> They clearly have a HUGE amount of traffic across their service, and they are very successful in keeping a lean team on staff that has a focused skillset while not needing all the IT ops folks on staff.

Given the very public, very extensive issues that in particular Reddit have had with their hosting, and how they kept taking the entire service down for maintenance seemingly always when I want to use it (since I tend to want to use it when Americans are sleeping, I guess), I'm not so sure this is a glowing endorsement of doing things their way. I certainly couldn't get away with the stability-record Reddit has - the CEO where I currently work would look at me as if I was crazy if I suggested even the amount of scheduled maintenance windows Reddit takes. I don't use Netflix, so I haven't kept track of how they're doing stability wise.

EDIT2: Actually looking at their numbers, and comparing EC2 prices, I'm fairly comfortable in saying that the setup we're running is actually larger than theirs in terms of total computing resources (but nowhere near them on bandwidth use), which is quite interesting...

> while not needing all the IT ops folks on staff.

You can have someone else do the IT ops for co-located services too. There are literally thousands of companies offering suitable services on an hourly basis, and dozens that offer it globally. Outsourcing ops is easy.

And with managed hosting, the ops you need to do yourself if you don't pay for extra service tiers is pretty much the same as for EC2. Someone else handles the hardware, just as with EC2. Someone else handles the network, just as with EC2. What you need to handle is what is installed on your servers, just as with EC2.

> The HW costs to support their service would be very large as well as the distribution of that HW across the [nation|globe] to support their userbase.

You pay for the HW with EC2 too. You just don't get to own it at the end. A typical colocated setup often involves leasing rather than purchasing, so you're still typically dealing with monthly payments. And if you don't want to own, managed hosting is still vastly cheaper.

As an example, the leasing cost for our latest purchase of a quad-server box containing 4x dual hex-core 2.6GHz CPUs with 24GB RAM each, and 24x 256GB OCZ Vertex 4 SSDs, is about $600/month per unit. With their share of our rack space, power, bandwidth etc., the full hosting cost excluding our ops cost for this box is about $750/month (this accounts for the fact that our racks are currently nowhere near full, so the price is higher than it could be).

Comparing them to EC2 is a bit tricky, since there's no direct equivalent. But to be very generous to EC2 and using a model that these servers substantially outperform, consider that 4 x M3 Double Extra Large in US East is around $3300/month (which is indeed quite a bit better than last time I looked -- I'll grant that), and I have about $2550/month left to assign to ops for that single box.

In reality, for our loads the more direct equivalent would likely be the High I/O EC2 instances, which are almost 3 times as expensive.

(EDIT: Note also that this is before accounting for any bandwidth charges or costs for EBS volumes and similar on EC2; on the other hand you can of course cut the hourly cost by paying upfront for reserved instances -- effectively you're then paying for "fractional managed hosting"... Last time I looked that still ended up more expensive, though the margin is definitively better.)

If we had hardware that required enough extra time to deal with to cost us anywhere near that, we'd throw it in the garbage. We're in London. Here, that's 30%-50% of the fully loaded cost of a mid-level ops person...

In reality our dev-ops cost per server (remember the box above is four individual servers) is ~$400/month and dropping, as part of that cost is development work to automate more of our maintenance. That is our total. Of that, ~$100/month is related to the physical server and network infrastructure and maintenance, and thus to costs that are already included in the EC2 price.

The rest relates to maintenance of the VMs running on those servers, as well as monitoring of the VMs, which we'd still pay for if we were using EC2.

So comparing against the relatively underpowered EC2 instances above, one of our new boxes costs us ~$1150/month for equivalent service, or ~$2350/month total. So we're getting all the dev-ops and monitoring for our VM's "for free" and then some compared to EC2 despite being small enough that we have a lot of ops overhead.

Judging from our growth, our dev-ops cost per server with twice as many servers as we have today would likely only increase by ~ 10%-20%, and so our per-server cost would drop accordingly. Similarly, our rack and power costs would remain roughly constant as we have spare space in our racks, and so the per server costs would drop even more. I'd expect our rough per box costs for the quad server boxes above to drop to ~$900/month if the number doubled with "EC2 equivalent" ops included.

Keep in mind again that this is comparing against an instance type I know these servers outperform comfortably, and excludes EC2 bandwidth and EBS or other services.
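To keep the arithmetic above straight, here it is in one place (these are the rough monthly figures quoted above, not authoritative pricing):

    # Rough monthly figures from the comparison above; one "box" = 4 servers.
    ec2_quote  = 3300   # ~4x M3 Double Extra Large on-demand, US East
    hosting    = 750    # lease + rack share, power and bandwidth for one box
    devops_srv = 400    # total dev-ops cost per server
    hw_share   = 100    # the part of that already covered by EC2's price

    left_for_ops   = ec2_quote - hosting           # ~$2550/month headroom vs. EC2
    ec2_equivalent = hosting + 4 * hw_share        # ~$1150/month "like-for-like"
    total_per_box  = hosting + 4 * devops_srv      # ~$2350/month all-in, still < $3300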

> In a large infrastructure implementation you're going to need quite a few ops specialties:

I don't know why you believe that EC2 is any simpler to work with than managed hosting in this respect. It isn't. Simpler than a co-located setup where you own your own servers, sure. You don't need much size before it's still cheaper, though.

Many hosting providers even provide API's for their managed hosting, and deploy them all using Xen, with the only difference being that you commit to pay for full months of service and a dedicated physical machine. At the same time you often get the benefit of being able to order custom setups tailored to your workload.

> Finally, there is also the hybrid model, where you have your own base-line infrastructure which scales out to AWS to support larger load (CDN model)

I mentioned exactly that, and it is what I recommend unless there are other reasons not to use EC2: if you handle peaks via EC2 and your traffic is suitably spiky, you can load your dedicated base servers to 90%+ if you're careful, instead of often <50% if you don't have any way of rapidly scaling up, and this drives the cost advantage of dedicated hosting for your base load even higher.


Please provide a way for websites to authenticate incoming Pinterest requests from EC2 vs. other garbage traffic from EC2.

Or just simply publish your outgoing EC2 ip pool list.

We cannot completely block EC2 because of Pinterest and that's a bad situation.


Why do you need to block EC2?


Because it is a source of some very bad traffic. Anybody doing web-facing stuff that is targeted by bots/jerks/spammers/all of the above will block hosting facilities wholesale.


I am interested in doing this. Do you have any information on how to find out which IP ranges belong to server hosts? I couldn't find any useful results searching google.


here are a few lists to get you started:

http://proxy-ip-list.com/download/proxy-list-port-3128.txt
http://proxy-ip-list.com/download/free-usa-proxy-ip.txt
http://www.proxylists.net/http_highanon.txt
http://www.proxylists.net/socks4.txt
http://www.proxylists.net/socks5.txt
http://www.stopforumspam.com/downloads/listed_ip_90.zip

I've got my own DB of hosting facilities, which I made by taking 100M URLs, doing a lookup on each hostname, and saving the IP found in a DB. This gives you some level of confidence that a certain class 'C' is used for hosting.
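That approach boils down to something like the following sketch, assuming a list of URLs in a file and /24 granularity (the filename is a placeholder):

    import socket
    from urllib.parse import urlparse

    hosting_prefixes = set()

    with open("urls.txt") as f:                 # hypothetical input: one URL per line
        for line in f:
            host = urlparse(line.strip()).hostname
            if not host:
                continue
            try:
                ip = socket.gethostbyname(host)
            except socket.gaierror:
                continue
            # Record the class C (/24) the site resolves into as "used for hosting".
            hosting_prefixes.add(ip.rsplit(".", 1)[0])

    print(len(hosting_prefixes), "candidate hosting /24s")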


It's also possible to build such a list by watching for static IPs that make more than "x" requests and queueing rDNS lookups on them.

Google is easy to identify this way, even with a spoofed user agent (which they do a lot now).

But this technique is not possible with EC2 because Amazon refuses to make a public database of what customer is using what.
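For reference, the reverse-then-forward DNS check described above looks roughly like this (a sketch, not production code; Googlebot is used as the example because Google documents this verification method):

    import socket

    def is_real_googlebot(ip):
        """Verify a crawler IP via reverse DNS, then confirm with a forward lookup."""
        try:
            host = socket.gethostbyaddr(ip)[0]   # e.g. crawl-66-249-66-1.googlebot.com
        except (socket.herror, socket.gaierror):
            return False
        if not host.endswith((".googlebot.com", ".google.com")):
            return False
        try:
            return socket.gethostbyname(host) == ip   # forward lookup must match
        except socket.gaierror:
            return False

As the parent notes, the same check tells you nothing useful about EC2 traffic: the rDNS only says it came from EC2 (e.g. ec2-1-2-3-4.compute-1.amazonaws.com), not which customer is behind it.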


> Google is easy to identify this way, even with a spoofed user agent (which they do a lot now).

That's part of their page-cloaking detection code.


Thanks!

How do your watchdog instances monitor other hosts? Do they track usage/load via snmpd, or something else?

Also, are they directly calling the EC2 API to launch or shut down hosts, or do you have an in-house deployment system?


Our watchdog process calls the EC2 APIs directly to identify how many instances are running, which ones are spot instances, etc. Boto, the AWS client library for Python, makes that pretty easy. The watchdog isn't very sophisticated -- it just checks to make sure that the correct number of instances are running in each auto-scale group. Our application servers aren't very efficient in certain respects, so we don't trust metrics like usage/load to make auto-scaling decisions.

If I was doing it over again, I'd just use Amazon's auto-scaling features for all of this. At the time we built this, EC2's auto-scaling didn't support some of the features we needed. Since then, they've made it a lot easier to do things like set up a repeating schedule for auto-scaling, rather than using metrics.

We only have one EC2 AMI that we use for all of our servers. That AMI is pretty basic; it only does enough to connect to our Puppet configuration management servers. Puppet then configures the boxes as web servers (or databases, or...) and adds them to the appropriate load balancer.
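For anyone wanting the "repeating schedule" route mentioned above today: with boto3 (the modern successor to the boto library referenced in this thread), a scheduled scale-down/scale-up looks roughly like the sketch below. The group name, sizes and cron expressions are placeholders, not Pinterest's settings.

    import boto3

    autoscaling = boto3.client("autoscaling", region_name="us-east-1")

    # Scale the web pool down for the overnight traffic trough...
    autoscaling.put_scheduled_update_group_action(
        AutoScalingGroupName="web-pool",            # placeholder group name
        ScheduledActionName="nightly-scale-down",
        Recurrence="0 6 * * *",                     # cron syntax, UTC
        MinSize=20, MaxSize=40, DesiredCapacity=20)

    # ...and back up ahead of the daytime ramp.
    autoscaling.put_scheduled_update_group_action(
        AutoScalingGroupName="web-pool",
        ScheduledActionName="morning-scale-up",
        Recurrence="0 14 * * *",
        MinSize=60, MaxSize=100, DesiredCapacity=80)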


Very interesting, thanks! I've spent a bit of time working on a library to accomplish this, and Boto has been extremely helpful.

> The watchdog isn't very sophisticated -- it just checks to make sure that the correct number of instances are running in each auto-scale group.

How did you decide on the right number of instances? The article mentions 20%--is that based on latency, a cost-saving target, or something else?


We revise the "right number of instances" every few weeks based on latency and traffic numbers. But sometimes when we release updates, we'll find that we suddenly need a lot more capacity (or a lot less if we improved performance). We have automated tools to help us notice performance regressions. Once we decide that we need to change the pool size, we adjust the watchdog configuration by hand.


How do you decide whether to run a spot or on-demand instance to handle dynamic load? What happens if spot instance cost suddenly spikes?

Also, how do you use regions/availability zones?


Right now we run everything in the US-East region, and we have all our services balanced across 4 availability zones. If there's a problem in a single AZ, it will affect every layer of our system, but only about 25% of the hosts in that layer. Some of our services are automatically resilient and can handle that easily. Others aren't so great, but we're working on more automatic failover.

When we need more servers for an auto-scaled service, we open spot requests and start on-demand instances at the same time. For most services, we want to run about 50% on-demand and 50% spot. We have a watchdog process that continually checks what's running. It launches more instances whenever there aren't enough, and terminates instances when there are too many. So if the spot price spikes and a bunch of our spot instances are shut down, the watchdog will launch replacement instances on-demand. It will also request more spot instances once the price has dropped back to normal. In reality we don't often run into spot capacity issues -- maybe once a month, and it's almost never apparent to our users.

I spoke about this in detail at AWS re:Invent last month, and the full talk is available online here: http://www.youtube.com/watch?v=73-G2zQ9sHU
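Not Pinterest's actual code, but a heavily simplified sketch of what one pass of a watchdog like the one described above could look like with boto3 (the modern Python AWS client). The tag, AMI, instance type, pool size and spot bid are all placeholders, and a real implementation would also handle AZ balance, pending instances, health checks and draining:

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    TARGET = 80                                     # desired pool size (placeholder)
    AMI, ITYPE = "ami-12345678", "m3.xlarge"        # placeholders

    def running_web_instances():
        resp = ec2.describe_instances(Filters=[
            {"Name": "tag:role", "Values": ["web"]},             # hypothetical tag
            {"Name": "instance-state-name", "Values": ["running"]}])
        return [i for r in resp["Reservations"] for i in r["Instances"]]

    instances = running_web_instances()
    spot = [i for i in instances if i.get("InstanceLifecycle") == "spot"]
    shortfall = TARGET - len(instances)

    if shortfall > 0:
        # Aim for a roughly 50/50 spot/on-demand split; fill whatever is missing.
        n_spot = min(max(TARGET // 2 - len(spot), 0), shortfall)
        n_ondemand = shortfall - n_spot
        if n_spot:
            ec2.request_spot_instances(
                SpotPrice="0.20", InstanceCount=n_spot,          # placeholder bid
                LaunchSpecification={"ImageId": AMI, "InstanceType": ITYPE})
        if n_ondemand:
            ec2.run_instances(ImageId=AMI, InstanceType=ITYPE,
                              MinCount=n_ondemand, MaxCount=n_ondemand)
    elif shortfall < 0:
        # Too many: terminate the excess (a real watchdog would drain them first).
        excess = [i["InstanceId"] for i in instances[:-shortfall]]
        ec2.terminate_instances(InstanceIds=excess)

Run on a schedule, this gives the "launch when short, terminate when over" behaviour described above, with on-demand instances naturally filling in when spot capacity disappears.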


I'm interested in your DB setup: specifically, how do you replicate and place the databases (regionally) to handle requests from the application instances, so that a request from Spain goes to the right server instead of trying to hit a DB server all the way in Texas, for example?


Can we have some stats?

Number of Servers? How much bandwidth per month? How much space needed and at what rate is it growing? Traffic stats (if possible)?


It would be nice if you could share some of the logic the watchdog uses to figure out when hosts get shut down and when others get fired up.


Out of curiosity what OS do you use? And do you make your own custom AMIs for various workloads?


Do you use any Light or Medium Reserved? Or are they all Heavy?


For now they really only saved $300k/yr, which is less than the cost of two engineers. Though I guess cutting a nontrivial expense that is likely to grow by nearly a factor of three is pretty great and probably worth investing in.

That said, this adds complexity to their systems with the only benefit being cost savings. Given that we can assume that no code is perfect, it's likely that at some point the auto-downscaling will cause an outage or period of slow responses, which could easily lead to lost usage and trust that costs them as much as they're saving on ops.


IMO designing systems that can be powered on and off regularly (and quickly) is a good thing in itself; it encourages "proper" setup. I've found servers that have been on for months or years tend to need manual intervention after a reboot. When the machine could reboot every day, you can't have that.

In other words, it's just good design.


> IMO designing systems that can be powered on and off regularly (and quickly) is a good thing in itself

Despite the fact that, in theory, a mainframe should never go down, most dinosaur pens will power cycle them regularly just to see what happens if you come up from a cold start.

A mate of mine worked in a dino pen where they did this on Saturday evenings. He told amusing stories.


Nitpicking: but it's not good "in itself", it just has other benefits.


> That said, this adds complexity to their systems with the only benefit being cost savings.

That's not the only benefit. They also get much better reliability. The systems scale down with load, but they also scale up. As their load increases their system scales up along with it, giving greater reliability during increased load. Since they have to architect for that, it makes scaling up in general easier.


> For now they really only saved $300k/yr, which is less than the cost of two engineers.

It took them 2 weeks to implement. That means it's saved a lot more than it took to implement (2 weeks times a couple of engineers) already.

To use your metric, if they saved close to $300K/year, that means they could afford to add 2 engineers to their staff, which is very significant.


Is cost savings really the only benefit? It seems to me that this is a better engineering outcome.


That depends on for who. For Amazon, perhaps.


Better for Pinterest too. It means they need to develop services that can be launched and terminated quickly, and that their services are much more resilient during times of high load.


> If spot prices spike and spot instances are shut down, on-demand replacement instances are launched. Spot instances will be relaunched when the price goes back down.

I heard this at AWS re:invent, and thought I must have misunderstood. I'm still confused as to how this can possibly be a good strategy.

The pool of EC2 on-demand instances has a finite size -- it isn't magical -- and does hit its capacity limit from time to time. When there's high demand for EC2 instances -- say, when there's an outage in another AZ -- you're likely to see both spot prices going up and a lack of capacity in the on-demand pool. As a result, this strategy seems designed to only ask for on-demand instances at the times when they're least likely to be available.


I deal with this a lot with my work at PiCloud and you very rarely see a lack of capacity in the on-demand pool (across every availability zone).

You actually see high spot prices due to what at first glance seems "irrational": incredibly high bids on instance types, sometimes 3-4x over on-demand prices. I suspect such high bids are placed by spot customers who absolutely do not want their workload terminated early by Amazon and are willing to take the risk of paying more to run it to completion.

If you are a webapp though, like Pinterest, you don't have this desire. Hence, it makes sense to dynamically switch.


Spot instances come from both leftover on-demand instances as well as unused reserved instances. So it's quite possible to run out of on-demand and still have a low spot price.


It's possible, sure... but only if the people who find that they can't launch on-demand instances don't think to try spot instances instead.


Many people aren't set up to handle spot instances. You need to be much more resilient to single instance failures than when using on-demand or reserved instances.


$54 per hour for what? Is this simply the cost of running the instances? It seems to me like pinterest is throughput bound and the vast majority of their costs would be accrued via transit and storage charges.


I agree with your questions. I'd be interested in what percentage that represents of the total per-hour bill for all services, not just CPU time but bandwidth as well.


Given the premium that Amazon charges, how does this compare to dedicated servers? And why are dedicated servers in the US so much more expensive than in Europe?


Their engineer answered earlier in the thread. It was more along the lines of they had a small team and needed to scale up quickly (which I think is probably most of these EC2 stories you see, really). He also said that now their engineering team has more breathing room and considering dedicated/colo would be in the cards.


Scale and operations. European operators that open up in America also provide cheap pricing; see OVH, for example. Pinterest could easily get 80 servers from them for $5/hr.


power is cheaper in Europe than in the US


Nope, it's the reverse, so that can't be the case. Some quick googling: http://en.wikipedia.org/wiki/Electricity_pricing US is between 8-17 cents. EU is 20-30+.

See also http://blogs.platts.com/2012/11/20/electric_prices/


That Wikipedia article is not enough to back your claim. As you can see, it lists a lot of caveats. In my limited knowledge, electricity pricing is quite complicated, and those numbers are probably not even close to what business and industry are actually paying.

Edit: The blog-post is more convincing, but again, are those numbers really comparable? Maybe they are, I don't know enough about the subject, I just find it a little too simple to just compare numbers from different websites without deep knowledge of the topic.

Here's another link, the prices differ (?): http://epp.eurostat.ec.europa.eu/statistics_explained/index....



In our investigation, we've found AWS to be around 5x more expensive. Being able to save 63% during off hours (or, let's just say, reducing AWS' bill an average of 30%) doesn't really seem to make much of a difference - and that's with paying a large amount upfront.


5x more expensive than what? Owning/leasing servers? Similar cloud providers?


More popular alternatives: either dedicated servers or colocation.


Reminds me a lot of how some energy-intensive plants can spin up less-efficient / quick-start units during offpeak hours to squeeze a little extra production out.

And vice versa: most electrical utilities have slow-starting, efficient-as-possible turbines that never get turned off (baseload -- coal is most common), and a bunch of relatively inefficient but flexible turbines (usually natural gas).


> relatively inefficient but flexible turbines (usually natural gas).

Actually, natural gas plants are at least as efficient as coal. They're just more expensive, especially if you turn them on and off a lot, which is pretty bad for the lifetime of a lot of components.


Depends on the kind of plant. There are baseload gas plants that heat water to steam and feed steam turbines; they're akin to coal plants, with high capital costs and low operating costs. Then there are straight gas turbines, akin to the ones that power jet airplanes but more like the ones that power most of the US Navy's ships. These have lower capital costs but higher operating costs and are used for peaking.


Nuclear also makes for great baseload.

Wind or solar are similar in the sense, that you do not gain by turning them off, but their supply is not stable.


Nuclear is asymmetrical. It's very fast to shut down but horrendously slow to start back up again.

In the case of sudden load drops where nuclear plants are shut down for safety reasons, this can cause availability to be affected for weeks afterwards.


Here is the AWS re:Invent talk, STP 204: Pinterest Talks Rapid, Cost Effective Scaling on Amazon Web Services -- https://www.youtube.com/watch?v=73-G2zQ9sHU


That's pretty interesting. I've been frustrated in the past by both AWS and GoGrid (and I'm sure every other cloud provider) that keep incurring VPS instance costs even when the instance is shut off. I understand that even if I'm not using the VPS the resources need to be kept in reserve (in theory), but the solution of destroying and reprovisioning instances sucks pretty badly: it's way too time consuming if you are dealing with only a handful, and operationalizing it is not worth it.

I'd love to move to a provider that let me provision an extra instance or two for either failover or testing/staging but not be charged for it if I wasn't running traffic to it.

EDIT: I stand corrected, I might have been thinking of Rackspace's cloud (can't remember what it's called now) instead of AWS. But I know for a fact I am right on GoGrid (and pretty sure Azure) because I have a long email chain arguing about charges for provisioned instances in off states.


Unless AWS has changed something since I last used it (which admittedly has been at least a year), they don't charge when an instance is off except for storage.


Well, not exactly true. If you pay for Heavy Utilization reserved instances, you are charged regardless of whether you even have the instance allocated.


That's also not exactly true. If you purchase a reserved instance, you pay the up front price, and then the reduced hourly price for whenever the instance is running.


To anyone reading this later... I am wrong here. Heavy Utilization reservations are charged whether or not you have instances running. Medium and Light are only charged when running.


AWS does not, and has not ever, charged for stopped instances (except for things like EBS volumes, which are billed separately).
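To spell out the distinction being debated here, a minimal boto3 sketch (the instance ID is a placeholder): a stopped EBS-backed instance accrues no hourly compute charge but keeps billing for its EBS volumes, while a terminated instance goes away entirely.

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    # Stopped: no hourly instance charge, but attached EBS volumes still bill.
    ec2.stop_instances(InstanceIds=["i-0123456789abcdef0"])      # placeholder ID

    # Terminated: the instance (and by default its root EBS volume) is gone for good.
    ec2.terminate_instances(InstanceIds=["i-0123456789abcdef0"])

Reserved-instance billing (the Heavy Utilization case above) is separate from instance state, which is where the confusion in this sub-thread comes from.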


I'm wondering if you did (or anyone has done) any cost-effectiveness analysis of using SSDs rather than HDs. (Or maybe that's an order of magnitude too low to consider?)


Netflix has done such benchmarks [0]; TLDR: _their workload_ cut costs in half with hi1 instances.

[0] http://techblog.netflix.com/2012/07/benchmarking-high-perfor...


I can't imagine that SSDs would make any difference for web app servers. I'm not familiar with any app server workload that uses a material amount of disk I/O.

That being said, if I built custom app servers, I'd use SSDs because the cost is small for a system that doesn't need much storage.



