How to colocate your first server at a datacenter (definedcode.com)
87 points by kvmosx on June 11, 2014 | 65 comments



I am so glad I live in a world where I will never have to do this again for anything but large infrastructure projects. I've colocated servers at major POPs like 111 8th Avenue in Manhattan, Equinix in Secaucus, and One Wilshire in LA.

For smaller projects, the convenience of the cloud is absolutely worth the price. For a larger build - say over $10k a month in infrastructure cost - the cloud starts to make less sense economically, but 'colocating your first server' is not a rite of passage anymore - it's unnecessary and a huge waste of time.

All of the functionality/services you have to provision yourself in a colo - redundancy, backup, remote hands, environmental monitoring, hardware maintenance - are just not worth figuring out until there are substantial cost savings to realize.


I like to keep control of my data and manage my own systems, instead of using some black box where I'm at the mercy of a cloud provider.

You're not going to be able to troubleshoot or optimize down to the hardware level in the cloud. If some cloud service doesn't work or is not available, all you can really do is wait and hope they get it working again, while if you manage your own systems you might have a chance of fixing it, or at least finding out what happened and working to prevent it from happening again. Some applications just won't work on the cloud, or will exhibit mysterious bugs, failures, or bizarre behavior.

Security and data ownership are pretty much out of your hands when your servers are in the cloud. You can only hope and pray your cloud provider is doing a good job of securing your data and isn't stealing or selling it themselves. You generally have zero visibility into how security is handled by your cloud provider or whether a security compromise has taken place.

And then there's the issue of vendor lock-in, which becomes more and more likely the more unique cloud features you use.

Of course, for maximum control, you wouldn't rely on a colo either, and just host your servers in your own server room(s).


> redundancy, backup, remote hands, environmental monitoring, hardware maintenance are just not worth figuring out until there are substantial cost savings to realize.

So you don't need to monitor temperature sensors any more with a VM, but most of the above are still costs with the cloud - flaky RAM, redundancy, backups, monitoring, etc. There are also the things you previously didn't have to worry about - crappy resource isolation turning your scratch disks into 2KB/sec joys, total ineffectiveness of the CPU cache, managing a now-essential network fabric to tie the pieces of your app together where previously it all fit on 2 master/slave machines, etc.

Of course, if your application isn't simply some stock PHP/MySQL application and you really want to "embrace cloud", then the time you saved fighting a subset of hardware problems is replaced by a fixed development cost of integrating with someone else's higher-tier APIs (S3, Dynamo, etc.) that you can then never escape even if you want to.

I've never seen any realistic numbers comparing the use of traditional hosting facilities - say, ones providing managed servers - to the new generation of VM stuff. Any material I've seen has been sponsored crap involving some multinational.

My own experience is similar to yours - hosting your own hardware is a pain in the ass. However, there is a middle ground: many colos will happily provide managed hardware, and on a perf/pence basis this still tends to be far cheaper than the equivalent in VMs, and increasingly they come with similar APIs to order/replace machines.


From my limited experience, the cloud is always more expensive if you know your exact usage requirements. If, for example, you know that six octo-core 16 GB RAM, 512 GB SSD-in-RAID1 servers would fit your needs from now until 10 years from now, you will do better to just rent them from SoftLayer, Hetzner, etc.

However, if you anticipate growth, or need to be able to spin up a test server and then shut it down a day later, etc., then you are better off paying a premium for the cloud. Sure, there are economies of scale at play here: AWS has so many servers that they are not paying a person to log into every one of them every so often to run updates, etc. However, make no mistake: everything you would have to do with a server, Amazon has to do too. In fact, they have to do much more to keep all of them running at once. That cost will be passed on to you.

Even with all of that, it's cheaper if you want to be able to spin something up, then shut it down. Another great example is the additional services provided by the likes of AWS: you can get things like load balancers, cache servers, database servers, orchestration services, etc. You can do all of this yourself, but at some point it's cheaper to just pay for something like ELB than to learn how to do it yourself and spend the hours to set it up. Human time is more expensive than that.

Lastly, if you just need a really small machine, there is no beating the cloud. You simply cannot get a dedicated machine for $5/month, and you likely never will.


You can get damn close!

http://www.kimsufi.com/uk/

I found OVH's dedicated server offerings to be so cheap that there was no point in using shared hardware for the flexibility. Then again, I'm not running my entire business on these boxes... but I don't think I'd have a major issue if I wanted to!


Huh; that's actually a better deal in many cases than my existing virtual servers. Thanks for that recommendation; I'll keep my eye on that.


The day I got my first ovh recommendation was a good day for my inner accountant.


I actually find colo'ing a new server quite fun and enjoyable... even after doing it a half dozen times... as a techy... something is just so cool about putting a server in the same racks as some monster equipment. ;-P

However, I disagree about colo'ing not being economical. If you set up a physical box in a colo with a hypervisor (KVM, Xen), then the density you can get out of 1U is amazing, and it makes the entire thing much more affordable.

Take this site for example: https://www.ubiquityhosting.com/cloud

16GB RAM, 8 cores for $128 a month. Considering most 2U colo spots I've seen hover around $150 a month, at first glance the colo seems like a bad deal. However, the VPS for $128 a month is a single server. With your $150 a month colo running Xen, you can pack 3-7 VMs into the same space, making the cost per (virtual) server much lower.
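Back-of-the-envelope on that (the 3-7 VM packing figure is just my estimate above, and it ignores the amortized cost of the hardware itself):

    # Cost per (virtual) server: hosted VPS vs. VMs packed onto a colo'd box.
    # Prices are the ones quoted above; VM counts are an estimate.
    vps_per_month = 128.0      # one 16GB / 8-core hosted VPS
    colo_per_month = 150.0     # a ~2U colo spot
    for vms_on_box in (3, 7):
        per_vm = colo_per_month / vms_on_box
        print(f"{vms_on_box} VMs on the colo box: ${per_vm:.2f}/VM "
              f"vs ${vps_per_month:.2f} for the hosted VPS")
    # 3 VMs -> $50.00/VM, 7 VMs -> $21.43/VM (before amortizing the hardware)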


"I actually find colo'ing a new server quite fun and enjoyable... even after doing it a half dozen times... as a techy... something is just so cool about putting a server in the same racks as some monster equipment."

Agree. If there were a question on some test that said "I find colo'ing a new server quite fun and enjoyable" I would give that a 9 or 10 for sure. Have always loved the sound of the machine room. (This dates back to the days of the computer center at school with the tape drives and DEC terminals.)


Mostly agree, although in this case I say mostly because the colo facilities where you can easily rent a single unit of space are few and far between these days. If we were still in an environment where there were tons of hosts like Verio around, I'd say there's nothing wrong with coloing a single box.

For sure, once you reach above the $10k/month level there's a very good chance that it will make sense to colo. You can fit a LOT of hardware in a single full rack, and they're like $750/month plus bandwidth and power costs (so around $1200-1700 per month all said and done).


In the good ol' California Bay Area, you can get a full cabinet with an unmetered 100Mbps port + 20A power for $600 a month.

Hurricane Electric data center in Fremont, CA, always has specials running.


Couple of bits of advice to anyone doing this:

Firstly, get your own IPs from your local RIR. Have your co-lo provider publish your routes, but they will be YOUR IPs. If your co-lo provider sucks, you can move and keep your IP space. (This is vital for email, but I recommend it for everyone.)

Secondly, buy an out-of-band management card with your server (iDRAC for Dell, iLO for HP, etc.). These cost fairly little and will save you hours of access / remote hands; they will pay for themselves, and you can even boot an ISO from your laptop over the internet. Get your co-lo provider to give you an extra uplink for this and give it a separate IP (use one from the provider's range).
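As a sketch of what that buys you day to day - assuming ipmitool is installed locally, IPMI-over-LAN is enabled on the card, and the address/credentials below are placeholders:

    import subprocess

    # Hypothetical BMC (iDRAC/iLO) address and credentials on its own uplink.
    BMC_ARGS = ["-I", "lanplus", "-H", "203.0.113.10", "-U", "admin", "-P", "secret"]

    def ipmi(*cmd):
        # Shells out to ipmitool; raises if the BMC is unreachable or auth fails.
        out = subprocess.run(["ipmitool", *BMC_ARGS, *cmd],
                             capture_output=True, text=True, check=True)
        return out.stdout.strip()

    print(ipmi("chassis", "power", "status"))   # e.g. "Chassis Power is on"
    # ipmi("chassis", "power", "cycle")         # hard power-cycle a hung box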

Thirdly, consider mission-critical support on the servers from a solid vendor (in Australia I consider the enterprise vendors to be Dell, HP, IBM, and Acer, and of those I will only use Dell or HP). A 4-hour response means you don't need as much spare hardware, and you can have things fixed FAST. I have only lost 2 disks in a rack of servers over 4 years; both had a replacement in place within 4 hours (once at 1am).

Fourthly, look at a good virtualization solution. We initially went with oVirt (the open-source version of Red Hat Enterprise Virtualization) but ended up migrating to VMware. VMware Essentials Plus costs us $15K for 3 years at extortionate Australian prices and is worth every cent. It provides backup (VMware Data Protection), failover, Virtual SAN, live migration, and a heap of useful features that save huge amounts of time.

Finally, if you're going to grow, consider getting a rack (or a half / third of a rack). This will likely give you unescorted access to the data centre, and is often not that much more than a few RU of servers (depending on the DC and rack availability).


Do not put your IPMI controller on a public IP without any sort of access controls in place. These controllers are pretty terrible security-wise, and it's not a good move.



Buying the 4-hour support package for your servers is so worth it. You don't have to pay for remote hands, since the vendor will dispatch a tech to replace/upgrade bad gear. If you handle your monitoring correctly, you can get a tech sent out to hot-swap a failing drive before you even realize it's going bad.

In terms of picking a good colo, find one that has high security ratings in an area that doesn't have fluctuating power. If they're in Florida, make sure they're rated for a Category 5 hurricane. If they also happen to have an entire floor dedicated to government hardware, or are on the same power grid as a hospital, and have buried fiber/power lines instead of exposed ones, even better!


I thought you could only get an IP delegation if you were multi-homing and could justify a /24. Is that still the case?


We did this with APNIC (being based in Australia), and we were able to justify a /24 on a single server without multi-homing. I don't think they allocate less than a /24, so if you have cause for one you get the whole /24.

That said, ARIN and the others might be different. For example, APNIC will only allow up to a final /22 allocation at the moment, whereas ARIN just allocated a much larger block despite being on final-allocation rationing.


"if you co-lo provider sucks, you can move and keep your IP space. (This is vital for email, but I recommend it for everyone)."

Why is this "vital for email"?

I agree that obviously if you can you want your own space.

But if you are planning your move, you can handle the new IP space via DNS TTLs, which is what we have done since the '90s for moves where we didn't have our own IP block. And yes, it is a huge pain to be avoided, so I'm not disagreeing - just wondering about the "vital for email".
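For anyone planning that kind of move, the TTL dance is simple but easy to get backwards - a tiny sketch of the timing (values are illustrative):

    # DNS-based migration: drop the TTL well before the cutover, switch the
    # record, then keep the old box answering until cached records expire.
    old_ttl = 86400    # what the zone serves today: 24h (assumed)
    low_ttl = 300      # 5 minutes, for the migration window

    print(f"Publish the {low_ttl}s TTL at least {old_ttl // 3600}h before the cutover,")
    print(f"then keep the old IP answering for at least {low_ttl}s after switching")
    print("(plus a margin for resolvers that ignore TTLs).")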


These days, making sure email gets through is about jumping through a huge number of hoops and having a long-term "trusted" track record at the domain level, IP level, etc.

Spammers have gotten good, and anti-spam efforts have had to become more restrictive over time in response. Because of this, reliably getting email out of your network and into another inbox is anything but trivial these days.


Excellent point. We actually did go through that and I had forgotten that issue. Agree.


Yes, a million times: console-over-IP and virtualization are must-haves.


Also, don't forget to check out the rented dedicated server market. It provides a good middle ground in cost/power/performance between the extremes of cloud providers and colocation.


Yes, always compare with rented dedicated, and only go with colo if it makes sense. I did this analysis last year and rented won big time. Never looked back since. Main points in favor of rented dedicated servers:

- Much cheaper in my business case (5-10 TB/month per server, only a few high-CPU requirements); most dedicated offers come with bandwidth included

- No need to take care of the hardware in case of failure (opening a ticket versus managing the whole process)

- Easier to switch machines if needs change

- Scales faster (as quickly as 2-3 hours with decent automation tools and virtualization)

- No switch, firewall, etc. (IPMI should always be behind a firewall, never open to the Internet)

- The upfront cost of buying a server was high, and the monthly cost of dedicated was actually lower; due to the lack of competition in the colo sector in my area (Montreal, Canada), prices were (and still are) rising quickly, so it was hard to predict the long-term costs


Maybe it is a Canada thing. I looked at some colos in the Toronto area a few years back and found rental prices were about the same as bringing your own hardware. Didn't make sense. I was just looking for a home for some server hardware I have in my garage. Oh well.


Almost all the independent colo centers in Montreal have been bought out by big players with the corporate world in their sights, not the small 1-2 server market. So prices have skyrocketed.

The opposite is true for dedicated, since they have to compete with US-based players, and OVH got into the East Coast market in a big way. Prices are falling.


I am surprised how few people know about this.

It isn't just a "good middle ground". It is an amazing middle ground that allows you to benefit from AWS services (simply choose providers close to AWS data centres) whilst getting substantial performance/cost benefits.

I would imagine at minimum you are getting 10x the performance compared to a typical VPS.


How does the cost work out vs. colo when you consider capitalization?


This is a generality, but any time I've looked at that, what I see is a bunch of low loss-leader prices (that lure you in) where, once you add on extra features, you get nickel-and-dimed up to the point where you are overpaying.

I've seen cases where there are charges (as only one example) of $10 per month to be able to do a remote reboot, which of course can be done at no charge on any box with IPMI if it's your box and that's the way you bought it. So my point is that if you don't know what you are doing, you can easily be lured into thinking you are paying a reasonable price and then end up paying more, simply because you aren't doing a true comparison of features and benefits.


Thanks for the article, I've seen that term floating around and never knew how to get started. I've been itching to get into the server hosting business as a side thing ever since renting my own KVM VPS.

I was blown away by the fact that I can sit there and watch it reboot over screen sharing from my iPad. I treat it as a cloud desktop (it runs the latest vanilla Ubuntu), so of course it was easy to get Apache, PHP, Ruby, and a whole web server environment up. I do all my work on it, as well as my play. I use Plex to stream media to myself, and ownCloud and other tools to replace Dropbox and even deploy sites.

I want to sell people on the idea that it's easy to have a cloud desktop you can access from anywhere that can also be a web server (not selling web servers that can also have a desktop). I want to sell people on the idea that, with freely available software, we can each have a private cloud with just our own data.

I'm not quite sure how to get started, and I'm not trying to make a killing in profit; I just don't see people trying to make it simpler for the average Joe to have a cloud desktop and not need to pay for shared cloud services, which then become huge targets for data breaches.


Not trying to dissuade you or imply this is exactly the same thing, but I wanted to make sure you're aware of Amazon WorkSpaces [http://aws.amazon.com/workspaces/], "a fully managed desktop computing service in the cloud."


I colocate my little cluster of servers with Opus:Interactive in Portland, OR. It's neat to look at the homebrew artists with their motley crews of machines colocated together in racks in the corner, followed by many homogeneous racks filled with boxes from big-name companies.

I don't prefer building powerful hardware. I prefer reliable and cheap to build, maxing out the 4U of my rack space. I like to think of my servers as "life support for an internet-connected hard drive". My CPUs are fanless Intel Atoms with 2GB of RAM, and I use Mini-ITX motherboards that can be powered by a brick DC power supply. Ultimately, I'll move to flash hard drives so I'll have no moving parts in my server, but I'm waiting for the price to come down and for reliability to match spinning media.


I'd like to know more about how you do that. I just looked at Opus' site and the 4U they advertise in Hillsboro is pretty good when compared to what I pay for 1U of hobbyist colo in Seattle. Do you put multiple machines inside the 4U with a switch in the same space? Can I be so bold as to ask for pictures? What kind of physical access do you get? I have a 1U server-grade machine but moving to a more flexible space would be nice, even if it does mean a train trip to go see my boxen.


I do put multiple machines in the 4U. I feel sheepish about sharing pictures. My setup is not as trim as I would like. Hopefully my description will give you an idea of what it looks like.

For the boxes, I have a 1U enclosure holding two Mini-ITX motherboards. Then I have a 1U switch; it's nothing special, just the cheapest gigabit switch that got good reviews. And then for my load balancing and SSL, I got a Kemp Technologies hardware load balancer. It's got a little ASIC in it that offloads the SSL from the servers. I think I can sustain 200 concurrent SSL requests, which is fine for me right now while I develop my app.

In my 1U holding the servers, each server has a two-disk software RAID-1 setup. I can't physically get to the colo more than once or twice a year max, so I need to be able to withstand a drive failure here or there.
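If it's useful to anyone doing the same, here's a minimal sketch of the kind of check that tells you an array has gone degraded long before your next colo trip (Linux software RAID only; it just parses /proc/mdstat):

    # Flag any md array missing a member ("_" in the [UU] status field),
    # so a dying half of a RAID-1 gets noticed well before the next site visit.
    import re

    def degraded_arrays(path="/proc/mdstat"):
        with open(path) as f:
            text = f.read()
        bad = []
        # Status lines look like: "... blocks super 1.2 [2/2] [UU]"
        for name, status in re.findall(r"^(md\d+) :.*?\[([U_]+)\]", text,
                                       re.MULTILINE | re.DOTALL):
            if "_" in status:
                bad.append(name)
        return bad

    if __name__ == "__main__":
        print(degraded_arrays() or "all arrays healthy")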

I think I get a drive failure about once every 2 years, and I've had one RAM stick failure so far. I had one motherboard failure, but it was a VIA Technologies chip/board, and since then I have switched fully to the Intel-produced slim Mini-ITX Atom motherboards, and those things are rock solid - I just wish I could get 8GB of RAM into them for more memcached goodness. ;)

I don't even know if my dual Mini-ITX server enclosure is sold anymore; it's kind of freakish, especially with heat. Since I built this box I have been investigating "shorty" or "short depth" 1U enclosures. I wanted to be able to pack in the servers, and you're allowed to bolt servers to both sides of the rack, so by transferring my servers to shorty enclosures I should be able to spread the heat out a little better and max out my space. I think I have room right now for 3 more shorty boxes if I need to expand my cluster.

Edit: for physical access, you just call the support line and either ask for remote hands/smart hands if it's something simple like rebooting a box, or schedule a time to come in. I've never been turned away from coming in the same day, whenever I want to. I am usually alone in the server room when I work. With all the cooling equipment and servers, it is very loud in there; sometimes I just wear earplugs to dampen the noise. They've never charged me for smart hands, but I don't ask very often - once a year maybe.


Awesome, thanks for the info. I might have to look into doing something like this. Right now I have a 1U with gobs of RAM and HDD space so I just virtualize everything. Having more physical segregation would make for a fun project since this is all personal stuff that I just like to use for tinkering.


I've been interested in hobbyist colo for a while but the cheapest I've found is over $100 a month. Is this in the ballpark of what you are paying or are there better deals to be had?


The rate I'm paying no longer exists and the company I'm with got merged into another one so my point of reference isn't so good. :)

That said, there are a couple of companies on WebHostingTalk that have good prices for hobbyist colo if you look in their colocation forums and search for "Seattle." Opus is nice ($129 for 4U and 3A of power with 400GB of transit), and there is a company in Seattle - their name escapes me, but I've seen them on WHT - that is $35 for 1U and 1A with 500GB of transit.

So, yes, I think you can do better especially if you just want space for a medium-usage 1U.


Opus is nice; I like to support them, and they've always been very friendly and understanding with my noobishness. When I first signed up, I didn't even understand how to mount my case. There were these brass grommets in the tool chest off in the corner, which nobody told me about, that needed to be snapped into the rack uprights - that's what you screw your case into. The tech was totally nice about it when I asked, and he even helped me lift the case into place while I screwed it in, since I was by myself.


Yeah, I'm over $100/month. I negotiated a deal with Opus though where they wouldn't charge me extra if I ever happened to go over my bandwidth cap, just throttle me. I figured if my app ever hits the mainstream and I'm making money, my little infrastructure will crumble way before my bandwidth cap is hit and that's when I'll upgrade my plan to a private cabinet with lots more U or just move to the cloud.


While the article stresses the importance of understanding your power requirements on the server side, you also need to be very cognizant of the power restrictions in your colo contract. Oftentimes power draw will be your biggest limiting factor, and it is very easy to buy a rack and fill it with equipment that surpasses your maximum allotted power draw. Depending on your host, your only solution may be to buy additional racks to spread out your load, even though you don't need the physical space.

Another power gotcha comes with the redundant circuits provided. For example, if you are allocated 15 amps total, that usually means total across both circuits, not 15 amps per circuit.
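A quick way to sanity-check a build against that kind of contract before racking anything (the numbers below are made up for illustration):

    # Does the planned gear fit inside the contracted allocation?
    volts = 120
    contracted_amps = 15          # total ACROSS both redundant circuits
    usable = 0.8                  # common practice: stay under 80% of the breaker
    budget_w = volts * contracted_amps * usable

    servers = 10
    sustained_w_each = 150        # measured at-the-wall draw per box (assumed)
    planned_w = servers * sustained_w_each

    print(f"budget {budget_w:.0f}W, planned {planned_w}W ->",
          "fits" if planned_w <= budget_w else "over: fewer boxes or another circuit")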


The article talks about 1U being a good form factor, but one thing to bear in mind is that cooling fans tend to work harder in a 1U chassis than in a 2U, hence drawing more power.


I would also add that in most DCs you're never going to have the power capacity to fill racks with 1RU servers, so 2RU servers are typically not much more expensive to rack than their slimmer brethren.


And no mention of IPv6 at all. At the very least it gives you much easier management of private networks (yay, real IP addresses), and there is more and more traffic coming over it (think mobile phones, etc.).


What are the advantages of doing this over just paying for computing power on AWS or some other cloud service? Feels like it would be a very small niche at this point.


Cost is the big one. If you can manage your own metal you can save a LOT: $50K AUD of gear would cost us ~10x that PER YEAR for the same VM sizes (many are 32GB of RAM, 8+ vCPUs, reasonable IOPS).

IP addresses are another. AWS gives you 5 before you have to start applying for more. We do email marketing, so we need a lot of IPs (we give a dedicated IP to each customer). With APNIC (Asia-Pacific) we have a /22 and a /23 range (1,536 IPs) for ~$2,900 per year.
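(For anyone checking the math on the address count, the stdlib confirms it - prefixes here are just private-range placeholders, not our real blocks:)

    import ipaddress
    # A /22 plus a /23: 1024 + 512 addresses.
    blocks = ["10.0.0.0/22", "10.0.4.0/23"]
    print(sum(ipaddress.ip_network(b).num_addresses for b in blocks))  # 1536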

Legal reasons are a whopper as well, Australia has some tight laws around privacy and liability if an overseas partner leaks your data.

We still host a lot of things on cloud providers (Rackspace and AWS, in both Sydney and the US), but at the end of the day our VMs outperform cloud VMs and are considerably cheaper.


Let me also say that comments about VM sizes on EC2 almost never consider that bare-metal performance can be orders of magnitude better.

If you can build a redundant system, and don't need the extra 99.999999s of hardware resilience but can live with 99.995% network uptime, dedicated is great.


I would argue that dedicated servers are MORE resilient than any cloud provider. And it really isn't that hard: 2 enterprise-class servers + VMware or similar and you have an HA cluster. The hardware support is largely handled by the hardware vendor under their support contract (4-hour mission critical), and having redundancy means a single hardware failure isn't critical.

I have had 4x 1-minute dropouts in the past 18 months from our hosting vendor upgrading their routers, always at midnight local time. They provide diverse data paths, and we have not lost connectivity despite major outages. If I thought I needed to, I could get a separate connection from one of about 20 providers in the local DC I am located in.

I really believe many people put too much weight on the complexity of managing physical hardware, when you're already doing 90% of the administration anyway on a cloud server. Yes, it is capital intensive, but you will likely make the money back in your first 12-18 months.


This is exactly what we do, and it has worked out well for years. You can also defer the upfront capital expense by finding an IT vendor that is willing to lease or rent the equipment for a fairly fixed monthly cost.


cost! Cost!! COST!!!

I colocate 5x 4U servers with 24 and 36 drive bays, 128GB of RAM, and SSDs squeezed in for the OS, giving my project a total usable space of 375 TB (multiple RAID-5 volumes of 6 disks each).

Power and 2-3 Gbit of bandwidth are included, as well as remote hands to check up on a server (for example, IPMI sometimes does not work) or replace drives (paying extra for enterprise-grade drives will save you a lot of hassle in the long term!).

for .... €1500 / month

Now go and calculate the cost of that on AWS.


I absolutely agree in the long term, but can you say a little about how much those servers cost initially?

I tend to favour dedicated servers which I own over VPSes, but for small businesses with extremely constrained budgets ("prove we can make money before we invest in hardware") and startups, flexible virtual servers can be a great way to ramp up in the beginning.


About $10,000 per server. The equipment (very heavy) was shipped from the US, so there were import duties, but it worked out a little bit cheaper than buying in the EU in the end, and I had other reasons to shop in the US.

Anyway, it's a one-off cost, and an accountant can do all sorts of magic with it.

I've been in business for 7 years and hope to remain around for as long again.

I have already saved money compared to renting dedicated servers as I did before. AWS etc. were never an option; the sums simply never work out.

AWS might be great at first when you are starting out, but the costs can cripple a business, especially if you don't have other people's venture capital to burn.

Edit: when bandwidth costs are factored in, the difference between my current setup and Amazon is an order of magnitude; at the time of this post we're pushing 2200 Mbit outgoing, 800 incoming.

Edit 2: my only regret is not colocating earlier - I have spent well into the upper 6 figures on renting over all the years :( AWS etc. weren't around when I was starting off either.


That long term can arrive pretty quickly, though... in the UK at least you get 100% tax relief on plant (hardware) expenditure up to a limit (currently half a million a year).

You get no tax relief for opex.


400TB of S3 storage in US-East is ~US$11,800/month, so it's about 6 times the cost - and you also get power and remote hands. Bandwidth isn't free, though, and can get pricey. On the other hand, for data you can shift to Glacier, you pay only a third of that cost. On the other, other hand, if you're using this amount of gear, you may be able to talk to an account manager and get something else sorted out. You'll probably want to throw some VMs in there alongside S3, of course, which adds to the cost... 122GB RAM reserved-instance VMs run to about $700/mo, so with five of those you're looking at about US$15k + bandwidth, though of course you're not paying for the hardware or the hardware setup time.
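Roughly how that ~US$11,800 falls out, assuming a flat ~$0.029/GB-month for S3 standard storage (close to the blended mid-2014 tiered pricing):

    # Back-of-the-envelope S3 storage bill vs the EUR 1500/month colo above.
    tb_stored = 400
    gb_stored = tb_stored * 1024
    s3_rate_per_gb = 0.029            # assumed blended $/GB-month
    print(f"~${gb_stored * s3_rate_per_gb:,.0f}/month for {tb_stored} TB in S3")
    # ~ $11,878/month - before any request, transfer, or EC2 costs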


AWS comes with a ton of added functionality that may not be relevant to you, especially if you've no need to scale or deploy machines in seconds. That comes at a cost.

It's a shallow oranges-vs-apples comparison, but look at the price of colocating 1/3 of a rack at Hetzner (119 EUR). If you already own the hardware, have no need for any advanced AWS features, have the skills to manage it, etc., sometimes it makes sense.

For personal and/or SMB needs, it's usually not worth the trouble (especially regarding the skills needed to maintain it).

So you're right, it's becoming more and more a niche.


Cloud is definitely better if you need to scale out, but if you need raw performance, have requirements for particular hardware (e.g. Backblaze with their custom disk shelves, certain hardware appliances, tape backup requirements), or just want the cheapest cost per month, then a colo may be right for you. Remember it's cheaper because you're paying for the sysadmin time and remote hands yourself, versus Amazon with their people.

You could also do a hybrid approach - colo for core infrastructure and cloud to scale out - but that's more difficult to set up.

At a certain scale of colo or with very heavy security requirements you'll find it's cheaper to have your own datacenter.


I priced and ran benchmarks for my last server upgrade. AWS was 2x the cost of colocating, and it had some outages around the same time. That said, it might have worked out only marginally more expensive, since I could have scaled my server down to current need rather than projecting 3-4 years into the future.

AWS had the fastest network, but colocation allowed for cheaper CPU and RAM upgrades. As it turns out, my RAM/CPU needs plateaued at 4x my previous server, so I have twice as much as I need. AWS got steadily cheaper over that same time, and the servers got faster.

Upgrading to a new server is pretty disruptive. It would probably be easier to do it gradually over time rather than in big jumps.



I think AWS is good when you're not sure whether you will use all the resources of your servers or need scalability. If you know your growth rate, what types of resources are used, etc., it will be cheaper to have servers in a datacenter instead of using AWS. Many companies, upon growing, start to rent space in a datacenter or even build their own.


Another tip: be cognizant of the difference between sustained amperage draw and spikes; when a server starts up, it can spike to 1.5+ amps. I've seen many circuits trip because, while the draw should have been in an acceptable range, spinning up a number of servers at the same time was too much.
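A sketch of why that bites even when the steady-state math looks fine (all figures illustrative):

    # Steady-state vs power-on spike on one 20A circuit (80% usable rule).
    usable_amps = 20 * 0.8
    running, idle_amps_each = 8, 0.8      # boxes already up (assumed draw)
    starting, spike_amps_each = 8, 1.5    # boxes being powered on at once

    steady = (running + starting) * idle_amps_each
    peak = running * idle_amps_each + starting * spike_amps_each
    print(f"steady {steady:.1f}A fits under {usable_amps:.0f}A,")
    print(f"but the simultaneous power-on peaks at {peak:.1f}A ->",
          "OK" if peak <= usable_amps else "stagger the power-ons")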


I blogged about my own experience with this here: http://blog.preinheimer.com/index.php?/archives/413-Buying-a...


Nice article, thanks, but does it apply to all types of datacenters?


I can't see a type of datacenter it doesn't apply to. You pay for transit, power draw, and to a lesser extent rack units. If you get a rack or a half rack with your expected power draw and want to spill past that, they'll charge you extra, because that's space they can't sell to someone else. Some datacenters also provide cages, so your hardware is physically separated from other people's. That'll cost extra too.

The only thing I didn't see a mention of is DC power, whereas the out-of-the-box power supplies on most OEM equipment are for AC. Most server supplies nowadays should be able to handle 240V, 208V, and 120V AC on the same unit; when you go DC, you'll want to consider buying a separate AC power supply for setting up the server in your office (unless you drop-ship it to the colo).

Make sure you get a very efficient power supply too, because even if you buy the most efficient or power-miserly server on the planet, an inefficient power supply will increase the draw significantly. You also want to right-size the power supply, because drawing too little power lowers the efficiency (an efficiency curve is available for most rated PSUs).
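A toy illustration of the right-sizing point - the curve below is a made-up but typical 80 PLUS-style shape, not any specific unit's datasheet:

    # Wall draw = DC load / efficiency. An oversized PSU idling at a few percent
    # load sits on the ugly end of its efficiency curve and wastes watts 24/7.
    def wall_draw(dc_load_w, psu_capacity_w):
        load = dc_load_w / psu_capacity_w
        if load < 0.20:
            eff = 0.70        # efficiency falls off hard at light load (assumed)
        elif load < 0.80:
            eff = 0.90        # sweet spot around 20-80% load (assumed)
        else:
            eff = 0.87
        return dc_load_w / eff

    for capacity in (200, 450, 750):
        print(f"{capacity}W PSU at a 60W load -> {wall_draw(60, capacity):.0f}W from the wall")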


How is it a “colocation” if it is your “first server”? In the age of online dictionaries, no less?


A colocation center is a type of data center where equipment, space, and bandwidth are available for rental to retail customers. Colocation facilities provide space, power, cooling, and physical security for the server, storage, and networking equipment of other firms—and connect them to a variety of telecommunications and network service providers—with a minimum of cost and complexity.

Colocation by no means indicates you have multiple servers that need housing nowadays.


Your server is co-located with other customers' servers.


The irony is delicious, you'd have fared much better with the first sentence alone.



