Hacker News
96 MacBook Pro’s in one server rack (simbimbo.wordpress.com)
118 points by johnsho on July 29, 2015 | 154 comments



Only just realised from this that MacBook Pros no longer have an Ethernet port! When did that happen?

You need to buy an expensive Thunderbolt adapter. I can understand removing it from the ultra-portable, super-thin models, but not from the top-end professional model.


You don't 'need' the Thunderbolt adaptor. Any old USB NIC will do, like this $15 USB 3 Gigabit Ethernet adaptor:

http://www.amazon.com/TRENDnet-Ethernet-Chromebook-Specific-...


Good point. That will be compatible with other computers too, apart from the new MacBooks, for which you'll need a USB-C to USB adaptor.


If you stay within the Apple ecosystem like they really want you to, then you don't need that because your thunderbolt display has an ethernet adapter as well as some extra usb ports. The Apple monitor is their replacement for the docking station. Power and connectivity, as well as a place to put the laptop for reasonable space efficiency.


You don't need the thunderbolt adapter, but if you want gigabit Ethernet it's either thunderbolt or USB3 (as you mentioned). One of the perks of the thunderbolt adapter is that it doesn't require any third party drivers, which tend to be hit or miss with the USB3 adapters.


Well, the MacBook Pros are ultra portable and super thin now. The laptop would probably have to be at least 20% thicker to make enough room for an Ethernet port. And since 90% of MBP Retina owners probably rarely if ever use Ethernet, it's a good tradeoff to make. The Retina model is much lighter and easier to carry (especially with one hand) than its predecessor.


I think Apple could have come up with a solution without compromising on this. What about a neat folding Ethernet port that flips out and is flush when not in use? I remember some old ThinkPads had something like that but it might have been a modem port (RJ11/RJ14 not RJ45).

The rare use is what makes it a problem. It's not worth the investment to buy and always carry an adaptor for that one time you really need it.


These flip-out connectors were always the first thing to break on your notebook and would render your only useful connection (modem, ethernet) completely useless.

The Thunderbolt widget is less than ideal, but it fits. Hopefully USB-C will introduce more options at a lower price point.


Taking someone else's widget, tweaking it, rolling it out across their product line and claiming to have built it from scratch isn't exactly a foreign concept to them...


Are you going to make baseless accusations or are you going to cite an example we can debate?


Ethernet cables are huge.

I welcome no longer having a gaping hole in the side of my computer. Wifi has come a long way, and if people want to use Ethernet, it can be run through a more modern, multi-purpose port with a smaller footprint.


One con to doing that is how much room it would take up inside the computer. Have you seen the inside of these machines? There is almost no space to spare.

Although adding the 3.5mm headphone jack on an auxiliary board was super smart. People keep dropping them with headphones plugged in.


They did come up with a solution. Thunderbolt. As a bonus, it can be used for several other things when not being used for ethernet.


My desktop is basically filled with an octopus of dongles these days. It's ugly and annoying, and increases the footprint of my laptop 2x.

I guess USB-C is supposed to help with this?


an octopus of dongles

Who do we need to talk to for that to be the official collective noun?


Consistently talk to many people in publicly available texts over a period of a few years. That should do it.


Tell me where to vote and I will.


I am interested in this query as well.


I wish I knew why Apple's top quality engineers didn't embed or bundle a USB-C to 60 GHz WiGig docking station. Having only a single USB-C port is absolutely cool if you have a wireless 7 Gbps docking station.

Is there some reason why Apple doesn't promote WiGig?

Here's an overview of the technology:

http://ultrabooknews.com/tag/wigig/

http://www.slashgear.com/intel-wigig-docking-station-in-2015...

http://www.cnet.com/news/60ghz-tech-promises-wireless-dockin...

Personally I wish they used ultra-wideband radio (>500 MHz bandwidth). There are numerous reasons fellow RF enthusiasts will recognize in UWB radio. But I am happy with whatever technology allows me to have a cable-free desktop 😌 This is the reason I loved the original Ubuntu Phone (with its powerful specs).


Docking stations are awesome.


And pretty freaking expensive too. $250 for a thunderbolt docking station, give or take. Despite the fact that the company will freely provide a $2,000 laptop and a $600+ monitor, getting them to pony up for a docking station has proven to be remarkably difficult.


As a WFH employee I have the opposite issue: I've got a ~$2,000 ThinkPad W540 and matching dock, but I have to provide my own monitor (which honestly is fine; due to amblyopia in my left eye as a child, I lack peripheral vision for large displays, so it's easier for me to just shop for my own).


Docking stations ARE awesome. Seems like USB-C is going to kill docking stations though. Already they feel a bit like a relic from the 00's.


Bad news: Ethernet ain't going anywhere in the foreseeable future. So you'll still be stuck with a USB-C-to-Ethernet dongle :(

But at least you'll be able to hide all the dongles behind some wall and just have a single USB-C cable running from your laptop to a powered hub.


Yeah, I was disappointed by that, too. It's not about the cost, I don't care about the extra $40 or so, but now there is one more "dongle/cable" I need to worry about.


You're right that it's more about not having it with you when you need it. However it does add up.

£25 for Ethernet and then another £25 if you want FireWire too. And £25 for VGA and £25 for DVI (though to be fair you can get this from the HDMI port). But you can only have 2 of these plugged in at a time. Other laptops of this price come with all the adaptors in the box if the ports aren't integrated.

I like having the option of Ethernet available easily. For example, BT have been having big internet issues for the last few days and it's handy to rule out the WiFi. Also with a previous ISP (sadly not with BT) the WiFi was the limiting factor and I had to plug in to get full speed. Then the HDD was the limiting factor!


A $30 adapter for a $2000+ machine isn't what I would call expensive. It does make the machines a lot thinner though and normally, you don't need wired Ethernet any more.

I easily reach 60+ MB/s now over WiFi, so for the times where I really need the additional power I don't care about having the additional dongle with me.


It's not expensive, it's inconvenient. Being able to quickly jack in to a gigabit network and not have to worry about fiddling about with wifi keys and unreliable speeds is something you should get with a high end laptop like an MBP.


I've been using (and carting around) a Thunderbolt to Ethernet adapter since I bought my first Retina in 2012, and it's really not all that bad. If you're using it daily, then carrying it around isn't a huge ask.


When you work in an actual office with several dozen machines besides you, interference becomes important.

Also, the profit margin on those adapters is pretty astounding.


In the office, all I plug into my MacBook Pro is the Thunderbolt connector to the monitor and the power adapter. The monitor is connected via wired ethernet.

This way I only have to plug in two cables instead of an array of cables. This is why I really don't miss the Ethernet port directly on the machine. For me personally, the gains in portability due to smaller size (and weight) are way more relevant than the theoretical ability to plug in an ethernet cable which I never need.


There have been about 10 times in the last few years when I have walked into a meeting someplace where there wasn't any guest wifi, or we needed the internal network for other reasons.


Here in Switzerland companies stopped granting guest access over the last few years. You're expected to bring your own infrastructure which usually boils down to tethering your mobile.

I can see that this is very much country dependent (we have truly unlimited plans that allow tethering), but at least for me, the smaller size of the machine trumps the ability to plug ethernet cables due to the general lack of available cables to plug.


>Here in Switzerland companies stopped granting guest access over the last few years.

Why, they don't have the skills to create a private wi-fi network for guests?


Why bother when your own 4G connection could be significantly faster than sharing a 100/100 Mbit line with 80 other people?

I never use guest networks for that exact reason. I know that my traffic and my connection are my own, and work. Way easier imo.


4G tethering using your mobile! It's probably a lot faster as well, and no stupid company restrictions and logging.


The MacBook Pro I bought last year was the first that didn't have one. I seldom used Ethernet but I still miss it. Apple probably wouldn't use it, but can't they develop a small connector for the next generation of thin laptops? The Microsoft Surface form factor is going to become prevalent in the PC world.


I'm more miffed about Apple's USB Type-C strategy, but on the other hand, docking stations have their value. I was disappointed in this as well, but am nevertheless an rMBP user, so until Henge upgrades this to include Ethernet:

http://hengedocks.com/pages/vertical-macbook-pro-retina

I've been doing quite fine with this dock as my 'need Ethernet at my desk' solution:

http://www.landingzone.net/products/for-the-macbook-pro/

I'm sure there are other options; it is a hassle that Apple removed it, but on the road I rarely need Ethernet, and at the desktop, it's kind of easy to just plug in.


That USB Type-C strategy is for a laptop that is thinner than the MacBook Air. I don't think they will pull that trick with the normal MacBook Pro.


What trick? Of course they'll add USB Type-C to the MBP.

If you're asking whether they'll keep regular USB or Thunderbolt: perhaps, but just for a few more releases. E.g. both could be gone by 2018.

That said, the MBP will of course have more than one USB-C port.


The trick: having only one connector. Normal MacBooks will probably have multiple USB-C ports (for power and other stuff), maybe one regular USB, a card reader, video, etc.


What seems to be happening is that we are re-inventing PCI.


Another problem: it's very slow to actually connect to the wired network. I'm using it almost every day on different networks, and I just can't get used to the wait.


This is probably a Yosemite problem rather than a hardware problem. Older MacBooks have it too.


This is a constant frustration in the office.


We had to buy ethernet dongles for everyone in our office.


Or, you know, you could buy a wifi router...


One AP per 5 active devices, give or take. Plus positioning them and managing their power and ensuring they do proper hand-off as you move around the office, plus...

Providing reliable wifi for multiple people with multiple devices is a hard challenge.


It's shocking the number of people that don't grasp this. "It works in my home, why isn't it the same in the office." Is it misinformation? Have people become so used to things "just working" in home they don't think of the technical differences between a couple devices and a couple hundred devices?


> Have people become so used to things "just working" in home they don't think of the technical differences between a couple devices and a couple hundred devices?

Until bitten by a problem, yes, that's exactly the thinking. In theory, a single AP can handle two thousand plus simultaneous connections. Most users have seen this number, and don't think beyond it.

However, most APs don't have the memory to manage more than 40 or 50 keys for encrypted connections. Throw in shared bandwidth on this maxed out AP, and suddenly the (again theoretical) 1 Gb+ connection is down to less than 20 Mbps per user, or 1/5 that of a wired connection of ten years ago.

That's obviously not acceptable, so the best way to bring up the average bandwidth per user is to have fewer devices on each access point. And when each user is bringing 2-3 wifi capable devices with them everywhere they go...

The math is not very forgiving.
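
That math can be sketched in a few lines of shell. Every figure here is an illustrative assumption (nominal link rate, realistic goodput, devices per user), not a measurement:

```shell
# Back-of-envelope per-device bandwidth on one shared AP.
# All inputs are assumed round numbers for illustration.
NOMINAL_MBPS=1000                       # nominal 1 Gbps 802.11 link rate
GOODPUT_MBPS=$((NOMINAL_MBPS / 2))      # real-world goodput: roughly half
USERS=10                                # users sharing the AP
DEVICES=$((USERS * 25 / 10))            # ~2.5 Wi-Fi devices per user
PER_DEVICE_MBPS=$((GOODPUT_MBPS / DEVICES))
echo "${PER_DEVICE_MBPS} Mbps per device"   # 500 / 25 = 20 Mbps
```

With those assumptions you land right at the ~20 Mbps per user figure mentioned above; interference from neighbouring networks only pushes it lower.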


>Have people become so used to things "just working" in home they don't think of the technical differences between a couple devices and a couple hundred devices?

Are people so used to working in inept and el-cheapo companies that they cannot fathom company-wide Wi-Fi with APs (professional, not the home router that came free with your cable connection) serving "a couple hundred devices"?

Is that considered IT voodoo in some companies?

Heck, your $30/night Motel 8 can manage that...


Professional APs are in the $1-2 thousand range, and you need one for every 5-ish users. So, for a small company of 20 people (which won't have its own IT professional), you're looking at an initial outlay of between 20 and 40 thousand dollars (IT contractors are expensive, and you have to include the Ethernet drops, rack, routers, patch panels, labor, etc.), and a few grand a month for maintenance and support.

Compared to using plain Ethernet drops, the cost is hardly negligible, to the point where it isn't worth it for many companies.

My company has been going through the process of implementing this for a new office to hold around 100 people, and the amount of planning required to provide good coverage at a reasonable price has been enlightening to observe.

> Heck, your $30/night Motel 8 can manage that...

Not in my experience. At least not well enough to support bandwidth above the 100Kbps range.


>One AP per 5 active devices, give or take.

Probably never heard of professional office routers. You can have several times that.

Even in 100+ person companies: all such companies I've been at already have all the company-wide Wi-Fi and APs they need, so I don't see why the parent's company couldn't.

As for "ensuring they do proper hand-off as you move around the office", it's much easier with APs than plugging and unplugging ethernet cables -- which was the alternative we were discussing.


Try doing the math. Take your bandwidth expectations from such a connection, and divide the total bandwidth provided by an AP by that value. The answer is typically between 5 and 10; less if you're in an office building where each floor is providing their own WiFi network and you have to contend with interference.

Working with docker containers, virtual machines, streaming, and VOIP/hangouts... bandwidth becomes the main limiter to the number of users per AP, not the memory or other capabilities of that device.

> it's much easier with APs than plugging and unplugging ethernet cables

For the user, and only when the handoff actually occurs smoothly. When it goes wrong, folks have to power cycle their WiFi, or their entire machine. Compared to picking up a nearby Ethernet cable and plugging it in, this isn't that challenging usually. Of course, having to also pull out a dongle can get aggravating, but we were discussing that as well.


For the Ethernet port, it is very much Apple's style... but either way, using consumer Apple devices for servers only makes sense if you get them for free (civil forfeiture, for example)...


Retina models ditched the Ethernet port, so 2012 is when that happened.


At least since the mid-2012 Macbook Pro Retina (which is what I have.)


If your laptop and wireless router support 802.11ac, then what are the most common reasons to still prefer an Ethernet port?

Is it mostly about spectrum getting full or security concerns?

I just noticed that I haven't used Ethernet cable for the last few years.


Depends where you are. In my house in Chicago next to some high rises I could see 200-300 access points. My own fully powered 802.11ac access point had a range of about 12 feet. Worked for some rooms, but not for others. And the signal was jumpy. Was super nice to have a hardwire at my desk so I could go full speed.


> what are the most common reasons to still prefer Ethernet port?

Attempting to provide wireless coverage for a densely populated open office gets pretty expensive pretty quickly. It can also degrade signals for everyone in the office. The Ethernet port just becomes a more feasible way to get stable internet at your desk.


Reliable network, consistent (low) latency, and the security aspect. At least those are my top 3.


How on earth do you test Retina displays when inside a server rack? What are they doing here that can't be done with Mac Minis?


"We require Mac OS X because the products we make run on OS X and we believe in testing on the same hardware our customers use, this helps produce a better product".

If the software they're testing is used mainly on MacBook Pros, then they'll want to test on MacBook Pros. Things like GPU and CPU differences may make a difference to how their software performs.


"We have data centers with thousands of machines configured with all 3 OS’s running constant build and test operations 24 hours a day 365 days a year. This is just a small look at the Mac side of things."

Seems safe to say this is not a small company


Any Mac hardware will do, like Minis, Xserves, and Mac Pros. Mac Pros work really well, and if you just rent them from a data center, the capital expenditure isn't there, so you don't get hit with the $3k price tag.


Without knowing what product is being tested, how can you say that "Any Mac hardware will do"? Like I said: if they're wanting to test on the specific hardware that their customers are running the software on, you can't know that for sure. A Mac Mini has a different CPU/GPU to an Xserve, likewise a Mac Pro, likewise a MacBook Pro.


Sorry to offend; I was referencing the "We require Mac OS X" part more liberally. It looks like this author could use any hardware, as long as they had HDMI enablers to enable the GPUs in headless servers like the Xserve and Mac Pro.

I don't think they're opening up each row of MacBook Pros continuously for their software, though. I think they've just got the lids open to make sure the GPU is enabled for better performance in remote sessions.


There are ways around the lid opening thing as far as activating the GPU (same issue applies to Minis and Pros that have no monitor attached in a datacenter environment).

The big requirement for keeping the lid open is probably to control the temperature. You can run a MBP with the lid closed, but it will get pretty toasty, and these aren't being used for a typical laptop workload one presumes.


No idea what they do, but they had a mac mini setup in the past: https://simbimbo.wordpress.com/2012/12/07/in-production/


- These systems run 24/7 in our Build and Test environment constantly building and testing the software we produce.

- Can you say who you work for?

- Sorry, unfortunately no.


He works for MathWorks, the company behind MATLAB.


Build and test is cool. Doesn't need screens, though. Unless you're actually testing the graphics card, in which case you might as well use the Mac Pro.


A potential reason is that they need to see how their tests affect screen burn. They mention they specifically need Mac hardware, so this could be why they specifically need Retina screens.


It might have more to do with running GPU-assisted tasks through remote sessions. You need to either leave a screen attached and on, or trick the device into thinking one is attached and on, to take full advantage of the GPU on Mac hardware. You can actually do it with any Mac hardware (even headless devices like Mac Minis and Mac Pros), but it's not advertised well enough.

For example, with Mac Minis in a server environment, you need to plug in an HDMI enabler to get full GPU performance from the system while using it over remote access.

Here's more information: http://www.macstadium.com/blog/osx-10-8-10-9-headless-gpu-en...


The HDMI dongle is a good solution for most users. It can also be worked around in software (although it is not trivial). It's kind of annoying that OS X requires jumping through these hoops, but at least there's a few solutions out there.


Wouldn't the high temperatures void that test though? I'd have thought screen burn will differ depending on screen temperature.



Racking portable devices. The circle is complete.


Why are you keeping the lids partially open and drawing excessive power for the displays + backlight, instead of just using InsomniaX [1] ?

[1] - http://www.macupdate.com/app/mac/22211/insomniax


"They are actually being held open 7mm by a custom 3D printed wedge. This opening allows for the screen to be used for testing as well as ample air circulation. You can’t see the temperature sensors tucked into each notebook’s keyboard area."

Air circulation makes sense. There is a vent in between the screen pivot and the base, and also the gaps in the keys allow for some heat to escape.

Agreed that the screens should be turned off, though. Could just turn the brightness down to 0.


What's brilliant is that when the screen is closed, the same vent opens on the back of the computer. That's extremely elegant.


Because 3d printing is cool and trendy.


My first thought was, "Wouldn't a folded beermat have sufficed?"


Lots of DCs refuse to let you bring cardboard into the hall _at all_, never mind keeping some in a rack.


It is a complete no-no, even if your DC doesn't say anything to you about it.

Just think of the potential loss of equipment if the sprinklers were to activate in a datacenter suite -- and if the flammable material is located in your cage, it's going to be you or your insurance covering that.


Usually the fire suppression is halon-like gas systems (can't use halon anymore), sprinklers are an absolute worst case last resort.

But yeah, there's plenty of

  NO CARDBOARD ON THE DC FLOOR
  NO CARDBOARD ON THE DC FLOOR
  NO CARDBOARD ON THE DC FLOOR
At every multi-tenant DC.


I have only seen dry-pipe water systems lately, since halon is long gone.

I did see one system that used HI-FOG sprinklers. Smaller water droplets are supposedly able to extinguish a fire with significantly less water. You're still sad, but maybe you can salvage more equipment.


I've seen a few chem-agent and inert-gas systems, but only when someone realized cost of equipment + loss from downtime > cost of fire suppression system, which is less often than it should be.

dry pipe systems are probably the most common though.


Why use a menubar program…?

    $ man caffeinate
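
For reference, a sketch of how `caffeinate` might be used here; the 8-hour window is an arbitrary assumption, and the call is guarded since the tool only exists on macOS:

```shell
# Keep the machine awake for an 8-hour test window (assumed duration).
# Documented caffeinate flags: -d prevents display sleep, -i prevents
# idle system sleep, -t sets a timeout in seconds.
DURATION=$((8 * 60 * 60))
if command -v caffeinate >/dev/null 2>&1; then
  caffeinate -di -t "$DURATION" &
else
  echo "caffeinate is macOS-only; skipping on this system"
fi
```

You can also wrap a single job, e.g. `caffeinate -di make test`, so the assertion ends when the job does.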


Yes, that :)


My first guess is heat.


I wonder why all of the display backlights are on? I understand having the laptops open to stop them going to sleep without an external display, but I'd've thought you could turn the backlight all the way to "off".


>I understand having the laptops open to stop them going to sleep without an external display, but I'd've thought you could turn the backlight all the way to "off".

You can even disable sleep with a closed lid[1], but admittedly that might interfere with ventilation, since there are vents in front of the hinge.

[1]: https://developer.apple.com/library/mac/documentation/Darwin...
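
One software knob for the sleep side, as a hedged sketch: `pmset` takes idle times in minutes, and 0 means never. This stops idle sleep rather than clamshell sleep, needs root, and is guarded since `pmset` is macOS-only:

```shell
# Never let the system or display sleep on idle (pmset values are
# idle minutes; 0 = never). Assumed fleet-wide defaults, not a recipe.
SYSTEM_SLEEP_MIN=0
DISPLAY_SLEEP_MIN=0
if command -v pmset >/dev/null 2>&1; then
  sudo pmset -a sleep "$SYSTEM_SLEEP_MIN" displaysleep "$DISPLAY_SLEEP_MIN"
else
  echo "pmset is macOS-only; skipping on this system"
fi
```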


Perhaps the light is on so that they can spot dead devices at a glance?


Yeah perhaps, but I'd've thought some software mechanism (regular pings etc) would be easy enough to offset the power / heat of having the backlight on. Although I guess backlights are just a few LEDs these days, so probably actually not too bad to keep on.


Maybe it's just that it makes a cooler photo.


This is pretty neat, but I wonder how much more dense this configuration could be made if one took away the display, keyboard, battery, and chassis of the laptops and just had the motherboard, which (presumably) is fully integrated with a DIMM connector.


If you switch to Mac Minis and assemble multiple Minis in a private cloud running VMware, you can have a much more efficient setup. You can do it with Xserves and the latest Mac Pros as well.


Is it really simpler and more efficient to have 96 individual power bricks with custom mounting hardware rather than one (or a few) larger, high-efficiency AC to DC converters and just distributing DC within the rack?


You would have to terminate in MagSafe connectors. You'd either have to butcher the wiring on all the power bricks or come up with some Frankenstein thing with airline adapters, which I believe are now discontinued for some reason.


I'm guessing it's because airlines are moving toward just having standard power jacks. In fact, I can't remember the last plane I flew on that actually had the power jack the airline adapter needed.


That's an expensive rack. If the average MBP costs $1,500, that's a total of $144,000 in the one rack. They must have some serious reasons to create something like this.


For a full rack that's not a ridiculous price. Oracle will gladly sell you racks that cost in excess of a million.


As will Cisco, or Dell, or HP.


With enterprise-ready hardware though, not consumer laptops that fail within a few years, even when they're not turned on 24/7.


Are there good reasons to believe that "enterprise" hardware is more reliable in practice?

Anecdotally, most desktops I used lived much longer than most servers I worked with. I know a Compaq desktop that worked as a server for 11 years. Still does. One hardware failure in that entire time (the power supply). Could be a result of lower workload, but still...


There are superior aspects to the design of "enterprise" hardware (or rather, rack mounted server hardware) that have been mentioned by others: ECC RAM, redundant PSUs and HDDs, motherboard designs that (hopefully) are laid out for ideal airflow, etc.

Putting all of this aside though, the real secret to the superiority of hardware meant for the datacenter often comes down to the practice of binning. Manufacturers have different tolerances for the products they produce; hence why Intel has a million models of CPUs, Seagate produces so many different hard drives and Samsung sells DRAM chips to other vendors and makes their own DIMMs.

The products that perform the best are binned in the server-y bins, the others are moved down the list until they fit another bin. No manufacturer wants to discard parts if they can possibly avoid it.

Sometimes you're actually getting a deal, but a lot of the time you're just trading reliability for cost when you use lower binned items like desktop hard drives in a server environment. Sometimes that trade-off is worth it though.


Really??

ECC RAM and redundant PSUs alone should be unquestioned advantages over standard desktops.

Not to mention hardware raid controllers, IPMIs, designs that allow fans and other parts to be serviced without downtime... :P


Sure, but we're not speaking about overall system reliability. We're speaking about hardware dying. RAID will prevent data loss, but your actual disks might fail as often as something from a vanilla PC.


Okay, to address it from that aspect then:

ECC RAM will both work-around and warn you about the start of RAM failures before complete DIMM failure.

And sure, consumer drives have the same MTBF as enterprise drives in practice[1], but: when RAIDing consumer disks, make sure that you get models that immediately report read/write errors, instead of retrying, which is the main (or only, if SATA) difference between enterprise and consumer drives.

If you have a nice RAID setup, you DO NOT want a drive to retry a failed block read 15 times before reporting a problem, and causing latency: you want the failed read to be reported immediately, so your controller can pull the same block from a different disk, and start diagnosing possible faults.

[1]: https://www.backblaze.com/blog/enterprise-drive-reliability/


I would say manufacturers try to stick to stable designs, select best components, offer guarantees, but I don't have anything specific to back that.


If you're spending a million dollars on a rack, you're demanding considerably more than what your old Compaq can do as a 'server', methinks.


If you want to spend more than a million on a rack, I'll happily sell you one!


It would make sense to try the refurbished ones. I doubt you'd get 96 of the same model, though.

http://www.apple.com/shop/browse/home/specialdeals/mac/macbo...

    Refurbished 13.3-inch MacBook Pro 2.7GHz Dual-core Intel i5 with Retina Display
    $1,099.00
    Save $200.00
    15% off


The refurbished machines are a decent deal and I haven't had any problems buying them for personal or business use in the past, but the big limitation is on supply at any given time and the configs offered. Refurbs are largely stock configs and you can usually only buy 5-10 at a time. There are big benefits to config uniformity and bulk purchases, so refurbs can be tough in that respect.

Also, no one buying in this quantity should be paying list price to Apple, which cuts into the refurb discount considerably (if not outright eliminating it).


Though in a business environment, if this is going to be used for any significant length of time, the (presumably) shorter warranty might be an issue.


Refurbished Macs have the same warranty as non-refurbished.


Their requirements apparently include "i7 CPU’s, 16GB RAM" so I imagine the selection of refurbished and good enough is limited at best.


$1500 is the entry price for 4-way Xeons, and that's before you factor in the chassis, motherboard, RAM, and PSUs. You can easily find 4U servers more expensive than that; the rack seems to have a 32U capacity and could fit 8 such servers.


A standard server rack (fully populated, just regular app or web servers) can easily be 400k+. Everything built to run in a datacenter is pretty expensive, even if you've done your negotiating work and aren't foolishly paying list.


With such a large capital expense, it makes a lot more sense to rent dedicated mac hardware from a data center. You get simple monthly bills, guaranteed uptime, and the infrastructure is looked after for you.


If you were trying to do automated software testing on native OS X machines (instead of virtualization), I suppose it could make sense. Or something.


That's not terrible. We have a couple of high end HP machines that are 4U and cost more than that each.


Volume discounts?


What kind of testing do you do with this?


How much money the VC is willing to waste


96(!) macbooks, 3D printing and thermal imaging. I think you're spot on.


This made my day. Unfortunately, it's probably accurate...


Meh, you could resell them and get most of your money back.


> "Synergy, you know what that means?"

> "Does it mean take a stack of cash and light it on fire?"



Ah nice, thank you.

They have tons of customers using this setup (MATLAB on MacBook Pros), in my uni at least, so it makes a lot of sense.


Thanks for this.

Now I feel the OP's usage is justified.


An answer to this would be great. I'm not sure I'm going to be able to stop thinking about that question for a while.


For many people that need to develop, test, and deploy on Mac hardware they're doing iOS app or OS X development.

For example, you may have a MacBook Pro, but it's not powerful enough to run tests in a few minutes while you do other work on it at the same time. In this case, you'd want another Mac, like a Mini, Pro or Xserve, to send jobs to.

Anyone using Xcode has to run it on a Mac, and often you're wasting precious development time when your jobs are taking up your MacBook's processing power for an hour while it tests your latest build.

Travis-CI, the continuous integration platform, will send your jobs to a cloud of Mac servers when they require OS X to build against.


Also, Mac Minis and Pros in a rack: http://photos.imgix.com/racking-mac-pros (still seems weird to me, but makes a bit more sense than racking laptops…)


(I wrote the imgix post you linked to)

I think Steve has done a great job on his MacBook Pro rack design, but I do wonder about the advantage the MBP has over the Mac Pro for their workload.

It may be that the GPUs of the Mac Pro don't matter to them, which would reduce the value proposition of those systems by a fair amount. The MBP also has an advantage in that it was just updated, and the current Mac Pro is a bit older. That advantage is hopefully just temporary, but it is pretty likely that the MBP will continue to receive more frequent updates in the future.

You can definitely squeeze more systems into a rack with the MBP, but my preference would still be to go with the Mac Pro:

  No batteries on your datacenter floor
  No power bricks
  Better airflow
  No screen burning electricity (although this is minimal)
The batteries particularly worry me. They're a potential fire hazard, and over time you'll need to replace them (even though they're otherwise unused) before they expand and deform the system chassis. They also make controlling a system via its power outlet annoying: after the outlet is cycled off, you have to wait hours for the battery to discharge before cycling the outlet back on has any effect.


Here is a Q&A where he answers some of the main questions we all have in mind.

https://simbimbo.wordpress.com/2012/12/11/102/


At first I thought the headline meant he was using MacBook Pros from 1996.


That would be "'96 MacBook Pros..." or I guess, since it was written by a Mac user, "’96 MacBook Pros..."


This is ridiculous! Is there no other way to test features on OS X, like a cloud environment?

Not only is this a waste of money, but buying all that hardware is also not eco-friendly.


That's pretty amazing. It would be interesting to hear more about the testing they are doing.


Why create something like this?


Test farm.


There was a similar post about ARM Chromebooks. A few years ago it was impossible to find stable ARM hardware that could sustain a decent load without crashing. The company had to buy a bunch of Chromebooks, strip the batteries, and put them into a rack.


> There was a similar post about ARM Chromebooks. A few years ago it was impossible to find stable ARM hardware that could sustain a decent load without crashing. The company had to buy a bunch of Chromebooks, strip the batteries, and put them into a rack.

Well, no. The guy putting that rack together didn't know what he was doing and insisted on using the stock AC adapters for the devices. You NEVER use the stock AC adapters in a cluster. They are (usually) made to be cheap and not to operate at full load. Maybe 5% will fail under continuous full load; put 16 in a cluster and now you're looking at a roughly 60% chance of at least one failure.
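That "60%" figure checks out as a back-of-the-envelope binomial estimate (the 5% per-unit failure rate is the guess above, not measured data); a minimal sketch:

```python
# Probability that at least one of n independent adapters fails,
# given a per-unit failure probability p.
def p_at_least_one_failure(n, p):
    return 1 - (1 - p) ** n

print(round(p_at_least_one_failure(16, 0.05), 2))  # 0.56, i.e. roughly the quoted 60%
```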

ALWAYS ditch the bundled AC adapter and use a single, good quality, high power PSU that branches out to all the boards. 5 volts and 40 amps, for example. These PSUs are more like 99.9% reliable, and as a bonus output much cleaner power.

The only reason that this guy had success with Chromebooks is because laptop PSUs are typically sized at 3x capacity (for battery charging). Running a stock PSU 24x7 but only at 30% output greatly reduces the failure rate.

edit:

http://www.systemcall.eu/blog/2014/06/trashing-chromebooks/

https://news.ycombinator.com/item?id=7876235

Besides ignoring the power supplies, the original author used SD cards on the HK boards instead of eMMC, which is another reliability no-no. Though HK is pretty good about shipping quality PSUs with their hardware, so I suspect it was uSD being flaky in this guy's case.


Do you have a link? This sounds like an interesting problem.


It was a few years ago; no link, sorry.


I've had the distinct pleasure of opening an Apple package twice in my life. Looking at that cart full of boxes, 96 in one day/week/whatever... I don't know what to say about that.


Any idea who that Steve is and what he's testing? Parallels?


> I know some of you will reply with the standard “Why didn’t you just use Linux?”

No, I ask myself: why don't you just use the Mac Pro (i.e. the one that isn't a laptop)?


How much power does this draw?


I think those are 15" MBPs; their stock power supplies are rated for 85W, so we're looking at a maximum draw of just over 8kW for the laptops alone.
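The arithmetic behind that estimate (assuming the 85W adapter rating is the peak, not sustained, draw per laptop):

```python
# Back-of-the-envelope peak power draw for the rack of laptops.
laptops = 96
watts_each = 85  # rating of the 15" MacBook Pro's stock power supply

total_w = laptops * watts_each
print(total_w, "W =", total_w / 1000, "kW")  # 8160 W = 8.16 kW
```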


Why all this stupidity?



