Only just realised from this that MacBook Pros no longer have an Ethernet port! When did that happen?
You need to buy an expensive Thunderbolt adapter. I can understand removing it from the ultra-portable, super-thin models, but not from the top-end professional model.
If you stay within the Apple ecosystem like they really want you to, then you don't need that, because your Thunderbolt Display has an Ethernet adapter as well as some extra USB ports. The Apple monitor is their replacement for the docking station: power and connectivity, as well as a place to put the laptop for reasonable space efficiency.
You don't need the Thunderbolt adapter, but if you want gigabit Ethernet it's either Thunderbolt or USB 3 (as you mentioned). One of the perks of the Thunderbolt adapter is that it doesn't require any third-party drivers, which tend to be hit or miss with the USB 3 adapters.
Well, the MacBook Pros are ultra portable and super thin now. The laptop would probably have to be at least 20% thicker to make enough room for an Ethernet port. And since 90% of MBP Retina owners probably rarely if ever use Ethernet, it's a good tradeoff to make. The Retina model is much lighter and easier to carry (especially with one hand) than its predecessor.
I think Apple could have come up with a solution without compromising on this. What about a neat folding Ethernet port that flips out and is flush when not in use? I remember some old ThinkPads had something like that but it might have been a modem port (RJ11/RJ14 not RJ45).
The rare use is what makes it a problem. It's not worth the investment to buy and always carry an adaptor for that one time you really need it.
These flip-out connectors were always the first thing to break on your notebook and would render your only useful connection (modem, ethernet) completely useless.
The Thunderbolt widget is less than ideal, but it fits. Hopefully USB-C will introduce more options with a lower price-point.
Taking someone else's widget, tweaking it, rolling it out across their product line and claiming to have built it from scratch isn't exactly a foreign concept to them...
I welcome no longer having a gaping hole in the side of my computer. Wi-Fi has come a long way, and if people want to use Ethernet it can be run through a more modern, multi-purpose port with a smaller footprint.
One con to doing that is how relatively large that would be inside the computer. Have you seen the inside of these computers? There is almost no space to spare.
Although adding the 3.5mm headphone jack to an auxiliary board was super smart; people keep dropping them with headphones still plugged in.
I wish I knew why Apple's top-quality engineers didn't embed or bundle a USB-C to 60 GHz WiGig docking station. Having only a single USB-C port is absolutely fine if you have a wireless 7 Gbps docking station.
Is there some reason why Apple doesn't promote WiGig?
Personally I wish they used ultra-wideband radio (>500 MHz bandwidth). There are numerous reasons fellow RF enthusiasts will recognize in UWB radio. But I am happy with whatever technology allows me to have a cable-free desktop 😌 This is the reason I loved the original Ubuntu Phone (with its powerful specs).
And pretty freaking expensive too. $250 for a thunderbolt docking station, give or take. Despite the fact that the company will freely provide a $2,000 laptop and a $600+ monitor, getting them to pony up for a docking station has proven to be remarkably difficult.
As a WFH employee I have the opposite issue, I've got a ~$2,000 ThinkPad W540 and matching dock, but I have to provide my own monitor (which honestly is fine, due to amblyopia of the left eye I had as a child I lack peripheral vision for large displays - it's easier for me to just shop for my own).
Yeah, I was disappointed by that, too. It's not about the cost, I don't care about the extra $40 or so, but now there is one more "dongle/cable" I need to worry about.
You're right that it's more about not having it with you when you need it. However it does add up.
£25 for Ethernet and then another £25 if you want FireWire too. And £25 for VGA and £25 for DVI (though to be fair you can get this from the HDMI port). But you can only have 2 of these plugged in at a time. Other laptops of this price come with all the adaptors in the box if the ports aren't integrated.
I like having the option of Ethernet available easily. For example, BT have been having big internet issues for the last few days and it's handy to rule out the WiFi. Also with a previous ISP (sadly not with BT) the WiFi was the limiting factor and I had to plug in to get full speed. Then the HDD was the limiting factor!
A $30 adapter for a $2000+ machine isn't what I would call expensive. It does make the machines a lot thinner though and normally, you don't need wired Ethernet any more.
I easily reach 60+ MB/s now over WiFi, so for the times where I really need the additional power I don't care about having the additional dongle with me.
It's not expensive, it's inconvenient. Being able to quickly jack in to a gigabit network and not have to worry about fiddling about with wifi keys and unreliable speeds is something you should get with a high end laptop like an MBP.
I've been using (and carting around) a Thunderbolt to Ethernet adapter since I bought my first Retina in 2012, and it's really not all that bad. If you're using it daily, then carrying it around isn't a huge ask.
In the office, all I plug into my MacBook Pro is the Thunderbolt connector to the monitor and the power adapter. The monitor is connected via wired ethernet.
This way I only have to plug in two cables instead of an array of cables. This is why I really don't miss the Ethernet port directly on the machine. For me personally, the gains in portability due to smaller size (and weight) are way more relevant than the theoretical ability to plug in an ethernet cable which I never need.
There have been about 10 times in the last few years when I have walked into a meeting someplace where there wasn't any guest wifi, or we needed the internal network for other reasons.
Here in Switzerland companies stopped granting guest access over the last few years. You're expected to bring your own infrastructure which usually boils down to tethering your mobile.
I can see that this is very much country dependent (we have truly unlimited plans that allow tethering), but at least for me, the smaller size of the machine trumps the ability to plug ethernet cables due to the general lack of available cables to plug.
The MacBook Pro I bought last year was the first that didn't have one. I seldom used Ethernet but I still miss it. Apple probably wouldn't use it but can't they develop a small connector for the next generation thin laptops? Microsoft Surface form factor is going to become prevalent in the PC world.
I'm more miffed about Apple's USB Type-C strategy... but on the other hand, docking stations have their value. I was disappointed in this as well, but nevertheless am an rMBP user... so until Henge upgrades this to include Ethernet:
I'm sure there are other options; it is a hassle that Apple removed it, but on the road I rarely need Ethernet, and at the desk, it's kind of easy to just plug in.
The trick is having only one connector. For normal MacBooks they will probably have multiple USB-C ports (for power and other stuff), maybe one USB-A, a card reader, video, etc.
Another problem: it's very slow to actually connect to the wired network. I'm using it almost every day on different networks, and I just can't get used to the wait.
One AP per 5 active devices, give or take. Plus positioning them and managing their power and ensuring they do proper hand-off as you move around the office, plus...
Providing reliable wifi for multiple people with multiple devices is a hard challenge.
It's shocking the number of people who don't grasp this. "It works in my home, why isn't it the same in the office." Is it misinformation? Have people become so used to things "just working" in home they don't think of the technical differences between a couple devices and a couple hundred devices?
> Have people become so used to things "just working" in home they don't think of the technical differences between a couple devices and a couple hundred devices?
Until bitten by a problem, yes, that's exactly the thinking. In theory, a single AP can handle two thousand plus simultaneous connections. Most users have found this number, and don't think beyond it.
However, most APs don't have the memory to manage more than 40 or 50 keys for encrypted connections. Throw in shared bandwidth on this maxed-out AP, and suddenly the (again theoretical) 1 Gbps+ connection is down to less than 20 Mbps per user, or 1/5 that of a wired connection of ten years ago.
That's obviously not acceptable, so the best way to bring up the average bandwidth per user is to have fewer devices on each access point. And when each user is bringing 2-3 wifi capable devices with them everywhere they go...
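The shared-bandwidth arithmetic above is easy to sanity-check. A minimal sketch, using the illustrative numbers from the comment rather than measurements from any real AP:

```python
def per_user_throughput_mbps(ap_throughput_mbps, active_clients):
    """Ideal even split of one AP's total throughput across its clients."""
    return ap_throughput_mbps / active_clients

# A theoretical "1 Gbps" AP saturated with ~50 associated clients:
print(per_user_throughput_mbps(1000, 50))  # 20.0 Mbps per user
```

Real-world airtime contention, retransmits, and co-channel interference only make the per-user figure worse than this idealized split.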
>Have people become so used to things "just working" in home they don't think of the technical differences between a couple devices and a couple hundred devices?
Are people so used to working at inept and el-cheapo companies that they cannot fathom company-wide Wi-Fi with APs (professional ones, not the home router that came free with your cable connection) serving "a couple hundred devices"?
Professional APs are in the $1,000-2,000 range, and you need one for every five-ish users. So for a small company of 20 people (which won't have its own IT professional), you're looking at an initial outlay of between 20 and 40 thousand dollars (IT contractors are expensive, and you have to include the Ethernet drops, rack, routers, patch panels, labor, etc.), and a few grand a month for maintenance and support.
Compared to using plain Ethernet drops, the cost is hardly negligible, to the point where it isn't worth it for many companies.
My company has been going through the process of implementing this for a new office to hold around 100 people, and the amount of planning required to provide good coverage at a reasonable price has been enlightening to observe.
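For what it's worth, the back-of-the-envelope outlay above can be sketched like this. All figures are the rough assumptions from the comment (one AP per five users, $1,000-2,000 per AP), not vendor quotes:

```python
def ap_count(users, users_per_ap=5):
    """Number of APs needed at roughly one AP per five users."""
    return -(-users // users_per_ap)  # ceiling division

aps = ap_count(20)               # a 20-person office
print(aps)                       # 4
print(aps * 1000, aps * 2000)    # 4000 8000 -- AP hardware alone
# Ethernet drops, rack, routers, patch panels, and contractor labor are
# what push the total toward the $20-40k range quoted above.
```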
> Heck, your $30/night Motel 8 can manage that...
Not in my experience. At least not well enough to support bandwidth above the 100Kbps range.
Probably never heard of professional office routers. You can have several times that.
Even in 100+ person companies, every such company I've been at already has all the company-wide Wi-Fi and APs it needs, so I don't see why the parent's company couldn't.
As for "ensuring they do proper hand-off as you move around the office", it's much easier with APs than plugging and unplugging ethernet cables -- which was the alternative we were discussing.
Try doing the math. Take your bandwidth expectations from such a connection, and divide the total bandwidth provided by an AP by that value. The answer is typically between 5 and 10; less if you're in an office building where each floor is providing their own WiFi network and you have to contend with interference.
Working with docker containers, virtual machines, streaming, and VOIP/hangouts... bandwidth becomes the main limiter to the number of users per AP, not the memory or other capabilities of that device.
> it's much easier with APs than plugging and unplugging ethernet cables
For the user, and only when the handoff actually occurs smoothly. When it goes wrong, folks have to power cycle their WiFi, or their entire machine. Compared to picking up a nearby Ethernet cable and plugging it in, this isn't that challenging usually. Of course, having to also pull out a dongle can get aggravating, but we were discussing that as well.
For the ethernet port, it is very much Apple's style... but either way, using consumer Apple devices for servers only make sense if you get them for free (civil forfeiture for example)...
Depends where you are. In my house in Chicago next to some high rises I could see 200-300 access points. My own fully powered 802.11ac access point had a range of about 12 feet. Worked for some rooms, but not for others. And the signal was jumpy. Was super nice to have a hardwire at my desk so I could go full speed.
> what are the most common reasons to still prefer Ethernet port?
Attempting to provide wireless coverage for a densely populated open office gets pretty expensive pretty quickly. It can also degrade the signal for everyone in the office. The Ethernet port just becomes a more feasible way to get stable internet at your desk.
"We require Mac OS X because the products we make run on OS X and we believe in testing on the same hardware our customers use, this helps produce a better product".
If the software they're testing is used mainly on MacBook Pros, then they want to test on MacBook Pros. Things like GPU and CPU differences may make a difference to how their software performs.
"We have data centers with thousands of machines configured with all 3 OS’s running constant build and test operations 24 hours a day 365 days a year. This is just a small look at the Mac side of things."
Any Mac hardware will do, like Minis, Xserves, and Mac Pros. Mac Pros work really well, and if you just rent them from a data center, the capital expenditure isn't there, so you don't get hit with the $3k price tag.
Without knowing what product is being tested, how can you say that "Any Mac hardware will do"? Like I said, if they want to test on the specific hardware that their customers are running the software on, you can't know that for sure. A Mac Mini has a different CPU/GPU to an Xserve, likewise a Mac Pro, likewise a MacBook Pro.
Sorry to offend; I was referencing the "We require Mac OS X" part more liberally. It looks like this author could use any hardware as long as they had HDMI enablers to enable the GPUs in headless servers like the Xserve and Mac Pro.
I don't think they're opening up each row of MacBook Pros continuously for their software, though. I think they've just got the lids open to make sure the GPU is enabled for better performance in remote sessions.
There are ways around the lid opening thing as far as activating the GPU (same issue applies to Minis and Pros that have no monitor attached in a datacenter environment).
The big requirement for keeping the lid open is probably to control the temperature. You can run a MBP with the lid closed, but it will get pretty toasty, and these aren't being used for a typical laptop workload one presumes.
Build and test is cool. Doesn't need screens, though. Unless you're actually testing the graphics card, in which case you might as well use the Mac Pro.
A potential reason is that they need to see how their tests affect screen burn. They mention they specifically need Mac hardware, so this could be why they specifically need Retina screens.
It might have more to do with running GPU-assisted tasks through remote sessions. You need to either leave a screen attached and on, or trick the device into thinking one is attached and on, to take full advantage of the GPU on Mac hardware. You can actually do it with any Mac hardware (even headless devices like Mac Minis and Mac Pros) but it's not advertised well enough.
For example, with Mac Minis in a server environment, you need to plug in an HDMI enabler to get full GPU performance from the system while using it over remote access.
The HDMI dongle is a good solution for most users. It can also be worked around in software (although it is not trivial). It's kind of annoying that OS X requires jumping through these hoops, but at least there's a few solutions out there.
"They are actually being held open 7mm by a custom 3D printed wedge. This opening allows for the screen to be used for testing as well as ample air circulation. You can’t see the temperature sensors tucked into each notebook’s keyboard area."
Air circulation makes sense. There is a vent in between the screen pivot and the base, and also the gaps in the keys allow for some heat to escape.
Agreed that the screens should be turned off, though. Could just turn the brightness down to 0.
It is a complete no-no, even if your DC doesn't say anything to you about it.
Just think of the potential loss of equipment if the sprinklers were to activate in a datacenter suite -- and if the flammable material is located in your cage, it's going to be you or your insurance covering that.
I have only seen dry-pipe water systems lately, since halon is long gone.
I did see one system that used HI-FOG sprinklers. Smaller water droplets are supposedly able to extinguish a fire with significantly less water. You're still sad, but maybe you can salvage more equipment.
I've seen a few chem agent and inert gas systems, but only when someone realized cost of equipment + loss from downtime > cost fire suppression system, which is less often than it should be.
Dry pipe systems are probably the most common, though.
I wonder why all of the display backlights are on? I understand having the laptops open to stop them going to sleep without an external display, but I'd've thought you could turn the backlight all the way to "off".
>I understand having the laptops open to stop them going to sleep without an external display, but I'd've thought you could turn the backlight all the way to "off".
You can even disable sleep with the lid closed[1], but admittedly that might interfere with ventilation, since there are vents in front of the hinge.
Yeah perhaps, but I'd've thought some software mechanism (regular pings etc) would be easy enough to offset the power / heat of having the backlight on. Although I guess backlights are just a few LEDs these days, so probably actually not too bad to keep on.
This is pretty neat, but I wonder how much more dense this configuration could be made if one took away the display, keyboard, battery, and chassis of the laptops and just had the motherboard, which (presumably) is fully integrated with a DIMM connector.
If you switch to Mac Minis and assemble multiple Minis into a private cloud running VMware, you can have a much more efficient setup. You can do it with Xserves and the latest Mac Pros as well.
Is it really simpler and more efficient to have 96 individual power bricks with custom mounting hardware rather than one (or a few) larger, high-efficiency AC to DC converters and just distributing DC within the rack?
You would have to terminate in MagSafe connectors. You'd either have to butcher the wiring on all the power bricks or come up with some Frankenstein thing with airline adapters, which I believe are now discontinued for some reason.
I'm guessing it's because airlines are moving toward just having standard power jacks. In fact, I can't remember the last plane I flew on that actually had the power jack the airline adapter needed.
That's an expensive rack. If the average MBP cost $1,500, that's a total of $144,000 in the one rack. They must have some serious reasons to create something like this.
Are there good reasons to believe that "enterprise" hardware is more reliable in practice?
Anecdotally, most desktops I used lived much longer than most servers I worked with. I know a Compaq desktop that worked as a server for 11 years. Still does. One hardware failure in that entire time (the power supply). Could be a result of lower workload, but still...
There are superior aspects to the design of "enterprise" hardware (or rather, rack mounted server hardware) that have been mentioned by others: ECC RAM, redundant PSUs and HDDs, motherboard designs that (hopefully) are laid out for ideal airflow, etc.
Putting all of this aside though, the real secret to the superiority of hardware meant for the datacenter often comes down to the practice of binning. Manufacturers have different tolerances for the products they produce; that's why Intel has a million models of CPUs, Seagate produces so many different hard drives, and Samsung sells DRAM chips to other vendors while making its own DIMMs.
The products that perform the best are binned in the server-y bins, the others are moved down the list until they fit another bin. No manufacturer wants to discard parts if they can possibly avoid it.
Sometimes you're actually getting a deal, but a lot of the time you're just trading reliability for cost when you use lower binned items like desktop hard drives in a server environment. Sometimes that trade-off is worth it though.
Sure, but we're not talking about overall system reliability. We're talking about hardware dying. RAID will prevent data loss, but your actual disks might fail as often as those in a vanilla PC.
ECC RAM will both work-around and warn you about the start of RAM failures before complete DIMM failure.
And sure, consumer drives have the same MTBF as enterprise drives in practice[1], but: when RAIDing consumer disks, make sure that you get models that immediately report read/write errors, instead of retrying, which is the main (or only, if SATA) difference between enterprise and consumer drives.
If you have a nice RAID setup, you DO NOT want a drive to retry a failed block read 15 times before reporting a problem, and causing latency: you want the failed read to be reported immediately, so your controller can pull the same block from a different disk, and start diagnosing possible faults.
The refurbished machines are a decent deal and I haven't had any problems buying them for personal or business use in the past, but the big limitation is on supply at any given time and the configs offered. Refurbs are largely stock configs and you can usually only buy 5-10 at a time. There are big benefits to config uniformity and bulk purchases, so refurbs can be tough in that respect.
Also, no one buying in this quantity should be paying list price to Apple, which cuts into the refurb discount considerably (if not outright eliminating it).
$1500 is the entry price for 4-way Xeons, and that's before you factor in the chassis, motherboard, RAM, and PSUs. You can easily find 4U servers more expensive than that; the rack seems to have a 32U capacity and could fit 8 such servers.
A standard server rack (fully populated, just regular app or web servers) can easily be 400k+. Everything built to run in a datacenter is pretty expensive, even if you've done your negotiating work and aren't foolishly paying list.
With such a large capital expense, it makes a lot more sense to rent dedicated mac hardware from a data center. You get simple monthly bills, guaranteed uptime, and the infrastructure is looked after for you.
For many people that need to develop, test, and deploy on Mac hardware they're doing iOS app or OS X development.
For example, you may have a macbook pro, but it's not powerful enough to run tests in a few minutes while you do other work on it at the same time. In this case, you'd want another Mac like a Mini, Pro or Xserve to send jobs to.
Anyone using Xcode has to run it on a Mac, and often you're wasting precious development time when your jobs are taking up your MacBook's processing power for an hour while it tests your latest build.
Travis-CI, the continuous integration platform, will send your jobs to a cloud of Mac servers when they require OS X to build against.
I think Steve has done a great job on his MacBook Pro rack design, but I do wonder about the advantage the MBP has over the Mac Pro for their workload.
It may be that the GPUs of the Mac Pro don't matter to them, which would reduce the value proposition of those systems by a fair amount. The MBP also has an advantage in that it was just updated, and the current Mac Pro is a bit older. That advantage is hopefully just temporary, but it is pretty likely that the MBP will continue to receive more frequent updates in the future.
You can definitely squeeze more systems into a rack with the MBP, but my preference would still be to go with the Mac Pro:
No batteries on your datacenter floor
No power bricks
Better airflow
No screen burning electricity (although this is minimal)
The batteries particularly worry me. They're a potential fire hazard, and over time you'll need to replace them (even though you never use them) before they expand and deform the system chassis. They also make power outlet control over a system annoying, since after cycling the outlet off you'll have to wait hours for the battery to discharge before the machine actually powers down.
There was a similar post with ARM Chromebooks. A few years ago it was impossible to find stable ARM boards which could sustain a decent load without crashing. A company had to buy a bunch of Chromebooks, strip the batteries, and put them into a rack.
> There was a similar post with ARM Chromebooks. A few years ago it was impossible to find stable ARM boards which could sustain a decent load without crashing. A company had to buy a bunch of Chromebooks, strip the batteries, and put them into a rack.
Well, no. The guy putting that rack together didn't know what he was doing and insisted on using the stock AC adapter for the devices. You NEVER use the stock AC adapters in a cluster. They are (usually) made to be cheap and not operate at full load. Maybe 5% will fail under continuous full load. Put 16 in a cluster and now you are looking at a 60% chance of a single failure.
ALWAYS ditch the bundled AC adapter and use a single, good quality, high power PSU that branches out to all the boards. 5 volts and 40 amps, for example. These PSUs are more like 99.9% reliable, and as a bonus output much cleaner power.
The only reason that this guy had success with Chromebooks is because laptop PSUs are typically sized at 3x capacity (for battery charging). Running a stock PSU 24x7 but only at 30% output greatly reduces the failure rate.
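The failure math above is simple to verify. A quick sketch, assuming each stock adapter fails independently with the 5% probability the comment estimates:

```python
def p_at_least_one_failure(p_single, n):
    """Chance that at least one of n independent parts fails."""
    return 1 - (1 - p_single) ** n

# 16 stock adapters at 5% each: roughly the "60% chance" quoted above.
print(round(p_at_least_one_failure(0.05, 16), 2))   # 0.56
# One 99.9%-reliable shared PSU by comparison:
print(round(p_at_least_one_failure(0.001, 1), 4))   # 0.001
```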
Besides ignoring the power supplies, the original author used SD cards on the HK boards instead of eMMC, which is another reliability no-no. Though HK is pretty good about shipping quality PSUs with their hardware, so I suspect it was uSD being flaky in this guy's case.
I've had the distinct pleasure of opening an Apple package twice in my life. Looking at that cart full of boxes, 96 in one day/week/whatever... I don't know what to say about that.