iMac Pro's T2 chip (macworld.com)
538 points by lumisota on Jan 4, 2018 | 279 comments



I know I trust Apple[1] and all that but I (tin-foil hats on) don't like the system on a system stuff going on. I don't like closed systems that I have no oversight into -- what they might be logging, etc. The industry will likely follow Apple here and it's not too much of an issue given how low volume the iMac Pro is going to be, but this could trickle down into MacBooks and that'd be sketchy. I should just go hide in my bunker and build Linux from scratch on a fully GNU open laptop -- alas that's not practical.

[1] more so than Google, because Google uses my data and habits to sell me ads and sells that data to its customers; Apple wants to sell me gadgets and songs and movies. If that changes, I'll drop Apple.


I agree with you entirely, except in the case of Apple. I don't like closed systems either, especially when it comes to security, but, for some reason, I trust that when Apple closes a system, it's for security reasons. Their history with the Secure Enclave on iOS devices has given me a track record of security that I trust and, although it's still a closed system, the fact that they release white papers on every secure system gives me far more confidence than closed systems distributed by other companies.

I won't trust the industry when this becomes the norm but, until Apple does something to violate my trust, they've made it a point to earn trust when it comes to privacy and security.


They have deliberately hobbled Apple Maps, Safari, Siri, and anything else that would be enhanced by big data, for the sole sake of a secure experience.

They created identification systems that are easy to use and deployed them to all iPhones.

If there is any company that actually cares about not monitoring your behavior, it is Apple. They have the best track record compared to the other big players.


Or, more pessimistically, they couldn't get Apple Maps, Safari, Siri working well with big data so they hobbled them and claimed it was for a secure experience.


This comment only makes sense if you accept the premise that working well with big data is more valuable than a secure experience.


I would take the bet that something that provides direct revenue incentives is worth more than something that only an admittedly vocal minority of users even care about, let alone might spend more for.


Isn't Apple's entire business model selling to a minority of users who are willing to spend more for their products?


It's not self-evident that the minority of users who care about privacy is related to the minority of users who prefer Apple products.


I see, so the pessimistic view you suggested above is that Apple has had such remarkable success selling to its subset of the market in spite of its inability to integrate big data into its products. This could be true, though it seems a bit less likely than a more direct explanation that Apple is selling its customers what they prefer.


I mostly only own Apple stuff because of the privacy reasons (and because they are the only company which got scrolling animations/feel right).


When the standard of “well” is Google, who has very much data indeed, that’s not a farfetched premise.


Well, are they orthogonal, or aren't they?


I think I should have used a different phrase than "secure experience". Maybe "sacrifice for security".

Although, I don't disagree with your statement. Considering how they seem to be attempting to catch up on ML projects, maybe it just wasn't as emphasized in Apple's culture; that could be it.

It seems like the AI so far has been powered by acquisitions and they are playing catch up.


I know, and that's why I still buy their stuff, but as soon as that trust is broken it's a firesale on hardware over at gigatexal's. I mean, they are a corporation with a lot of people depending on them for their fortunes; should the era of excess market returns fail or slow, what avenues of revenue might they turn to next? Now that I think about it, it was perhaps a huge deposit in terms of trust when they left Google's mapping service to roll their own maps.


What do you mean by "huge deposit in terms of trust" re rolling their own maps? Am I interpreting correctly that that was a good thing?


Apple took a huge hit business-wise by rolling their own (adequate but inferior to Google's) maps. One of the reasons they made the split was that Google wanted user and usage data that Apple was unwilling to share. Google was offering vector map data (as opposed to tile-based) in exchange for that user data, and Apple refused. So, Apple bought user information protection (or, at least, siloing) in exchange for a loss of map quality. I believe that's the "deposit of trust"


I remember reading that they dropped Google Maps because Google expected them to force users to sign in for the full experience (e.g. turn-by-turn navigation, which at that time was only featured on Android). That was the point at which Apple had to walk away from the deal.


My guess is that the GP meant “huge deficit in trust” and autocorrect or fast typing made that into “huge deposit in trust”. So GP is saying that Apple ditching Google Maps to roll out its own wasn’t a good idea at that time and eroded trust (on that topic, Apple Maps is still pathetic and comical outside of the U.S. and a few other countries).


Nope, I meant what I wrote. Ditching Google for mapping was probably for two reasons: to quit giving Google data from iOS users using Maps, and, corollary to that, to garner more support from the more security-minded.


I value Apple’s stance on privacy, and I agree that Apple balking at Google’s terms to get user information from Maps was the right decision. But I don’t believe it added any deposit of trust from Maps — on the contrary, its own implementation was so poor it got trashed in the press and was the main reason Scott Forstall was fired. I’ll say it again, based on personal experience (even recently): Apple Maps is still so pathetic as to be completely useless in many Asian countries. It didn’t and doesn’t garner any trust in those places.


All maps apps have issues; it's not the easiest problem in the world to solve. I have no doubt Google Maps is better. But I also have no doubt that Apple should get megakudos for Apple Maps: even in the countries where it works worst, at least it's an option that doesn't require you to be tracked.


I really, really wish they would have bought Waze. It was a big coup for Google to buy them up. Had Apple bought Waze, Apple maps would be much better today.


I’m pretty sure Waze never had their own maps, so that wouldn’t have helped Apple at all. And it would have only been a small acqui-hire, since the Waze UI is too opinionated and non-Appley to be the default maps app anyway.


Waze started out as a community-based map building startup. They had their own maps.


I believe you’re still misunderstanding the deposit of trust thing. It’s not about app quality at all or any of that.

Imagine “trust” is a currency, and each user had a “trust account” into which more trust could be deposited (or withdrawn). Apple made a large deposit of trust-as-currency into each user’s account when they axed the Google Maps relationship. Google was demanding a large tax/withdrawal from each user’s trust account. Apple stopped that, effectively making a large deposit of trust back into each user’s account.

Yes, Apple’s Maps has suffered and been inferior. But it isn’t taxing/withdrawing from each user’s trust balance.


I'd say it represented both a deposit in terms of security trust, and a withdrawal in terms of functionality-quality trust.


I think it's gotten better, but if I want to get to a really hard-to-find place and trust that I'll actually get there, I use Google Maps. From a user's perspective I agree. But if Apple could offer a search engine, or did, I'd use Apple's over Google's (a la the use case for DuckDuckGo), just so I wasn't funding Google. But I make compromises like anyone else.


Imagine you are now the only person who has to be satisfied with the confidence you feel in map products.

Otherwise you weight everything equally, giving nobody any preference other than no data and no location.

Now you realize that everyone has disabled maps until you switch them on, because you rely on API location information.

However, the five years since you began enforcing privacy have allowed everyone to reach parity, partly because nobody is piggybacking on the others.

I can't think of a way I could trade off the what-next situation I just described: get everyone using power-hungry chips to get you a chance at the bad guy. How does it work if you balance the players?


I think Apple uses 'security reasons' as a cover to advance their monopolistic business models: a way to keep their platforms locked down, and a way to extract a minimum 30% transaction cost and a yearly fee for access to their walled gardens.


monopolistic business models

Apple are far from a monopoly; they don't even have a majority of the phone market, and their PC share is still in single digits.


Monopoly isn't defined by share of whatever market description you find convenient; it's defined by pricing power: whether the seller can raise prices over some range without losing sales to competitors. There are lots of times when things that descriptively, intuitively, look like the same market turn out to be different markets in practice, because people don't, in practice, substitute between them in response to price movement.


From Wikipedia:

A monopoly (from Greek μόνος mónos ["alone" or "single"] and πωλεῖν pōleîn ["to sell"]) exists when a specific person or enterprise is the only supplier of a particular commodity.


My hypothesis is that Apple uses 'security reasons' as an excuse for their lack of dominance in voice assistants + machine learning. Siri's suggestions are virtually nonexistent compared to Google, Amazon, or even Microsoft's offerings. Siri's extensibility is entirely local, meaning that all new functions you want to develop for Siri need to be packed inside an iOS app for the user to install rather than a cloud function. Siri's 'app suggestions' to the user are also inferior, usually defaulting to the apps you used most recently.


"I don't like closed systems that I have no oversight into"

Whoa there, so you have insight into all those chipsets on your current motherboard? You know, the ones that are outside the CPU, made by third parties and control your audio, communications, video, networking, etc?


Yup, I don't get the GP's comment. You have two systems: A) the T2 SoC, B) multiple chipsets each working on their own. In both cases, it is a closed system.


> You have two systems: A) the T2 SoC, B) multiple chipsets each working on their own. In both cases, it is a closed system.

Sorry, but I have to disagree with that. In the latter case, it's not a closed system, because the manufacturers of the chips (Intel, Realtek, etc.) have publicly available datasheets describing in detail how to configure and use the chips. This allows people to write their own drivers for the hardware.
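To make that concrete, here's a toy sketch of the kind of "driver" a datasheet enables. Everything hardware-specific in it (the 0xfed00000 base address, the 0x04 status-register offset, the link-up bit) is invented for illustration; the point is just that a documented register map is all you need:

    /* Toy user-space "driver" built from nothing but a (hypothetical)
     * datasheet entry: "32-bit status register at offset 0x04; bit 0 =
     * link up". Base address and offsets are made up for illustration. */
    #include <fcntl.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = open("/dev/mem", O_RDONLY);
        if (fd < 0) { perror("open"); return 1; }

        /* Map the device's (hypothetical) MMIO window into our process. */
        volatile uint32_t *regs = mmap(NULL, 4096, PROT_READ, MAP_SHARED,
                                       fd, (off_t)0xfed00000);
        if (regs == MAP_FAILED) { perror("mmap"); return 1; }

        uint32_t status = regs[0x04 / 4];  /* offset straight from the sheet */
        printf("link %s\n", (status & 1) ? "up" : "down");

        munmap((void *)regs, 4096);
        close(fd);
        return 0;
    }

Without the datasheet, that one register access becomes a reverse-engineering project.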

The article says the T2 is implementing, at a minimum:

* RAID

* Sound

* Storage encryption

I have yet to see a public datasheet from Apple describing how a third party OS like Linux can utilize the features of the T2.

You can get chipset datasheets from Intel which describe the registers, how to configure them, chipset IO pins, etc. [1]

Similarly, you can get datasheets from audio chipset manufacturers that describe their chips in detail. [2]

Same goes for many other components in a standard PC system, such as the SuperIO chip, TPM, USB controllers, etc.

What Apple is doing is making more and more of their hardware proprietary, and (to my knowledge) not publishing a datasheet for these replacement components. This will actively harm anyone trying to run a non-Apple OS on the hardware.

Sure, the component datasheets can't help you verify that the chip isn't doing something nefarious internally, but how is that any different from trusting Apple not to have any bugs or do anything nefarious in the T2?

The replacement of components having publicly available datasheets with one that is a black box bothers me.

[1] https://www.intel.com/content/dam/www/public/us/en/documents...

[2] http://realtek.info/pdf/ALC888_1-0.pdf


Thanks for the detailed response. I understand your point completely - I agree that having datasheets publicly available certainly provides a level of transparency.

The problem is that you're already trusting Apple by buying their system which is inherently closed. macOS is a completely closed OS with literally zero information about how these discrete chips may be used. The datasheet provides you with the API to the hardware, but you have no idea how Apple would be using the microphone, for example - whether it is the T2 chip or Realtek.

GP's argument about "closed system" is moot when you're talking about using an inherently closed system - meaning, OS + Hardware.

Also datasheets are what Realtek, for example, wants to publicize. How would you know if there is additional functionality built into the controller for backdoors, etc. that is deliberately left out of the datasheet?


> The problem is that you're already trusting Apple by buying their system which is inherently closed.

What? Perhaps we have different definitions of a closed system.

I mean, even if you buy a Librem you're still getting a "closed" system, because there are binary blobs such as microcode updates that run on it.

The only way you can have a 100% open system is if it's open-source hardware and something like RISC-V (IMHO).

Anyway, with a datasheet for the motherboard components there's a reasonable chance that someone could get coreboot working on the board. Without datasheets, it's nearly impossible to replace the system firmware with a different implementation.

> macOS is a completely closed OS with literally zero information about how these discrete chips may be used.

I think Apple is still releasing the XNU source, so you should be able to glean some information about the device functionality from the kernel module source code (assuming that is also published). [1]

> The datasheet provides you with the API to the hardware, but you have no idea how Apple would be using the microphone, for example - whether it is the T2 chip or Realtek.

So what? I never said I wanted to know how macOS is using the microphone.

> GP's argument about "closed system" is moot when you're talking about using an inherently closed system - meaning, OS + Hardware.

No, it's moot for your specific definition of a closed system. My definition of a "closed system" differs from yours.

> Also datasheets are what Realtek, for example, wants to publicize. How would you know if there is additional functionality built into the controller for backdoors, etc. that is deliberately left out of the datasheet?

You don't. Invest in tin-foil hat manufacturers.

> I understand your point completely - I agree that having datasheets publicly available certainly provides a level of transparency.

From your response I don't get the impression that you understand my point at all.

My point was that Apple is replacing standard components, used in PC designs for decades, with a black box, and not publishing a datasheet.

I didn't argue that macOS was open. I didn't claim Apple should provide the VHDL files of the T2. I just said: if they're going to replace components that have public datasheets with a magical black box lacking any, I don't like that.

My comment was specifically about how lacking a datasheet for the T2 is going to make using the computer with Linux (and without forcing the T2 into "terribly insecure" mode) much more difficult.

[1] https://github.com/opensource-apple/xnu


I do and what you are claiming is:

Knowing the datasheet = Knowing exactly how the chips are being used.

That's not true at all. You have no insight into the source code. Knowing the datasheet just gives you the functionality definition and capabilities of a particular chipset.


Let's agree to ignore vendors going to the additional effort of putting in intentional back doors in their chips for the moment. That's not the issue I'm discussing in any of my comments.

> I do and what you are claiming is:

That is not what I'm claiming at all. The datasheet is the hardware equivalent of an API interface. I have not stated otherwise.

> Knowing the datasheet = Knowing exactly how the chips are being used.

By having the datasheet and the kernel source code you can see how the chips are being used by the operating system.

Without the datasheet, you have to reverse engineer what the OS/kernel is doing to the chip.

If you also happen to lack the OS/kernel source code, then you have to resort to black box reverse engineering.

> Knowing the datasheet just gives you the functionality definition and capabilities of a particular chipset.

This. Is. Exactly. My. Point.

Apple is still, to my knowledge, not publishing any datasheets for the T2. Therefore you CANNOT KNOW the "functionality definition and capabilities of" the T2 inside the iMac Pro except by the methods I describe above (either source code inspection or black box reverse engineering).

None of my comments have been about the internal operations of these chips or what nefarious nation states or three letter agencies may or may not be doing. It was entirely about Apple replacing components with datasheets with a component lacking a datasheet. jfc


Remember, all those components can't "talk" directly to other components. They must all go through the CPU. So if your graphics card wants to make an internet connection, it must go to the CPU, which will then go to the network device.

So if you don't have control over a peripheral (say your GPU for example) then yes, it could be doing things you have no control over. But it can't interfere with anything else unless the CPU says so.

But if you don't have control over your CPU, the "central" processing unit, then it's game over.


Your Ethernet controller and hard drive controller don’t really need to talk to anything else. If either is compromised, it’s already bad.


Ever hear of DMA, dude?


Isn't that a feature of the CPU though?

*edit: sorry, I am wrong. DMA seems to bypass the CPU [1]

1. https://www.csoonline.com/article/2607924/security/stop-snea...


I was about to correct you when you edited your own comment !! :)


Without a central controller, multiple subsystem vendors would have to cooperate using an agreed DMA communication protocol to monitor you and send the information back using the wifi/ethernet chip. Possible, but unlikely.


The DMA communication protocol is already defined. It is part of PCI Express.


On the iMac Pro, will PCI devices be able to DMA into both the Intel and ARM CPUs? Is there a single IOMMU which will arbitrate DMA for both CPUs?


The IOMMU functionality is built into the Platform Controller Hub, which is between the baseboard management controller (the ARM) and the main processor.

Theoretically it would be possible to prevent DMA between the two, but it is highly doubtful Apple would program it that way.


This is what an IOMMU is for.
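For anyone curious what that looks like in software, here's a rough kernel-side sketch using Linux's IOMMU API (roughly the 4.x-era signatures). It is illustrative only: real drivers normally sit on top of the DMA-mapping API instead, error handling is trimmed, and the function name and addresses are placeholders:

    /* Sketch: fencing a PCI device's DMA behind an IOMMU domain so it can
     * only reach one explicitly mapped buffer. Linux ~4.x API. */
    #include <linux/iommu.h>
    #include <linux/pci.h>

    static int fence_device_dma(struct pci_dev *pdev,
                                phys_addr_t buf_phys, size_t buf_len)
    {
        struct iommu_domain *dom = iommu_domain_alloc(&pci_bus_type);
        if (!dom)
            return -ENOMEM;

        /* From now on the device's bus addresses go through this domain... */
        if (iommu_attach_device(dom, &pdev->dev))
            goto err;

        /* ...and only this one window is mapped. DMA anywhere else faults
         * instead of silently reading or writing host memory. */
        if (iommu_map(dom, 0x1000, buf_phys, buf_len,
                      IOMMU_READ | IOMMU_WRITE))
            goto err;
        return 0;

    err:
        iommu_domain_free(dom);
        return -EINVAL;
    }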


He said he didn't like it, not that he refused it in every case. I don't like it, but I accept there are certain limitations I must deal with. It's about trade-offs and where you draw the line.


Not with that attitude, and not with this crowd.

Even if that doesn't magically create good things out of thin air, I would so love to separate for good from those who don't even want good things.


Yes after a full source code review. :-P

Like it or not, you have to place trust somewhere. Maybe it’s not Apple, but pretending one has full visibility over every system is just going to create cognitive dissonance.


I've heard the argument before that you are better off reviewing a disassembled binary than the original source code because you are going to be influenced by layout, variable names, and comments which could mislead you.


At least with Linux I can do a code review. Or hire someone I trust to do it for me. With a closed system that is not an option.


I mean yea, but this is about a processor and related microcode. It sucks, but there is literally no real mainstream open-source, auditable option for the consumer market right now.


There's this:

https://www.sifive.com/products/risc-v-core-ip/

Not quite ready for prime time, but it's a big step in the right direction. At least there's hope for the future.


So you can do a code review on the entire Linux codebase and you are skilled enough to find any security vulnerability? Can the person you hire do it? Would they have found the Heartbleed issue that was in open source code for at least two years?


> So you can do a code review on the entire Linux codebase and you are skilled enough to find any security vulnerability?

My odds there are much better than they would be trying to do a similar audit on macOS.

> Can the person you hire do it?

There are people out there who are extraordinarily skilled at this sort of thing. Security flaws in, say, iOS are found regularly. That's how jailbreaks work. (And, BTW, the fact that Apple doesn't give me the option of opening up the jail on my own phone means that if I want this freedom I have no choice but to run some black-box binary from some unknown Chinese hackers. That is by far the biggest security risk I face when using Apple products and it is a direct result of their policies.)

> Would they have found the Heartbleed issue that was in open source code for at least two years?

They did find it, and I didn't even have to pay them. If I had paid someone to look, they might have found it sooner.

No system will ever be perfect. But I like my odds on Linux much better than on Apple.


My odds there are much better than they would be trying to do a similar audit on macOS.

Based on what metrics? Someone skilled in assembly can and has found many vulnerabilities in closed-source software. The chances of you as an individual auditing the entire Linux codebase and finding a new vulnerability are nil, unless you have some special skills you haven't mentioned that make you better than some of the top hackers and researchers.

There are people out there who are extraordinarily skilled at this sort of thing. Security flaws in, say, iOS are found regularly.

Despite the fact that it is closed. How does that support your point that it's easier to find vulnerabilities in "open" software? You said that you could hire someone to audit the code; could you afford them?

That's how jailbreaks work

Without the code being open...

(And, BTW, the fact that Apple doesn't give me the option of opening up the jail on my own phone means that if I want this freedom I have no choice but to run some black-box binary from some unknown Chinese hackers. That is by far the biggest security risk I face when using Apple products and it is a direct result of their policies.)

Apple has the source code for all of the chips they use and they had a hand in designing most of them. As opposed to the random Android manufacturer who outsources their chips and the manufacturers give the OEMs binary blob drivers.

Even Google said they couldn't upgrade one of their devices because they couldn't get drivers for one of their chips.

Do you really run your Android phone with all open source software that you compiled and installed yourself? Including the various drivers? Have you audited all of it?

They did find it, and I didn't even have to pay them. If I had paid someone to look, they might have found it sooner.

And they've also found bugs in closed software, and if you pay someone enough they could find stuff too.

No system will ever be perfect. But I like my odds on Linux much better than on Apple.

Based on what objective metrics?


> Based on what metrics? Someone skilled in Assembly can and has found many vulnerabilities in closed source software. The chance of you as an individual being able to audit the entire Linux codebase and finding a new vulnerability are nil unless you have some special skills that you haven't mentioned that make you better than some of the top hackers and researchers.

Auditing C source code is orders of magnitude simpler than auditing a binary. With the same resources, you can get much more done in the former case than in the latter.


Then why do you hear more about Android vulnerabilities than iOS vulnerabilities?


Why do you keep insisting it comes down to him to audit the entire Linux codebase? The advantage of it being open is that there are millions of potential eyeballs out there all with different areas and levels of expertise.


As time goes on, that argument is becoming increasingly debunked. Heartbleed, already mentioned in this thread, is a good example. OpenSSL is one of the most visible pieces of open source software, with millions of actual (not potential) eyeballs on it, and a complete security defeat sat in the code for years. I suspect that having "all bugs are shallow" programmed into our brains means that we don't take the time to review code that we use, because we assume that other people did. Group psychology, just like at accident scenes: nobody's going to help, because they're waiting to see what everybody else does. OpenSSL is popular and open source, so surely people would have found all the issues. Same thought.
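For reference, the bug itself was tiny -- a missing bounds check on an attacker-supplied length field. A simplified sketch of the bug class (not OpenSSL's actual code; the real function also built the response record, and `reply` here is assumed large enough):

    /* Heartbleed-style bug, simplified. The peer sends a heartbeat payload
     * plus a claimed length; the broken version echoes back claimed_len
     * bytes without checking it, leaking whatever memory follows. */
    #include <string.h>

    size_t broken_heartbeat(const unsigned char *payload, size_t claimed_len,
                            unsigned char *reply)
    {
        memcpy(reply, payload, claimed_len);   /* BUG: claimed_len unchecked */
        return claimed_len;
    }

    size_t fixed_heartbeat(const unsigned char *payload, size_t claimed_len,
                           size_t actual_len, unsigned char *reply)
    {
        if (claimed_len > actual_len)          /* the essence of the fix */
            return 0;                          /* silently discard (RFC 6520) */
        memcpy(reply, payload, claimed_len);
        return claimed_len;
    }

Millions of eyeballs could have caught that one missing check; for roughly two years, none did.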

OpenSSL is also extremely arcane. I tried to work on it once, and spent days simply understanding the data structures. It was, when I was working with it, entirely undocumented. Out of those millions of eyeballs, say a few dozen completely understand the library, and a percentage of them have the capability to review the exact algorithms being implemented. Simply publishing source code is not a silver bullet to gain competent reviewers and contributors, otherwise Linux would be bug-free and have bug-free drivers for every device in existence.

Linus's Law has compelling arguments against it. esr may have been wrong about the bazaar.


Because he said he could do it. Those millions of potential eyeballs didn't find the Heartbleed bug. And out of the "millions" of potential eyeballs how many are skilled enough to find anything?

There are people looking for, and finding, security vulnerabilities in closed software. Why do you assume that it's any harder for a skilled assembly-language coder to find vulnerabilities in closed-source software? Heck, back in the day I knew assembly language well enough to read and understand disassembled code, and I wasn't that great at it.


"Those millions of potential eyeballs didn't find the Heartbleed bug."

Sorry, how would we be talking about it if they didn't find it?


After two years when it was in open source code?

How is that any better of a track record than vulnerabilities found in closed source code?


We don't really know when it was first found, iirc, only when it was publicized.


That's not helping the case for open source software...


Exactly. Open source is awesome, don't get me wrong, but it's not safer by definition. Sure, people can look for problems and openly discuss and fix them, but that's assuming they are whitehats. Blackhats are also looking, all day every day, for exploits in open-source code. And they can find them before whitehats do.


Yes. Not necessarily with the knowledge I currently possess or with the effort I'm currently willing to put in, but if I wanted to put in the necessary effort and study, I could. Having that option open to me has value in and of itself even if I choose not to exercise it.

And the option is also open to you to be really good and find security vulnerabilities in closed source software just like the researchers at Google do and all of the people who found jailbreaks.

But even then, that wouldn't have helped with the latest bug found in x86 microcode...


> he said he could do it

Yes. Not necessarily with the knowledge I currently possess or with the effort I'm currently willing to put in, but if I wanted to put in the necessary effort and study, I could. Having that option open to me has value in and of itself even if I choose not to exercise it.


This is a super important point.

A complete layperson can use open-source software knowing that it has been looked at by at least some part of the community, conferring all kinds of security benefits.


And that same lay person can assume some experts (including some at Google) have looked at closed source software.

But all of the people running Linux on x86 had no way of knowing that there was a bug in the x86 microcode.


These are all terrible arguments, sir. There is absolutely no way to make an argument that closed-source software is just as auditable or secure as open-source software. There will always be some trust required when you use things other people made, but when software is developed in a black box, with NDAs you didn't know existed, or 0-days known about for years, you simply CANNOT make an analogy to Heartbleed. Heartbleed was one vulnerability in a poorly maintained OSS library. It is foolish to use it as your primary example of the failure of OSS security when there are tens of thousands of closed tickets on GitHub, Bitbucket, and GitLab that would argue OSS is easier to audit, and its infosec issues easier to openly discuss.

By coming up with the same examples over and over again in lower threads you are coming close to trolling this thread, and ought to stop and come up with a better argument for why a corporate system-on-a-chip is inherently just as safe as hardware developed with open-source firmware.


So where is the "secure software running on open source firmware" that you can point to as being super secure? In your x86 based PCs running Linux? In the Android devices using chips, shipped with binary blobs that even the phone manufacturers don't have access to?

How has all of the "openness" of the Android ecosystem helped customers get patches for security vulnerabilities compared to iOS users?

And no one has pointed to a real world metric showing open source software being less vulnerable and getting patches faster than closed software.

And that one poorly maintained library was the basis of a mass industry wide security problem. It wasn't some obscure backwater piece of code.


I really wish the community would deal with the reality that open source software really isn't outstandingly secure - and that in fact there are exactly zero reliable processes in place that can guarantee its security.

Suggesting that "the community will find bugs, because eyeballs" is wishful hand-wavey exceptionalism that has been disproven over and over.

It's an unprofessional and unrealistic belief system that has no chance at all of creating truly secure systems.


That's all perfectly fair criticism of a specific type of OSS advocate. But the discussion here is not actually about "it's more secure because it's open"; it's really about "it has a higher likelihood of being secure because all of the code is fully auditable".

Again, Heartbleed is a terrible example, because many, many people have complained for a long time about the cruft building up around OpenSSL. Do you think I don't read the code of an open-source Python module before I roll it into a project? Would I ever consider including a Python module in my code that came to me as a binary blob (not that this is possible ... yet)? Not on your life.

The reality is it's shitty that I have to trust corporations with closed source firmware and I wish it were otherwise.


it has a higher likelihood of being secure because all of the code is fully auditable

If that's true, there should be a way to compare the number of security vulnerabilities in iOS vs. Android, the time it takes to get them patched, and the time it takes to get the patch distributed.

It should also be possible to have the same metric for Windows vs. Linux, the various open-source databases vs. the closed-source databases, etc.

On the other hand, there is an existence proof that closed source doesn't prevent smart people from finding security vulnerabilities -- the number of vulnerabilities that researchers have found in iOS.

But why is everyone acting as if reading assembly code is some sort of black magic that only a few anointed people can do?

And if open-source, easily readable code were such a panacea, the WordPress vulnerability of the week wouldn't exist.


How did you audit your Intel CPU over the last 10 years?


I didn't because I can't (obviously) and this is a real problem, which I am trying to solve by building my own hardware to act as a root of trust:

https://sc4.us/hsm/


Website mentions fully open hw+sw, is this public?


Depends on what you mean. The code is all open source, and the hardware is all off-the-shelf. But the board design has not been released. If you really want it drop me a line and we can probably work something out.


You can reverse engineer the firmware files though. I do this very often when new versions of iOS are released.


Even with Linux on an open system, you're going to be hard pressed to get access to the source code for all of the controllers in your system.


The only system I trust is mutual distrust, where competing belligerents beat the tar out of each other.

The US Govt’s Constitutional balance of powers (executive vs legislature vs courts) is based on this principle. It seems to be the best available solution found thus far.

Note: I am an Apple fan boy.

Source: Am also an election integrity activist, which greatly informs my views on trust, privacy, security, transparency, accountability.


Open-source visibility is hugely exaggerated. Are you going to read one million lines of code or more? You think others will, when every open-source project struggles even to find contributors to fix small bugs?

Sure, it's more visible than closed source, no question, but it's still no angel.

Apple is no angel either but at least they do not pretend to be ;)


Sure, something might fly under the radar for a while, but once there's an accusation of impropriety or a security hole discovered, anyone can look into it.


Open source brings transparency when the publisher is not a malicious party, but a flawed, basically legitimate actor that doesn't have reason to risk the reputational hit of hidden backdoors.

People can then look at what the code does instead of relying on corporate spin.


> I should just go hide in my bunker and build linux from scratch on a fully GNU open laptop -- alas that's not practical.

I'm hopeful that this is at least becoming slowly more practical for some (more technical) people. Open hardware is the big blocker, but there at the very least seems to be much more mainstream popular interest in projects like System76's un-ME'd machines and similar initiatives then there was, say, 10 years ago. Baby steps.

As for building Linux from scratch, while I have quite happily used Gentoo for a while in the past, I think I'd echo some other commenters here on personally auditing every line of open source code: if you're not doing that, distrusting checksummed precompiled binaries is a somewhat odd distinction. The main differentiator is that the code is "auditable", not necessarily that it's actually been audited (see OpenSSL). If it's auditable, I consider that a not-absolute-but-still-significant improvement over closed systems.


> I'm hopeful that this is at least becoming slowly more practical for some (more technical) people.

I would say that it is, and not just for the technical.

Putting Ubuntu on a laptop more than 6 months old is now trivial, and (given a non-exotic WiFi card) works fine. The problem is people being used to certain proprietary apps (Adobe Creative Suite/Cloud, Excel, etc.).

The primary problem, however, is that for the ordinary user there’s little benefit to free software, assuming the machine is new enough to be performant, or the user is wealthy enough to replace it.

I’ll be interested to see if this changes when all the wonderful TPMs and for-your-own-good black-boxes start to get used for DRM.


> not just for the technical

I was mainly referring to putting Gentoo on a free-hardware* machine, so I don't think that would be recommendable to non-technical users for some time, but otherwise, yeah, agree with all your points.

Especially about buying new machines - Linux is very much a real benefit when hardware isn't considered disposable.

* which doesn't really exist in full, yet, but there seem to be some initiatives gaining momentum.


> ...I (tin-foil hats on) don't like the system on a system stuff going on.

These inner systems all already exist in modern devices. Here Apple is consolidating them into their T2, but you're going to be trusting all kinds of firmware regardless.


Yeah, I don't understand the parent post. We have ASICs all over the computer - network offload, sound, IPC, etc.


What’s the difference between this, and all the myriad of component controllers that are on previous computers?


[flagged]


I did the first time I commented, and I don’t understand the difference between trusting dozens of vendors of closed, proprietary subsystems, compared to just one.

In my mind, it’s ‘better’ to have to trust fewer people?


Might I suggest that Apple has a sealed off part of the system because they need something they can trust completely and they don't feel like they can trust Intel?


It's interesting that a default argument against Intel is very unpopular versus Apple. I don't see any reason why that should be; if anything, both should be under the same scrutiny for doing basically the same thing.


reaperducer said it well:

> Considering the number of times US security agencies go to court to force Apple to give them locked up information, fail, and then go to third-parties to extract the information, I'd say we're safe. For now.

Apple has earned their reputation the hard way.


And when we're not safe, it will be too late, if you want to take that argument to its conclusion.


Your comment highlights that there are very different types of security/insecurity to be concerned about.

Hacker OSS types who advocate for the security of OSS based on the fact that the code is reviewable, are implicitly prioritizing security as being an issue of being free of government based attacks on freedom, or perhaps corporate attacks.

The model of security that Apple has pioneered with their “walled garden” approach is concerned primarily with attacks by criminals and black-hat hackers.

In terms of practical concern, this type of security model is far more impactful to most users.

Apple’s resistance to government inquiries is of a piece with its commitment to a walled garden approach.

The resistance to acknowledging the success of this model by OSS advocates indicates a profound myopia, that in my view makes their views on security almost worthless, as they do not include an accurate understanding of what the real world threat landscape actually looks like.

It also fails to account for the game theoretical issues that differentiate different companies approach to security.


well it is sealed off from us and Intel, but do any US security agencies have their hands in there? How would we know?


Considering the number of times US security agencies go to court to force Apple to give them locked up information, fail, and then go to third-parties to extract the information, I'd say we're safe. For now.


> don't like the system on a system stuff going on

This was already the case, right? Previously, there were all these controllers (disk, I/O, cooling, etc) running only-god-knows-what-firmware from only-god-knows-who vendors, and now that's all been rolled up into a single integrated thing.

I could see an argument that this is a bit of a security concern: "There is now one God-level firmware unit with multiple possible attack vectors! Own the USB controller, and get access to the disk controller for free because they're on the same chip..." but it's not clear to me that this would be worse than what we already had...


Yeah, if you looked into it deeper you’d find that this is entirely to make the system more secure, not less. When this thing boots, every stage of the boot is cryptographically verified. Camera, microphone, payments, etc. are all routed through the ARM chip and can’t be accessed through the CPU at all. Compiling Linux and not getting any of these features just so you can theoretically “know what’s going on” (as if you’re going to read the source anyway) does not really make much sense.
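A minimal sketch of the chain-of-trust idea, for the curious. This is not Apple's code: the real T2 verifies cryptographic signatures against Apple's keys, while this toy version just pins a SHA-256 digest of the next stage, which conveys the same "verify before you jump" structure (build with -lcrypto):

    /* Toy verified-boot chain: each stage carries a known-good digest of
     * the next stage and refuses to hand off control on a mismatch.
     * (Real secure boot checks signatures, not bare hashes.) */
    #include <openssl/sha.h>
    #include <stdio.h>
    #include <string.h>

    struct stage {
        const char *name;
        const unsigned char *image;                   /* next stage's bytes */
        size_t len;
        unsigned char expected[SHA256_DIGEST_LENGTH]; /* baked into this stage */
    };

    static int verify_next_stage(const struct stage *s)
    {
        unsigned char digest[SHA256_DIGEST_LENGTH];
        SHA256(s->image, s->len, digest);
        if (memcmp(digest, s->expected, sizeof digest) != 0) {
            fprintf(stderr, "boot halted: %s failed verification\n", s->name);
            return 0;   /* never jump into unverified code */
        }
        return 1;
    }

    int main(void)
    {
        static const unsigned char blob[] = "pretend bootloader image";
        struct stage s = { .name = "bootloader", .image = blob,
                           .len = sizeof blob - 1 };
        SHA256(blob, sizeof blob - 1, s.expected); /* stand-in for baked-in value */
        return verify_next_stage(&s) ? 0 : 1;
    }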


Seems like a great way to stop the Hackintosh movement once every new Mac has this.


Hasn't this already been in MacBooks in the form of a T1 chip? For the touch id?


Yes, but the T2 has much more control over the boot process and hardware. The T1 was relatively isolated, communicating with the rest of the computer only for Touch Bar related stuff, and verifying data in the Secure Enclave.


Not entirely true. The T1 also controls the FaceTime camera, TouchID and probably parts of SMC functionality (ambient light sensor), although that last one still has to be confirmed.


TouchID is part of the Secure Enclave along with Apple Pay and the keychain. I’d expect the FaceTime camera is wired up to it for security, to prevent access from malicious programs. Is the ambient light sensor used for controlling the brightness of the Touch Bar?


I'm pretty sure the FaceTime camera is wired up to the T1 chip so Apple can avoid an extra third-party part that connects the camera to the system via USB.

Those third-party USB webcam chips are also notoriously bad for image quality. This way Apple has total control over the image processing of the camera, using the same technology/software they use on the iPhone.


When using macOS the ambient light sensor is afaik only used for the brightness of the keyboard backlight.

The brightness of the TouchBar isn't related to ambient light and just depends on the usage of the MacBook Pro. In use it's simply on at full brightness; after 1 minute of inactivity it dims to a certain level, before shutting off the TouchBar completely 15 seconds later. While macOS doesn't allow customizing these timeouts, the experimental Linux driver [1] does.

[1]: https://github.com/roadrunner2/macbook12-spi-driver


Out of curiosity, is there a Linux distribution that can be used full time on a MacBook?


All MacBooks prior to the 2015 generation and MacBook Pros prior to the 2016 generation work fine with up-to-date Linux distributions.

Newer ones might still have some limitations like:

- stuff might not work out of the box and needs manual setup (keyboard, Bluetooth, NVMe, Touch Bar, ...)

- internal Audio not working (MacBook Pro >= 2016)

- WiFi not working (MacBook Pro with Touch Bar)

- Suspend not working (MacBook >= 2015, MacBook Pro >= 2016)

- Battery draining significantly faster than under macOS

If you can live with these limitations you can use a MacBook/MacBook Pro very well full time with a Linux distribution of your choosing.


There is an ARM processor in your SSD


The risk of getting found out is huge, and Apple isn't gonna do that. I would think any half-decent security researcher can detect if encrypted data is being sent somewhere it's not supposed to go, and report it to the public.


> I don't like closed systems that I have no oversight into -- what they might be logging, etc.

I’m going to be generous and assume you’re both diligent and technically competent enough to inspect all the security-relevant open software you run - henceforth I’ll speak more generally of you as an arbitrary person.

If we take your view in the abstract, it’s incorrect. In the abstract, you do not have any more oversight over open source software than you do closed source software. You could have that oversight, in the same sense that you could run a 5K everyday, but if we’re being realistic, you do not.

In theory this perspective makes sense (“many eyes make all bugs shallow”), but in practice virtually no one actually examines what their open source software does, which means it’s not actually very helpful - you don’t have very many people picking up the slack for you either. This provides a false sense of security in my experience, because we’re talking about the world as we’d like it to be, not as it actually is.

What I usually observe is a curve - the modal company’s software is less secure than a large open source project, but once a company’s software has any appreciable fraction of comparable open source software (i.e. the company matures), the curve flips and the commercial software is more secure. From what I can tell this happens for two reasons: 1) once open source software hits a certain critical complexity point, it becomes exceedingly difficult to review effectively, and 2) the best and most qualified people for this work can get paid significant, double-take amounts of money for doing it, and are almost entirely employed by large companies. The result is that software like iMessage is some of the most secure that exists in its category, which essentially no fully open source solution can hope to compete with, because it simply lacks the same organizational direction and experienced talent working on it.

I think a small subset of all open source software has something close to idyllic security and organizational direction, typically the software used by sufficiently many people that real talent is drawn to look at it too. Project Zero looks at software like this. Beyond that, there is an extremely long tail of open source software projects (approximately all of them) which are used by a nontrivial number of people but which 1) will never receive a serious security assessment, and 2) will be more insecure in perpetuity than commercial alternatives once those alternatives have had 1 - 2 competent security reviews. Before those reviews, I’d say it’s probably a toss up of two equally suspect options.

In any case, the reason I’m responding with all of this is because it’s something I see pretty commonly repeated that I think doesn’t reflect reality. I hate to be so maximally capitalist about this, but if you’re optimizing your software for security, you probably want to use something produced by the resources of a large company. The best of both worlds is obviously a mature open source project officially developed and sponsored by a large company, but when that fails, I would personally choose closed-source solutions most of the time.


this is why we specialize and trust the findings of other experts through published papers and, sometimes, blogs. most people don't individually audit all of the software they run, but people as a whole cover most of it. especially security-critical things.

with closed-source blobs built into the hardware design, nobody can properly audit it.


> I know I trust Apple[1] and all that but I (tin-foil hats on) don't like the system on a system stuff going on.

This actually isn't that different from what we've been doing for years. This chip is basically just a Southbridge (the standard x86 architecture catch-all chip) with Apple's shiny marketing behind it.


Right, 'cause if they had just used an Intel chip there we could sleep better at night.


Google displays ads for money. It doesn't sell your data; it rents out your eyeballs.


Don't buy apple if you don't like closed systems.


If you'd like to cut past the fanboy fluff... https://en.wikipedia.org/wiki/Apple_mobile_application_proce...

The Apple T2 chip is a SoC from Apple mainly serving as a secure enclave for encrypted keys in the iMac Pro 2017. It gives users the ability to lock down the computer's boot process. It also handles system functions like the camera and audio control, and manages the solid-state drive.


So it's basically Secure Boot, plus: let's remove the SATA controller, the audio chip, and the H.264 chip and roll them into one chip.

It feels like horizontal integration; similar to what Elon Musk does with SpaceX by integrating the pipeline and producing everything internally to save costs.

The article doesn't sound very revolutionary. They're saving money, and maybe a little space, by producing all those subsystems themselves. So long as they don't lock down the hardware to prevent running Linux or other operating systems on it, I guess that's okay.

Still, it sounds very non-upgradable. The article mentions the memory comes in two banks. Is it replaceable, or is it soldered to the board? NVMe SSD cards are super tiny, super fast, and super standardized. I can upgrade the ones in most laptops. I can replace a dying battery. Apple seems more geared to people who don't like to fix things or work on their own machines.


This is called vertical integration, same for SpaceX.

Horizontal integration is like making computers, and also making elevators (e.g., Samsung)

EDIT: that wasn't a correct example. Horizontal integration would be a company making a mobile phone SoC, then making a server SoC, like Qualcomm (since both use the same supply chain).


That's called no integration. It's a conglomerate.


Horizontal means owning everything at the same level. So things like making mobile and server CPUs or acquiring competitors. Vertical is selling CPUs and computers.


How has [edit] T1/T2 device driver support worked out in practice so far for Linux and less-popular alternative operating systems on the new hardware?

https://github.com/Dunedan/mbp-2016-linux (T1) not working: audio, suspend/hibernation. 2/6 models working: WiFi

I'm probably most interested in hearing more about OpenBSD support, I'm not finding much recent info.


You probably mean T1, as the repo covers MacBook Pros and not the iMac Pro. While I agree that hardware support isn't where it should be, it's getting better. Hey, we've even had a working driver for the internal keyboard for several months now. ;)

No, seriously: currently most effort is going into mainlining Bluetooth support for the current MacBook Pros (I expect that to land in Linux 4.17, although it still might get squeezed into 4.16) and preparing the keyboard/touchpad driver for mainlining as well.

The WiFi issue is really a shame, but as far as we know it's entirely Broadcom's fault, as their Linux firmware has an issue preventing it from working properly [1], something which can't be fixed by anybody outside of Broadcom.

And to be clear: that so much stuff isn't working under Linux with the current MacBook Pros isn't really something to blame Apple for. What Apple did is what they always do: rethink how they can improve things, and they did a great job of it. E.g. it makes perfect sense to connect the internal keyboard via SPI instead of USB, simply for power-saving (and probably complexity) reasons. As Apple is the first vendor to do such things, there are simply no Linux drivers for it yet. Or look at Bluetooth, which is now connected via UART. It appears that this will be the way to go for all vendors in the future, but only a few are already doing it, so there are still some rough edges that need to be smoothed out.

[1]: https://bugzilla.kernel.org/show_bug.cgi?id=193121


If "upgradeable" matters at all, you're not buying Apple stuff anyway. You may as well assume everything they make is an iPhone and make your purchasing decisions from there.


> The article mentions the memory comes in two banks. Is it replaceable, or is it soldered to the board?

Depends on your definition of replaceable. Apple can do it, and you can too, since it’s not soldered onto the motherboard. However, you’d have to open up most of the computer and remove the display, which is “unsupported” by Apple.


> It feels like horizontal integration; similar to what Elon Musk does with SpaceX by integrating the pipeline and producing everything internally to save costs.

Since the T2 is Apple's chip, this is vertical integration not horizontal. Owning the supply chain is vertical integration.


It's PCIe/NVMe and removable [1]. But I don't know if it's a standard M.2 slot. I doubt it.

I didn't mind these proprietary connectors until I dropped tea on my laptop. I took it to the Apple Store to get them to retrieve my data, but they said even THEY don't have the means to read data from their proprietary SSD connector, and instead told me to go to a data recovery company.

I understand why they use proprietary connectors, but if they empowered their support techs with the means to recover from catastrophe, it would help the customer a lot more.

[1] https://www.ifixit.com/Teardown/iMac+Pro+Teardown/101807#s19...


The thing is that it's not PCIe/NVMe.


I don’t know for sure, but you can probably still be PCIe and NVMe with a different connector.


You can't be PCIe/NVMe without a flash controller, though. The chips on the storage cards in the iMac Pro are all just raw flash; there's no controller or associated DRAM.


From what I saw in the teardowns, both the RAM and SSD chips are replaceable. It’s not super easy, because you need to take the screen off first, but it is doable.


Take the screen off, then take the whole mainboard out, since the slots are on the back.


How can you say ‘mainly serving as a secure enclave’ and then list a bunch of its other equally (or more) important functions?

It’s not ‘mainly’ any one thing, any more than your laptop’s CPU is ‘mainly’ a JavaScript processor.


Have fun attacking your strawman.


The secure enclave is the only part of this that might be considered unique... and I'm not even feeling like that's a terribly great idea (combining it with other functions).


> For security reasons, the T2 is the iMac Pro hardware’s “root of trust,” and it validates the entire boot process when the power comes on. The T2 starts up, checks things out, loads its bootloader, verifies that it’s legitimate and cryptographically signed by Apple, and then moves on to the next part of the boot process.

If they start putting this in every machine going forward I wonder if this will be the end of the hackintosh


Well, the beginning of the end. It would be 3 years from whatever date they release a new version that locks it to a chip like this and drops support for old models without it. So let's say 4-5 years. Actually, probably longer, since they've been supporting old systems for as long as 8 years (the 2007 iMac, among others), with some MacBook Pros approaching 9-10.

How long you think it would take for them to do this depends on whether you really think they'd just decide to cut off a ton of machines at once and reverse course on really long support windows. And long windows really seem to be the roadmap, if you look at how the iPad 2 was supported as well.

I think they care more about "look at how we support our products!" than about hackintoshes. What I could easily imagine, though, is way more features than just iMessage and such being gate-kept behind "only systems with our secure chip are allowed to run this".


I understand the security benefits but I can't help but just look at this as yet another piece of disposability on an already very "disposable" piece of 5K+ (starting) hardware.

How many iPhones are now in landfills because you couldn't downgrade the OS to an old version that would be usable?


Judging by Apple's track record of supporting iMacs, I don't see this as an issue. I owned a 2007 iMac until 2014 and it was still getting updates. It was running the latest version of OS X up until 10.11.

This machine launched with, as I remember, Tiger. There are machines barely newer that are literally still supported. How is a 9-10 year support window bad? You can't even do that consistently with Windows (look at how many Atom machines that were only 2-3 years old were cut off from Windows 10, or from recent updates to it).

And there's nothing stopping you from turning off the security controller and just installing Linux, either. In 9 years almost all of the hardware will be well documented and old hat.

This could have a full 3-4 year or even longer life as a "pro" machine, then a long life as a backup/secondary machine, get donated to a nonprofit, or even serve as your parents' cool-looking desktop, and be completely obsolete in a lot of senses before it stops getting updates. This has been Apple's proven track record over the past decade.

And for clarification: I'm not shilling here or anything. I think they're screwing up a lot of stuff, and I refuse to buy one of the current MacBook Pros and am still riding along on the old-style Retina Pro. But OS support is something they do a great job at. There are original-style silver-keyboard MacBook Pros still supported on High Sierra.


I agree with your main point about disposability. There shouldn’t be many iPhones in landfills, though. Apple recycles 100% of the materials, so as long as the old phone can be traded in or dropped off in an electronics bin, it won’t be wasted even though the owner will have been out the money by rebuying prematurely.


> Apple recycles 100% of the materials

Do you have a source for that? It sounds hard to believe


Thanks for asking - yes, I was wrong about that.

The main document you should read is https://images.apple.com/environment/pdf/Apple_Environmental... where there are some 100%s, but they don't make this particular promise.

In that report, they list iPhone 6 disassembly recovery in kg per 100k phones (search for Tungsten). Cross-reference with the components by weight for the iPhone 6S (the 6 is not available; see https://www.apple.com/environment/reports/ for all reports). Ignoring battery, screen, and plastic leaves .063 kg/phone, and according to that sidebar, Liam recovers .033 kg/phone, or ~50%. Assuming Apple recycles most of the battery, screen, and plastic, they are recovering ~75% of each 6S. That's pretty good, and it's good to know that the unrecoverable portions are staying out of the trash (and hopefully heat/vapor waste is mitigated), but it's years away from being close enough to 100% to round up.
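
To make that arithmetic easy to re-check, here's the back-of-envelope version in Python. The total weight (~143 g for a 6S) is my assumption; the other figures are the approximations from the reports above:

    # Rough recovery estimate from the figures above; approximations only.
    phone_total = 0.143    # assumed iPhone 6S total weight in kg (~143 g)
    non_bsp = 0.063        # kg/phone excluding battery, screen, plastic
    liam = 0.033           # kg/phone that Liam recovers, per the sidebar

    battery_screen_plastic = phone_total - non_bsp

    # Liam's share of the parts it targets: ~52%
    print(f"Liam share: {liam / non_bsp:.0%}")

    # Optimistic total, assuming battery/screen/plastic are fully
    # recycled through other channels: ~79% with these weights
    print(f"Overall: {(liam + battery_screen_plastic) / phone_total:.0%}")

Depending on the exact component weights you plug in, you land somewhere around 75-80%, which is the point: good, but nowhere near 100%.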


If Apple gets its hands on the phone they have an impressive system for recycling: http://www.businessinsider.com/apple-liam-iphone-recycling-r...

I could not find a reliable source for what portion of each phone is recycled, but Apple is certainly not slacking on the recycling front.


As far as I can tell, Liam is just an R&D project - it's not something they've put into actual production.

E.g. from the 2017 environmental report

> For example, we’ve melted down iPhone 6 aluminum enclosures recovered from Liam to make Mac mini computers for use in our factories

It's only recovered enough aluminum to build a couple of Mac minis for their own internal use


Apple has probably the best recycling program in the industry. I bet there are far more Motorola, Asus, and LG devices in landfills than iPhones.


You can still turn it off and downgrade.


You've got it backwards.

A hackintosh would mean running macOS on non-Apple hardware, so any security checks could just be patched out. In fact, I'm pretty sure that is how hackintoshes work now.


Not really.

I've built Hackintoshes, so I'm familiar with kext replacing and whatnot. My idea was more that Apple could do something to prevent apps or the OS from running on anything that didn't have a T2 chip, and that it might be difficult to get around, much like getting OS X onto exotic or AMD chips can be a bit of a pain. Haven't used one in a few years though. Also, the chip stuff is out of my realm of expertise.

Was just a thought I had.


There’s no way they can do this in the foreseeable future, since they need to continue to support current Macs without T2 as well.


How long did it take them to drop support for PPC Macs in new OS versions? 6 years or something? Could be similar here.


The Intel transition officially started with 10.4.4, IIRC, and the last version of Mac OS X that PowerPC Macs could run was Leopard. Mac OS X 10.7 dropped Rosetta, meaning all software had to be Intel-only. I know that by the time it was branded just "OS X", all system binaries were Intel-only, without a PowerPC slice.


Right, so you're replacing kernel extensions with patched ones? Problem is, Intel is Intel, so again any such limit could be patched.

I understand your concern, but consider it this way: The only real mitigation Apple might have is to offload certain critical functionality to the ARM chip, and even then, they would have to drop support for all macOS hardware without T1/T2.


True; it would be as simple as crucial parts of the kernel being targeted to ARM (to run on the T2). Good luck patching around that without embedding an ARM VM kext to simulate the T2; and then good luck getting that to be at all performant.


Well, it's easier to bypass a software check like that than it would be to create a hardware check like the T2. It's unlikely to hurt hackintosh anytime soon.


Yes. If part of the OS is compiled for ARM and needs the separate chip to run on, this will be hard to hack.


I think that hackintosh for Apple is like pirated Windows XP for Microsoft: it brings more profit than losses by tying people to ecosystem.


I wouldn't be so sure, considering the sort of machines people are building would sit at the iMac Pro's spot in the lineup in terms of power.

Of course they won't care about $400 hackintoshes, but if people are putting together $2K-$3.5K ones, it starts to look more like a lost iMac or iMac Pro sale.


No one who was going to buy an iMac Pro is likely to set up a hackintosh; it's just not stable enough.


This is slightly off-topic, but developments such as these really make me look forward to Apple finally getting around to releasing their new Mac Pro (coyly quasi-announced as not due in 2017, but supposedly in development for a late 2018 or early 2019 release). This kind of technology might not be revolutionary per se, but it makes for a very solid technological basis to build a system on. If the Mac Pro turns out to be an iMac Pro divorced from the screen and with some internal expandability in a suitably fancy-looking case, I'll be very content.


I hope the Mac Pro will be a good old case with standard components and no extra price premium. I really like macOS, but all of Apple's current computers are just terrible for me. The Mac Pro is my last hope for staying in the Apple ecosystem.


Considering how long it is taking them to develop it, and considering the direction they have taken with their emoji-keyboard MacBook Pros with custom chips, and now the architecture of the iMac Pro, the Mac Pro is going to be anything but standard.


But they have already developed a non-standard Mac Pro in 2013, and it is widely considered a failure. I'm not sure if they'll really make the same mistake twice?


It's not just widely considered a failure; they publicly admitted it was a failure.

https://techcrunch.com/2017/04/06/transcript-phil-schiller-c...

Craig Federighi: I think it’s fair to say, part of why we’re talking today, is that the Mac Pro — the current vintage that we introduced — we wanted to do something bold and different. In retrospect, it didn’t well suit some of the people we were trying to reach. It’s good for some; it’s an amazingly quiet machine, it’s a beautiful machine. But it does not address the full range of customers we wanna reach with Mac Pro.

But I think we designed ourselves into a bit of a thermal corner, if you will. We designed a system that we thought with the kind of GPUs that at the time we thought we needed, and that we thought we could well serve with a two GPU architecture… that that was the thermal limit we needed, or the thermal capacity we needed. But workloads didn’t materialize to fit that as broadly as we hoped.


Considering they identified one of the issues as designing themselves into a thermal corner, and the next thing they did was design the iMac Pro with extreme thermal constraints too, I'd be skeptical that they've learned anything.

Modern Apple doesn't backtrack lightly; just look at how they're still pushing the butterfly keyboard despite it being breakable by single specks of dust.


It was a failure because the thermal characteristics made it extremely difficult to expand. They’re not going to market something that is essentially an Apple-branded hackintosh devoid of custom components.


Good to read something positive about CPU technology today :). Beyond the obvious improvements for securing the computer, it is very interesting to see how the T2 chip not only operates the camera but, above all, also works as the disk controller. So the "CPU" of the computer might still be Intel, but it is a bit of a guest in a system that is in fact controlled by the T2 chip.

As the T2 seems to be basically an iPhone CPU, it also shows how good the hardware in current phones is, if using that chip yields a faster flash memory controller even alongside the biggest Intel CPUs.


So... I read the article and I still don't know what, exactly, is the T2 chip or what, exactly, makes it different from other ARM / Intel hybrids, other than several different ways of expressing "it's incredibly, totally great" and "Intel is untrustworthy, so let's trust Apple instead" and "this changes everything all over again".

(It doesn't, unless you arbitrarily consider small, incremental changes exclusive to Apple products as the opposite of small and incremental just because of the size of Apple.)


It’s a chip which replaces the controllers (previously several of them, from different vendors, that Apple didn’t have control over) for disk and network I/O, speakers, microphone, camera, cooling system, etc. with a single Apple-designed chip. This lets Apple add features to their hardware unsupported by other controllers, move some features from the CPU to this new chip which can reduce attack surface area and free up CPU cycles for something else, save space on the motherboard, and maybe save money.


This just sounds like the Intel Management Engine story waiting to happen again. A controller that handles the camera and networking and is the "root of trust"? No thank you.


The camera isn't a vector of attack, it's a thing to be attacked. The camera right now on desktop computers is wildly insecure, giving authority of it over to a secure enclave seems like a great way to finally start securing that path.


On the other hand, the ability to compromise the T2 chip would lend attackers the ability to use the camera outside of OS control and thus outside of user control.

At least with OS updates, it's possible to patch potential security vulnerabilities. Of course it's much harder to patch silicon, as we're seeing now.


You misunderstand. It runs "bridgeOS" which can be updated with Apple-signed firmware updates, and most certainly will be for security updates.


I actually want Apple to control the network stack, because I trust them more than Intel, or the other chip vendors. If I understand it correctly, they can now stop the Intel ME from sending stuff out? That would be very reassuring. Apple has a very strong stance against selling my data, and others don’t.


In what way is this similar to the IME story? Apple's history with the Secure Enclave on iOS devices leads me to believe that this will be much more secure and that a white paper will be forthcoming unlike Intel's complete opacity surrounding IME.


Do you truly believe Apple's sophisticated secure enclave, with computer-like capabilities (and probably running a full OS), will have a manual and white paper? This is the exact opposite of what Apple will do; they will lock it down and put it behind patents, as they do best.


https://www.apple.com/business/docs/iOS_Security_Guide.pdf

https://images.apple.com/business/docs/FaceID_Security_Guide...

Seems pretty likely there will be a whitepaper and some conference talks about this as well.


You're linking a minimal manual which is required for corporate deployment, hardly anything Apple is being forthcoming about. I'm pretty sure you and I have different definitions of acceptable disclosure - unless Apple makes the entire chip possible to audit, I don't consider it to be acceptable.


A completely transparent audit like you're suggesting would compromise the majority of the security features of the enclave. The white paper gives enough about the mechanism to understand how it works without detailing specifics that would compromise the security of the tech.


FYI when you patent something, it becomes public. Trade secrets keep it that way.


Are you joking? They already did release those things...


> This just sound like the Intel Management Engine story waiting to happen again

Excuse me, implementation details matter. Outcomes matter. Apple has deployed 10x more devices with no known breach or defect.


"no known breach or defect"

Except for the many thousands that have been known, from every exploitable vulnerability that's allowed an iOS jailbreak to every security fix in every patch in the nearly two decades of OS X (and System 7/9/etc before that). These types of things are fairly complex pieces of software. It doesn't matter if the software is ultimately compiled to hardware circuitry or executed by a CPU -- bugs come from complexity, not from the medium.


Are you implying that Apple has deployed 10x more devices than Intel?


Yeah, this is all technology from iOS devices.


Even so, I somehow doubt that a decade of iOS devices equals 30+ years of x86 devices.


You're right.

I was assuming the parent was thinking about Macs vs. Intel processors. Was just pointing out that the comparison of scale is iOS devices vs. Intel processors.


> iOS devices vs. Intel processors

There's no way there are 10x more iOS devices in circulation than Intel-based PCs

Annual PC shipments (laptops + desktops) are still above 250 million per year. No, tablets haven't killed the market; they've killed growth.

Apple sells about 220 million iPhones per year. I don't know how many iPads; say half as many again?


Except for when they forgot to require a password for root access to the OS...


And this has what to do with secure enclave?


If the answer is "nothing", then what does Apple shipping 10x the devices have to do with their ability to produce the world's only Secure Enclave implementation that isn't simply a warm cosy environment for malware?


1. You can opt to not utilize the security features offered.

2. This is not used for device management.


It is a Super I/O chip with on-board co-processors that can inspect and modify data transferring to/from peripherals. It is most certainly used for management, and it is capable of being compromised just like the ME.


You have not refuted anything I said besides implying that I am wrong. Again:

- You can turn off functionality if you do not want it.

- There are no management or remote access capabilities.

- The only way to compromise it would require compromising the main CPU anyway, and persistence would be a whole other (major) challenge.


Ah, so you're saying this chip is guaranteed to not have its own TCP/IP stack, no access to the NIC, and no latent zero-days that a remote attacker can exploit?


That is correct enough, though I'm not sure what "its own" means here.

To be clear, this is not some mystery chip, it runs a derivative of iOS, and you can check out the firmware in /usr/standalone/firmware (You can even reverse engineer it if you have experience with ARM).


Do you not use Chrome because IE6 was insecure? This attitude doesn't seem like it leads anywhere other than not using computers.


I mean at the end of the day, something has to manage those devices.


I find it interesting that Apple is back to high levels of customization in its PCs. If I'm remembering correctly, one of the big motivators for moving to x86 was being able to use more commodity parts throughout the system, rather than all the custom work required to design PowerPC systems. I recall initial speculation that Apple's x86 machines would be pretty much fully custom chipsets and designs that just happened to have an x86 CPU, but when the release systems came out, they used Intel chipsets and were pretty ordinary from a PC hardware standpoint.


The biggest motivator for the move to Intel, as stated by Steve Jobs in 2005 (paraphrasing from what I remember), was that the performance-per-watt on PowerPC was bad. All the lead that PowerPC had had over Intel vanished.

And given that IBM wasn’t interested in servicing Apple’s needs for consumer level chips (and Motorola, the other company in the PowerPC consortium, was struggling to scale up manufacturing), Apple’s pro machines like the PowerMac were struggling to keep up and improve. One PowerMac model with PowerPC G5 (IIRC) was even released with liquid cooling and had some customers complaining about leaks.

The driver for the processor switch was actually Apple’s inability to get the right kind of chips it wanted on scale. As others have mentioned, the in-house design and the ability to get its chips manufactured for iPhones on a large scale triggered a big wave (of competitive advantage) for Apple. We’ll only see more of this as time passes.


Yeah, I remember Jobs saying they wanted to put a G5 in the Powerbook but couldn't do it because the chip was too power hungry.


Based on my understanding of the issue, the main difference between now and then is scale. Pre-iPhone, Apple didn't have the scale to make their own chips economically; now, with the massive success of the iPhone, they're able to do this.

Their major value proposition is that they're able to build software-differentiated hardware, so building their own chips is extremely valuable to them.


Custom motherboard design is an expertise Apple has built up over the years with the iPhone (of course, they also do custom CPUs there!).

A huge portion of bugs and power issues on a modern PC stem from poor hardware drivers. Cutting out the middle man and just writing all the drivers themselves may very well be a net win for Apple if they want to keep a reputation for working hardware.

Then again, they haven't gotten their Lightning display output working quite right.


Apple have been making logic boards since the beginning.

The inclusion of custom silicon isn't new either.

Nor are in-house drivers for commodity equipment, as the move from NeXT Computer, Inc. to NeXT Software required them.

As you point out, by ceasing to rely on third party IP in that custom silicon they reduce the complexity further still.

100% in-house widgets have been on the whiteboard at Apple for a long time.


It is scaling out much better now thanks to the large number of iPhones and Watches sold. It is very difficult to do this on the scale of desktop computer sales.


That stopped during the rise of the iPhone and iPad, when they made the A4 chip and figured out how to vertically integrate under Jobs. After Jobs' death and the appointment of Tim Cook, we're just seeing that vision carried through to what we have now.


> For security reasons, the T2 is the iMac Pro hardware’s “root of trust,” and it validates the entire boot process when the power comes on. The T2 starts up, checks things out, loads its bootloader, verifies that it’s legitimate and cryptographically signed by Apple, and then moves on to the next part of the boot process.

How is this more secure than a locked-down system using "standard" UEFI Secure Boot backed by any other TPM implementation?

I understand that this is not an in-depth technical analysis, mainly catering to the Mac-loving audience of the site and getting them to feel better about their platform of choice.

But I'd be interested to hear why I should trust this Apple T2 chip more than a workstation motherboard with a TPM on it, with Secure Boot enabled and loaded only with my own keys (in case I were to build my own kernel/bootloader and sign it) or those of a vendor I trust. I could be missing something, but the process outlined in the article sounds exactly like Secure Boot.
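
For what it's worth, both designs follow the same chain-of-trust shape. Here's a minimal sketch (nobody's actual implementation; the data structures are made up) of what each boot stage does before handing control to the next:

    # Minimal verified-boot sketch; illustrative, not Apple's or UEFI's code.
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric import ed25519

    def boot(root_pubkey: ed25519.Ed25519PublicKey, stages) -> None:
        """stages: list of (image_bytes, signature, next_stage_pubkey).
        The root key is baked into ROM; each verified image carries
        the key used to check the stage after it."""
        trusted_key = root_pubkey
        for image, sig, next_key in stages:
            try:
                trusted_key.verify(sig, image)  # raises if tampered/unsigned
            except InvalidSignature:
                raise SystemExit("boot halted: untrusted image")
            # execute(image) would run here; then trust moves forward
            trusted_key = next_key

The interesting differences are who holds the root key and how tamper-resistant the first link is, not the verification loop itself.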


Is this basically an integrated south bridge?


Sounds like it to me.

For those that are unfamiliar:

https://en.wikipedia.org/wiki/Southbridge_(computing)


It'll be interesting to see how well Linux ends up supporting these machines. Linux usually works pretty well on older Apple hardware, assuming you wait for a few patches to make everything 100% functional. This will probably mean a whole lot more code will be needed to get to that stage though.


Maybe some bits of Southbridge and TPM mixed together


> On most Macs, there are discrete controllers for audio, system management and disk drives. But the T2 handles all these tasks. The T2 is responsible for controlling the iMac Pro’s stereo speakers, internal microphones, and dual cooling fans, all by itself.

Translation: T2 is a southbridge, but this time with a camera controller, and a TPM in addition to the normal disk, audio codec, peripheral bus, and GPIO functionality.


Or T2 is like the Fat Agnus, Denise, and Paula chips all rolled into one, similar to the Amiga. Everything new is just recycled from the past.


Came to say the same thing (or to see if someone else had). This looks like classic multiprocessing, and it looks like computing, like music and clothing, is subject to ~20-year fashion cycles for some things.


It does a bit more than a southbridge, but it does take over all of the functionality that was traditionally in the southbridge. It's more like a cellphone architecture where the CPU is a slave to the black box that actually boots the device.

This is exactly the sort of vendor lockdown/lockout on a PC that people have been warning us about since TPMs and signed bootloaders started appearing on cell phones.


Yeah, I was definitely expecting something more interesting than this. It reads like an NVIDIA nForce2 press release from 15 years ago.


"Your iMac, now with SoundStorm™!"

Though for what it's worth, nForce2 still did require a separate codec, even if it had the controller built in.


But hey, it controls the cooling fans! The DUAL cooling fans! All by itself! Apple must truly have a chip design team rivaling Intel.


I am wondering about the cost benefits this has for Apple. While insignificant on the iMac Pro, it will be important once it filters down to the entry level.

Apple now basically has its own IP in everything. Instead of sourcing and paying for IP or chips, they can mix and match their own and build with TSMC, all with the help of the iPhone's R&D. I am pretty sure the next items on the list are WiFi and Bluetooth.

This is a potential saving of up to $50 in BOM cost. If you told most PC vendors about an extra $50 of profit per machine, their eyes would go wide.

That roughly translates to $100 cheaper retail pricing, but given it is Apple, they will likely use the saving to put YET another silly feature on the MacBook and sell it at the same price.


I'm glad they allow you to completely disable the "secure boot" functionality so that other OSes like Linux can be installed. Glad Apple didn't pull a Microsoft here. I would be delighted, however, if the secure boot functionality was programmable with custom certificates!


> This version requires a network connection when you attempt to install any OS software updates, because it needs to verify with Apple that the updates are legitimate.

They can't just verify the signatures?


By default, they start out at the highest level of security, assuming an always-on network connection. It can be adjusted downward, which those of us reading HN are likely to prefer.

Signature verification alone may confirm that a release is from Apple, but it doesn't confirm that the release hasn't been superseded due to security issues. (Or marketing, if you're conspiracy-minded.)

The "Medium" setting allows older versions of MacOS, so although the article doesn't say, I suspect that is doing signature-verification alone, and does not require a network connection.


To be more explicit, this is exactly the same thing browsers (should) do: when verifying a (code-signing or server) certificate, it is proper to also retrieve an up-to-date Certificate Revocation List from the CA.
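
In sketch form (hypothetical names; a real verifier would fetch and validate a signed CRL rather than consult a plain set), the point is that signature validity and revocation status are two separate gates:

    # Sketch: a valid signature is necessary but not sufficient.
    def fetch_revoked_serials() -> set:
        # Stand-in for retrieving a signed, up-to-date CRL from the CA;
        # the serial below is made up for illustration.
        return {"10.13.1-17B48"}

    def acceptable(serial: str, signature_ok: bool) -> bool:
        if not signature_ok:
            return False   # not signed by the vendor at all
        if serial in fetch_revoked_serials():
            return False   # genuine, but superseded/revoked
        return True        # genuine and still current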



> Signature verification alone may confirm that a release is from Apple, but it doesn't confirm that the release hasn't been superseded due to security issues. (Or marketing, if you're conspiracy-minded.)

But fuse sets will. This is how downgrade attacks are generally protected against in high-integrity consumer electronics.

That way your device continues to work when you reboot and Comcast is down again.
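
As a rough illustration (a sketch, not Apple's actual scheme): the device keeps a minimum-allowed-version counter in one-time-programmable fuses, so an old-but-genuinely-signed image fails a purely local check and no network is needed at boot:

    # Fuse-based anti-rollback sketch; names are illustrative.
    class FuseBank:
        """One-time-programmable: the stored minimum can only increase."""
        def __init__(self) -> None:
            self._min_version = 0

        def burn(self, version: int) -> None:
            # Irreversible in real hardware; modeled with max() here.
            self._min_version = max(self._min_version, version)

        def min_version(self) -> int:
            return self._min_version

    def accept_image(fuses: FuseBank, version: int, signature_ok: bool) -> bool:
        """Offline check: signed AND not older than the fuses allow."""
        return signature_ok and version >= fuses.min_version()

    fuses = FuseBank()
    fuses.burn(5)                            # e.g. after installing v5
    assert accept_image(fuses, 5, True)      # current version boots offline
    assert not accept_image(fuses, 4, True)  # genuine-but-old is rejected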


I'm sure the T2 doesn't verify the OS against the network at boot time, but rather when you're installing the OS update. Once it's been installed, it's trusted.

The trivial proof here is if it did anything else, ignoring an OS update would brick your device, which is obviously not desired behavior.


> I'm sure the T2 doesn't verify the OS against the network at boot time, but rather when you're installing the OS update. Once it's been installed, it's trusted.

This is correct. Network is only needed for re-install on the high security setting. When already installed, the only verification is to ensure signatures are valid, similar to how iOS devices function (You cannot re-flash/downgrade to an older OS, but if you have an older OS installed, the device will not prevent you from booting).


It might be about blacklisting older versions, so users can only install the newest version like on iOS.


Then you would be able to downgrade your OS version, they don't want that.

Edit: Apparently not, you can turn it off so it doesn't require network. See the other comment.


Except for the fact that it still lets you do exactly that.


> Then you would be able to downgrade your OS version, they don't want that.

Ignoring for the moment that you can already do this, allowing users to go back to older versions is a security hole.


I am surprised it has taken this long.

In the Marklar days, this was something we speculated about as a Clone-defeat mechanism. Essentially a hackintosh blocker.

I look forward to following the discoveries made in this subsystem.



> Before the iMac Pro was released, there was a lot of speculation that it was part of a trend toward creating a “hybrid Mac” that is driven by both an Intel processor and an Apple-designed ARM chip like those found in other Apple devices. The iMac Pro is definitely a hybrid of a sort, but probably not the one people were expecting.

Not the thing people were expecting, but still, does anyone know if the T2 is based around one of Apple's ARM cores?


The T2 is rumored to be a derivative of the A10.


Apple has basically replaced the SMC architecture with a new one that is the equivalent of an Apple Watch. From an integration perspective, it makes sense: Apple no longer has to source an SMC/ARM chip from another supplier; they use their own.

All Mac users have at some point had to reset the SMC on a Mac. The Macworld comment that the iMac Pro, because of the T2 chip, is unlike any other Mac is only half the story. The functionality that was handled by the SMC (which was an ARM-based architecture) is now handled by a beefier ARM chip, the T2.

It's a cost saving for Apple and allows for more advanced functionality. In the interim, the T2 is doing exactly what the SMC used to do.


Slightly OT, but Apple really needs to release a MacBook Pro or two with beefy video cards; it's becoming nearly untenable to develop games or AR/VR with their latest hardware. The Intel integrated graphics are not up to the task.


Isn’t this why they added support for external GPUs?


Where is the disk key stored: on the main board, or in the flash storage itself? E.g., if you have to replace the motherboard, do you lose the data, or is it fine because the key lives in the flash storage? And if the key is in the flash storage, how secure is the key handling?


With all eyes now on the future of branch prediction, this launch will be a bit overshadowed. Is the Intel Xeon inside the iMac Pro also vulnerable? If so, the T2 will not get the attention it should.


Apple addressed the (known) vulnerabilities in the 10.13.2 and 10.13.3 updates.


High Sierra fixed the KPTI issue without performance loss: https://twitter.com/aionescu/status/948609809540046849


"Without performance loss" sounds optimistic, but most things that people would be doing with a Mac likely aren't syscall-intensive enough to show major effects, especially on modernish chips with PCID. Most of the more impressive performance drops from the Linux patch are with things like heavy database workloads.


> if you get the 1TB model, your iMac Pro has two 512MB NAND banks

Looks like you meant “GB” here.


I love the level of integration with this chip. How big is their IC design team?


Not very big, is my understanding. The team mainly came from the PA Semi acquisition several years ago, and there was some recent news about Google hiring (“poaching”) a key senior person from this team.


It's impossible to read the article because of the ads.


It is easily readable with Reader View in Safari.


uBlock Origin


Intel ME? Hold my beer. -- Apple.


> The T2 is responsible for controlling the iMac Pro’s stereo speakers, internal microphones, and dual cooling fans, all by itself.

So it's just a fancy Super I/O chip


"The iMac Pro isn’t running iOS apps"

I always find this kind of remark funny, because iOS is essentially a specialized, stripped-down version of macOS. So basically any accusation of macOS copying iOS, however exaggerated, is an accusation that macOS is copying itself.


> Every bit of data stored on an iMac Pro’s SSD is encrypted on the fly by the T2, so that if a nefarious person tried to pull out the storage chips and read them later, they’d be out of luck.

What. The. Fuck. How am I supposed to recover my data if, e.g., the mainboard gets fried or the machine has to go in for service?! With a SATA or NVMe SSD I can plug it into another computer and either keep going (for Linux and macOS; only Windows is a different beast...) or dump the data somewhere safe. With this, Apple forces me to rely on Time Machine working, which is not a bad idea in itself, but it's not cool that this isn't announced prominently on the product page: "BACK UP THE DATA YOURSELF OR IT WILL GET LOST IF ANYTHING GETS SCREWED UP".

There's already FileVault for encryption (or LUKS, VeraCrypt, and BitLocker), and in addition some SSDs (and IIRC also some expensive HDDs) implement native on-disk encryption via standardized SATA commands, so one gets encrypted storage but keeps portability (and, with native disk encryption, no performance loss from doing the encryption on the CPU!).

Another reason not to buy anything modern from Apple. If a single problem hits your modern work machine, you're straight-up f...ed.


What is your plan for accessing data on a drive whose SSD controller chip gets fried? Self-encrypting disks are not new.


> What is your plan to access data on a drive if the SSD controller chip gets fried?

I have yet to see a failed controller chip. Motherboard or PSU failures due to coffee, cats, dogs, or any combination of the three, however, I have seen multiple times in my social circle.


I agree that one cannot say "backing up is good practice; if you don't do it, it's your own fault," because the real world simply doesn't always work that way. But I like having the option of that feature. For years now my local disks have effectively been just local caches; they never hold more than a day's worth of non-backed-up data. Apple needs to work on making this as automatic and invisible as possible for everyone's machine, even for the tech-illiterate. All I've seen so far are disparate backup stories that leave me unconfident about any of them.


>Apple needs to work on making it as automatic and invisible as possible for this to happen to everyone's machine even if they're tech illiterate.

I suppose there's some degree of tech literacy required just to know that you should have a backup. But, honestly, Time Machine is probably the easiest backup system I'm aware of, and I run all three major operating systems, including a cloud backup on my desktop Mac.

If I were to point a finger at anyone, it would be Microsoft. (I don't really expect Linux to be drop-dead simple.) On Windows 10, it's still not really "just works" out of the box.


Store everything on iCloud Drive and it backs up automatically and invisibly.


What is your plan to access storage if your phone's motherboard gets fried?


Use your backup.

IMHO, backups are even more necessary on a Mac. Apple has the fussiest file systems in my experience, and of all the backups I keep in my house, my wife's Mac backups are the only ones I've ever had to really use. Three times now over the past 10 or so years. One day she just opens the lid and gets a grey screen with a sad Mac, and I know that, just like last time, the repair options won't do squat.


I personally love and hate the disappearance of micro-SD cards from phones. Love, because many SD cards are crap and will lose your data without much notice. Hate, because then there is no recovery option other than "travel back in time to back up your data" if you haven't already backed it up.

I sort of give phones a pass these days because they are size- and weight-constrained. Desktop PCs, though? I disagree with the decision not to use an NVMe SSD slot on this product.


Backing up your iPhone to iCloud is turned on by default.


So is backing up most of your Mac.


I actually would be fine as long as the SIM card isn't fried. Yeah, I would lose some game progress data, but that's it. My calendar, phone book, and mail are in the cloud, and WhatsApp has regular backups to the SD card, as do the images.


Sounds like a similar strategy would work fine with the iMac Pro...


Time Machine has been a reasonably good option since 10.5, and hopefully anyone on this site has an online backup service by now…


What part of the hardware exactly is failing in your scenario? It looks like iMac Pro still supports Target Disk Mode to access disk contents if your computer is damaged (You'd need the password to decrypt, of course): https://support.apple.com/en-us/HT201462


He says "mainboard", so it's fair to assume the computer would not be bootable in this scenario, which precludes the use of Target Disk Mode.


Exactly. Or if the power supply is broken... or the CPU, or the RAM, or anything that requires the iMac to be sent off for servicing.

Also, I'd need another Mac to access the data, whereas with a USB-SATA enclosure I could use anything that runs Linux (as there are HFS+ drivers for it). For the new breed of cr.p Apple has forced upon us, I'd need a Mac AND a Thunderbolt cable AND the luck that the source system is still bootable.


If you're using HFS+ then you can put the system in target disk mode and access it from any machine with an HFS+ driver. Not sure if there's APFS support in linux yet.


How do you put a machine that does not boot (any combination of fried CPU/motherboard/RAM/PSU) into Target Disk Mode? Or what if the machine boots, but you don't have a Thunderbolt link cable? That's the core question. Apple needlessly deviates from every standard.


> What. The. Fuck. How am I supposed to recover my data e.g. if the mainboard gets fried or the machine has to go to service?!

Either switch off the feature or use backups.


> Either switch off the feature

This does not help, as the modules are non-standard and can only be read in another iMac (if the encryption can actually be disabled in the first place).


So, read them with another Mac? That's been the case with Apple PCIe SSDs for the past few years, anyway. I mean, at least they're not soldered, I suppose... And yes, the encryption can be disabled. You can choose to have either the new encryption system, FileVault, both (recommended), or neither.



