New vuln in Apple M-series allowing secret keys extraction can't be patched (twitter.com/kimzetter)
199 points by A_No_Name_Mouse 10 months ago | 137 comments




actual source of the article: https://gofetch.fail/


The Ars article is arguably more useful than the vuln website, with more added context; it's not just blogspam.


Google, in 2021 [0]:

> While the PoC demonstrates the JavaScript Spectre attack against Chrome 88's V8 JavaScript engine on an Intel Core i7-6500U 'Skylake' CPU on Linux, Google notes it can easily be tweaked for other CPUs... It was even successful on Apple's M1 Arm CPU...

And Augury [1] in 2022 also affected Apple's A14 and M1 chips.

So have Apple been attempting to mitigate and failing, or ignoring the issue?

Surely chip manufacturers can't keep ignoring these fundamental flaws.

[0] https://security.googleblog.com/2021/03/a-spectre-proof-of-c...

[1] https://www.prefetchers.info/


Some of the authors of this paper worked on Augury, too, but note that this is a different angle than Spectre: that was speculative execution (running instructions before knowing which way a branch would evaluate) and this is data prefetching.

The reason this keeps coming up is that it isn’t a single issue but a class of attacks exploiting performance features, and attackers are getting more sophisticated as smart people figure out new techniques. Chip designers have been adjusting but are trying not to throw out the last couple decades of performance improvements, too.


The title of the article ..."secret keys"... had me thinking that this vuln might be a path to extracting the private keys from the secure enclave.

I'm not sure, but after a bit more reading, it sounds like private keys or symmetric keys can be extracted from other user-space (or possibly kernel-space) code, and NOT from the secure enclave.

Just for what it's worth.


Correct. It's still very bad, but does not affect the secure enclave.



Unfortunately, I don't think the real-world applications of this exploit are explained anywhere. From skimming the paper, it looks like the attacker needs to be able to a) run code on the victim's machine and b) trigger the encryption process ("For our cryptographic attacks, we assume the attacker runs unprivileged code and is able to interact with the victim via nominal software interfaces, triggering it to perform private key operations.")

So for a) it might be sufficient to run JavaScript, and for b) there are of course ways to inject data into server processes; processing data submitted by clients is what servers are for.

But a) happens on clients (web browsers) and b) would be a way to extract encryption keys from servers. But in what case can an attacker run code on a machine where they can also trigger the encryption (constantly, for an hour, like in the demonstration)? The only thing that comes to my mind would be a server-side code-execution sandbox that runs SSL termination on the same machine.

edit: Maybe stealing client certificate keys?


Kim Zetter has a great post walking through some details and commentary across a few sources, related to the vulnerability - https://www.zetter-zeroday.com/apple-chips/

> The cryptographic key itself isn’t placed in cache. But bits of material derived from the key gets placed in the cache, and an attacker can piece these bits together in a way that allows them to reconstruct the key, after causing the processor to do this multiple times. The researchers were able to derive the key for four different cryptographic algorithms: Go, OpenSSL, CRYSTALS-Kyber and CRYSTALS-Dilithium.

> [Green] notes that in theory this attack might be used to break the TLS cryptography that a computer’s browser uses to encrypt communication between their computer and web sites, which could allow attackers to decrypt that communication to extract a user’s session cookie for their Gmail or other web-based email account and use it to log into the account as them.


Suppose you have a MITM attacker, e.g. hotel WiFi. You have any page not using TLS open in a background tab, which the attacker uses to inject javascript. Meanwhile there is a different page open via TLS which you're actively using, so your browser is constantly using the session key to encrypt the traffic. The attacker is now recording the encrypted session and after an hour they crack the session key and can use it to go back and decrypt the traffic.



Wow, didn't this happen with Intel? I think that was a noticeable drop in performance.

This is probably worse given people were trying to experiment with local LLMs on CPU. It's not like they even offer Nvidia.


Macs have GPUs, and their architecture means that the GPU has access to the full system RAM. CUDA isn't a requirement for running ML workloads on a GPU.


Clickbait. How can someone lacking the real docs for the CPU claim that this “can’t be patched”? How could they possibly know what chicken bits exist to disable what features?


> "After this story published, Apple told [Kim Zetter] they just posted the instruction about the DIT to their web site yesterday [MAR 21], timed to the public release of the researchers' findings, which means that developers were not told to do this fix prior to yesterday's release" [1]

The mitigation for the issue was posted in coordination with the publishing of the vulnerability. Given that the mitigation only applies to the M3 processor, it's reasonable to assume that there is no currently known mitigation for the M1 and M2 processors.

[1] https://www.zetter-zeroday.com/apple-chips/


The page for the vuln states there is a bit you can flip to disable the entire feature on the M3, but states no such feature exists on the M1 and 2. If Apple hasn't told them about a similar bit on M1 and 2 in the 100 days since it was reported, there's a pretty good chance it doesn't exist.


[dupe]

Discussion on the actual vulnerability post: https://news.ycombinator.com/item?id=39779195


Sweet. Wonder if this opens the door to a DeCSS-style hack for open source iMessage clients?



Another day, another speculative execution vuln... IMHO all this speculation is a local maximum, and it shows we have a fundamental issue with how we design 'computers'.


It's security vs. speed. Can't have both. It's a bit like security vs. convenience.


Speculation is just a kludge trying to speed up a legacy architecture. It is possible to obtain speed even without speculation, just not without some other fresh ideas.

For example the vaporware Mill architecture is in-order on the CPU level but compilers can optimize to run code very concurrently.


Yeah, my thinking is we are too focused on the current state-of-the-art approaches (i.e. OoO superscalar, HT, memory architecture, etc.) where we can eke out the last few % of performance. I wonder if doing something radically different would have a different tradeoff for performance vs security/trust.


Or security vs. usefulness.


Now looking for an affordable M3 Max MBP that should cost less than my car :-)


Here in Singapore, everything that Apple does costs less than any car you could buy.


I know you're exaggerating because car prices in Singapore are very high (with good reason and kudos to the Singapore government for handling this well), but it's not true:

There are a bunch of second hand cars below 6000 Singapore Dollars on this website[1], which is the price of the 64GB/1TB Mac Studio[2].

1 - https://www.sgcarmart.com/used_cars/listing.php?MOD=&PRC=18&...

2 - https://www.apple.com/sg/shop/buy-mac/mac-studio


You forgot one little thing you need to buy a car: the Certificate of Entitlement that you need to own a car in Singapore.

So it is $6,000 (car) + $100,000 (CoE)


You are right in principle. However, CoE is actually a bit less expensive at the moment. See https://www.motorist.sg/coe-results


If you’re looking at second hand cars to make your point, shouldn’t you also look at second hand computers too?

Also why look at a mid tier upgrade spec when you’re looking at a bottom tier car?


> If you’re looking at second hand cars to make your point, shouldn’t you also look at second hand computers too?

Nah, my cheeky statement didn't talk specifically about new cars only. So it's only fair to look at second hand cars.

I was specifically talking about the most expensive Apple products vs the least expensive cars.


“We also tested DMP activation patterns on other Apple processors and found that m2 and m3 CPUs also exhibit similar exploitable DMP behavior.”

Found this on the exploit site.

But it also says: "We observe that the DIT bit set on m3 CPUs effectively disables the DMP. This is not the case for the m1 and m2."

So if libraries set the DIT bit then M3 should be safe.
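
For the curious, "setting the DIT bit" boils down to something like the sketch below (not Apple's documented API; it assumes an arm64 toolchain that accepts the ARMv8.4 DIT system register in inline assembly):

    /* Minimal sketch: toggling the ARMv8.4 DIT (Data Independent Timing)
     * bit around a sensitive operation. Per the GoFetch site, setting DIT
     * also disables the DMP on M3, but not on M1/M2. */
    #include <stdint.h>

    static inline uint64_t read_dit(void) {
        uint64_t v;
        __asm__ volatile("mrs %0, DIT" : "=r"(v));
        return v;
    }

    static inline void write_dit(uint64_t v) {
        __asm__ volatile("msr DIT, %0" : : "r"(v));
    }

    void with_dit(void (*sensitive_op)(void *), void *ctx) {
        uint64_t saved = read_dit();
        write_dit(1ULL << 24);   /* PSTATE.DIT lives in bit 24 of this register */
        sensitive_op(ctx);       /* constant-time crypto goes here */
        write_dit(saved);        /* restore the previous state */
    }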


> The threat resides in the chips’ data memory-dependent prefetcher, a hardware optimization that predicts the memory addresses of data that running code is likely to access in the near future.

Are we nearing any sort of consensus that any form of speculation is bad? Is there a fundamentally secure way to do it?


My personal opinion is that we should solve it the opposite way - don't run untrusted code in the first place (with rare exceptions like dedicating an entire cpu core and a region of memory to a virtual machine, etc). Speculation is one of many side channel attacks, who knows what kind of crazy RF-based exploits are out there. AFAIK we still haven't fully solved rowhammer.

I think for "normal" users the main risk is JavaScript, which can (kind of) be mitigated in software without affecting the rest of the system, so no one really cares about these attacks. But the fundamental abstraction leak between physics and programming will always be there.


OpenSSL is "trusted code". The problem isn't that OpenSSL is doing something nefarious, but that the CPU breaks assumptions it makes about how one can write constant-time algorithms.


But the problem is not OpenSSL, it's that malicious code on the system can read the keys OpenSSL is using. If you don't run malicious code on the same system as OpenSSL, this attack goes away - there's no way to run a CPU timing attack from a different network.


"Remote Timing Attacks Are Practical" (2003):

https://crypto.stanford.edu/~dabo/papers/ssl-timing.pdf

That's on the local network. I remember a paper doing this over the internet, but couldn't find it. A similar one over the internet:

https://www.usenix.org/conference/usenixsecurity20/presentat...

But in practice it's going to be malicious JS running in your browser: https://www.schneier.com/blog/archives/2021/03/exploiting-sp...


From what I understand, the problem is that algorithms which should be constant time are actually taking a variable amount of time depending on the data. If I have some server software whose security depends on those constant-time algorithms actually being constant time, why shouldn't this be exploitable over the network?


The mechanism is described in the FAQ here: https://gofetch.fail

> The GoFetch attack is based on a CPU feature called data memory-dependent prefetcher (DMP), which is present in the latest Apple processors. We reverse-engineered DMPs on Apple m-series CPUs and found that the DMP activates (and attempts to dereference) data loaded from memory that "looks like" a pointer. This explicitly violates a requirement of the constant-time programming paradigm, which forbids mixing data and memory access patterns.

> To exploit the DMP, we craft chosen inputs to cryptographic operations, in a way where pointer-like values only appear if we have correctly guessed some bits of the secret key. We verify these guesses by monitoring whether the DMP performs a dereference through cache-timing analysis. Once we make a correct guess, we proceed to guess the next batch of key bits. Using this approach, we show end-to-end key extraction attacks on popular constant-time implementations of classical (OpenSSL Diffie-Hellman Key Exchange, Go RSA decryption) and post-quantum cryptography (CRYSTALS-Kyber and CRYSTALS-Dilithium).

It sounds like the OpenSSL code is still constant time (it doesn't expect pointers in the intermediate values, to OpenSSL it is just data, not something it will ever dereference) but the attacker-controlled "monitoring" code's runtime changes based on whether the DMP ran or not.

If that's right, then the attacker still needs to run their monitoring code on the target, it isn't sufficient to just run OpenSSL etc itself.

Edit: it is more explicit in the paper, they assume that OpenSSL (the victim) is still constant time:

> In this paper we assume a typical microarchitectural attack scenario, where the victim and attacker have two different processes co-located on the same machine.

> For our cryptographic attacks, we assume the attacker runs unprivileged code and is able to interact with the victim via nominal software interfaces, triggering it to perform private key operations. Next, we assume that the victim is constant-time software that does not exhibit any (known) microarchitectural side-channel leakage. Finally, we assume that the attacker and the victim do not share memory, but that the attacker can monitor any microarchitectural side channels available to it, e.g., cache latency.
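
To make the "monitor any microarchitectural side channels" part concrete, the attacker-side measurement is conceptually something like the toy sketch below (not the GoFetch PoC; a real attack needs eviction sets and a far better timer than clock_gettime):

    /* Toy sketch of the attacker-side cache-timing probe: time a load to a
     * probe address; if the victim's DMP dereferenced a pointer-like value
     * that touched the same cache line/set, the measured latency differs.
     * Timer granularity on real hardware is a practical obstacle this ignores. */
    #include <stdint.h>
    #include <time.h>

    static uint64_t now_ns(void) {
        struct timespec ts;
        clock_gettime(CLOCK_MONOTONIC_RAW, &ts);
        return (uint64_t)ts.tv_sec * 1000000000ull + (uint64_t)ts.tv_nsec;
    }

    /* Returns the access latency of *probe in nanoseconds. */
    static uint64_t time_load(volatile uint8_t *probe) {
        uint64_t t0 = now_ns();
        (void)*probe;                         /* the load being timed */
        return now_ns() - t0;
    }

    /* One round: evict the probe line, trigger the victim with a crafted
     * input via its normal interface, then check whether the load is fast. */
    int guess_bit(volatile uint8_t *probe, uint64_t threshold_ns,
                  void (*trigger_victim)(void)) {
        /* eviction of *probe elided */
        trigger_victim();
        return time_load(probe) < threshold_ns;   /* fast => cached => DMP fired */
    }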


Thanks for the details, I concluded more or less the same: https://news.ycombinator.com/item?id=39789307


Because the timing difference is extraordinarily subtle, far too small to measure compared to regular network timing noise.


Noise doesn't protect against that, statistics is a thing.

But I think you might overall be right that this requires two colocated processes: the paper talks about how the DMP breaks assumptions made by "the constant-time programming model", and I took this to mean that constant-time algorithms aren't constant-time any more. Reading more closely, I think maybe the issue is that "the constant-time programming model" was also assumed to make secrets safe from cache timing side-channels leaking the secrets to other processes on the same CPU, and this seems like it might be the assumption that's broken by the DMP...

I'll have to read more, I've just skimmed the abstract and introduction so far.


My attempt at skimming for "what would be needed": controlled input specifically designed to make the process with the keys speculatively fetch or not fetch address lookalikes depending on key bits, and some observer comparing timing either of fetches to canary addresses after the key has or has not triggered a fetch, or observing how the timing of the crypto parts changes with or without canary fetches beforehand. Or perhaps even outside observability from inputs that would either fetch the same canary address twice, or two separate addresses, depending on key bits?

In any case, the stack of "this could not possibly be workable / but with enough statistics it can, and computers are plenty fast to generate statistically useful case numbers" is truly mindboggling with these attack vectors.


That usually just means that you need to collect more data to filter out the network noise.
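
As a toy illustration (numbers entirely made up), averaging pushes the standard error down like 1/sqrt(N), so a small constant bias eventually stands out from microseconds of jitter:

    /* Toy simulation, not from any paper: a 50 ns secret-dependent timing
     * difference buried under up to 2000 ns of random jitter becomes visible
     * once enough samples are averaged. */
    #include <stdio.h>
    #include <stdlib.h>

    static double sample_ns(double secret_bias_ns) {
        double jitter = (rand() / (double)RAND_MAX) * 2000.0;  /* network noise */
        return 10000.0 + jitter + secret_bias_ns;              /* base RTT + bias */
    }

    int main(void) {
        for (long n = 1000; n <= 10000000; n *= 10) {
            double slow = 0.0, fast = 0.0;
            for (long i = 0; i < n; i++) {
                slow += sample_ns(50.0);   /* e.g. key bit = 1 */
                fast += sample_ns(0.0);    /* e.g. key bit = 0 */
            }
            printf("N=%8ld  estimated difference: %.1f ns\n",
                   n, (slow - fast) / n);  /* converges toward ~50 ns */
        }
        return 0;
    }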


If you can collect enough data you can break any password just by trying.

"More data" might be not practical.


Anything you execute on behalf of a user, even if the binaries are trusted, can effectively become untrusted code.


So, CPUs or cores (and maybe RAM) dedicated to running only trusted code, and others dedicated to only untrusted code?

Examples (I'm running Debian)

The kernel, the X11 server, terminal, ssh, bash, anything coming from the official Debian repos including the password manager: in the trusted environment.

browsers, electron apps, anything installed from unofficial repos or language and package managers (npm, rvm, asdf, etc): in the untrusted environment.

It reminds me of mainframes and their redundant and compartmentalized hardware architecture.


> terminal, ssh, bash

> X11 server

Those can very easily execute untrusted code.


Yes, but it can be countered by pinning "random-script-from-the-internet.sh" to the untrusted environment. The fork/exec inside bash (or whatever bash is using now) should take care of that, or the kernel itself, which is probably a better option. bash + ls -> trusted because ls is in some way marked as trusted, bash + random-script -> untrusted, possibly by default.


Well, it makes no sense to worry about side channel attacks if you don't have isolation in the first place, so there is an implicit assumption that you have a sandboxing layer like a VM/container/browser (or the built-in Unix user separation), which doesn't care about terminals or X11 (usually a separate X server is used which runs inside the sandbox context).


The author(s) of the article completely ignore that the CPU itself can be patched. There is microcode underneath the ARM instructions for such scenarios. Look at Intel: there has been undocumented microcode for decades. I believe these articles are just hype for street cred.


From what I have read, the microcode on M-series chips is NOT updatable. If this is the case, tsk tsk Apple.


I thought microcode was an artifact of CISC architectures, while ARM is a RISC architecture.


So you browse the Internet with JavaScript turned off? You're a bigger person than I am ;).

The risk here is that there are more individuals with the skills to take this type of attack and bring it to a browser near you.

One app's data is another app's code.


I use uMatrix and allow first party JS. When some sites break I open the matrix, look at what they would like to load, allow one more origin, and reload. An example: ChatGPT works by allowing JS from the first party domain, *azure.com and oaistatic.com, which looks like something from OpenAI. It would like to load JS from three other domains, but it works even if I don't allow them, so there is no need to let that code run.


This is the way.

Unfortunately, I've had no luck getting others to buy into the idea that they should understand this level of detail so they can make these calls. Quite frustrating and depressing, since companies will relentlessly exploit their indifference.


If other people buy into this idea, then every site will begin proxying third-party javascript.

If the only way to get trackers on the average person is to serve it from the same first-party domain, or to bundle it in with the giant 'app.js', you better believe they'll do that.

Right now, the fact that only a small fraction of people run adblockers, and an even smaller fraction block javascript, is what allows it to work.


Not many developers do that. The general population won't even understand what they are looking at. If you are good at teaching you can give them an idea and a few of them maybe will do it, but the time invested in the allow/reload loop is probably too much unless one is really committed.

In my case when every attempt fails I know it could be the side effect of some other privacy add on. If it's a random blog/news, that's the end of it. If I really have to use that site I open Chrome, do what I have to do, close it. Of course given a choice I pick sites that work well with only JS from a few inevitable sources.


Indeed, the scary thing is that there is no theoretical limit to how sophisticated a side channel attack could be. Imagine all the timing data that could in theory be gathered from html layout engines and css, even without javascript, just by resource loading times.

I would like to salute my shitty ISP for keeping me safe from timing attacks using their unreliable network infrastructure.


This kind of attack is why browsers now segment caching into a combination of requesting domain and asset URL, rather than just caching the asset on its own. It slows down, for example, Google Fonts, but means that a site can’t check to see that you’ve visited a competitor by timing an asset load from their site to see whether it’s in the cache.


The entire point of this chip is to keep secrets hidden from local processes.


For cryptographic applications yes. That is why people have spent significant effort to implement constant time algorithms to replace standard math and bitwise operations.

At the hardware level any optimizations that change performance characteristics locally (how long the crypto operation directly takes) or non locally (in this case the secrets leak via observation of cache timings in the attacker's untrusted code) are unsafe.

Intel DMPs already have a flag to turn off the same behavior that was exploited on the M1/M2. Which may suggest that the risk of this type of optimization was understood previously.

Mixing crypto operations with general purpose computation and memory accesses is a fragile balance. Where possible try utilizing HSMs, yubikeys, secure enclaves - any specialized hardware that has been hardened to protect key material.


> Where possible try utilizing HSMs, yubikeys, secure enclaves - any specialized hardware that has been hardened to protect key material.

Are there any circumstances where this hardware is accessible in the browser? As I understand, it is not generally available (if at all) for any cryptography you might want to do in the browser.


The browser doesn’t have direct access for JavaScript but can use those for supported features. This already happens for FIDO/WebAuthn using a hardware root such as a Yubikey or Secure Enclave, and I believe SubtleCrypto uses hardware acceleration in some cases, but I don’t remember if it makes it easy to know that.

One thing to remember here, though, is that there isn’t anything special about key material in this attack other than it being a high-value target. If we move all crypto to purpose-made hardware, someone could just start trying to target the messages to/from the crypto system.


> If we move all crypto to purpose-made hardware, someone could just start trying to target the messages to/from the crypto system.

This is one of the technical advantages of a blockchain-based system. As long as the keys are protected and signatures are generated in a secure environment, then the content of the message doesn't need to be secret to be secure.

It's not a solution to situations where privacy is desired, but if the reason for secrecy is simply to ensure that transactions are properly authorized (by avoiding the leakage of passwords and session information) then keeping the signature process secure should be sufficient even where general secrecy cannot be maintained.


The browser is the primary use case for FIDO keys.


Is the untrusted code observer able to see cache timing implications of fetches to addresses that the MMU considers off limits for the process? This is what keeps surprising me; it does not align well with what I think I know about processors (not much).


> For cryptographic applications yes.

Why only cryptographic applications? What if I'm writing a very sensitive e-mail, for instance?


For this type of attack to work, the algorithm being run needs to be very well understood, and the runtime of the algorithm needs to depend almost entirely on the secret key.

In contrast, the timing of virtually any email operation is not dependent on the contents of the email, other than the size. That is, whether you wrote "my password is hunter2" or "my password is passwor", the timing of any operation running on this email will be identical.


> In contrast, the timing of virtually any email operation is not dependent on the contents of the email, other than the size.

What about spell checkers etc? Or even just whatever runs to figure out where to break the lines?


Perhaps those could be attacked. It's possible though that it's not feasible, that the possible inputs leading to a certain timing signature are just too many to get any data out of it.

Consider that those programs are not making any effort whatsoever to run in constant time, and yet no one has shown any timing attack against them. OpenSSL has taken great pains to have constant execution time, and yet subtle processor features like this still introduce enough time differences to recover the keys.


> It's possible though that it's not feasible, that the possible inputs leading to a certain timing signature are just too many to get any data out of it.

That's plausible, but a very different argument from the original, that read:

> In contrast, the timing of virtually any email operation is not dependent on the contents of the email, other than the size.


I'm imagining the email program using a formatting and rendering engine, both predictable.


Check out https://www.qubes-os.org/ for an operating system that tries to put as many layers of defense as possible between an end user's applications.


It's infuriating that all modern computers have a secure crypto TPM, but you're explicitly not allowed to use it for your own important keys, it's only for securing things against you like the DRM in certain drivers.


“Only for DRM” isn’t accurate.

I’ve been using the TPM 2.0 chip on my ASUS based Linux box to store various keys. Tooling for this on the Linux side has improved significantly [0] and it’s been supported since kernel 3.20 (2015) [1].

How effective this is at improving one’s security posture is another question and it’s probably not a huge security upgrade, but it does mitigate some classes of attack.

I’m curious why you’re saying it’s explicitly not allowed? At least for standard TPM 1.2/2.0 chips, that isn’t the case.

- [0] https://wiki.archlinux.org/title/Trusted_Platform_Module

- [1] https://www.phoronix.com/news/Linux-3.20-TPM-2.0-Security


All Apple devices allow you to use it for important keys:

https://developer.apple.com/documentation/security/certifica...
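
For example, generating a P-256 key whose private half never leaves the Secure Enclave looks roughly like this (a sketch against the C Security framework API; error handling and the usual kSecAttrAccessControl policy are omitted):

    /* Sketch only: create a key pair whose private key lives in the Secure
     * Enclave. Compile with -framework Security -framework CoreFoundation. */
    #include <Security/Security.h>

    SecKeyRef make_enclave_key(void) {
        const void *pk[] = { kSecAttrIsPermanent };
        const void *pv[] = { kCFBooleanTrue };
        CFDictionaryRef priv = CFDictionaryCreate(NULL, pk, pv, 1,
            &kCFTypeDictionaryKeyCallBacks, &kCFTypeDictionaryValueCallBacks);

        int bits = 256;                      /* Secure Enclave keys are P-256 */
        CFNumberRef size = CFNumberCreate(NULL, kCFNumberIntType, &bits);

        const void *keys[] = { kSecAttrKeyType, kSecAttrKeySizeInBits,
                               kSecAttrTokenID, kSecPrivateKeyAttrs };
        const void *vals[] = { kSecAttrKeyTypeECSECPrimeRandom, size,
                               kSecAttrTokenIDSecureEnclave, priv };
        CFDictionaryRef params = CFDictionaryCreate(NULL, keys, vals, 4,
            &kCFTypeDictionaryKeyCallBacks, &kCFTypeDictionaryValueCallBacks);

        CFErrorRef err = NULL;
        SecKeyRef key = SecKeyCreateRandomKey(params, &err);  /* NULL on failure */

        CFRelease(params);
        CFRelease(size);
        CFRelease(priv);
        return key;   /* sign via SecKeyCreateSignature(); the private key
                         material itself is never exposed to the process */
    }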


Android has user visible APIs to interact with secure crypto hardware.

https://developer.android.com/privacy-and-security/keystore#...

But I agree in general with your point


Is that not what e.g. this project allows you to do?

https://github.com/tpm2-software/tpm2-pkcs11


Absolutely critical for performance, though. If there's a way out of this it might have to be better virtualization.


It is bad, but it is required for performance reasons.

The question is what the solution could be going forward, which is going to be a huge change anyway. I do not see a way out of this with our current architectures.


Neither block ciphers, nor stream ciphers, nor common public key algorithms (RSA, Ed25519) need or even profit from this. They just need fast access to the register-register math, maybe loop sequentially through all members of a fixed sized array a fixed number of times. The only thing those implementing such algorithms would probably like having is a few kiB of safe to access scratchpad memory for code and data. On entry to the crypto code copy the code and data there, enable a constant time mode for compute instructions and run the algorithm at full speed without worrying.


Should we make a petition for Apple to add a lockdown-like mode that disables speculative execution? I'll sign up for that.


From the GoFetch website, Apple has already done this for M3 chips.


Speculation is required to get even close to the single-thread throughput expected of any modern CPU for anything worth running on a general purpose CPU. The problem is that there is no formally specified model to reason about side channels, not even just timing side channels. Most ISAs don't specify the time it takes to execute instructions.

Let's assume I multiply two 64-bit numbers. The CPU could just do it the same way every time, with a worst-case latency of 4 cycles. As an extreme example, it may also track whether one of the factors is zero and dynamically replace the multiplication with a zeroing idiom that "executes" in 0 cycles once the scheduler learns that either input is zero.

Less radically, it could track whether the upper halves of registers are zero to fast-path smaller multiplications (e.g. 32-bit x 32-bit -> 64-bit) and shave off a cycle. IIRC some PowerPC chips did that, but for the second argument only. The ISA allowed it.

A realistic example is CPUs with data-dependent-latency shift/rotate instructions. What do you do if an ISA doesn't specify whether shift/rotate is constant time, but every implementation so far has done it in constant time? Do you slowly emulate it out of paranoia that a future implementation may have variable latency? Another real-world example would be FPUs that have higher latency for denormalised numbers; it's just not relevant to (most) cryptographic algorithms.

How the fuck are you supposed to build anything secure, useful, and fast enough from that?
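
For what it's worth, the "emulate it out of paranoia" option for a shift by a secret amount looks something like this sketch, built only from fixed-amount shifts and masks, and it is a good illustration of the price paid when the ISA gives no timing guarantees:

    /* Sketch: a 64-bit left shift by a secret amount, for the paranoid case
     * where the hardware shifter's latency might depend on the shift amount.
     * Decomposes the amount into bits and conditionally applies constant
     * shifts of 1, 2, 4, 8, 16, 32 using branch-free masking. */
    #include <stdint.h>

    uint64_t ct_shl(uint64_t x, unsigned amount /* 0..63 */) {
        for (unsigned bit = 0; bit < 6; bit++) {
            uint64_t shifted = x << (1u << bit);                   /* constant amount */
            uint64_t take = (uint64_t)0 - ((amount >> bit) & 1u);  /* select mask */
            x = (shifted & take) | (x & ~take);
        }
        return x;
    }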


Isn’t the issue more akin to use after free? If the instructions and memory were wiped on prediction path failure wouldn’t that help?


Nope. The side effects (which can be access times in case of non-failure, or voltage changes, or or or...) would still happen.


This isn't speculative execution.

EDIT: The downvotes make no sense. What this bug has in common with Spectre is that it has to do with cache timing. But in Spectre, the cache is affected by speculative execution; with "GoFetch", it's the pre-fetcher pre-fetching things which look like memory addresses. Pre-fetching is not speculative execution.


Unless the comment was edited, the person you are responding to did not use the phrase "speculative execution".


You're right, it says speculation. I read it as speculative execution, I have never ever heard the term "speculation" applied to prefetching...

But if they did mean to include pre-fetching in "speculation" then I retract my comment


Perhaps we should do the crypto in constant time, and run all other applications using homomorphic encryption?


FHE is unusably slow even for the simplest operations, and there is no reason to be sure it will ever be fast enough for any normal computing.


Disabling all hardware optimizations becomes an option long before homomorphic encryption becomes an option, performance-wise.


[flagged]


Did you actually look at the paper? It’s extremely technical. I can’t imagine the logo took even 1% of the time they spent on this.


Yes. I merely meant that I believe blowing those things out of proportion on purpose isn't helping anybody. The reality of vulns like this is that they don't affect the majority of normal people. Everybody wants to be the next cloudbleed.


As usual nobody cares about the "average users". This is a flaw, a very high-risk issue for everyone, and it should be treated as a big problem by Apple, but the "average user" is not important anymore...


The average user is compromised by social engineering, password reuse, or not installing updates. If you’re trying to improve matters for them, put your energy into getting them to adopt passkeys and patching promptly, and asking regulators for stricter penalties for phone number spoofing and delivering spam calls. I would wager that there are more people compromised every minute that way than will ever be compromised by this bug.


If this is confirmed, I'm really interested in how exactly Apple will somehow deflect this and make it vanish, like they somehow always manage to do with the myriad of issues they're facing over and over.


It’s a total non issue for the majority of folks. It requires local access and takes hours under very specific conditions that don’t apply to most people. How often do you run a server that will run arbitrary crypto operations on attacker controlled inputs?

Plus all the secrets in the Secure Enclave are immune to this attack, so your FileVault keys and your Apple Pay cards and all that jazz are completely safe.

It sucks that it exists, and crypto libraries that run on the platform outside of the Secure Enclave will get slightly slower, but no one will notice.


> It’s a total non issue for the majority of folks

People said the _exact_ same thing about Spectre/Meltdown. Then the JS PoCs came out


Isn't the lesson here that scripting in the browser needs to die? Letting untrusted code run on your computer is always a bad idea, no matter how much you try to sandbox it.


I would also love to see the API surface of the browser come way down.

If people knew just how widespread and effective browser fingerprinting is they would be shocked. It's Cambridge Analytica on steroids.


Yes, and now browsers have mitigations which make timing attacks harder. This bug also has a key dependency on being able to trigger a crypto operation in a local process, which isn’t easy to do from a browser sandbox or in general on a Mac.

The angle I’d worry about is something like a password manager, but most of those already have an authentication step and I’d be surprised if they didn’t have rate-limiting.


The Spectre/Meltdown mitigations have caused me more grief than the problems themselves to this day.

These vulnerabilities definitely exist, that much is a matter of fact. But whether it's something someone should consider in their threat model is a different matter.


>It requires local access

What does this mean? All I read is access to user space. Wouldn’t any web browser be enough?


It definitely supports my belief that centralizing identity, payment, and apps into a single device is a fundamentally flawed security model.


Can you elaborate what you mean?

I may be misunderstanding what you intended but how do you use traditional means of payment (credit card) without an identity? How do you check your email without identity?


What I said was that centralization is the problem.

Different tasks require different levels of identification. Cash (a traditional means of payment) requires no identification. I only carry up to about $200 in cash on me, which is an amount I'm willing to lose if my wallet is stolen.

When I use chip&pin ("what you have and what you know") for small payments, I rarely need any authentication, and when I do it's the PIN. My wife can and has used my card, with my permission. I have my email password written down for her so she can access it, eg, if I die and my computer dies.

The banking system probably factors in my usual payment locations to make the choice of when to ask for a PIN, combined with the trust experience with the vendor.

My card, even with a PIN, has a spending limit. Years ago I had to authorize raising the limit because my client was willing to reimburse me for a business class flight across the Atlantic. Of COURSE I want more friction in the system when doing something riskier. If the way to authorize a $60 dinner and a $60,000 car are too similar, then it's easier to fool you.

For a higher amount, I can go to the bank and carry out a transaction in person, or I can authorize it through their online banking system.

"But wait, how?" you might ask. The bank figured this out years ago, when people started going online, using unpatched Windows PCs without virus scanners.

The system - whose security I trust much more than a phone's - uses a small device with a camera. The login screen shows me a pattern with colored dots. The camera reads those dots, decodes the message (and probably also validates it cryptographically) and displays a message asking me to verify I want to log in.

I enter the PIN, and it generates a response code, which I enter.

If I make a payment, or add a new recipient, or a few other things, I am required to use the device again.

This device stays at home, because I don't expect to make $10,000 payments while out.

I can use it on any web-enabled device, because the security is in "what I have" and "what I know", in a device which cannot be hacked, does not require any physical connection, and does not require network access.

I like this system more than a Yubikey because it does not require a hardware attachment, which isn't always possible.

Yes, Yubikeys feel like a step backwards compared to my bank's security practice. I don't understand why there is no provision for cable-free/wifi-free/mobile-system-free validation in this supposed privacy-oriented switch to passkeys, when I know such a system exists.

Furthermore, the bank has the legal obligation to ensure the system works. If the encryption system is somehow broken, they are required to update the hardware. Apple is not. Yubi is not. The cost is all on you. My bank has even shut down mobile phone banking for older hardware/OSes, claiming the security isn't high enough. But they have not needed to update my security device.

If you expect your phone to be able to do anything, and authorize anything, then I see it as a giant risk. You can be at the bar, having drunk too much, and be convinced to make a payment or authorization that you shouldn't have. There's no real, physical way to change your risk level depending on the circumstances if you always have your phone with you.

Centralization of identity, payment, and apps is fundamentally flawed.


> The system - whose security I trust much more than a phone's - uses a small device with a camera. The login screen shows me a pattern with colored dots. The camera reads those dots, decodes the message (and probably also validates it cryptographically) and displays a message asking me to verify I want to log in.

> I enter the PIN, and it generates a response code, which I enter.

How do they protect against phishing? That sounds like the weak MFA where attackers spoof the login page but make their own connection to your bank and pass through the challenge and response, which means that a user who doesn’t fastidiously check the hostname can be convinced to enter a TOTP, SMS, etc. code.

Phone-based WebAuthn systems are immune to that because they incorporate the hostname into the signing process so even if they convince you that they’re hugeb<cyrillic a>nk.com there’s no way for you to override that and give it a response which works for hugebank.com.


> How do they protect against phishing?

The same way they minimize the vulnerability when running on an exploited Windows machine?

Even if I log in, via a MITM attack, all they can do is read my account history.

Actually making changes requires further authorization. When I make a payment the screen asks me to confirm the amount I'm about to pay. The same applies to other security sensitive changes.

Still, you make a very valid point, and I thank you for pointing out the flaw in my understanding.

I still have a very deep distrust about centralizing identification, payment, and apps on a single device, and strongly dislike the inability to have physically very distinct trust levels.


> Actually making changes requires further authorization. When I make a payment the screen asks me to confirm the amount I'm about to pay. The same applies to other security sensitive changes.

That’s a good answer, too, especially if that custom message can be large enough to display the name & amount. Anything to jar people out of the "I thought I was sending $100 to the cable TV company, not $6,000 to someone in India" autopilot state.

I generally agree with your larger point and wish that banks would make it easier to do things like set up a Yubikey and require it be used on any transaction over a certain amount. I’ve never in my life needed to make a large transaction where I wouldn’t have been able to grab a token from my safe to approve it, and at some point delay becomes a security feature, since it gives the bank staff time to do things like call you and make sure you really intended to do something.


Thank you for your supportive words.

Certainly, seeing articles like this about possible flaws in a centralized system, and the lack of economic responsibility for fixing issues for affected customers ... about people losing their Google ID, the monopoly abuses of Google and Apple, and the e-waste issues of depending on apps which don't support old-but-working phones (sometimes in the name of security, but more often because it's expensive to maintain old phones) ... really gives me a bad feeling about this brave new world.


> Phone-based WebAuthn systems are immune to that

Do they assume the OS is locked down and secure?

I mean, clearly if someone has a remote desktop view for my machine, then they can act as me, including any check for available hardware. The same should apply for a phone, yes?

If so, that sounds like my bank will never formally support running on a PinePhone or other user-inspectable/modifiable system - they will simply say they require a full chain of trust for the OS.

I'm glad the (relatively) open arenas of macOS and Windows exist, and that people have 10+-year-old machines, forcing my bank to support alternate login methods for less-trustworthy systems.


The majority of ecommerce purchases made today are done on mobile.

And the majority of these would be secured via on-device biometrics.

The fact that this is all happening with approval of credit card companies, banks, regulators etc means that the idea that centralisation is fundamentally flawed is simply wrong.


My bank's terms of services say they are not responsible for flaws in the mobile phone or privacy issues in the store.

If there is a vulnerability, who pays for fixing it? Who pays for the new phones?

Of course the credit card companies and banks don't want to be responsible. They currently aren't, and they don't want to take it on.

Why should the regulators care yet? Using a mobile phone is still under the fiction that it's an optional functionality, where the user has agreed to take on the risk themselves, in exchange for better convenience and non-essential services.


Single point of failure?

If a single code gives access to everything, losing that code to a malicious actor is really bad.


The premise is that you keep a separate device with the sensitive stuff on it (e.g. the chip in your physical credit card, a physical ID badge), and then you can't click on a link in your email or go to the wrong web page and compromise that data because the device you use for email or browsing never has it to begin with.


As opposed to what else?

Please elaborate.


Using multiple devices: credit cards for payment instead of Apple Pay, a camera for taking pictures instead of a camera app, a notebook to write notes instead of an app.

From my perspective the original comment is not rocket science?


I learned from another commenter on HN that when a post asks low-effort questions where the answer is common sense or implicitly understood, it’s most likely a bot or troll or shill. Best not to engage with them.


That makes sense. Within the past couple of months the value of comments, especially before community curation, has gone down very much.

Maybe Hacker News is mostly LLMs speaking at this point.

What a shame.


What an awful and counterproductive experience.

If you require someone to enter their credit card number every time they make a purchase they end up doing dumb, insecure things like storing it as a text file on their desktop.

And having external hardware just means more cables, batteries, updates to keep it secure etc.

Initiatives like PassKey, ApplePay, TouchID etc. have been a huge win for security and privacy.


Hey, this is completely up to you. Nobody is trying to steal your Apple Pay.

This is merely the parent commenter's belief that they should use different apps to reduce the risk vector when secrets are stolen.

No reason to go completely ballistic.


Thanks, though I would say "different physical devices" not "different apps".

This also gets into e-waste issues. I really don't want multiple phones, and in any case, consumer phones are expected to be replaced every few years. A friend has an old iPhone which still works, but the local transit system's app no longer supports it. My bank's mobile app requires Android 7, etc.


I don't want to require everyone to do that. I'm fine if others are willing to accept the risks of centralization of identity, payment, and apps into a single device. I understand the benefits you say are true for many people.

I don't want to be forced into that model because I think it's too risky. I think YubiKeys and other devices which require physical attachment to the device are too risky when I know other solutions exist (see a parallel comment, or below).

I want to teach my kids that if a phone is asking for permission, asking to verify id, asking to read complex terms of services, then you must be careful, and preferably not trust them. The current model is "identify yourself whenever the computer asks" and "click I Agree", which seems open to all sorts of abuse of power and trust.

How many people know they are giving up their rights to a trial in favor of forced arbitration? How many read the license which says to email 'law@example.com' to not waive that right? How many are able to understand the relevant issue? Effectively zero. Are high schools going to start teaching contract law so students are able to understand what they are expected to sign? No.

I also think switching to 2FA and passkeys empowers the Google and Apple duopoly. What happens if you lose your phone? How do you reestablish your passkeys?

"Simple. In your new phone, log back in to your Google or Apple account", right?

And if Google or Apple shuts down your account for some reason?

"Umm, make backups? Also have a YubiKey?"

That's a huge lock-in for the sort of people who would otherwise store their passwords in a cleartext file. At the very least everyone should be able to let their bank or other trusted third party store a copy of the end-to-end encrypted database, and that bank or third party should have an enforceable legal obligation to store, maintain, and provide that encrypted database, and new phones should be able to restore from this database without needing Apple or Google authorization, after the person has physically visited the bank or police station and identified themselves sufficiently.

And if I say I want something more secure and more portable than a YubiKey to log in?

My bank login device has had one change of AAA batteries in about 7 years. It has had zero updates, because it is not programmable. It uses a camera to read something like a QR code, a screen to read the requested task, a number pad to enter a PIN, and I can read the response code on the screen, to enter into the computer. It can work with any device, even ones without available plugs.

(And my bank is legally responsible for ensuring the security level is enough, and updating if it is not. Apple is not. Google is not.)

While I love it for bank login (it's the orange one at https://www.sparbankenskane.se/privat/digitala-tjanster/sake... ) I also don't want that for all my services!

If I need to pull it out, I know that I'm doing something that requires extra attention and care. The rituals needed for different levels of authorization should be very different, to make it hard to get confused about what you are doing. I also have the ability to physically leave my higher authentication devices at home while I'm out for the day.

(UPDATE: acdha correctly pointed out the phishing attack possible in this approach. I do not know what the bank does to protect against phishing. But since this is a low-use service, which is unlikely to be targeted, and I am fastidious about double-checking, it seems like a low risk issue. Security through diversity.)

Why should I trust Apple Pay's privacy more than I do my bank's? Is Apple Pay required to follow the same Swedish privacy restrictions that my bank does? Is Apple Pay equally liable in case of errors? Can the Swedish government audit how Apple Pay works and confirm it complies with the same level of privacy as my bank?

Every time I look into it, it seems the answers are always "no." Maybe it's changed?

Basically, I trust international companies who have abused their monopoly position with my security and privacy far less than I trust the Swedish government.


I imagine something like a software framework that can be called if properly secure crypto is needed at the expense of performance


Like a TPM?


I'm sure Apple will provide a patch in the next few days. Mr Tim Cook will take care of the share price.


Security through obscurity is really a bad idea, and Apple is no exception. In the long run, this will likely drive the adoption of RISC-V as a better alternative.


This RISC-V evangelism is worrying. Using RISC-V doesn't make your system secure; good ISA implementations do. The ISA has no bearing on security vulnerabilities. Perhaps a faulty decoder could be a vulnerability vector, but a faulty RISC-V decoder wouldn't be compliant, and neither would a faulty ARM decoder.

If I add a custom crypto extension to a RISC-V core and implement it badly, is that the fault of RISC-V? No! It's my own. And RISC-V doesn't help anyone here because their license allows me to keep my extension completely closed source - no different than Apple is today with ARM.


My comment was not about the ISA implementation or specification; it's about the TCB (trusted computing base), which in Apple's case (like Intel and AMD) is closed. In RISC-V it is open. I would recommend you educate yourself on a topic before lecturing others.


>The ISA has no bearing on security vulnerabilities.

Complexity leads to bugs, some of which are going to be security bugs.

ISAs impose complexity upon implementations. To claim they do not matter would be disingenuous.


What does this have to do with security through obscurity? This is an issue with cache prefetching.


It has to do with the secure processor. Although you seem to ignore what the TCB is.


No, this only works on the regular processor cores. It's a cache timing attack that depends on the attack code and the targeted cryptographic code running on processors that share cache.

See the FAQ at https://gofetch.fail/


Yes. This is good for Bitcoin.



