ZombieLoad: Cross Privilege-Boundary Data Leakage on Intel CPUs (cyberus-technology.de)
854 points by Titanous on May 14, 2019 | 308 comments



Apparently Intel attempted to play down the issue by trying to award the researchers the $40,000 tier reward plus a separate $80,000 reward as a "gift" (which the researchers politely declined), instead of the maximum $100,000 reward for finding a critical vulnerability.

Intel was also planning to wait for at least another 6 months before bringing this to light, had the researchers not threatened to release the details in May.

Source (Dutch interview): https://www.nrc.nl/nieuws/2019/05/14/hackers-mikken-op-het-i...


Intel has abused the responsible disclosure process for economic gain. Their leadership was not interested in a repeat of the Spectre and Meltdown impact on their stock price and made the (most likely accurate) assessment that recurring news of Intel vulnerabilities would harm their stock more than a delayed, cumulative release. As a result, academic researchers were denied some of the credit they would otherwise have rightfully earned, because their individual contributions are buried in a sea of similar publications. Research efforts were thus needlessly duplicated: work which could have formed the basis for subsequent research was unavailable, and (publicly funded) researchers wasted time duplicating results. If two researchers discover the same vulnerabilities independently, there should be no embargo on disclosure, because it has to be assumed with high likelihood that third parties might already be actively exploiting it. The public has to be warned, even if no effective mitigation is available. If, for a subset of the vulnerabilities, AMD and ARM are not affected, then security-conscious users could have been reducing their exposure by using competitors' chips.

In this case the practice of responsible disclosure has been turned on its head. There should no longer be any responsible disclosure with Intel as long as they do not commit to changing their behavior.


> The public has to be warned, even if no effective mitigation is available. If, for a subset of the vulnerabilities, AMD and ARM are not affected, then security-conscious users could have been reducing their exposure by using competitors' chips.

Given the way Intel has been handling these security issues, I am going to avoid buying Intel whenever possible going forward, regardless of whether they have slight performance or power advantages over competitors. The way to speak out against corporate governance in this case is to vote with my wallet.


It's worse than that. Some of these flaws have been known for over a year already. Many vendors with implementation details of the fixes have said "full mitigation may require disabling hyperthreading".

Wtf does that mean exactly? Do the patches and microcode work or do they not? I expect the truth to come out as OSS maintainers come out of embargo and others analyze the patches. But it sure looks like VMs on your favorite cloud provider will still be vulnerable in some ways, because they're not turning off HT.

Wired covers many of the details from your Dutch link in English. https://www.wired.com/story/intel-mds-attack-speculative-exe...

Intel pressuring vendors not to recommend disabling hyper-threading? Apple has added the option to macOS, so presumably the mitigations are not completely effective: https://www.theregister.co.uk/2019/05/14/intel_hyper_threadi...


That speaks volumes to the integrity of the researchers. Similarly, it speaks to a lack of the same @ Intel. Bribing for silence is not the way to deal with vulnerabilities. I’m glad the researchers are getting some recognition.


> Intel was also planning to wait for at least another 6 months before bringing this to light

Of course, until the legally agreed date when they can dump shares so there’s no obvious proof that it’s insider trading. Isn’t that what (then) Intel CEO Brian Krzanich did after Meltdown/Spectre?


Not sure why this is downvoted; it sounds like the most logical reason.


Because they’d eventually have to disclose when the vulnerability was discovered and that’d be extremely obvious what they’re doing?


It might be obvious, but that's exactly what Brian Krzanich did in 2017. He didn't get in any actual trouble for it even though the timing smelled blatantly like insider trading.


Is it obvious if they have an existing plan to sell shares and are simply waiting for it to trigger? They can reasonably claim they took this action to protect consumers until they had a better fix


Executives are allowed to sell at pre-agreed dates. If they do it on those dates then there’s nothing to prove, they just postpone the disclosure under any reason. Doesn’t make it smell less like insider trading, you just can’t prove it.


I don't think he did. He got out and had given notice a year prior. Then he made a comment about not losing more than 20 percent of the datacenter market to AMD. The dude definitely saw what was coming.


Seemed to work out okay for the Equifax executives who sold stock before they publicly disclosed they had been breached.


No. CPU vulns don’t affect Intel’s stock price.


History would appear to prove you wrong[1]. Yes, Intel's stock price rebounded, but that doesn't change the fact that it dropped when the vulnerability became public.

[1]: https://qz.com/1171391/the-intel-intc-meltdown-bug-is-hittin...


Yes. Gamers like Intel because it's faster. Benchmarks are clear. :P


You have to admire the complex complicity. Someone smart enough to understand the depths of the problem had to guide that conversation


Or just someone paranoid enough that this would be their standard response if they poll the researchers and one of them says "this could be New York Times big"


What is the recommended course of action? Stop buying Intel products, and devices which contain them?

What about devices with older processors? I'm still running a Sandy Bridge rig and it works fine, except for the side-channel vulnerabilities. It's probably not going to be patched. I also have a cheaper computer with a Skylake processor, which is newer yet still vulnerable!

It's only a matter of time until something really nasty comes along, making all these PCs dangerous to use. What then? Lawsuits?

My questions are only partially rhetorical.


The stream of critical CPU vulnerabilities starting with Spectre/Meltdown last year are related to speculative execution, not just Intel. (AMD and ARM CPUs are also vulnerable to Spectre, for example.) Intel CPUs are sometimes vulnerable to additional attacks because they speculate in more scenarios than other designs. But fundamentally, as long as multiple different trust domains are sharing one CPU that speculates at all, or has any microarchitectural state (e.g., caches), there are likely to be some side-channel attacks that are possible.

The important thing to realize is that speculation and caching and such were invented for performance reasons, and without them, modern computers would be 10x-100x slower. There's a fundamental tradeoff where the CPU could wait for all TLB/permissions checks (increased load latency!), deterministically return data with the same latency for all loads (no caching!), never execute past a branch (no branch prediction!), etc., but it historically has done all these things because the realistic possibility of side-channel attacks never occurred to most microarchitects. Everyone considered designs correct because the architectural result obeyed the restrictions (the final architectural state contained no trace of the bad speculation). Spectre/Meltdown, which leak the speculative information via the cache side-channel, completely blindsided the architecture community; it wasn't just one incompetent company.
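
(For anyone who hasn't seen how the cache side-channel is actually measured: the receiving end is usually just a Flush+Reload timing probe, something like the minimal C sketch below. This assumes x86 and that a hit/miss latency threshold is calibrated separately.)

  #include <stdint.h>
  #include <x86intrin.h>   /* _mm_clflush, _mm_lfence, __rdtscp */

  /* Evict one cache line, so a later fast access proves someone touched it. */
  static void flush_line(const void *addr)
  {
      _mm_clflush(addr);
      _mm_lfence();        /* order the flush before subsequent loads */
  }

  /* Time a single load; a "small" result means the line was in cache. */
  static uint64_t reload_latency(const void *addr)
  {
      unsigned aux;
      uint64_t start = __rdtscp(&aux);
      *(volatile const uint8_t *)addr;
      uint64_t end = __rdtscp(&aux);
      return end - start;
  }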

The safest bet now for the best security is probably to stick to in-order CPUs (e.g., older ARM SoCs) -- then there's still a side-channel via cache interference, but this is less bad than all the intra-core side channels.


That's not exactly true. Broadly speaking, there have been two very different kinds of speculative execution vulnerabilities, with different security implications and workarounds. Spectre and its relatives are an attack on trusted code that processes untrusted data using certain code patterns guarded by conditionals that can be speculatively executed; they're inherent to speculative execution past branches, but they require specific code patterns that can be avoided to work around the issue.
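
The canonical Spectre v1 pattern looks roughly like this sketch (array1, array1_size, array2 and temp are the usual hypothetical names; the attacker controls x after training the branch predictor):

  #include <stddef.h>
  #include <stdint.h>

  /* Hypothetical globals: array1 is in-bounds data, array2 is the probe
     buffer the attacker later times, x is attacker-controlled. */
  uint8_t array1[16];
  size_t  array1_size = 16;
  uint8_t array2[256 * 4096];
  uint8_t temp;

  void victim(size_t x)
  {
      if (x < array1_size) {
          /* With the branch predictor trained "in bounds", this body runs
             speculatively even for an out-of-bounds x. The secret byte
             array1[x] selects which line of array2 gets cached; the
             attacker recovers it afterwards with a cache-timing probe.
             Architectural state is rolled back; the cache footprint is not. */
          temp &= array2[array1[x] * 4096];
      }
  }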

These vulnerabilities and Meltdown allow untrusted code to speculatively access data that it shouldn't be allowed to access at all, and to use that speculative access to leak the data itself. Unlike Spectre, this can be (and to some extent has to be) fixed at the hardware level, because the hardware itself is failing to protect sensitive data. This class of vulnerability seems to have been mostly Intel-exclusive so far (with the main exception being one unreleased ARM chip that was vulnerable to Meltdown). There's nothing inherent about modern high-performance CPUs that requires them to be designed this way.
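
The Meltdown-class pattern, by contrast, doesn't need any victim code at all; the attacker's own unprivileged code does something like this sketch (kernel_addr and probe_array are hypothetical; catching the fault via a signal handler or TSX is omitted):

  #include <stdint.h>

  void meltdown_gadget(const uint8_t *kernel_addr, const uint8_t *probe_array)
  {
      /* This load faults (user code reading a kernel address), but on
         affected CPUs the dependent access below still executes
         speculatively first, encoding the stolen byte into which
         probe_array cache line becomes hot. */
      uint8_t secret = *(volatile const uint8_t *)kernel_addr;
      *(volatile const uint8_t *)&probe_array[secret * 4096];
      /* The attacker then times probe_array line by line to read back
         the byte, Flush+Reload style. */
  }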

Edit: This slipped my mind, but Foreshadow / Level 1 Terminal Fault was yet another similar Intel-only processor vulnerability that allowed speculative access to data the current process should not be able to access. It's definitely a pattern.


Yes, that's true, several of the vulnerabilities involve checks that are performed late (not at time of speculative access, but at some point before instruction commit). Not excusing the design choice at all, but it's conceivable that an engineer could make this choice if (i) side-channel effects of the speculation are not considered at all, and (ii) the postponement of the check allows the load latency to be reduced. Again, not justifying, and the vulnerabilities are terrible, but there does seem to be a rational-given-some-assumptions way to reach such a decision.


I’m not a CPU architect, but it seems like Intel saved a couple gates by putting garbage instead of zeroes in the pipeline.


After reading more of the (limited, publicly known) details, it looks like the data leaked isn’t, strictly speaking, total garbage. But I do wonder whether Intel got a meaningful latency improvement by putting potentially wrong data into the pipeline instead of using zeroes or stalling. Zeroes or a stall would require knowing that the data is invalid before continuing with execution, which could be a performance issue.


You're not, but the gentleman you're replying to is. Bet you can't guess where and what he worked on?


Not sure how to read this, but if you meant it as a personal attack that's totally not ok here.

https://news.ycombinator.com/newsguidelines.html


Not sure how to read this either, but as a moderator, if you strive to warn people about ambiguity you can't discern, I can assure you no harm was intended per the rules cited.


Yup, I worked for a bit at Intel, but I don't speak for them, I wasn't involved in any of the designs under discussion, and everything I'm saying here is public knowledge in the computer architecture community. I figured that the perspective from the academic comparch world might be interesting.


Hehehehe I love it! Thank you much. I was being a bit rousing/ambiguous as your commentary caught my attention and was a bit excited when I checked out your background.


> There's nothing inherent about modern high-performance CPUs that requires them to be designed this way.

Assuming by "designed this way" you mean: to speculatively execute past security checks, I'd disagree.

I'd say the relevant performance measure for CPUs (as opposed to other kinds of processor) is the speed at which they can execute serial operations. As electronic performance improvements offer increasingly marginal gains, we need to resort to improved parallelism. When operations are needfully serial due to dependency, as are security checks, the only way to accelerate that beyond the limits of the electronics is to make assumptions (speculations).

It's not inherently wrong to do this, but it requires that speculations never have effects outside of their assumptive execution context.


I think there's quite a big difference between leaking information between security contexts (Meltdown) and leaking within a security context (Spectre). The latter is a problem, but it's not the same magnitude of failure as the former.

Also, no need for older SoCs. High end ARM chips have both in-order and out-of-order cores but the cheaper ones have in-order ones only. A Snapdragon 450 is pretty modern and doesn't speculate deeply enough to be vulnerable to Spectre.


> Spectre/Meltdown, which leak the speculative information via the cache side-channel, completely blindsided the architecture community; it wasn't just one incompetent company.

In the x86 space, Meltdown absolutely was down to one company apparently deciding to over-optimize for performance.

I can't find it now, but I remember reading a thread from (I think) the OpenBSD devs about how the Intel MMU documentation described fairly sane behaviours and how far the reality deviated from the documentation.


> In the x86 space, Meltdown absolutely was down to one company

Serious weasel wording; outside of the x86 space, every other high-end architecture also had Meltdown issues: ARM, and IBM's POWER and mainframe designs.


Erm, no? Meltdown was Intel only. Spectre affects absolutely every architecture with speculative execution, but Meltdown (which allows crossing process and security boundaries) is absolutely unique to Intel.


Erm, check out Wikipedia https://en.wikipedia.org/wiki/Meltdown_(security_vulnerabili... and follow the links? Meltdown is CVE-2017-5754, see: https://developer.arm.com/support/arm-security-updates/specu... where as I recall ARM initially described the Cortex-A75 as having a "variant", but now just lumps it in with the CVE. And the IBM info is also there, POWER7+ through POWER9, and per Red Hat, mainframe/System Z.


Everyone who does speculative execution had Spectre issues, but Meltdown-style vulnerabilities have been mostly Intel-exclusive. These new ones are too.


Maybe because Intel has shipped a thousand more SKUs and millions more CPUs with Meltdown than ARM, for which the Cortex-A75 was a new design, and IBM, which doesn't ship huge numbers of either POWER or mainframe CPUs??


Why would that make a difference? We're not talking about manufacturing defects, every single unit they sell has the problem, doesn't matter if they sell 10 or 10 million.


It makes a difference because Intel is a more attractive and more consequential target for researchers. AMD's market share in servers is minuscule and even declining a bit as of 19Q1 (?), and modest but increasing nicely in notebooks and desktops https://news.ycombinator.com/item?id=19916279, while IBM's POWER and mainframe systems are expensive to very expensive to access.

ARM is actually a good target, with a number of their newest designs using out-of-order speculative execution with Spectre vulnerabilities (one of the newest even being vulnerable to Meltdown) and their owning the mobile space outside of notebooks, but the significant headline-worthy instances tend to come in much more locked-down devices. Speed is also an issue: everything else being equal, the faster the chip, the faster data-exfiltration proofs of concept will work.


Yes, the vulnerabilities are not just Intel, but they're mostly limited to Intel CPUs. Why is AMD less prone to these mistakes? Perhaps there are simply fewer researchers looking into AMD processors?


This happens only because Intel went much further than AMD and other companies in exploring these effects. If other companies used speculative execution as much as Intel, the result would be the same. It is not a flaw of implementation, it is a flaw of basic design.


They have different cache architectures; Intel uses inclusive caches (i.e., each level contains all entries from the previous levels), AMD uses exclusive caches (each level contains entries unrelated to any other level). This might have different effects on the classes of vulnerabilities they are prone to.


I know that some AMD CPUs a long time ago had exclusive caches, but for Ryzen, I'm pretty sure that both L1 and L2 are inclusive, and L3 a victim cache.


the inclusivity types don't really play a role in these types of attack (until you get to a very practical stage where this might matter), not least because there are other sidechannels that can be exploited. an inclusive llc is just convenient.

that said, some newer Intel LLCs are non-inclusive, and amd changed its cache relations as well in ryzen


The focus is on Intel partially because it's the most valuable target, so we might be looking at selection bias.


That's really not the case. Once you know of an attack of this sort, it's pretty easy to test everyone's chips for it. Basically every Intel chip since Nehalem is vulnerable to Meltdown, as are IBM's POWER chips, as is one out-of-order ARM core but not any of the others. And we know that AMD chips are safe from that vulnerability.

Whether you run security checks in sequence or in parallel with a data access is pretty fundamental to the design of a core. Doing the latter has performance advantages, but it's really hard to verify that there won't be any architectural leaks as a result, let alone these micro-architectural leaks that nobody was thinking about.


I agree with everything you say, but security researchers will still prefer searching for new vulnerabilities in Intel (which is hard, I hope we agree on that) and verify they don't work on AMD, rather than the other way around.


We were talking about alternative CPU designs and a guy commented on how much of the volume of a CPU is memory these days.

I wondered aloud if it wouldn't be better for us to embrace NUMA and make the bigger caches directly addressable as working memory instead of using them as cache.


I've heard somewhere that some or all versions of Intel Atom are immune to both Spectre and Meltdown attacks. The Bonnell architecture, being just a supercharged 486, has none of those fancy features that are being exploited. Not sure about Foreshadow, since HT is present on Atom processors.


No out-of-order speculative execution, no bugs of this exact sort, except maybe you'll find something in the odd corners of special features. As far as I know (I don't follow them much), Atoms are based on the Pentium super-scalar architecture, which is a "supercharged 486" that allows the CPU to do more than one operation in parallel on different execution engines.

Per Wikpedia, "The Pentium has two datapaths (pipelines) that allow it to complete two instructions per clock cycle in many cases. The main pipe (U) can handle any instruction, while the other (V) can handle the most common simple instructions." (https://en.wikipedia.org/wiki/P5_(microarchitecture) )

HT uses two instruction decoders to keep a set of execution engines busier: suppose you had the above set of 2 execution datapaths (which may be improved on these Atoms) and couldn't do two operations at a time from one instruction stream; the other instruction stream could then use the otherwise unused datapath.


> The safest bet now for the best security is probably to stick to in-order CPUs

out-of-order execution != speculative execution.

It is possible to have OoO without speculative execution. On the other hand they do tend to come as a pair since they both utilise multiple execution units, for instance the Intel Pentium in 1993 was the first x86 to have OoO or branch prediction (486 and those before were scalar CPUs).


> It is possible to have OoO without speculative execution

That's technically correct (the best type of correct!), but without speculation, the CPU can't see beyond a branch, so the window in which the machine can reorder instructions is very small. (For x86 code, the usual rule of thumb is that branches occur once per 5 instructions.) So in practical terms, an OoO design (in the restricted-dataflow paradigm, i.e., all modern general-purpose OoO cores) always at least does branch prediction, to keep the machine adequately full of ops.

An interesting question, though, given that we're going to do speculation, is what types of speculation the core can do. The practical minimum is just branch prediction, but modern cores do all sorts of other speculation in memory disambiguation, instruction scheduling, and the like, some of which has enabled real side-channel attacks (e.g., Meltdown).

Also, FWIW, Pentium was 2-way superscalar, but not OoO in the fully general dataflow-driven sense; the Pentium Pro (aka P6), in 1995, was the first OoO. (Superscalar can technically be seen as OoO, I suppose, in that the machine checks dependencies and can execute independent instructions simultaneously, but it's not usually considered as such, as the "reordering" window is just 2 instructions.)


OoO without speculation is completely pointless: the typical reorder window is orders of magnitude larger than the average number of instructions between conditional jumps. I don't think there has ever been an OoO CPU without speculation.

OTOH, speculation without OoO is not only possible but in fact very common: for example, the majority of non-ancient in-order CPUs.

In fact the original Pentium, contrary to your statement, was an in-order design (the Pentium Pro was the first Intel OoO design). Also I believe that 486 already had branch prediction. The Pentium's claims to fame were being superscalar (i.e., it could, in some cases, execute two instructions per cycle), a better FPU, and a better pipelined ALU (I think it had a fully pipelined multiply, for example).


> I don't think there has ever been an OoO CPU without speculation.

To the extent I've looked at it, without reading original documents, the original OoO design that current systems are based on, the IBM System 360/Models 91 and 95's floating point unit using Tomasulo's algorithm https://en.wikipedia.org/wiki/Tomasulo_algorithm didn't extend to speculative execution.

No doubt because gates were dear, implemented with discrete transistors, and that processor was a vanity project of Tom Watson Jr's. And memory was slow (core, except for NASA's two Model 95s with 2 MiB of thin-film memory at the bottom), and cache was still a couple of years out, introduced with the 360/Model 85. OoO becomes compelling when you have a fast and unpredictable memory hierarchy, as we started to have in a big way in the 1990s when we returned to this technique.


Interesting. It seems that the machine did have some limited form of branch prediction, but probably the expectation was that FP kernels would be optimized to be mostly branch free, and, as you say, transistors were a premium.


> Also I believe that 486 already had branch prediction

I can't find any evidence to support that [1]

I suppose it's technically possible to have branch prediction on a scalar processor, but I imagine it would not be hugely beneficial.

https://books.google.com.sg/books?id=QzsEAAAAMBAJ&pg=PA59&lp...


So it seems that the 486 had a trivial not-taken predictor; but that's still different from stalling on each conditional branch and does require checkpointing and rollback on misprediction (although with a pipeline only 5 deep that's probably also not very complex).

Edit: pentium did have a significantly more sophisticated predictor of course, although not without flaws.


Can we say that, without speculation and caching, and just throwing more and more CPUs at the work, we would have slower single-app performance but better parallel execution?


Sure -- for a good example, GPUs go partway there by not speculating (they still have cache hierarchies though). It works because GPU workloads have massive data parallelism, so while one group of threads (a "warp") is stalled waiting for data, the cores can just execute other threads. Sun/Oracle had built a number of Sparc chips along this line too, e.g. the Niagara (Sun UltraSPARC T1) tolerates memory latency by having a bunch of SMT threads (8 per core, IIRC?) rather than OoO scheduling.

The problem is that single-thread performance is really important for a lot of workloads, because (i) parallelization is hard, (ii) even for parallelized workloads, serial bottlenecks (critical sections, etc.) still exist, and (iii) latency is often important too (one web request on one core of a server, or compiling one straggler extra-large file in a parallel build, for example).


Really, different trust domains cannot seriously benefit from a common cache: their datasets are by definition disjoint.

So it would be safest to execute them on separate CPUs not sharing a common cache, e.g. pinning them to different CPU sockets on a multi-socket machine, or to different physical machines altogether.

This may be still faster than running on old ARMs.

I wonder if dedicated cloud boxes, where all VMs you spin on a particular physical machine belong only to your account, will become available from major cloud providers any time soon. In such a setup, you don't need all the ZombieLoad / Meltdown / Spectre mitigations if you trust (have written) all the code you're running, so you can run faster.


Scaleway is one such provider. They call it not virtualization but physicalisation.

I should add that AWS has "dedicated instances" which are exclusive to one customer. They are more expensive than standard instances but they are used by e.g. the better financial services companies for handling customer data.


> different trust domains cannot seriously benefit from a common cache: their datasets are by definition disjoint.

Datasets, yes, but what about instructions from shared libraries?

I'll make a wild guess this being seriously beneficial would be limited to uncommon server configurations....


Let's dissect.

- Same OS, server: you likely control all the processes in it anyway, no untrusted code.

- Different OSes in different VMs, server: they likely run different versions of Linux kernel, libc, and shared libraries anyway. They don't share the page caches for code they load from disk.

- Browser with multiple tabs, consumer device: likely they might share common JS library code on the browser cache level. The untrusted code must not be native anyway, and won't load native shared libraries from disk.

- Running untrusted native code on a consumer device: well, all bets are off if you run it under your own account; loading another copy of msvcrt.dll code is inconsequential compared to the threat of it being a trojan or a virus. If you fire up a VM, see above.


There's also same-OS, operating-system-level virtualization, like Docker; it's the less expensive default for https://www.ramnode.com/vps.php, for example. I'm not at all familiar with this technology, but scanning Wikipedia: if you used Docker, would the libcontainer library be shared between container instances?


Likely it would be. But really Docker is more about convenience of deployment and (much) less about security. I would not run seriously untrusted code in merely a (Docker) container; I don't know much about the isolation guarantees of OpenVZ.

In any case, containers share OS kernel, OS page cache, etc. This can be beneficial even for a shared hosting as a way to offer a wide range of preinstalled software as ro-mounted into the container's file tree. Likely code pages of software started this way would also be shared.


Reading about speculative execution exploits makes me wonder whether it would be useful to have a speculation register, and an instruction prefix that claims one bit of that register, much as a mutex, while the prefixed instruction is being speculated about. Assuming a compiler intelligent enough to key a bounds check and later array access to the same speculation bit, or a language exposing low-level functionality to the programmer, Spectre could be defeated with minimal performance impact. Perhaps it could also be used to work around many other sorts of speculative execution flaws that might turn up in the future.


In-order does not mean no speculative execution


Yes, but it typically means a significantly smaller speculative window -- rather than a ROB that can fill with several hundred instructions beyond a mispredicted branch, there is just the pipeline depth from fetch to commit worth of speculative work.


The easiest prevention is to stop running untrusted code, or don't start doing so if you're not already.

The "elephant in the room" with all these attacks starting from Spectre/Meltdown is that an attacker has to run code on your machine to be able to exploit them at all.

To the average user, the biggest risk of all these side-channels is JS running in the browser, and that is quite effectively prevented by careful whitelisting.

As you can probably tell, I'm really not all that concerned about these sidechannels on my own machines, because I already don't download and run random executables (the consequences of doing that are already worse than this sidechannel would allow...), nor let every site I visit run JS (not even HN, in case you're wondering --- it doesn't need JS to be usable.)


I think that's a cop out. A system should be secure enough to isolate untrusted code.


The problem of running untrusted code is that the whole stack is a potential attack vector. From CPU to Javascript JIT compiler. Systems will never be secure enough to fully isolate untrusted code, because (1) people make dumb mistakes; (2) the incentives of most hardware/software vendors are profit, not security; (3) people have other priorities than security, e.g. performance.

At any rate, this is the world that we live in. Advise your non-tech friends to run updates to get the latest microcode and software mitigations. Install uBlock for them and block possible attack vectors aggressively (ads, trackers, etc). As a technical user, it's best to disable JavaScript completely by default and enable trusted third party JavaScript using e.g. uMatrix. Of course, this has other benefits too: creepy companies don't get to follow you around.


A friend and coworker of mine had a saying "Speed Kills" about trying too hard to make things fast while keeping the system robust. You most certainly can make more secure systems, as the merely superscalar and less intense CPUs are, but they're significantly slower.

All depends on your threat model; to take one extreme, if you're crunching numbers with your own optimized numeric code, the issue of untrusted code of "random JavaScript off the net" is not an issue. And I doubt many mainframes are used to casually browse the net ... I certainly hope not at the same time they run the night's financial transactions!


With any cloud computing, they very well could be.


Good point, I was thinking in the context of owning your own (super)computer or mainframe and having control over what ran on it. In the cloud, which to us implies running in a VM with other tenants vs. for example dedicated machine(s) with other services bundled in, anything goes.


> A system should be secure enough to isolate untrusted code.

In a dream world. It's become obvious that our general-purpose systems are far too complex to be proven secure, and trying to run untrusted code on such a foundation is going to turn up vulnerability after vulnerability after vulnerability. It is madness.


For some systems that's definitely the case. Not all systems require this (not all companies can afford to replace/patch their current hardware)


> What is the recommended course of action? Stop buying Intel products, and devices which contain them?

There is absolutely nothing to be done on our level about this.

I'm fairly convinced this is a systemic issue that can only be solved by almost entirely redesigning modern CPU and computer architecture.

I can draw a parallel to approximately all Intel CPUs, which are known to have a dedicated "mini CPU" (the Management Engine, which runs Minix) that is an absolute black box and has been found vulnerable to a wide variety of attacks over nearly a decade...

Not only do we need to redesign computer and CPU architecture, but we desperately need to make that entire process and body of knowledge open source, available to all, and more transparent.

Today this knowledge is in the hands of a few gigantic corporations, which keep it to ensure their monopolistic position.


> Not only do we need to redesign computer and CPU architecture, but we desperately need to make that entire process and body of knowledge open source, available to all, and more transparent.

Here's hoping OpenRISC takes off!


HN has a broad audience.

What exactly is "our level"?


Me being a caveman who believes that technology is a hammer that still can be used to drive nails into building materials instead of our own heads, I'm longing for an Intermesh of simple networked 8-bit machines running on vanilla (no ARM core bonuses) FPGAs, with CPUs simple enough to be predictable and understood not by a select few superhumans in the world, but by a critical mass of people, and communication protocols built from scratch with privacy and security in mind, so that such device could represent a social node: here's some info that I chose to make public, and there's my protected data. A single pedestrian FPGA could host a swarm of such tiny CPUs, each dedicated to a single task or security domain: a CPU to talk to shared filesystem, a CPU to talk to the private one, one more to process input, one to Tx/Rx bytes over the wire etc..

Or, if that is too complex, there could just be a bunch of breadboarded ICs capable of networking. There actually are real-world examples of such machines exposed via telnet, e.g. http://www.homebrewcpu.com

And all the performance beasts, while surely indispensable, could just enjoy their air-gapped solitude. Are there any massively parallel supercomputing tasks whose results couldn't be summarized and reduced to mere text, which is not too hard to move over the airgap manually?


> Are there any massively parallel supercomputing tasks whose results couldn't be summarized and reduced to mere text, which is not too hard to move over the airgap manually?

Video decoding? Games? You know, the major use cases for custom highly patented hardware design?


Step one: switch to in-order cores. Step two: cheekily author a Medium article titled Tomasulo's Algorithm Considered Harmful


Are in-order cores even fast enough to render a Medium article?


Yes, but software needs to be rewritten.


Maybe be more alliterative and title it Tomasulo's Algorithm Considered Tempting? Because, as far as I have found without reading the original documents, its development and use in the IBM System 360/Models 91 and 95 was out-of-order only on the floating point engine (but fully so there); they didn't have the gates, implemented with discrete transistors, to go further into speculation, or to extend it to the scalar hardware.

Hitting diminishing returns on caches with an ever-increasing gate budget is, I believe, what prompted the second generation starting in the 1990s, which adds speculative execution; the algorithm, with all those extra hidden registers, really invites you to do it.


I think it depends very much how you use your computer and also how much comfort you need. Therefore the answer is very individual. If you want to go totally hardcore secure, you might consider OpenBSD on some obscure architecture like RISC-V or Power - the Talos II workstations are really powerful. (Power was AFAIK originally vulnerable to Spectre or Meltdown, but there are mitigations and it's 100% open source) Probably it's smart to use 2FA on separate hardware (Smartphone, Yubikey or Smartcard for instance) and make it a habit to delete data and apps that you don't need. Oh and also installing only software from trusted sources - whatever that means for you - and an adblocker might also help to prevent malicious JS code. Also for many people iPads serve all needs they have and by default all the native apps have been reviewed.

Probably it's smart to see a computer not as a walled garden but more as a sieve.


> Stop buying Intel products, and devices which contain them?

None of that helps with your public cloud workloads.


If only you could target the processor type for your cloud instances...

https://aws.amazon.com/ec2/amd/

https://azure.microsoft.com/en-us/blog/announcing-the-lv2-se...


Vulnerabilities are everywhere, in everything. They just haven't been discovered yet.


From what I have heard, these flaws mostly affect Intel because they are the largest CPU manufacturer. They also dominate the datacenter and cloud-compute industries for now, which are by far the highest value targets.

But as an Intel consumer, I am not happy. My understanding is that more stuff can be fixed in microcode, but I suppose a bug could show up which was not practically fixable. If that happened, I would certainly sue or join a class-action lawsuit. Probably the class-action route, because even if I didn't get anything, I would be just mad enough at Intel to want them to suffer.

Of course, we do have consumer protection agencies; it is possible that they would step in if Intel had sold what is effectively a defective product.


> these flaws mostly affect Intel because they are the largest CPU manufacturer

It doesn't have anything to do with how large Intel is. They have clearly made a more aggressive hardware design which has more corner cases to break. The designs are broken, and microcode can patch some variants of these side channels, but the overhead is becoming a problem.

In this case it's not certain if microcode can address the problem but if it can't, disabling of SMT (hyperthreading) can be a significant cost for some workloads (well above 10% for things that haven't been specifically tuned to avoid cache misses, which is most software in my experience).


It's undeniable that they've made aggressive hardware decisions, but... Intel dwarves their second largest competitor (AMD) in revenue by an order of magnitude... and remember, AMD sells GPUs as well.

They are BY FAR the largest producer of CPUs for laptops, desktops, and servers. Note that on each of these platforms, arbitrary code execution is an issue.

Now for phones? Less so. Aggressively locked down software can help.

So, as a researcher, who are you going to research? AMD, who has negligible market share? Apple, who completely locks down their platform? Qualcomm? Well, that's an option, but Intel still makes them look small. Vulnerabilities in Intel CPUs affect the most people and the most money... You're naturally going to put more research into Intel.

It absolutely does have something to do with how large Intel is.


While it's undeniably a more lucrative target, I still don't think that is the explanation for the differences in vulnerability. Researchers have been working on the x86/amd64 family of devices for a very long time. As an aside, many of the published exploits in the past have been against VIA and AMD (older architectures).

Like others pointed out, the portability of an attack is usually tried shortly after a successful attack is found. In this case, the attacks have not been found to work elsewhere yet. I won't count it out that it's more effort, but we're looking at a research timeline that spans a year after the first reports were made to Intel, which is plenty of time to consider other chips. AMD's Spectre problems are very real but much narrower, while an entirely separate architecture like ARM shares a lot more of the attack surface with Intel, including the Apple chips you cite as locked down.

Your logic makes sense but the actual historical log of exploits I’ve seen does not seem to line up to explain the result. It only guarantees that researchers will try things against Intel chips first, but nothing about the exclusion of other chips.


Note: “Dwarves” is the Tolkien variant spelling of a mythical race of beings. The normal English word is “Dwarfs”.


In short:

* Core and Xeon CPUs affected, others apparently not.

* HT on or off, any kind of virtualization, and even SGX are penetrable.

* Not OS-specific, apparently.

* Sample code provided.

https://www.cyberus-technology.de/posts/2019-05-14-zombieloa...


And here's the mitigation in NetBSD: https://github.com/NetBSD/src/commit/afab82aeafd0c51afc036a8...

Essentially: Intel released a microcode update which makes the `verw` instruction now magically flush MDS-affected buffers. On vulnerable CPUs, this instruction now needs to be run on kernel exit; the microcode update won't do it automatically on `sysexit`, unfortunately.
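
The software side is tiny once the microcode is in place; in kernel context it boils down to something like this sketch (not the literal NetBSD or Linux source; __KERNEL_DS stands in for any valid writable data-segment selector):

  /* With the MDS microcode update, VERW with a memory operand also flushes
     the affected store buffers, fill buffers and load ports as a side
     effect of its documented segment check. Run this on the way back to
     user space. */
  static inline void mds_clear_cpu_buffers(void)
  {
      static const unsigned short ds = __KERNEL_DS;
      asm volatile("verw %[ds]" : : [ds] "m" (ds) : "cc");
  }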


Hopefully, with this, patches for other OSes should appear soon.


Many Pentiums, Celerons and Atoms are also affected.


Pandora's box was opened with the public disclosure of Spectre and Meltdown. Security researchers will continue to find new and better ways of attacking the security boundaries in processors, and there's unlikely to be an end to this any time soon. Exciting time to be in security, not such an exciting time to be a potential victim.


It reminds me of when the first buffer overflows were disclosed and they were followed by a massive rash of buffer overflow vulnerabilities that continued for over a decade.


Not arguing, just asking: how has Pandora's box been opened with the disclosure of Spectre and Meltdown? We've had security researchers discovering and reporting vulnerabilities for as long as there have been computers, as far as I know.

I do agree that this won't end soon, though. It appears to me that many of the methods CPUs use for better performance are fundamentally flawed in their security, and it's not like we can expect the millions of affected machines to be upgraded to mitigate this.


Before Spectre and Meltdown were disclosed publicly, very very few security researchers were looking at the CPU level, beyond things like attacks against hardware virtualization functionality, TrustZone and friends, ME, etc. Those bugs had existed for ages and could've been found through thorough examination of the processor manuals, but nobody had really looked too hard there. These new bugs were found independently by many different researchers/groups, simply because their attention was drawn to looking at this stuff for the first time.


> and could've been found through thorough examination of the processor manuals, but nobody had really looked too hard there.

I don't think you're giving enough credit. The actual microarchitecture isn't documented much in those manuals, so looking hard at those wouldn't help without making a series of assumptions of how it all works. The authors of recent exploits have been diligently reverse engineering and making sensible guesses.


This person seems to think that security researchers are the only people looking for vulnerabilities, when in fact the people who stand to make significant profits are apt to have known about these vulnerabilities long before researchers found them.


But like, those bugs were there. They might have been being exploited and not been caught.

Your argument sounds like an argument against responsible disclosure totally.


Opening Pandora's Box is a good thing. It gets the issues out in the air and visible so you know what more to look out for and what needs to be fixed.

Don't forget that Hope was also released when the box was opened.


I'm not reading anything against responsible disclosure in deaken's comments. How are you getting that?


The whole "since they published that it happened, we've had a bunch of disclosures" which is a typical "I don't feel safer when people talk openly about unfixed vulnerabilities" argument.


Err no, no it's not. It's that there's been a ton more attention there. We're no more or less safe than we were before, we simply didn't know about the bugs that were there.

(FYI, I've been a security researcher for 15+ years and work as the head of hacker education for HackerOne; I am very, very pro disclosure. :) )


Another security principle is involved: assume the worst case. If CPU vulnerabilities are a popular subject, they get fixed to some extent: it's much better than letting them be as a tool in the hands of private and government black hat hackers.


Security by obscurity is no security


That's cryptography at scale in a nutshell, and thus far it seems to be a fair enough countermeasure. Cryptography on an individual item with an unknown cipher/salt/hash can be treated as security via entropy, but with a big enough data set and some idea of the target content, things quickly devolve into security via obscurity, since the target content is discoverable with enough time and computational resources. Security via "untamperability" (quantum bits/state) is better; alas, we're not quite there yet.

My biggest worry is that all currently known classically "secure" data sets, including encrypted but recorded internet communication, will become an open book a few decades from now. What insights will the powers that be choose to draw from it then, and how will that impact our future society? Food for thought.


This saying rubs me the wrong way, because confidentiality is 1/3 of security. Obscurity is critical or this wouldn't be a vulnerability.


Confidentiality is a valid layer of security, however security solely by obscurity is wrong.

You can have unintentional exploits/vulnerabilities in free/open source software or hardware too.


The critical part is understanding that confidentiality is temporal.

All "secrets" are eventually revealed; security is about managing the risks and timing associated with these revelations.


The question is: how much needs to be kept confidential?

"Obscurity" general refers to situations where "everything" is confidential. And when everything is confidential priority one, nothing is, since people can't work like that.

Cryptography attempts to sequester the confidential data into a small number of bytes that can be protected, leaving the larger body of data (say, the algorithm) non-confidential.


_Please_ stop parroting this line incorrectly.

Security _ONLY_ through obscurity is not security. Obscurity is a perfectly valid layer to add to a system to help improve your overall security.


> macOS performance: Testing conducted by Apple in May 2019 showed as much as a 40% reduction in performance with tests that include multithreaded workloads and public benchmarks. Performance tests are conducted using specific Mac computers. Actual results will vary based on model, configuration, usage, and other factors.

from here: https://support.apple.com/en-us/HT210107


Yeah, if you choose to turn off hyperthreading. Pretty expected tbh - hyperthreading helps quite a bit for some things.


But I don't see Intel mentioning this 40% anywhere... by Intel's account, the worst degradation is 9%, and it creates the impression that that's with HT off.

If you choose not to disable HT, you stay vulnerable even with updated microcode, right?

In any case, Apple's stats are much more gruesome...


It really is going to depend on what test you are running. HT has the greatest effect when the running process has a lot of "downtime" for things like memory retrieval or any I/O as it allows for other processes to make use of this downtime. So if your tests are just doing calculations with very little file/network I/O it could very well be in the 9% range.


The best of the worst case. Gotta love that Intel spin machine working in overdrive


So at what point do we start producing CPUs specifically aimed at running a kernel/userland? Why don't we have a CPU architecture where a master core is dedicated to running the kernel and a bunch of other cores run userland programs? I am genuinely curious. I understand that x86 is now the dominant platform in cloud computing. But it's not like virtualization needs to be infinitely nested, right? Why not have the host platform run a single CPU to manage virtual machines, which each get their own core or 20? Would the virtual machines care that they don't have access to all the hardware, just most of it?


> Why don't we have a CPU architecture where a master core is dedicated to running the kernel and a bunch of other cores run userland programs?

How will your "userland core" switch to other userland programs safely? A pointer dereference can hit an mmap'd file, so it's actually I/O. This will cause the userland program to enter kernel mode to interact with the hardware (yes, on code as simple as blah = (this->next)... the -> is a pointer dereference, potentially into an mmap'd region backed by a file).
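
To make that concrete, here's a sketch (the file path and struct layout are made up, error handling omitted) of a dereference that is really I/O:

  #include <stddef.h>
  #include <fcntl.h>
  #include <sys/mman.h>

  struct node { struct node *next; };

  struct node *first_link(void)
  {
      int fd = open("/some/file", O_RDONLY);   /* hypothetical file of nodes */
      struct node *n = mmap(NULL, 4096, PROT_READ, MAP_PRIVATE, fd, 0);

      /* Looks like pure userland work, but if the page isn't resident yet,
         this dereference page-faults and the kernel performs disk I/O
         before execution can continue. */
      return n->next;
  }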

So right there, you need to switch to kernel mode to complete the file-read (across pointer-dereference). So what, you have a semaphore and linearize it to the kernel?

So now you only have one-core doing all system level functions? That's grossly inefficient. Etc. etc. I don't think your design could work.


> How will your "userland core" switch to other userland programs safely? A pointer dereference can hit an mmap'd file, so it's actually I/O.

User PU would stall on the "outermost" return and wait for another dispatch by kernel PU; it would also stall during context switches.


Sounds like we would need a new paradigm for how to handle that. But it seems to me that x86 is in no way the panacea of CPU design. Wouldn't you gain some good trade-offs by changing up how things are done?


Mmap'd files and demand paging are pretty much on every CPU architecture worth making an application for: ARM, POWER9, x86, SPARC, MIPS, and more.

Demand paging is another situation where a simple pointer dereference can suddenly turn into a filesystem (and therefore kernel-level / hardware-level) call.


What the commenter above you was describing about mmap'ed files and dereferencing invoking the kernel implicitly -- that's true on all current CPU architectures (everything from x86 to SPARC to ARM and back again.)


The hit is in IPC between the kernel and userland processes. If you really want to pay it, then just go microkernel. You can do that today.


You cannot give each VM their own core. The business model of the cloud is that multiple VMs with virtual cores run on a single real core.


At the low end, sure. At the medium-to-high end, each VM is bound to one or more physical cores of the host, or sometimes an entire host ("dedicated instances.")

I don't know enough about the IaaS market to know what the relative revenues of low-end compute vs. medium-to-high-end compute are for your average vendor, though. Is most of the profit in the low end?

I'm also curious on what the impact on margins would be if IaaS vendors decided to switch away from serving the low-end compute demand with "a few expensive high-power Intel cores per board, each multitasking many vCPUs", to serving the demand with "tons of cheap low-power ARM cores per board (per die?) with each core bound to one vCPU."


The low end compute must be a substantial amount of revenue.


You'd also need to duplicate the whole memory hierarchy of CPU caches to prevent cache attacks against your "kernel CPU".


Would the kernel even need RAM access for its internal operations? It seems like today's CPU caches are so large that a kernel could safely operate without ever leaving the chip, aside from anything that the userland asks it to work on. So in that case you wouldn't need to ever run userland code against the kernel CPU's caches.


The pagetable itself can be very large; disk cache and network buffers can also take a huge amount of memory and are probably great targets for data exfiltration.


> Why don't we have a CPU architecture where a master core is dedicated to running the kernel and a bunch of other cores run userland programs?

Sounds a lot like IBM's Cell architecture.


A potential 9% performance hit in the datacenter. Add in all the Spectre and Meltdown mitigations and we have potentially lost nearly two generations of Intel performance increases.

Just shows the hoops and tricks needed to keep making, on paper, faster processors year on year but without node shrinks to give headroom.

14nm++++ is played out.


I wonder at what point the hardware fix for these issues stop becoming worthwhile and if we'll see a resurgence of processors without speculative execution or any of these other speed ups.


Ironically, a high performance, general purpose architecture without speculative execution might require a deep reinvestment in SMT. Instead of trying to speculatively make one thread fast to mask IO stalls, run a large pool of threads that can stall frequently but still keep the execution units and memory channels busy.

To avoid reintroducing these spectre like bugs, you'd have to conservatively design the per-thread execution to avoid those covert channels. Not only synchronously enforcing all logical ISA guarantees for paging and other exception states, but also using more heavy-handed tagging methods to partition TLB, cache, etc. for separate protection domains.



Sun tried the flock-of-(in-order)-chickens approach in the past, targeting JVM workloads. It wasn't great.

On the other hand GPUs are pretty much what are you describing: excellent for some specific workloads, terrible for others.


> Instead of trying to speculatively make one thread fast to mask IO stalls, run a large pool of threads that can stall frequently but still keep the execution units and memory channels busy.

Isn't something like that done for GPUs? They have the advantage of having a massive number of threads to execute. For CPUs, the number of runnable threads tends to be lower.


You can kiss any semblance of reasonable performance goodbye if you eliminate "speculative execution". Pipelining is the most basic tool in the toolbox. Even microcontrollers do it.


To give ballpark numbers: modern Intel processors can retire a few instructions per cycle in tight loops (4 is the theoretical maximum; > 2 is realistic in a lot of high-performance code). A branch misprediction wastes 10-15 cycles.

So getting rid of speculation entirely, and stalling on every branch, would waste time equivalent to dozens of instructions. On typical code that has a branch every few instructions, this could slow down execution by several times.
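
A rough back-of-the-envelope, assuming a realistic 2 IPC when speculating, a branch every 5 instructions, and a ~10-cycle bubble while each branch resolves:

  speculating:             5 instructions / 2 IPC    ~  2.5 cycles
  stalling on each branch: 2.5 cycles + ~10 cycles   ~ 12.5 cycles
  slowdown:                12.5 / 2.5                =  5x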


Can we actually compute without branching? Genuine question.

What architecture would do that, and how?


The simplified SIMD cores in early GPUs had to fake branching to some extent for their virtual threads: every branch in the shader code would be tested for each virtual flag and that thread (really just a vector component) would be masked out for the instructions of the branch that didn't apply. The GPU would run both branches, relying on the mask. It was workable, but very slow.
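
In plain C, the per-lane masking trick amounts to a branch-free select, roughly like this sketch (the idea rather than any particular GPU ISA):

  /* "if (cond) y = a; else y = b;" without a branch: both sides are
     computed, and a mask derived from the condition picks the survivor.
     Early SIMD shader cores did the vector equivalent of this, executing
     both paths and masking out the lanes each path didn't apply to. */
  static int select_masked(int cond, int a, int b)
  {
      int mask = -(cond != 0);   /* all ones if cond is true, all zeros otherwise */
      return (a & mask) | (b & ~mask);
  }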


Old GPUs did that. It wasn't very pleasant to program with. :)



Pipelining isn't strictly the same thing as speculation, though, is it? If I have,

  add %rax, %rbx    # rbx += rax
  add %rcx, %rdx    # rdx += rcx, independent of the first add
I can pipeline those without needing to speculate on anything. If there is a dependency on a previous instruction, then we might have to speculate, but hopefully there is still some case for pipelining?

Have any of these bugs been completely based on speculation, or is it always speculating across privilege boundaries? (Although I feel like even the former isn't safe, e.g., if you're in some form of VM attempting to maintain privilege separation.)


It's related. If you want decent performance with pipelining, you're going to want to speculate at least a bit -- assume that FP math doesn't trigger exceptions, assume that you predicted branches correctly, assume that memory accesses don't fault, etc.

Intel does more speculation, but you won't find anything beyond the tiniest embedded CPUs which don't do any.


If ARM and AMD CPUs are not affected, then speculative execution in general is not the issue.


But they are: ARM to both a Meltdown variant and Spectre, as are IBM's POWER and mainframe chips, and AMD to Spectre.


They are not affected by all kinds of attacks.

E.g. this new one can only be reproduced on Intel, not on AMD and ARM.

If you want to ban speculative execution for everything, you need to make the case that it's a fundamental issue and not an implementation specific issue.

Right now, that's not the case for many of these vulnerabilities.


As I understand it, the Intel only vulnerabilities, Foreshadow/L1TF, and this set which I've not looked at the details of yet, are targeting specific Intel features, and there's no reason to believe a similar focus on the other companies' products wouldn't also find unique problems.

For example, the first version of Foreshadow went after the SGX enclave. Given how widespread Meltdown and Spectre bugs are, there's absolutely no reason to believe that the other vendors don't have similar unique problems.


As you say, only the first Foreshadow attack went after SGX - it turned out to be a broader flaw that also affected OS page table protections more generally and could be used to attack process-OS and VM-hypervisor isolation. Those variants relied only on Intel's implementation of standard x86 paging, and they don't exist on AMD because they didn't implement it in the flawed way Intel did. That is, Foreshadow/L1TF is Intel-only not because it relies on an Intel-only feature, but because it's an Intel-specific implementation flaw. (Linux had to substantially rework its paging code to work around this.)

AMD don't seem to have commented on ZombieLoad yet, presumably because it's much newer and they didn't have pre-announcement info about it. But they have commented on the other two vulnerabilities announced today and explained that the reason they're not vulnerable is that the corresponding units in their CPUs don't allow speculative data access unless the access checks pass, and their whitepaper seems to suggest the same is true of ZombieLoad: https://www.amd.com/system/files/documents/security-whitepap...

SGX does make for an easier and flashier demo for Foreshadow, though, so it makes sense that the researchers went after that target. They managed to recover the top-level SGX keys that all SGX security and encryption on the system relies on, something that I don't think anyone had ever managed before.

Also, as I've said elsewhere, Intel seems to speculatively leak data that shouldn't be accessible pretty much everywhere in their designs where memory is accessed.


" and there's no reason to believe a similar focus on the other companies' products wouldn't also find unique problems."

Sure there is. Just like the first round last year, Intel totally threw AMD under the bus to save face and stock price. That is literally the reason to mention AMD: to keep their own stock price from crashing.


The industry will probably get dragged, kicking and screaming, into using tagged pointers. The CPU could then use that information to put a safe lid on speculative execution.

And it will be tough, as no compiler supports it; moreover, C/C++ were architected from the beginning not to bother with runtime information about object types/sizes.
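For the curious, a minimal software flavour of the idea might look like this (illustrative only; a hardware scheme would keep the tag in unused address bits and check it in the memory pipeline, which is what would be needed to constrain speculation):

  /* Pack a small type/ownership tag into the spare low bits of an 8-byte-aligned
     pointer. The names here are hypothetical, not from any existing compiler ABI. */
  #include <stdint.h>

  #define TAG_BITS 3u
  #define TAG_MASK ((uintptr_t)((1u << TAG_BITS) - 1))  /* 8-byte alignment => 3 spare bits */

  static inline void *tag_ptr(void *p, uintptr_t tag) {
      return (void *)(((uintptr_t)p & ~TAG_MASK) | (tag & TAG_MASK));
  }
  static inline uintptr_t get_tag(const void *p) {
      return (uintptr_t)p & TAG_MASK;
  }
  static inline void *strip_tag(void *p) {
      return (void *)((uintptr_t)p & ~TAG_MASK);
  }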


That sounds interesting. Is there some analysis out there which shows tagged pointers to be superior to the status quo?


Ultimately, if we do transparent per-process memory encryption, then we can let the CPU do all the speculation it wants, but the result will be gibberish. And it's a lot easier to do a simple key switch than a full TLB flush. Of course, it probably doesn't do much against/for the timing attacks (side channels).
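As a toy model of that idea (XOR standing in for the real hardware cipher, and the key switch standing in for what the kernel/hypervisor would do on a context switch; all names here are invented for illustration):

  /* Data is "encrypted" with the currently active per-context key. Anything
     leaked while the wrong key is active decodes to gibberish. */
  #include <stdint.h>
  #include <stdio.h>

  static uint64_t active_key;                                       /* switched per context */
  static uint64_t mem_store(uint64_t v) { return v ^ active_key; }  /* write path */
  static uint64_t mem_load(uint64_t m)  { return m ^ active_key; }  /* read path  */

  int main(void) {
      active_key = 0xA5A5A5A5DEADC0DEULL;          /* victim's key             */
      uint64_t in_dram = mem_store(0xCAFEBABEULL); /* victim stores a secret   */

      active_key = 0x1234567890ABCDEFULL;          /* context switch: attacker */
      printf("leaked value decodes to: %#llx\n",   /* not 0xcafebabe           */
             (unsigned long long)mem_load(in_dram));
      return 0;
  }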


Do these really matter if you're running a compute farm? If you're already locked down couldn't you use that as a way to mitigate risks?


I can't imagine that it would. As long as no one else can execute code on your CPUs, you should be safe


My guess is that the performance loss from removing these features would make such CPUs less economical than strictly enforced separation between security domains on a hardware assignment and scheduling level. That is, just forget about having the same server run stuff from different contexts at the same time.


Or open source RISC-Vs


At hyper scale? Yes please!


Some information for Linux, from LWN.net (https://lwn.net/Articles/788381/): "See this page from the kernel documentation (https://www.kernel.org/doc/html/latest/x86/mds.html#mds) for a fairly detailed description of the problem, and this page (https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/m...) for mitigation information."


It takes only one rogue/unpatched VM running and scanning threads randomly, undetected, over a longer period of time if the host is not patched. With HT disabled, potential hits become less likely, but still possible given time. Is virtualization on Intel dead now? Perhaps not. But it's increasingly dangerous to use Intel for cloud services.


Interestingly, AWS released a bulletin about the MDS vulnerabilities but nothing about ZombieLoad yet. https://aws.amazon.com/security/security-bulletins/AWS-2019-...


CVE-2018-12130 is in the list of CVEs in your link. That is the ZombieLoad CVE. I hate these stupid names; they only confuse, as shown by your comment.


What impact does this have in a multi-tenant cloud environment? I'm legitimately considering moving my security critical EC2 instances over to AMD-backed instance types right now.


I doubt that you both manage critical infrastructure on AWS and haven't read the AWS security bulletin.

https://aws.amazon.com/security/security-bulletins/AWS-2019-...


So I'd love to post an Ask HN: Which AMD Laptops would you recommend for work, alternatives to Thinkpads?

I've noticed some Thinkpads with AMD CPUs but I feel like I'm on virgin ground when it comes to AMD and their integrated GPU offerings.


I've been eyeing more release details on the ThinkPad X395, which was recently announced. "Coming Soon" probably means early June for some select configurations. I think these will fit in the premium/professional laptop space better than some of the bargain laptops that carried AMD chips in the past.

I believe other OEMs are developing similar offerings as well, but I can't find any quick links for newer SKUs like the Ryzen 7 3700U, which offers the improved Zen+ revision that specifically improves battery life and heat issues.


Yes the T495 and the X395 ought to be the best ones.


If you don't need a dedicated GPU, the APU offerings from AMD are great. They have native Linux drivers for everything (on the AMD side; double-check the NIC/touchscreen/touchpad). I'm using an HP Envy x360 15z with an AMD Ryzen 2700U running Gentoo and love it. The HP Envy has a weird keyboard, but it was a good tradeoff for the AMD setup when I bought it last year.

There is a much larger market in 2019 for AMD laptops, so you should be able to find something to suit your needs.


It's not that large yet. There are sadly still no options with HiDPI screens (200+ dpi) or discrete AMD GPUs.

Next year with the 7nm mobile chips will probably be much better.


I've just received some ThinkPad E485s, which have the Ryzen 2500U CPU. They seem pretty nice, with a non-glare 1080p screen. The documentation says they support Red Hat Enterprise Linux and Ubuntu. I was thinking of trying Pop!_OS on one to see how it runs.


Honor Magicbook looks interesting:

https://www.huaweicentral.com/honor-magicbook-ryzen-7-versio...

The new 3700U model will probably be available on AliExpress in a month or so. I would consider it, except that Linux support is unknown and it has only 8GB of RAM.


You can't really find a good AMD workhorse laptop; all of them end up with outdated 1080p displays at best :( There is only one notebook with a decent HiDPI screen and a Ryzen (some HP, but not a workhorse). There was also a terrible problem with mobile Ryzen drivers that was resolved only about a month ago.


1080p strikes me as the pinnacle of a workstation display for laptops right now, since I highly doubt any productivity improvement from higher DPI is going to beat out battery life in the vast majority of cases. Even in the one use case where it matters, 4K video production, there's a reasonable argument for using screens similar to what your consumers will use at least some of the time. Especially if it saves you money and you get to use discounted top-performing CPUs to boot.

Realistically, the AMD laptops' lack of color accuracy compared to shit like the True Tone Intel Macs, and their lack of display connectivity, especially compared to a Thunderbolt-outfitted X1 Extreme that can connect to four 4K displays, is a bigger problem. Especially since external displays actually need the resolution.


The counterpoint to the battery-life argument against HiDPI displays is obviously the Retina MacBooks, available since 2012 (7 years ago). In addition, I have a ZenBook with a 3200x1800 display and a Core M, and its battery life is great. By outdated 1080p displays I also meant poor color accuracy, no HDR, bad viewing angles, slow response times, etc. It seems like most AMD laptops are a dumping ground for old tech that can't be marketed as premium any longer. I'd rather suggest AMD start manufacturing, or contract somebody to develop, high-end models under their own brand to showcase what they can do.


Yeah, but MacBook Pros aren't running 4K. 3K/2560-wide screens seem to have a much easier time with battery life at the moment. I actually think Apple is being more sensible than, say, Samsung when it comes to the resolutions they choose.

I looked at reviews of several laptops offered with 1080p vs. 1440p vs. 4K options, and the step from 1440p to 4K was the biggest drop in battery life.


>1080p strikes me as the pinnacle of a workstation display for laptops right now since I highly doubt any productivity improvement from higher dpi is gonna beat out battery life in the vast majority of cases.

Well... I went from a 1080p ThinkPad X260 to a 1440p ThinkPad X1 Yoga a year ago and honestly couldn't go back now.

It's mostly plugged into an external monitor and drives it at 1440p through DisplayPort, but even on a train I can work for 4-5 hours at 1440p with no issue.

And it runs Fedora.


Looks like AMD CPUs are safe again.


Note that Spectre definitely affected AMD chips, and in general these sorts of side-channel attacks based on speculative execution are extremely likely to be effective against any chip (including AMD-manufactured ones) that employs speculative execution, though the precise implementation might have to be jiggered a bit.


Not necessarily. This is more like Meltdown in that it involves one context just outright accessing data in a completely different context, and AMD chips seem to be totally immune from that attack. Any chip with a microarchitecture that actually enforces the architecturally-guaranteed checks, rather than ignoring them and fixing the results up later, can avoid such attacks.

Spectre, on the other hand, is harder both to fix in hardware and to attack because the victim context is itself tricked into speculatively executing code using attacker-supplied data that leaks information - it uses inherent properties of speculative execution rather than any kind of hardware bug, but it's only exploitable if there's some victim code that does exactly the right kind of processing on attacker-supplied data.


> This is more like Meltdown in that it involves one context just outright accessing data in a completely different context, and AMD chips seem to be totally immune from that attack.

No, AMD has been largely immune to bugs involving speculating past a page fault. Both Meltdown and L1TF involved speculating past page faults, and the ZombieLoad paper also mentions exploiting bad behavior during page faults (but, disclaimer, I haven't read in enough detail yet).

AMD was not immune to, for example, spectre variant 2, which very much did allow reading from other address spaces (even other VMs): https://www.amd.com/en/corporate/security-updates

In general it doesn't make sense to expect that any brand of processors might be vulnerable or invulnerable to all illegal memory access. There are many different components involved in handling memory access and many different ways they could go wrong.


Spectre variant 2 was a little more interesting in that it can trick the processor into speculative execution of something that's not a valid execution path for the victim context (by confusing it about the target of an indirect branch), but it still relies on the speculative execution happening in a context that is meant to have access to the targeted data. It's sort of halfway between this and the other Spectre variants; it can be fixed in hardware or software.

All three of these latest vulnerabilities are like Meltdown and L1TF in that they just allow speculative execution to completely ignore hardware-level access protections and read data from a completely different process, and there's nothing that can be done about it at the software level. (Originally, the comment you're replying to was posted on a discussion about all three, if I remember rightly.) All of these affect Intel but not AMD. It's not like modern AMD chips are exactly poor performers either.


> All three of these latest vulnerabilities are like Meltdown and L1TF in that they just allow speculative execution to completely ignore hardware-level access protections and read data from a completely different process

Not really. These attacks are completely different from Meltdown and L1TF in that they don't involve reading from memory at all. They involve hidden processor state that contains values that were recently used in another context. The attacker never explicitly specifies an address they are interested in -- they just get whatever's floating around.

A comparable (but much more obvious) bug would be if the OS failed to clear registers when switching contexts. Although the values in those registers may have at one point been read from memory, the attacker recovering those values isn't directly accessing the victim's memory.

Your comments seem to be arguing that AMD isn't affected by these bugs for the same reason they weren't affected by Meltdown. But these bugs operate in a totally different way and exploit totally different components. There doesn't appear to be any reason to believe they are related.

I think the reason AMD isn't affected likely has to do with the fact that these attacks are targeting specific implementation details of Intel processors, which AMD processors probably just happen to implement differently. (Indeed, Fallout appears to be attacking outright unintentional behavior -- it would be surprising if multiple CPUs had the same bug.) It seems likely to me that AMD has different bugs which haven't been found yet, perhaps mostly because researchers haven't focused on them.

(Disclosure: I own some AMD stock. I don't own Intel stock. I have no other affiliation with either company.)


The "hidden processor state" is the memory contents belonging to other processes, stored as part of the CPU's memory access machinery. Every single one of these vulnerabilities involves Intel speculatively filling memory read requests from one process with data from another process without doing access checks first, and it turns out they do this all over the place: they do it from L1 cache, they do it if there's an L1 cache miss, they do it when the actual desired memory is uncacheable, they do it with store-to-load forwarding... almost every conceivable method of fulfilling a memory read on modern Intel CPUs is happy to speculatively leak secret data from other processes that shouldn't be accessable. More interestingly, AMD don't seem to do this anywhere that anyone's been able to find. (Well, technically that's not quite true... there's a speculative bypass of x86 segment limits on AMD which no-one cares about because no-one uses those anyway.)


And ARM had one design with a Meltdown variant, as did both of IBM's architectures, POWER and mainframe. Every vendor with speculative out-of-order designs is vulnerable to Spectre.


Sure, but please don't downplay this: so far Intel CPUs are affected by way more vulnerabilities, which can be exploited much more easily. It is a no-brainer which to pick if I have to choose between AMD and Intel today.


Way more known vulnerabilities. AMD also has a much smaller share of the market; I'd be surprised if researchers were targeting it as much.


Yes, but also I would be surprised if attackers were targeting AMD as much.


OpenBSD was right and disabled HT for Intel CPUs back in June 2018, due to concerns that more such CPU bugs would come up. There we go ... https://news.ycombinator.com/item?id=17350278


This. I remember people laughing at this decision back then, and flaming OpenBSD's policies for handling security vulnerabilities, to the point where they aren't informed in advance anymore. Yet OpenBSD is the only major OS taking these issues seriously instead of believing whatever Intel's marketing department yells every other day.


Why doesn't this type of news cause INTC to tank - they're up today. I know the market is up today, but (and it's probably my innate overreaction) I would think this sort of news would cause its stock to suffer.


I think there is an expectation that Intel's new-generation CPUs won't have these vulnerabilities and that they will sell a lot more of them to replace the pieces of crap they have sold at ridiculous prices. Intel is actually probably happy about these, because no one cares.


Intel has not been able to produce 10nm chips for 2 years now, and they don't expect them until 2020. If the Ryzen 3000 leaks of a 15% IPC gain prove true in a couple of weeks, then Intel is in real trouble. Add on the additional performance losses from these mitigations and Intel is very likely to lose the top end of the CPU market for at least 2 years. INTC might look very different come the end of the month.


I follow the market and tech stocks pretty closely, and it is extraordinarily rare for breaches, vulnerabilities, or exploits to affect the stock price of companies, despite the outrage from the tech community.


Because it depends on customer behavior. Intel has a strong name and people know their CPUs are fast. We see various IT security problems almost daily and most people don't care... It would probably require some massive exploits, data leaks, or identity thefts at cloud providers, and the ensuing lawsuits against Intel, to see a significant stock price change. :)


It may take a couple of days; also, Intel is coming off continued losses.


Do cloud providers commonly float cores between VMs? I could see instances like the AWS T family (burstable) sharing, but I had always assumed that most instance types don't over-provision CPU.

If that's the case, my CPUs are likely pinned to my VM. I could still have evil userland apps spying on my own VM, but I would not expect this to allow other VMs to spy on mine.


Sharing CPUs is not the point, as long as you are sharing physical memory with other tenants, you are vulnerable (although exploits are much harder when attackers have to cross privilege boundaries).


> Sharing CPUs is not the point, as long as you are sharing physical memory with other tenants, you are vulnerable

Not to these vulnerabilities. These are attacking memory that is "in flight" within a processor.


I don't think many cloud providers explicitly pin the VMs to the cores even if they don't over provision the servers.


I really hate these descriptions of SMT as some kind of violation of the natural relationship between CPU frontend and backend. The idea that there is a “physical core” and a “logical core” does not map to reality.


The idea that there is a “physical core” and a “logical core” does not map to reality.

This is the terminology that Intel itself uses in its documentation to describe its products, though. To be fair, they say "physical processor" and "logical processor", not "core".


I'm sure I remember a post on here (or possibly /r/programming) a couple of years ago from an Intel employee mentioning that Intel was cutting a lot of QA staff, and that we should expect more bugs in the future. I could be imagining things though.


I remember a leak about a call to become "more agile" like some ARM designers, implying less time spent on verification.


Verification wouldn't catch any of this; the processors operate correctly at the architectural level.

Most of this seems to be behaving as intended; they just didn't foresee the side channels it opens up.


This sentence killed me: "Daniel Gruss, one of the researchers who discovered the latest round of chip flaws, said it works “just like” it PCs and can read data off the processor. That’s potentially a major problem in cloud environments where different customers’ virtual machines run on the same server hardware."

What are they saying here?


It should read:

> ...said it works “just like” in PCs

The number of mistakes in the Techcrunch article is atrocious.


I found two and emailed the author. It's my sad little hobby. He replied promptly and they all should be fixed now. In fact, he found a third that I missed.

Keep in mind, this article was posted at 3am pacific, 6am eastern. Assuming the author is in North America, he was probably under a deadline and rather tired.

I have found similar typos from prominent writers. Sometimes they email me back, which I appreciate.

I found one in an article by Cory Doctorow on boingboing. I checked on builtwith.com, and they use WordPress/Jetpack. Jetpack has a feature that will warn you if you try to publish something with spelling mistakes; it is just not enabled by default.

I let Mr Doctorow know all this, in a very polite manner, and he responded with "many thanks". I'm not big on celebrity worship, but it still made my day.


Your 'sad little hobby' is directly addressing one of the biggest cultural problems we face today. For some reason I never thought to approach it that way, I may pick it up myself if you don't mind the company.


Thank you for your service!


I know this comment adds nothing of value but I have to quote it anyway

“Service guarantees citizenship, want to know more?”


Well, it's TechCrunch.


This is from the Techcrunch article on the story: https://techcrunch.com/2019/05/14/zombieload-flaw-intel-proc...

We merged that HN thread (https://news.ycombinator.com/item?id=19911465) into this one, via this other one (https://news.ycombinator.com/item?id=19911715).


Can this attack allow the attacker to escape public cloud isolation methods and break into the control plane or other VMs?


That depends on what you mean by "break into". If you mean sample data (read) from the control plane or other VMs, then yes; however, the attacker may have difficulty targeting which data is read. The attacker would not be able to write to that memory or gain any sort of execution privilege using this method alone.


It would have, but it's likely the cloud vendors have already deployed defenses.


Today's AWS[1] and Google Cloud[2] security bulletins note that all their host infrastructure (read: CPU firmware/microcode) has been updated to mitigate the issues disclosed today by Intel[3]. I could not find anything for Azure yet.

I also note that the provided OSes are being updated with mitigations as well, so for complete mitigation of the issue you'll probably need to update your OS.

[1] https://aws.amazon.com/security/security-bulletins/AWS-2019-...

[2] https://cloud.google.com/compute/docs/security-bulletins#201...

[3] https://www.intel.com/content/www/us/en/security-center/advi...


These style of exploits remind me of "The Free Lunch Is Over: A Fundamental Turn Toward Concurrency in Software" (2005) - http://www.gotw.ca/publications/concurrency-ddj.htm

> Chip designers are under so much pressure to deliver ever-faster CPUs that they’ll risk changing the meaning of your program, and possibly break it, in order to make it run faster.

> ...

> applications will increasingly need to be concurrent if they want to fully exploit CPU throughput gains that have now started becoming available and will continue to materialize over the next several years. For example, Intel is talking about someday producing 100-core chips; a single-threaded application can exploit at most 1/100 of such a chip’s potential throughput.

It seems the trend in programming languages is towards better concurrency support. But why don't we yet see 100-core chips? If chip makers had to forego all speculative execution and similar tricks, would that push us toward the many-core future?


A crucial (for me anyway) summary of the relevant events of the day:

https://twitter.com/IanColdwater/status/1128395135702585347?...


I just want to plug their hardware security course (at the VU University Amsterdam). It's an amazing course, and it costs 1200 euros for students who need to pay full price. I learned a lot about Spectre, Meltdown, novel forms of cache attacks, and Rowhammer when I took it.


Off-topic: are you familiar with the AI department/courses (master's) at VU? I have the opportunity to go but haven't decided yet. (Interested in human-centred and modern ML with neural networks.)


Hey, yea I do have some familiarity. Send me an email to have a conversation, it's in my profile.


Is there any clear source of info for sysadmins responding to the many CPU-level vulns in the past year? It's very difficult to keep track of whether fixes are needed at ucode, OS, and/or application level, and what version numbers fix each bug.


According to their blog post[1], there is little you can do against this. Running different applications on different CPUs helps against them reading each other's data, but a rogue process can still read data from the "superordinate" kernel or hypervisor.


Of course you can fix it. I fixed it in Dec 2018 for most such attacks in my safelibc memset_s implementation, but nobody wanted to use it, because securely purging buffers containing secrets via mfence was deemed too slow. So everybody can read your secrets via side-channel attacks. These tiny MDS buffers need to be purged with verw or l1d_flush followed by an lfence; this needs to be added to memset and memset_s variants, and it is much faster. But it will not happen: libc maintainers notoriously don't care, and neither do crypto maintainers. Only Linux does.

https://software.intel.com/security-software-guidance/softwa...
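For readers unfamiliar with what such a routine looks like, a minimal sketch of the general shape (this is not the poster's safelibc code; it just shows a volatile zeroing loop the compiler can't elide, followed by a fence; the MDS-specific verw/L1D-flush sequences additionally need microcode and OS support):

  #include <stddef.h>

  /* Zero a buffer holding secrets and make sure the stores are actually drained
     before returning, instead of lingering in compiler- or CPU-level buffers. */
  void secure_zero(void *buf, size_t len) {
      volatile unsigned char *p = (volatile unsigned char *)buf;
      while (len--)
          *p++ = 0;
  #if defined(__x86_64__) || defined(__i386__)
      __asm__ __volatile__("mfence" ::: "memory");  /* order/drain prior stores */
  #endif
  }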


> This is much faster.

Hi. I am trying to understand what you meant here. That both "verw" and "l1d_flush followed by an lfence" are faster than the "mfence" which you implemented in safelibc?

If so, why didn't you use these faster options yourself? My understanding was that these faster options needed to be handled at the hypervisor/kernel level, rather than in libc. If so, how is the attitude of glibc maintainers relevant?


verw and l1d_flush have no real cost. lfence is a bit costly; mfence is basically an lfence + sfence, so it fences both the load and the store side.

Safe libs need to do the right thing, not the fast thing, especially crypto.

The attitude of libc and crypto maintainers means that you cannot trust them with security. All the memzero implementations are insecure, besides being overly complicated and slow. Linux is a bit better, but there are still an estimated 20,000 security-relevant bugs.


What prevents the data being read before the memset is executed?


Nothing but the window of opportunity. A secure lib will zero secrets as soon as they are not needed anymore. On-the-fly attacks are always possible, but when you securely clean up, the attacker has less time to extract the data. The usual side-channel attacks leak secrets only bit by bit and need some time.


But mfence itself also only closes a small window of opportunity on the same thread, between the zeros being written and the store buffer being flushed.


Another non-issue on non-Intel CPUs, like SPARC. Lovely.


So far there seem to be far more of these vulnerabilities in Intel CPUs.

Is that a reflection of engineering differences or a statistical byproduct of the market share of Intel CPUs?

I run AMD not because of the security implications but because I feel every dollar that goes to Intel's competition will push Intel, and thus the entire industry, forward.


Market share is a good answer. In the x86 space alone, per https://www.extremetech.com/computing/291032-amd-gains-marke... (which I found without putting much effort into it), AMD's share in servers is negligible and even dropped in the last quarter. On the other hand, mobile and especially desktop are rising smartly, but are still somewhat modest. IoT is excluded, and AMD could be doing well there to the extent anyone's using x86 for that; there's also (quasi-)embedded, like network gear.

So the cloud vendors are 97% minimum Intel, they're exquisitely vulnerable both technically and reputationally to these bugs, the stakes are existential for them and they have a lot of money they can throw at the problem, whereas the users of notebooks and desktops are a much more diffuse interest.

As I've mentioned many times in these discussions today, everyone had Spectre issues, and everyone but AMD has Meltdown ones. The more recent vulnerabilities are Intel only because they're using what was learned from those first two to attack Intel specific features like the SGX enclave.


Probably both - AMD chips have lower market share because they have lower performance, and they have lower performance (maybe) because they speculate less aggressively. Intel did these optimisations for a reason after all; the market rewards them.


If using a cloud provider with Intel processors:

> The safest workaround to prevent this extremely powerful attack is running trusted and untrusted applications on different physical machines.

Nope!

> If this is not feasible in given contexts, disabling Hyperthreading completely represents the safest mitigation.

Nope!

Shrugs?


The best defense against all these CPU vulns is to stop running malicious code. And that means getting off of shared VMs (and similar) where someone could run malicious code in your stead. Stop running any script your browser gets handed. Isolation was always a great idea; poor man's isolation (VMs, processes, ...) is only useful against non-malicious, accidental interference. You want physical isolation between applications and services.


An unprivileged attacker with the ability to execute code

That sounds like a contradiction --- if you can already execute code, I'd say you're quite privileged. It's unfortunate that their demo doesn't itself run in the browser using JS (I don't know if it's possible), because that's closer to what people might think of as "unprivileged".

The attacker has no control over the address from which data is leaked, therefore it is necessary to know when the victim application handles the interesting data.

This is a very important point that all the Spectre/Meltdown-originated side-channels have in common, so I think it deserves more attention: there's a huge difference between being able to read some random data (theoretically, a leak) and it being actionable (practically, to exploit it); of course as mentioned in the article there are certain data which has patterns, but things like encryption keys tend to be pretty much random --- and then there's the question of what exactly that key is protecting. Let's say you did manage to correctly read a whole TLS session key --- what are you going to do with it? How are you going to get access to the network traffic it's protecting? You have just as much chance that this same exploit will leak the bytes of that before it's encrypted, so the ability to do something "attackful" is still rather limited.

Even the data which has patterns, like the mentioned credit card numbers, still needs some other associated data (cardholder name, PIN, etc.) in order to actually be usable.

The unpredictability of what you get, and the speed at which you can read (the demo shows 31 seconds to read 12 bytes), IMHO leads to a situation where getting all the pieces to line up just right for one specific victim is a huge effort, and because it's timing-based, any small change in the environment could easily "shift the sand" and result in reading something entirely different from what you had planned with all the careful setup you did.

Using ZombieLoad as a covert channel, two VMs could communicate with each other even in scenarios where they are configured in a way that forbids direct interaction between them.

IMHO that example is stretching things a bit, because it's already possible to "signal" between VMs by using indicators as crude as CPU or disk usage --- all one VM has to do to "write" is "pulse" the CPU or disk usage in whatever pattern it wants, modulating it with the data it wants to send, and the other one can "read" just by timing how long operations take. Anyone who has ever experienced things like "this machine is more responsive now, I guess the build I was doing in the background is finished" has seen this simple side-channel in action.
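To make that crude channel concrete, here is a toy sketch (the sender/receiver split and all constants are invented for illustration; a real channel would need calibration and error correction, and is far noisier than this pretends):

  #include <time.h>

  static double now_ns(void) {
      struct timespec ts;
      clock_gettime(CLOCK_MONOTONIC, &ts);
      return ts.tv_sec * 1e9 + ts.tv_nsec;
  }

  /* Sender (in VM A): burn CPU for ~100 ms to signal a 1, stay idle for a 0. */
  void send_bit(int bit) {
      double end = now_ns() + 1e8;
      if (bit) { while (now_ns() < end) ; }                      /* busy loop => contention */
      else     { struct timespec t = {0, 100000000}; nanosleep(&t, NULL); }
  }

  /* Receiver (in VM B): time a fixed chunk of work; it runs slower while the
     sender is busy on the shared host. */
  int recv_bit(double threshold_ns) {
      double start = now_ns();
      for (volatile long i = 0; i < 20 * 1000 * 1000; i++) ;
      return (now_ns() - start) > threshold_ns;                  /* slow => sender sent 1 */
  }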


> if you can already execute code, I'd say you're quite privileged.

I always interpreted "privileged" to mean "superuser". I.e. unrestricted. Or possibly the case of one user and another user. Having a program that can determine the URL you are visiting in the browser from memory when running as the same user is a different class than something that can do the same when run as any non-root user on the system. There's a reason it's common to "drop privileges" in a daemon after any initial setup that requires those privileges (such as binding to a low port).


> That sounds like a contradiction --- if you can already execute code, I'd say you're quite privileged.

If you're in a VM, you have no privileges over the host CPU, you can't switch to another VM or to the host itself. That's what's meant by unprivileged here.


These CPU flaws make it seem as if virtualization in the data center is becoming really, really dangerous. If these exploits continue to appear, the only way forward would be dedicated machines for each application of each customer. Essentially, this might be killing the cloud by 1000 papercuts, because it loses efficiency and cost effectiveness, while locally hosted hardware does not necessarily have to have all mitigations applied (no potential for unknown 3rd-party code being deployed to the same server).


Many years ago, OpenBSD's Theo de Raadt sneered at virtualization, saying something along the lines of "they can't even build a secure system, let alone a secure virtualized system". I can't remember who he was referring to specifically, but we've certainly been seeing a lot of similar vulnerabilities.


Here's the full Theo de Raadt quote from 2007 [1]:

"""> Virtualization seems to have a lot of security benefits.

You've been smoking something really mind altering, and I think you should share it.

x86 virtualization is about basically placing another nearly full kernel, full of new bugs, on top of a nasty x86 architecture which barely has correct page protection. Then running your operating system on the other side of this brand new pile of shit.

You are absolutely deluded, if not stupid, if you think that a worldwide collection of software engineers who can't write operating systems or applications without security holes, can then turn around and suddenly write virtualization layers without security holes.

You've seen something on the shelf, and it has all sorts of pretty colours, and you've bought it.

That's all x86 virtualization is. """

[1] https://marc.info/?l=openbsd-misc&m=119318909016582


I feel like people with these sorts of hardline views on security might just be so concerned with safety that their argument misses the opportunity cost of not being 100% safe in our usage of technology. If we needed to make sure everything was safe and perfectly secure, the world would have missed out on a lot of innovative software. The tough thing to contend with is that the security people are hardly ever wrong.


>hardline views on security

The only hardline view on security you'll encounter in the wild is "security is practical in our computational environments"[1]. Only half-joking here.

My reading of Theo's quote is merely "the combination of x86/IA32/AMD64 and virtualization gives little to no actual security benefit, and plenty of pitfalls".

I don't see Theo as being a hardliner about security, just meticulous about good engineering practices - as per OpenBSD's usual standards - and facing the problems & risks as they are.

[1] examples: "Rust/Java gives you security", "shortlisting the only allowed actions by end-user application gives you security", "hardcore firewalls give you security", "virtualization gives you security", "advanced architectures like Burroughs' give you security".


Except that's objectively wrong - x86 virtualization breakouts have been extremely rare in practice, and fixable till recently.

The new class of attacks we now see targets any type of shared code execution environment. OpenBSD is as vulnerable to this as anything else.


OpenBSD disables hyperthreading, doesn't it? That's a smart defense against at least one of today's attacks. Doesn't help if you're a VM guest, but does if you're the host.


There's a Foreshadow-NG variant specifically for VMs, and it's arguably the worst.


> examples: "Rust/Java gives you security"

Reminds me of a friend who worked on JavaScript in the early days; he said it was the only thing that had any hope of providing minimal security at the time, because Windows 3.1 and 95 on x86 were a security trash fire.


I believe "they" were all the people poking the project asking when were they going to support virtual X and virtual Y. He basically stated it would never happen on OpenBSD[1] but here we are with [2] (vmm/vmd/vmctl).

[1] http://www.tylerkrpata.com/2007/10/theo-de-raadt-on-x86-virt...

[2] https://www.openbsd.org/faq/faq16.html

[1] probably isn't the best source out there, I was in a bit of a rush to find it but that is indeed the quote! Gotta either love or hate Theo I guess!


Would be cool to see a source for that one. Theo de Raadt is a hardliner whom I don't always agree with, but I'd like to know how visionary he actually was in this case (quite a bit by the looks of it).


I'm not a security person but I wanna practice trying to sum up his points:

1. There's no way in hell that a bunch of VMs running on one physical server is more secure than a bunch of different physical servers each running an OS. If there were architectural hooks for those VMs to provide additional security beyond what the host OS provides, then an OS like OpenBSD would already be making use of it.

2. Running a bunch of VMs on a single physical machine is certainly cheaper.

3. People who are in favor of the cost-cutting are claiming that there's a security benefit to sell more stuff.

Am I right?

If so, how does that stance jibe with the research that Qubes is based on?


I think the argument VM-sellers make is that it's more secure than running a bunch of colocated code on the same machine without VMs, not that it's more secure than distinct physical systems.


That is their claim. Theo is pointing out that the security is an illusion. Either the OS is secure, and so you may as well just run everything in the OS without the VM in the way (ignoring the issue of running different operating systems), or the OS is not secure, and now you have to hope the VM is secure, because otherwise you just exploit your VM to get out of it and then exploit the OS. The second-level attack is more difficult, but that is all.


Almost right, except for one thing: I think Theo de Raadt wrongly failed to acknowledge his opponent's valid point: in practice, separating applications into virtual machines does have some security benefits compared to running them on a single OS.

I think security guarantees are better if you follow the practices of a small, self-centered project such as OpenBSD (run only trusted code) than if you follow the practices of QubesOS (run whatever untrusted code you desire in Xen domains and rely on VM separation).



Interesting to note that AWS has been working on their own custom silicon, such as the announced Arm based AWS Graviton powered machines.

We will most likely see a continued divergence between "consumer silicon" which is designed for speed in a single tenant environment on your local desktop or laptop, and "cloud silicon" which is optimized to protect virtualization, be power efficient, etc. I'd predict that this will actually lead to increased efficiency and lower prices of cloud resources rather than the "death by a 1000 cuts" that you are proposing.


Except most companies don't care about security of the cloud apart from the magical "compliance" and that they are on the same cloud as everyone else.


There are bare metal "cloud" providers such as Packet.net where you get the click n deploy convenience of the cloud but a physical machine. They have quite small machines in the inventory that are close to price competitive with VMs. Even Amazon has this bare metal capability FWIW, but afaik only for big expensive machines.


It increases cloud revenues because of CPU slowdowns, and people can't move off the cloud because they're locked in and can't hire datacenter engineers anyway.


Ultimately cloud providers don't want revenue, they want profit. Except in some perverse cases (like cost-plus-percentage contracts), it's not generally in a business's interests for their costs to go up.

Even if there's no opportunity to switch away, eventually you bleed your customers dry and put them out of business. You will typically always aim to price your offering at the equilibrium point where loss of custom increases faster than the increase in profit, and vice versa.

One situation in which an increase in your costs can be good, is if the same increase applies more to your competition. But, in this case multi-tenant cloud is hit harder than the competing alternative of private infrastructure.


This is an important point that I noticed around the time of Spectre/Meltdown as well. The mitigation for those bugs caused an average of 30% CPU slowdown, meaning it took 30% more CPU cycles to perform the same work as it did prior to the mitigation. If a cloud provider rolls that out to every server, then every customer's bill for CPU usage should increase by roughly 30%.

Am I totally misunderstanding this? Someone please correct me if I'm wrong.
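Taking the parent's 30% throughput figure at face value, the arithmetic is actually slightly worse than a straight 30% increase; a quick sanity check (illustrative numbers only):

  /* If mitigations cut throughput by 30%, the same work needs 1/(1-0.30) times
     the CPU-hours, i.e. roughly 43% more, not 30% (figure taken from the comment
     above, not measured). */
  #include <stdio.h>

  int main(void) {
      double throughput_loss = 0.30;
      double extra = 1.0 / (1.0 - throughput_loss) - 1.0;
      printf("extra CPU-hours needed: ~%.0f%%\n", extra * 100.0);  /* ~43% */
      return 0;
  }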


Well, dedicated machines for each security domain of each customer; a lot of the time it's fine for many applications to be in the same security domain.


Even this isn't enough. Sometimes mutually untrusted parties must exchange data (say you're running a trading platform, or a social network). You have to ensure every point of interaction between such parties is immune to timing attacks.


In theory, yes. But getting statistically meaningful data on sub-ms timing variations over a jittery connection, with both round trip and jitter orders of magnitude larger, is hard... it would be a very, very slow attack and probably impractical in most cases.


> dedicated machines for each application of each customer.

I don't think you need to go this far. You can probably get away with circuit-switching small blocks of hardware and fully resetting them between handovers, although you'd have to ensure sufficient randomisation/granularity to destroy side channels in the switching logic.


You should use large instances that don't share the same CPU socket; on AWS, for example, that would be c5.9xlarge and above.


The point of virtualization isn't to add security. It gives you functionality you just cannot have otherwise, and the cloud enables you to scale in a way that is impossible otherwise. If there are security holes, they get patched and the market moves on. It's not just going to abandon either the cloud or virtualization.


I don't agree with that statement. A stand-alone physical machine is expected to be secure in its own enclave, and I think it's within reason that a virtual one would carry the same expectation.


Sorry for being naive. Are these kinds of CPU security vulnerabilities new? Why is it that in the past 20 years we had close to zero in the news (at least I wasn't aware of any), and ever since Spectre and Meltdown we have something new every few months?

And as far as I am aware, they are mostly Intel-only. Why? And why not AMD? Did something in the Intel design process go wrong? And yet all the cloud vendors are still buying Intel and giving very little business to AMD.


Running many instances of various untrusted code on the same server is "new": it came with the cloud infrastructure.

Running many instances of various untrusted code on the same client machine is "new": it came with web apps, and with mobile apps.

Until several years ago, it was sort of a non-issue, because to exploit such a vulnerability one would need to write a virus or a trojan, and with that approach there are many easier ways of achieving privilege escalation.

Something like "cloud" existed likely on IBM mainframes under OS/VM [1] but System/370-compatible CPUs likely lacked all these exploitable speculative execution features.

[1]: https://en.wikipedia.org/wiki/VM_(operating_system)


https://en.wikipedia.org/wiki/Time-sharing

Time sharing was very big in the 1970s, and non-OS/VM methods of sharing mainframes for batch processing were also big at times I'm less sure of.

Inviting complete randoms to routinely run untrusted code in your own security domain, as we do with browsers, that's "new". And thus the popularity of NoScript and uMatrix.


Indeed! Though time-sharing was more like a terminal server, or shared hosting, while OS/VM was more like a modern VM host.

It's interesting, though, to ask why cross-process data exfiltration based on speculative execution was not tried with any success in the shared-hosting environment of the 1990s and early 2000s. I suppose it has something to do with the use of non-JIT-ed interpreted languages, like PHP, Perl, or SQL, on such hosting; you could not run an arbitrary native executable like you can in the cloud.

Another factor is that though speculative execution was first implemented in the 1950s [1], it appeared in either mainframes or RISC machines, and neither was used in the Intel-dominated shared-hosting environments.

[1]: https://en.wikipedia.org/wiki/Branch_predictor#History


> It's interesting though why cross-process data exfiltration based on speculative execution was not tried with any success in the shared hosting environment of 1990s and early 2000s.

According to several of the researchers who found Meltdown and/or Spectre, they'd always assumed Intel et al. were too careful to let this happen, at least at useful data rates. But when they looked for reasons I forget, Katie bar the door!


A lot of reasons - one, we only recently (in academic research time) started using single servers to host services from multiple customers, so the value of these sorts of attacks only recently became apparent.

Second, as I understand it, Spectre and Meltdown really started this whole parade because prior to those vulnerabilities, speculative execution attacks were something only academics ever talked about - everyone assumed it would be too difficult to pull off in the real world. When that received wisdom was proved wrong, it probably opened the floodgates for researchers - both in terms of intellectual interest and money.

Also, re: why Intel and not AMD... I think Intel is probably a higher-dollar target due to their dominance in the server market, but also probably because they have been neglecting QC for years... see, e.g., http://danluu.com/cpu-bugs/


Dan Luu didn't note that Meltdown goes all the way back to their first out-of-order speculative execution design, the Pentium Pro in 1995. I note that ARM, and both of IBM's architectures, POWER and mainframe, also had Meltdown issues, and everyone including AMD "enjoys" Spectre bugs, so named because they'll be haunting us for a very long time.


I think it is definitely worth introspecting about the history. It has been known for over 20 years that sharing pretty much anything creates side channels but nobody knew how to reliably exploit them and it was assumed that side channels might never be exploitable. In recent years there has been massive progress in practical data extraction using side channels.


Theo (of OpenBSD) famously ranted about Intel's implementation of SMT/hyperthreading ~12 years ago https://marc.info/?l=openbsd-misc&m=118296441702631&w=2


You sure about that link? He's talking about a core that didn't have SMT and is ranting, in general, about errata existing, while wildly misrepresenting their impact.

Never mind that most errata are conditional until the microcode patch loads; that particular rant has nothing to do with HT.


It has always been known how to exploit them, but doing so used to be slower and there were fewer opportunities for attacks. OS kernels used to have Big Locks (AFAIK, OpenBSD still does), which significantly deterred programs from messing with kernel code and CPU caches.

Things have changed a lot since then: OS kernels became faster by eliminating a lot of unnecessary (?) cross-process overhead; browser makers made a number of potentially problematic decisions ("let's allow Javascript to create CPU threads — what could possibly go wrong?"); Linux kernel developers made a few potentially problematic decisions ("let's allow unprivileged processes to invoke arbitrary BPF bytecode — that worked for Java, so what could possibly go wrong?").

A lot of small security lapses added up until it became viable to use CPU flaws to actually target ordinary users. To add insult to injury, certain corporations started spreading the myth that well-known insecure practices — such as knowingly running local software from questionable authors — are "safe enough" for the general population. The topic web page even talks about running untrusted Android software, as if Android had some kind of impenetrable security boundary around untrusted apps.


> Why it is in the past 20 years we have had close to zero in the news ( At least I wasn't aware of any ) and ever since Spectre and Meltdown we have something new like every few months.

It's a new vulnerability class. Prior to Spectre, nobody thought that code which didn't execute (and couldn't execute) could affect architectural state in an observable way. It's hard to overstate how bizarre the vulnerabilities from the Spectre family are from a software point of view: it's leaking data from code that not only didn't execute yet, but also can never execute, and in some cases doesn't even exist! It's like receiving a packet your future self sent to the past, except that your future self had been dead for two years when he sent the packet, and for some reason he's actually a parrot.

Once a new vulnerability class is discovered, researchers will start looking for new bugs in and around that class. That is why we have lately seen so many issues disclosed around speculative execution and data leaked through shared microarchitectural state.


This is a common pattern for new bug classes. Nobody thought to look at this, and when they did, the rabbit hole went deep. We likely haven’t seen the bottom.

AMD are not better. They’re probably worse. They’ll be looked at when the Intel tree stops bearing fruit. But finding an Intel bug is higher impact, so that’s what researchers want to look at.


> AMD are not better. They’re probably worse.

This argument makes zero sense and fails real-world inspection. You see, researchers did publish the fact that Spectre affected AMD.

For this vulnerability, they were unable to reproduce the bug on AMD (their words). Which means that they tried.


Intel and AMD don't have to share the same bugs for AMD to be worse.

Consider you've got two sets of vulnerabilities: [1, 2, 3] and [2, 4, 5, 6, 7, 8].

If I label set 1 Intel, and set 2 AMD, then you can see how doing your research on Intel first will make it seem like Intel has 3x the vulnerabilities as AMD - even though it actually has half.


By your logic both of the sets could be infinite and we will never find out who has more vulnerabilities.


Possibly.


Bullshit. So far most vulnerabilities have affected only Intel, because of their design choices. AMD is way better, so far.


Since there is relatively little AMD running, you would expect relatively little investment in attacking it.


> We were unable to reproduce this behavior on non-Intel CPUs and consider it likely that this is an implementation issue affecting only Intel CPUs.

The attacks are being attempted. AMD just didn't screw up as badly.


Trying Intel attacks on AMD just in case is cheap and easy, and in this case fruitless. It doesn’t shed any light on how much effort is being put into finding AMD’s own specific screw ups.


I think it's because they have designed automatic vulnerability detection devices that are efficient at finding even very obscure issues.


This looks like it is from the same TU Graz people who also worked on Meltdown & Spectre

https://meltdownattack.com/


Url changed from https://zombieloadattack.com, which points to this.

There is a home page about today's vulnerability disclosures at https://news.ycombinator.com/item?id=19911715. We're disentangling these threads so discussion can focus on what's specific about the two major discoveries. At least I think there are two.


I think there are two separately branded announcements of three or four different vulnerabilities, depending on how you count. (There are four CVEs and Intel lists four, but the researchers announced three.) I haven't seen much discussion of the specific differences between them, probably because they're subtle and not terribly relevant to most folks: they all involve one process speculatively reading memory it shouldn't be able to access via the memory-access buffers within Intel CPUs; they just vary in which parts of the memory-access machinery they use and how exactly they're exploited.


At what point do we simply revert to using typewriters for authoring sensitive documents, and pneumatic tubes (couriers for WAN) for networking?

https://www.theguardian.com/world/2014/jul/15/germany-typewr...


Long ago? https://www.theguardian.com/world/2013/jul/11/russia-reverts... (also https://www.cia.gov/library/readingroom/document/cia-rdp78-0...)

But assuming a typewriter has no attack vectors is just as foolish as trusting insecure networks, IMO.

https://arstechnica.com/information-technology/2015/10/how-s...

Also: detecting text through keystrokes previously discussed here https://news.ycombinator.com/item?id=7448976 (https://people.eecs.berkeley.edu/~tygar/papers/Keyboard_Acou...)

Heck while I can't find a quick source, I remember a story about how the CIA designs rooms/walls and buildings to prevent sound from predictably bouncing through rooms in ways that could be captured from afar.

Spooks are usually 10 steps ahead of the public's common sense in this area.



We don't need to revert to typewriters. We just need computers designed with a real security model in mind, instead of piles of ad-hoc mitigations. However, I bet no one will invest in it until one of these exploits brings down AWS, takes over Google's crawlers, or something else of that sort.


There are smaller companies that keep designing them; for the most part, nobody buys them. One example that can handle lots of security policies is CoreGuard, which is based on the work at crash-safe.org.

https://www.dovermicrosystems.com/

Academics keep coming up with countermeasures for timing channels, like partitioning, masking, and randomizing components. Personally, if not physical separation, I'd just do SMP with the secret parts on a different CPU than the untrusted parts, both memory-safe, on a separation kernel to isolate them. One design used different DIMMs, too.


>> We just need computers designed with a real security model in mind

Such options already exist, as another commenter pointed out. If you need this kind of protection, it is available, at significant cost.


[flagged]


Downvoted for the frothy edit. Chill out there ya toughy. Someone clearly hit a soft spot on you.


People should realize that the ancient Chinese were onto something when they said that all phenomena evolve only so much before they tip over the peak of maximum development and inevitably rumble downhill into overdevelopment.

P.S. The Holy Church of Progress keeps flagging the heresy of the I Ching out of existence; may it prevail in its glorious ways. Curious fact: expressing your disagreement in written form takes more neurons than the flagging reflex does. Try and ye shall succeed!


I like the I Ching too but could you please stop posting these and then deleting them? It's an abuse of the site.


I'll stop as soon as the dysgraphic flaggers stop. The true abuse is muting a comment that doesn't offend anyone and merely calls for contemplating a philosophy so different from the current mainline that it hurts. Or does it? Are they triggered by 'people should'? Yeah, they should, but they don't have to. Is the very notion that things cannot be improved forever without changing their essence offensive? If there's any other reason, could you explain it, please?



