Hacker News | tryp's comments

I found this an interesting look into the US Forest Service's philosophy of road management, through the lens of Washington State's Olympic National Forest.

"The Forest Service – a branch of the Department of Agriculture whose original purpose was to “furnish a continuous supply of timber for the use and necessities of the people of the United States” – classifies their roads into five buckets based on their method of construction and maintenance needs. And the agency seems to be highly tuned to the lifecycle of their road infrastructure..."

(Discovered through https://kagi.com/smallweb/ featured on HN a few weeks ago)


Some folks are just not TradeMarksters. They'll teach you that Intel makes processors that implement x86 and amd64 instruction sets. You can Google things on DuckDuckGo. They'll blow their nose on the Kleenex that they bought from GenericPaperProductsCo. At some restaurants in the US South you'll have to let your server know what kind of Coke you want: cola, lemon-lime, or the Doctor one.


BIOS is often able to set mask bits in the CPU's or chipset's PCI bridges that prevent the devices from being reported to the OS. Before loading additional code like user DXEs or a bootloader, BIOS commands a one-way sealing operation that prevents modifications until reset.


I love that the instruction decode unit's ability to command a flush on branch mispredicts is represented in the diagram as a toilet.


As a longtime browser of cross-referenced source generated by LXR [0] I found this article on their new implementation of the backend interesting.

[0] http://elixir.free-electrons.com/linux/latest/source


The researchers' writeup, in a very fun form, can be found at https://rol.im/securegoldenkeyboot/

The text follows, for those who find the joviality of the original presentation undesirable:

irc.rol.im #rtchurch :: https://rol.im/chat/rtchurch

Specific Secure Boot policies, when provisioned, allow for testsigning to be enabled, on any BCD object, including {bootmgr}. This also removes the NT loader options blacklist (AFAIK). (MS16-094 / CVE-2016-3287, and MS16-100 / CVE-2016-3320)

Found by my123 (@never_released) and slipstream (@TheWack0lian) Writeup by slipstream (@TheWack0lian)

First up, "Secure Boot policies". What are they exactly?

As you know, Secure Boot is part of the UEFI firmware; when enabled, it only lets stuff run that's signed by a cert in db, and whose hash is not in dbx (revoked).
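The allow/deny rule just described can be sketched in a few lines (a Python illustration of the logic only — the real checks operate on X.509 certificate chains and PE image hashes):

```python
def secure_boot_allows(signer_cert_hashes, image_hash, db, dbx):
    """Sketch of the Secure Boot rule described above: the image must
    be signed by a cert present in db, and its hash must not appear
    in dbx (the revocation list)."""
    signed_by_trusted = bool(signer_cert_hashes & db)  # signer cert in db?
    revoked = image_hash in dbx                        # image hash in dbx?
    return signed_by_trusted and not revoked
```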

As you probably also know, there are devices where secure boot can NOT be disabled by the user (Windows RT, HoloLens, Windows Phone, maybe Surface Hub, and maybe some IoTCore devices if such things actually exist -- not talking about the boards themselves which are not locked down at all by default, but end devices sold that may have secureboot locked on).

But in some cases, the "shape" of secure boot needs to change a bit. For example in development, engineering, refurbishment, running flightsigned stuff (as of win10) etc. How to do that, with devices where secure boot is locked on?

Enter the Secure Boot policy.

It's a binary-format file embedded within a signed ASN.1 blob. It's loaded by bootmgr REALLY early in the Windows boot process. It must be signed by a certificate in db. It gets loaded from a UEFI variable in the secureboot namespace (therefore, it can only be touched by boot services). There are a couple of .efi binaries signed by MS that can provision such a policy, that is, set the UEFI variable with its contents being the policy.

What can policies do, you ask?

They have two different types of rules. BCD rules, which override settings in the on-disk BCD, and registry rules, which contain configuration for the policy itself, plus configuration for other parts of boot services, etc. For example, one registry element was introduced in Windows 10 version 1607 'Redstone' which disables certificate expiry checking inside mobilestartup's .ffu flashing (ie, the "lightning bolt" windows phone flasher); and another one enables mobilestartup's USB mass storage mode. Other interesting registry rules change the shape of Code Integrity, ie, for a certain type of binary, it changes the certificates considered valid for that specific binary.

(Alex Ionescu wrote a blog post that touches on Secure Boot policies. He teased a followup post that would be all about them, but that never came.)

But, they must be signed by a cert in db. That is to say, Microsoft.

Also, there is such a thing called DeviceID. It's the first 64 bits of a salted SHA-256 hash, of some UEFI PRNG output. It's used when applying policies on Windows Phone, and on Windows RT (mobilestartup sets it on Phone, and SecureBootDebug.efi when that's launched for the first time on RT). On Phone, the policy must be located in a specific place on EFIESP partition with the filename including the hex-form of the DeviceID. (With Redstone, this got changed to UnlockID, which is set by bootmgr, and is just the raw UEFI PRNG output.)

Basically, bootmgr checks the policy when it loads: if it includes a DeviceID which doesn't match the DeviceID of the device that bootmgr is running on, the policy will fail to load.
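That gating logic amounts to something like the following (a toy Python model; the field names are made up, not actual policy structures):

```python
def policy_loads(policy, this_device_id):
    """Sketch of the DeviceID gate described above: a policy carrying
    a DeviceID only loads on the matching device; a policy with no
    DeviceID is not gated at all."""
    policy_device_id = policy.get("device_id")  # hypothetical field name
    if policy_device_id is None:
        return True  # no DeviceID in the policy: nothing to match against
    return policy_device_id == this_device_id
```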

Any policy that allows for enabling testsigning (MS calls these Retail Device Unlock / RDU policies, and installing one unlocks the device) is supposed to be locked to a DeviceID (UnlockID on Redstone and above). Indeed, I have several policies (signed by the Windows Phone production certificate) like this, where the only differences are the included DeviceID and the signature.

If there is no valid policy installed, bootmgr falls back to using a default policy located in its resources. This policy is the one which blocks enabling testsigning, etc, using BCD rules.

Now, for Microsoft's screwups.

During the development of Windows 10 v1607 'Redstone', MS added a new type of Secure Boot policy. Namely, "supplemental" policies that are located on the EFIESP partition (rather than in a UEFI variable), and have their settings merged in, dependent on conditions (namely, that a certain "activation" policy also exists and has been loaded in).

Redstone's bootmgr.efi loads "legacy" policies (namely, a policy from UEFI variables) first. At a certain time in redstone dev, it did not do any further checks beyond signature / deviceID checks. (This has now changed, but see how the change is stupid) After loading the "legacy" policy, or a base policy from EFIESP partition, it then loads, checks and merges in the supplemental policies.

See the issue here? If not, let me spell it out to you plain and clear. The "supplemental" policy contains new elements, for the merging conditions. These conditions are (well, at one time) unchecked by bootmgr when loading a legacy policy. And bootmgr of win10 v1511 and earlier certainly doesn't know about them. To those bootmgrs, it has just loaded in a perfectly valid, signed policy.

The "supplemental" policy does NOT contain a DeviceID. And, because they were meant to be merged into a base policy, they don't contain any BCD rules either, which means that if they are loaded, you can enable testsigning. Not just for windows (to load unsigned driver, ie rootkit), but for the {bootmgr} element as well, which allows bootmgr to run what is effectively an unsigned .efi (ie bootkit)!!! (In practise, the .efi file must be signed, but it can be self-signed) You can see how this is very bad!! A backdoor, which MS put in to secure boot because they decided to not let the user turn it off in certain devices, allows for secure boot to be disabled everywhere!
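To make the failure mode concrete, here is a toy Python model of a pre-Redstone loader (all field names are hypothetical, not actual bootmgr structures): because it ignores elements it doesn't recognise, a Redstone supplemental policy looks to it like a complete, valid, device-unlocked policy.

```python
def load_policy_th2(policy, this_device_id=b"DEV0"):
    """Toy model of a pre-Redstone policy loader: it verifies the
    signature and the DeviceID (if any), and silently ignores any
    element it does not recognise -- including the 'supplemental'
    marker added in Redstone."""
    if not policy["signed_by_db_cert"]:
        raise ValueError("bad signature")
    dev = policy.get("device_id")
    if dev is not None and dev != this_device_id:
        raise ValueError("DeviceID mismatch")
    return policy  # accepted as a complete, valid policy

supplemental_policy = {
    "signed_by_db_cert": True,  # genuinely MS-signed
    "supplemental": True,       # marker the old loader never looks at
    "device_id": None,          # not locked to any device
    "bcd_rules": {},            # empty: nothing blocks testsigning
}
loaded = load_policy_th2(supplemental_policy)
testsigning_allowed = "testsigning_blocked" not in loaded["bcd_rules"]
```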

You can see the irony. Also the irony in that MS themselves provided us several nice "golden keys" (as the FBI would say ;) for us to use for that purpose :)

About the FBI: are you reading this? If you are, then this is a perfect real-world example of why your idea of backdooring cryptosystems with a "secure golden key" is very bad! Smarter people than me have been telling you this for so long; it seems you have your fingers in your ears. You seriously still don't understand? Microsoft implemented a "secure golden key" system. And the golden keys got released through MS's own stupidity. Now, what happens if you tell everyone to make a "secure golden key" system? Hopefully you can add 2+2...

Anyway, enough about that little rant, wanted to add that to a writeup ever since this stuff was found ;)

Anyway, MS's first patch attempt. I say "attempt" because it surely doesn't do anything useful. It blacklists (in boot.stl) most (not all!) of the policies. Now, about boot.stl. It's a file that gets cloned to a UEFI variable only boot services can touch, and only when the boot.stl signing time is later than the time this UEFI variable was set. However, this is done AFTER a Secure Boot policy gets loaded. Redstone's bootmgr has extra code to use the boot.stl in the UEFI variable to check policy revocation, but the bootmgrs of TH2 and earlier do NOT have such code. So, an attacker can just replace a later bootmgr with an earlier one.
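The ordering problem can be sketched as a toy model (Python; heavily simplified): revocation is only ever consulted by a Redstone-era bootmgr, and only after the policy has already been loaded, so swapping in a TH2-era bootmgr avoids the check entirely.

```python
def boot(bootmgr_is_redstone, policy, revoked_policies):
    """Toy model of the patched boot flow described above."""
    policy_loaded = True  # step 1: policy loads; no revocation check yet
    if bootmgr_is_redstone:
        # step 2: only a Redstone bootmgr consults the boot.stl-derived
        # revocation list in the UEFI variable
        if policy in revoked_policies:
            policy_loaded = False
    return policy_loaded

# Attacker downgrade: with a TH2-era bootmgr, the revoked RDU policy
# still loads, because the revocation code simply isn't there.
```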

Another thing: I saw some additional code in the load-legacy-policy function in redstone 14381.rs1_release. Code that wasn't there in 14361. Code that specifically checked the policy being loaded for an element that meant this was a supplemental policy, and erroring out if so. So, if a system is running Windows 10 version 1607 or above, an attacker MUST replace bootmgr with an earlier one.

On August 9th, 2016, another patch came about, this one was given the designation MS16-100 and CVE-2016-3320. This one updates dbx. The advisory says it revokes bootmgrs. The dbx update seems to add these SHA256 hashes (unless I screwed up my parsing): <snip>

I checked the hash in the signature of several bootmgrs of several architectures against this list, and found no matches. So either this revokes many "obscure" bootmgrs and bootmgfws, or I'm checking the wrong hash.

Either way, it'd be impossible in practise for MS to revoke every bootmgr earlier than a certain point, as they'd break install media, recovery partitions, backups, etc.

- RoL

disclosure timeline:
~march-april 2016 - found initial policy, contacted MSRC
~april 2016 - MSRC reply: wontfix. started analysis and reversing, working on almost-silent (3 reboots needed) PoC for possible emfcamp demonstration
~june-july 2016 - MSRC reply again, finally realising: bug bounty awarded
july 2016 - initial fix. fix analysed, deemed inadequate. reversed later rs1 bootmgr, noticed additional inadequate mitigation
august 2016 - mini-talk about the issue at emfcamp, second fix, full writeup release

credits:
my123 (@never_released) -- found initial policy set, tested on surface rt
slipstream (@TheWack0lian) -- analysis of policies, reversing bootmgr/mobilestartup/etc, found even more policies, this writeup.

tiny-tro credits:
code and design: slipstream/RoL
awesome chiptune: bzl/cRO <3


We should just swap the top-link to this post, thanks for the detailed write-up!


The original is also on the frontpage: https://news.ycombinator.com/item?id=12259911


Ah, thanks, I missed it at first somehow.


Toward the end of that slide deck, in the "Slowing Brute Force" section, there is the following slide:

> Argon2 ensures that both defenders and attackers hash passwords on the same platform (x86)

Do they simply mean that currently the most cost effective platform to deliver the necessary computation and memory is x86?

I'm having trouble believing that it is either achievable or desirable to design a password hash that's only efficient on one architecture.


  I'm having trouble believing that it is [...] desirable
To make it less useful to build an NSA-style ASIC hash-cracking cluster, like https://en.wikipedia.org/wiki/EFF_DES_cracker

  I'm having trouble believing that it is [...] achievable
The point is not to write an algorithm that is only fast on x86, rather than ARM, POWER, whatever; it's to write an algorithm that uses so much memory that it's not orders-of-magnitude faster to use an ASIC that's just a bunch of compute cores connected to fast buses, rather than a general-purpose processor with megabytes of on-chip cache.


The entire point of "memory-hardness" and "anti-parallelisation" and other such properties is to make the CPU the most desirable platform for the password stretching function to run on.

Why is this desirable?

This is desirable because your server has a CPU. Your attacker has a CPU, a GPU, perhaps an FPGA, and could possibly manufacture a custom ASIC. You do not want the attacker to be able to brute-force on any of these platforms any more cost-effectively than a single CPU core can.

This makes password cracking difficult, and it also means we do not need to have absurd CPU runtimes for our hash, a handful of milliseconds is sufficient. (More is of course better, and for a single user system a delay of half a second isn't noticeable.)

What would occur if we did not design password stretching functions that had this property?

We would end up in the situation we have now, where nonstandard hardware computes hashes faster than we can, and thus instead of taking (a few milliseconds * number of possibilities) it takes (smaller amount of time * number of possibilities). This is undesirable.

This is also why you should never use a typical fast cryptographic hash function like SHA-256, SHA-3 or RIPEMD-160 to hash passwords. This is also why PBKDF2 is considered less secure than, say, scrypt.
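For illustration, Python's standard library exposes scrypt directly (hashlib.scrypt, which needs a Python built against OpenSSL 1.1+); the n and r parameters below make each guess cost roughly 128*n*r = 16 MiB of memory, which is exactly the property that hurts ASIC/GPU attackers:

```python
import hashlib
import hmac
import os

def hash_password(password, salt=None):
    """Memory-hard password hash via stdlib scrypt (RFC 7914).
    n=2**14, r=8, p=1 costs about 16 MiB of RAM per guess."""
    salt = salt or os.urandom(16)
    digest = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    return salt, digest

def verify_password(password, salt, digest):
    candidate = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    return hmac.compare_digest(candidate, digest)  # constant-time compare
```

Store the salt alongside the digest; tune n upward until hashing takes as many milliseconds as your login path can tolerate.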


It actually resides in the PCH. The processor is called the Management Engine. On newer platforms, it gets to decide whether the main CPU even gets to see the BIOS executable code.


I couldn't find much data on the Curie module, but it looks like the only pins required are power and slow I/O, so I can't imagine it's a super-dense BGA or anything. Even on high-end Intel SoCs with over 1k balls, a good designer can escape all the signals without microvias.


A few years ago I implemented the storage system for a special-purpose diagnostic camera. The specification defined (very long) filenames for the saved images using a timestamp and some other data. I used a mostly off-the-shelf microcontroller/NAND/USB mass storage reference implementation, hooked it up via a side channel to the FPGA, and had everything working pretty nicely. Until the test harness that just continuously commanded pictures to be taken reached 105 iterations. After that, the camera timed out waiting on the storage subsystem to store the image.

The problem turned out to be the code that found the 8.3 filename: it tried longfi~1.bin, checked to see if that file existed and, if so, incremented to longfi~2.bin, then checked that... but never did the checksum trick described here; it just kept iterating. (Bear in mind this was a tiny 8-bit microcontroller that didn't have the RAM to read all the directory entries at once and keep them around for comparison.) Finding the proper 8.3 filename this way took longer than the timeout period after 104 collisions.

Of course, we only cared about the long filename and never saw the 8.3 filename, so my fix was simply to use an appropriate hash of the long filename to ensure a good probability of uniqueness.
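A toy Python model of the two strategies (not the actual firmware code, and the name format is shortened for illustration): linear ~N probing pays one existence check per existing collision, so N files cost O(N^2) checks in total, while deriving the tail from a checksum of the long name finds a free name in one attempt with high probability.

```python
import zlib

def probe_name(existing):
    """Linear ~N probing as in the buggy firmware: try LONGFI~1,
    LONGFI~2, ... until a free name is found. Returns the free name
    and the number of existence checks performed for this one file."""
    checks, n = 0, 1
    while True:
        name = f"LONGFI~{n}.BIN"
        checks += 1
        if name not in existing:
            return name, checks
        n += 1

def hashed_name(long_name):
    """The fix described above, sketched: derive the tail from a
    checksum of the long filename, so uniqueness is probabilistic
    but the lookup cost is constant."""
    return f"LO{zlib.crc32(long_name.encode()) & 0xFFFF:04X}~1.BIN"

# With 104 existing files, the 105th name alone takes 105 checks.
existing = {f"LONGFI~{i}.BIN" for i in range(1, 105)}
name, checks = probe_name(existing)
```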


Ah yes, the wonders of accidentally quadratic functions.

