If you're a target of a major intelligence agency, I think you have to assume that all of your computers are irretrievably compromised. From Vault 7, we know that the CIA has long developed implants to infect both EFI and hard-drive firmware, which load before any code that could detect them. These could be made arbitrarily hard to detect without physically opening the computer, dumping those flash devices, and comparing them against a known-good image. Who knows what other embedded processors with a little bit of flash lurk in the various peripherals of your laptop that they've figured out how to wheedle their way into... If the flash is integrated into the microcontroller itself, there may not even be an easy way to reliably dump its contents.
I think you are absolutely correct in your assessment. I recall Alan Cox (Welsh bloke, big beard, Linux kernel hacker; well, simply "hacker" in general will do) posting on G+ about someone booting enough Linux on a hard disc to get a prompt. No, not on the disc itself: off the firmware on the disc's controller.
You may also like to consider that nearly all modern server systems have an iLO/iDRAC or whatever, which can do all sorts of things, plus at least one internal USB interface. PCs can have the Intel ME and other horrors. The best you can hope for is that it is only your local intelligence agency that potentially has routine access to your system.
Could all firmware be kept on WORM chips, which can't be rewritten no matter what an adversary does? Updates would require swapping chips, but at least drive-by implants would be impossible.
Most computers have an Embedded Controller (with integrated flash) that does a lot of motherboard/system-specific stuff like power management, flashing LEDs, and even scanning the keyboard matrix on laptops.
If you care about this, then put the laptop in a tamper-evident bag. Those are necessarily imperfect too, but there's work on making tamper-evident seals that resist up to state-level attacks, since that's relevant to things like enforcement of nuclear weapons treaties. That succeeds to the extent that you can find a physical effect that's easy to create and measure, but hard to recreate deterministically. (In concept: dump a pile of glitter over your thing. The effort to dump the glitter, take two pictures, and compare them is small. The effort to recreate a given glitter distribution flake by flake is large. Likewise for laser speckle from random rough surfaces, and many other effects.)
You could check a laptop for malware later by reading out literally every bit of nonvolatile state, including the BIOS and stuff, and confirming that all changes had expected form (to files you meant to work on, etc.). Of course, then you have to trust the equipment you use for that...
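For the file-level part of that audit, something like an rsync checksum dry-run against a known-good baseline copy would at least list every change for review. A sketch, with illustrative mount points, meant to be run from trusted media rather than the suspect OS:

    # List every file that differs from the baseline, by checksum, without
    # copying anything (-r recursive, -c checksum, -n dry-run, -i itemize).
    rsync -rcni --delete /mnt/suspect-root/ /mnt/baseline-root/

Anything in that output that isn't a file you knowingly changed is worth a closer look.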
A little weird that he ran the experiment. Did he really suspect that malware was routinely getting installed by attackers with physical access to laptops during business travel? If yes, then why didn't someone notice it calling home or whatever?
> If you care about this, then put the laptop in a tamper-evident bag.
How does this procedure work for multi-day evil-maid situations? The first day, while you're out, the maid replaces your collection of disposable plastic tamper-evident bags with faulty ones that open with a particular chemical but otherwise look identical. The second day, the maid tampers with your laptop and you don't notice. Do you just have to take the whole box of spare bags with you every day? That seems prohibitively inconvenient.
If this were a job interview, then I'd say "put the spare bags in with the laptop"...
Or the "bag" can be the laptop's existing case. You can put seals (stickers, or the sparkly nail polish trick mentioned below) over all the fasteners and seams of the laptop, fill all the non-power ports with epoxy, etc. None of these make tampering impossible, but they can make it uneconomic.
I don't think anyone at serious risk of these kinds of attacks lets computers out of their physical control. I've seen agencies that do the seals/epoxy even for computers inside their secure facilities, presumably to give their guards more time to catch an inside tamperer.
Professional poker players have been dealing with this type of risk for years. Major poker tournaments present a juicy target for organised hacking gangs. A high-stakes pro might have tens or hundreds of thousands of dollars deposited in their PokerStars account. Hundreds of professional players in a tournament cardroom means hundreds of very valuable laptops left in hotel rooms.
The most sensible precautions seem to be: a) full-disk encryption with a strong passphrase, b) hardware 2FA using a token that is stored separately from the computer, c) physically securing the machine whenever possible, and d) tamper-evident seals covering screw holes and seams.
If your adversary is capable of beating these precautions, you're probably screwed anyway.
So part of the point of tamper-evident seals is that they are difficult to duplicate. The bags themselves should be readily verifiable, e.g. with a serial number for basic verification at least. The seals used for nuclear treaty enforcement most commonly contain little fibers that are randomly mixed into the plastic; their pattern is photographed, making the seals extremely difficult to duplicate.
A border agent would just open that bag right in front of you, making it not a particularly useful tamper-detection measure.
I suppose a large part of the problem could be solved just by taking checksums of all non-volatile memory on the device. That doesn't check for, say, hardware keyloggers inserted without your consent; catching those would require a thorough physical inspection of the hardware. And it still doesn't tell you whether somebody has simply copied data off your device, so for that you might want something which physically marks the device if the hard drive is removed and presumably read outside your computer, like the dye traps used in banks and cash transports.
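A sketch of what that first step could look like, run from a trusted live USB (device names are illustrative, and flashrom only works for flash chips it supports):

    # Hash the raw disk and a dump of the boot flash against known-good values.
    sha256sum /dev/sda > current.sha256
    flashrom -p internal -r bios-dump.bin    # read out the SPI flash (BIOS/UEFI)
    sha256sum bios-dump.bin >> current.sha256
    diff known-good.sha256 current.sha256 && echo "no changes detected"

This still leaves the EC, hard-drive controller, and other peripheral firmware discussed above unverified, of course.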
> Did he really suspect that malware was routinely getting installed by attackers with physical access to laptops during business travel? If yes, then why didn't someone notice it calling home or whatever?
I doubt it, but his job is to suspect all sorts of things. If you are going to attempt to quantify risk, then some experimentation is in order rather than simple speculation. As for "notice it calling home": it is surprising how much is missed. For example, Meltdown and Spectre were predicted many, many years ago...
Tens of millions of laptops have been exposed to at least as much evil maid opportunity as the author's. That's either a much stronger natural experiment than his artificial one, or the biggest sleeper attack in history. Like, the Israelis still had to blow up the centrifuge eventually...
Pure handwave. Let's call an "exposure" a one-way trip plus half the hotel stay. So the author's laptop got (3+5) * 2 = 16 exposures.
Air passengers make about 3B trips per year. A laptop lasts about three years. I said "tens of millions", so if I'm right then we have at least 16 * 20M exposures on existing laptops. That would mean at least one passenger in 28 travels with a laptop and is as careless as the author was.
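Making that arithmetic explicit (every input here is one of my guesses above):

    # 3B trips/yr * 3-yr lifetime, vs. 16 exposures * 20M careless laptops
    echo $(( (3000000000 * 3) / (16 * 20000000) ))    # => 28, i.e. ~1 in 28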
That seems high to me--like, it's not too common to check your laptop (unless you were flying from an Arab country last year...). On the other hand, that ignores opportunities before the laptop's first retail sale. Those seem more attractive to me--more time to work, less diverse hardware, etc.--and almost every laptop sold is exposed that way.
So my comment above was probably too flip. His experiment still seems pointless to me, though.
I don't think he thought he was targeted, if he thought the hacker stickers would make a difference. But if I had an exploit like that and was targeting a security-conscious journalist, then (a) I'd be mystified when he checked his laptop, and probably unprepared to take advantage, and (b) I doubt I'd risk my >$1M exploit--even if I could hide it perfectly in some firmware, it still has to communicate out to the world somehow, and that's where it's likely to get noticed.
I'm thinking of the David Miranda case, for example. If you talk to The Intercept at all in any way prior to going through customs, I think you can expect to be delayed while they take a closer look at you. It's not really paranoia if they probably are out to get you ;)
It builds up a concept of "Colour" as describing information about a thing (distinct from metadata/tagging) which is not necessarily derivable from the thing itself. Most frequently it uses the term to describe provenance, but it is careful not to limit the concept. To quote ansuz's essay above in relation to the linked article:
When we use Colour like that to protect ourselves against viruses or malicious input, we're using the Colour to conservatively approximate a difficult or impossible to compute function of the bits. Either our operating system is infected, or it is not. A given sequence of bits either is an infected file or isn't, and the same sequence of bits will always be either infected or not. Disinfecting a file changes the bits. Infected or not is a function, not a Colour. The trouble is that because any of our files might be infected including the tools we would use to test for infection, we can't reliably compute the "is infected" function, so we use Colour to approximate "is infected" with something that we can compute and manage - namely "might be infected". Note that "might be infected" is not a function; the same file can be "might be infected" or "not (might be infected)" depending on where it came from. That is a Colour.
Once you've left your computer alone with a potential adversary, it has the might-be-compromised Colour. Proving whether it definitely has or has not been compromised is easy for devices which do not have this Colour, but as described in the linked-to article, very difficult or impossible once it has this Colour.
Let's be honest here. None of the more cutting-edge attacks are going to be risked against as hard a target as this guy. The level of attack sophistication the author is starting to reach is going to be reserved for state-level persons-of-interest.
Espionage is a game of judging capabilities, and cracking some security researcher's laptop telegraphs to the rest of the world that you can. As a national actor you don't actually WANT to flex your spy muscles in obvious ways unless the payoff is JUST THAT CRITICAL. It removes the veil of the unknown, and gives potential adversaries/persons-of-interest that much better a chance of successfully applying tradecraft to hide what you actually want to monitor, because they have more accurate knowledge of what your capabilities are. Contrary to popular belief, most organizations capable of pulling off an evil maid attack simply won't, because of the revelation of capability already mentioned, and because of the PRISM problem: too much information/access in general lends itself to becoming useless, due to the difficulty of separating the tasty bits from the mundane.
Kudos to the guy for actually trying the experiment, but it doesn't really tell anyone anything we didn't already know 20 years ago.
Computers are inherently insecure. Every form of "security" is insecure at some point. Computers haven't changed anything, except for making a person's computer a juicy target for financial information and passwords for non-state actors, and making the surveillance potential so much more horrifying on account of the ubiquity of networked cameras, sensors, and microphones on the ground waiting to be exploited.
Forget about laptop evil maid attacks. Start thinking about the ticking time bomb of 'poisoned' hardware rife with 'tailored access' whereby state actors can push a button and have every device with a camera/microphone within a certain set of GPS coordinates start silently acting as an input sensor. Combine that data stream with the right neural networks, and you'll see a world that no one in their right mind wants, but is well within our manufacturing capabilities to create.
Or stop worrying, go outside, and make a friend. It's way better for your mental health.
While this is focused on hardware and physical access, it would seem that it's the same for software. You don't know if someone has control over it remotely, through any number of means (browser, downloaded software, installed professional software with backdoors, software with unreleased vulnerabilities, etc.). Even airgapped machines can be compromised (Stuxnet, TEMPEST).
Even if you built all the binaries from scratch from the official repos, you'd still be at risk of security bugs like heartbleed, or a compromised compiler.
In the end, I think security is always a numbers game. Someone can always get to your protected resources; it's just a matter of how badly the attacker wants them.
"But given that current defenses against detecting processor-level backdoors wouldn’t spot their A2 attack, they argue that a new method is required: Specifically, they say that modern chips need to have a trusted component that constantly checks that programs haven’t been granted inappropriate operating-system-level privileges. Ensuring the security of that component, perhaps by building it in secure facilities or making sure the design isn’t tampered with before fabrication, would be far easier than ensuring the same level of trust for the entire chip."
They admit that implementing their fix could take time and money. But without it, their proof-of-concept is intended to show how deeply and undetectably a computer’s security could be corrupted before it’s ever sold. “I want this paper to start a dialogue between designers and fabricators about how we establish trust in our manufactured hardware,” says Austin. “We need to establish trust in our manufacturing, or something very bad will happen.”
Thesis: it is possible that someone may access your laptop without you knowing if you leave it unattended.
Experiment: after going through a number of attempts (some of them meaningless[1]) to be able to prove that this happened, there was no evidence that it did.
Doubt: did it happen nonetheless without leaving any trace, or did it not actually happen at all?
Bonus: the experimenter learned that NVRAM exists in the stupid UEFI firmware
Conclusion: None worth mentioning, but be very aware of what the terrible evil maids can do, and do use the recommended Android app to defend against them.
[1] Hashing a whole hard disk is only a "positive" proof: if the hashes correspond, nothing has changed. But it is entirely possible for the hashes to change because of ordinary filesystem or disk activity once the system is used, so the method is pointless in the real world, where people bring a laptop with them in order to use it.
>Thesis: it is possible that someone may access your laptop without you knowing if you leave it unattended.
This is known to be true; this experiment was about seeing if anyone would access this particular laptop. That also addresses what you view as meaningless: in real-world scenarios people are trying to keep their laptop from being compromised, while the author was hoping that it would be.
>The "experiment" has too few data points to be meaningful, and the proposed way to verify remains meaningless, two simple cases:
This isn't science; we know this is possible, and the "experiment" was to try to find examples of it happening.
A false negative is always assumed: it is impossible to know you haven't been compromised. A false positive is meaningless on its own, since finding a change is only the first step. You then need to analyze what the change is, and if you can't pin down what has been compromised, you're just back to the default state of being unaware.
This is a honeypot. If you leave your honeypot and return to an empty one, you're pretty sure a bear is around but can't do anything. If you find a bear with their paws in the pot, you don't need to run the experiment again to prove there's a bear.
A forensic image of the encrypted disk is useless (assuming strong passphrase, no bugs, etc.). The article is about the risk that an "evil maid" injects software/firmware into his laptop that presents the expected UI but secretly logs everything he types, and does something bad with that information later.
I run a dual-boot Debian + Windows 7 laptop, but my default position is to assume the Windows partition is exploitable, so for secure activities I boot Debian.
That boots using an unencrypted /boot partition, but everything else runs on LUKS (one big partition, LVM'd down). I have a VeraCrypt partition for files that I want to work on from both operating systems. It works really well, encrypted disks don't materially impact performance, and it gives peace of mind.
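For anyone wanting to replicate that layout, a minimal sketch of the LUKS + LVM part (device names and sizes are illustrative, not my actual setup):

    cryptsetup luksFormat /dev/sda2          # everything except /boot
    cryptsetup open /dev/sda2 cryptroot
    pvcreate /dev/mapper/cryptroot           # one big LUKS container,
    vgcreate vg0 /dev/mapper/cryptroot       # LVM'd down inside it
    lvcreate -L 30G -n root vg0
    lvcreate -L 8G -n swap vg0
    lvcreate -l 100%FREE -n home vg0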
The most likely scenario for theft is someone after the hardware, and they'll not spend much effort trying to break into the file system.
I'd be wary if the machine was stolen and then returned, but restoring the MBR and /boot partition should be sufficient in that instance.
I've travelled to regions that I considered dubious, if not especially technically sophisticated. I haven't done this myself, but research suggests the best way of confirming your laptop hasn't been opened is to use a sparkly nail varnish: dab a small amount on some or all of the case screws, take a close-up photo, and store that photo somewhere safe. After the trip, photograph the screws again and compare. The random patterns are effectively impossible to replicate.
Combined with disabling USB booting, setting a BIOS admin password, and keeping the OS in sleep, it should be possible to prove your laptop hasn't been hacked via physical intrusion.
A lightly paranoid setup might actually only boot from a removable USB disc or SD Card instead of from the fixed HDD. The removable disc is kept on a necklace or whatever. The HDD is totally encrypted. A seriously paranoid setup might include manufacturing your own USB thumb drives.
I do this with the ExpressCard slot on my ThinkPad, not because I'm paranoid but because I like to experiment with many OSs, and it's easier to manage than messing with partitions and bootloaders. I appreciate that the ExpressCard SSDs don't stick out of the laptop at all. I keep all my user data on the internal drive though (encrypted), so my exposure to unattended-laptop hijinks isn't much mitigated.
So: Your user data is on the HDD and encrypted AND you use a removable disc to boot your machine AND you have a "something you know" (password)
That looks quite secure to me, provided you look after your removable disc and password. I'm not familiar with IBM gear - is ExpressCard a removable disc? I tried to read the WP page on it but got confused.
I have one of these for my laptop - Dell Inspiron 17. It runs Arch Linux. I don't trust it at all (I'm CREST accredited) but I still use it.
I agree my setup is probably pretty secure, but not any more so than a single-OS install with FDE, especially since I often leave an OS ExpressCard in the machine and the other ones scattered around my desk...
The EOMA68 project is putting together an interesting variant on this approach. Instead of taking your hard drive with you, they are building computers with a mainboard on a PCMCIA card, so you can eject your entire system from its housing and take it with you. A small SSD is included, so you could literally bring your entire environment with you if you don't mind being limited to 8GB.
> but my default position is to assume the Windows partition is exploitable
In reality, as the article explains, the Windows partition is basically invulnerable to this class of attacks if you take the five minutes to enable BitLocker. OTOH, Linux systems have no effective defense.
BitLocker stores your encryption keys on a Microsoft server [1] and is closed-source software; therefore, by definition, it cannot be trusted for encrypting anything important. (It is wrong even if your adversary is not a state, because that way you get used to a false sense of security that you don't actually have.)
It's an option to store your keys on their server, but not a requirement. At least not the last time I set up BitLocker. In fact that computer didn't even have the on board Secure Storage thing that it prefers, so I had to make a note of the recovery key on paper.
"Stores keys on someone's server" cannot be trusted, no matter what the copyright status is of the code running on the server.
Code licensing or copyright status isn't a form of security.
The issue isn't just that the remote server's code is impervious to scrutiny. A locally installed program that you can reverse engineer isn't automatically trustworthy because it is open-source, or even copylefted. Someone actually has to reverse engineer the binary and prove that it matches the source code. Many users of free software trust upstream binaries. (Even if they compile their own programs, they trust compiler binaries at some point.)
My primary concern - I should perhaps have spelled it out more clearly - is that the Windows partition is likely exploitable via Microsoft Windows.
Data at-rest exploits such as those described in TFA -- and hence my mitigation suggestions for detecting them -- don't scale well, which implies that you're already of interest to your adversary.
> The adjective trusted, in trusted boot, means that the goal of the mechanism is to somehow attest to a user that only desired (trusted) components have been loaded and executed during the system boot. It's a common mistake to confuse it with what is sometimes called secure boot, whose purpose is to prevent any unauthorized component from executing.
> Computers that support “secure boot” or “verified boot,” such as Chromebooks and Windows laptops with BitLocker, aren’t vulnerable to this. The BIOS can detect if the unencrypted part of your disk has been tampered with, and if it has, it will refuse to boot. MacBooks and laptops that run Linux could potentially be attacked in this way.
Really?
(Search terms used: "secure boot linux" and "secure boot macbook")
The first company that makes a truly open-source, security-vetted computer will be very rich. When I say open source, I mean an open-source circuit design, bootloader, OS, etc. The complete stack. The surveillance state is here and we need the tools to fight it!
Right now, every national "security" agency (USA, China, UK) is racing to create a truly comprehensive suite of tools to monitor its citizens en masse[0][1]: exploits for every router, built-in iPhone backdoors, etc. Pretty much anything that would give the government access to the most intimate details of your life. With the current political climate, it's just going to get worse.
If you care about your privacy AND security, become informed and vote for privacy advocates. Visit fightforthefuture.org and eff.org to learn more.
DISCLAIMER: I am in no way affiliated with either of these foundations or their members.
The best defense is being someone too uninteresting to bother with. Once you're interesting to some resourceful adversary, it's very hard to avoid your devices being hacked, and virtually impossible to determine whether they have been.
That depends where in the design, manufacture, and provisioning stage this occurs, and by whom. It's quite possible Micah was looking at the wrong signifiers.
We know that the NSA intercepts machines for modification. And it's possible that hardware is generally backdoored. Maybe even by Chinese manufacturers.
But what can one do, if everything is pwned? It's not practical to build machines from transistors etc. There are dreams of open-source hardware. But how could that even be done securely? The NSA can plant agents anywhere, in theory.
I wasn't addressing this. Only the first of your assumptions, that it is somehow possible to escape surveillance.
I'm increasingly of the view that it's not, at least not through individual action.
My interest is, for first strokes, painting an accurate picture of the landscape. Which means discarding inaccurate models and frames.
Among those: that laying low is possible, or a positive (that's precisely the objective of the Panopticon, and self-censorship and self-regulation are its most efficient forms), or that individual rather than collective action is appropriate.
It also seems that surveillance itself faces various realities and economies, which can be directly attacked.
Or don't be a coward: be interesting, speak up, and don't back down. If more of us do that than not, they will at least have a very difficult time tracking us all.
While I agree with the meaning behind your post, I think that this attitude leads to self destruction in the current timeline.
A much smarter approach in my opinion is to live on two separate layers: a secure private underground and an uninteresting public surface.
This not only keeps the enemy content, it also keeps tracking low; any open confrontation will necessarily lead to harsher measures, which ultimately means violence.
If samizdat worked in the USSR, it can work nowadays too.
A simple thing to make an evil maid's job harder is just to apply plenty of instant glue. That way it takes much more effort to open the laptop or switch components. Also, fill in the ports you don't need.
For practical security it is also important to have some physical marks on the laptop body that let you identify your hardware; otherwise somebody could just replace the whole machine with their own to collect your password. Obviously pretty much anything can be replicated, but absolute security is impossible to achieve anyway, so you can only try to make things harder for them.
. . . so, you either leave your laptop at home (assuming you've a sufficient degree of certainty it won't be hacked there) or you keep it with you at all times, with all wireless technologies disabled.
With regards to my checked luggage - no electronics there - when traveling to/from/in the US, I always save those 'Inspected by TSA' placards, and place one prominently atop my clothes prior to closing and locking my bag.
Based on various physical telltales I utilize, the success rate of placing a used 'Inspected by TSA' placard in one's bag to deter searches is 100%, at least in my experience.
Since I started doing this, I haven't received any new 'Inspected by TSA' placards, either. So, that's another indicator of the technique's probable success rate.
I think you have got a bit confused here. For example, Fermat's Last Theorem is effectively "a negative": "no three positive integers a, b, and c satisfy the equation a^n + b^n = c^n for any integer value of n greater than 2"
"You cannot prove a negative" basically means that you cannot prove that some claim or statement is, was, and always will be false (without perfect knowledge of the past, present, and future).
That parenthetical is an important and almost always unstated axiom.
The general inability to prove a false statement does not mean you cannot prove that the answer to some equation is a number below zero. I am not really aware of the phrase being used in the context of math, but rather more often with examinations or experiments that are susceptible to evidence.
To be sure, "you cannot prove a negative" is itself unproven. It's more a rule of thumb to remind you not to assume that, because some statement is false now, it always was false and always will be false.
It's not perfect, but it's also not a law of logic or anything. It's just a guideline.
If you really want to get down to it, math doesn't work either. Gödel's second incompleteness theorem means we can never be sure whether any mathematical system is 'fully logical'.
This is true without perfect observation. In math you have perfect observation (sometimes), so you can do something like Fermat's Last Theorem. Once you enter the physical world, not so much.
I don't see a hard line between math, then physics, then the 'real' world. They are levels of formalization.
Both in math and in the real world we don't have 'perfect observation'. There are plenty of conjectures, in math and in the real world, that lack a proof of being either true or false.
I think "can't prove a negative" is one of the least informative ways of trying to say something, I assume he meant to say "absence of evidence is not evidence of absence" or perhaps "absence and evidence don't commute"
[Proof by contradiction doesn't have anything to do with the converse of a statement as andrewaylett incorrectly stated and EGreg is being a little bit difficult here.]
> But due to various time-consuming and annoying issues related to Windows updates, I eventually chose to abandon Windows altogether and just run Debian on my honeypot laptop
You know things are bad when people are annoyed by an operating system they don't even use!
I sometimes think there may be a gap for simpler technology where it's easier to ensure nothing's been hacked. Like a Raspberry Pi Zero with its software on a flash drive, where you check the hash of everything on the drive.
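Something like this, say (a sketch; manifest.sha256 would be generated on a machine you trust, and the drive has to be mounted at the same path both times):

    # On the trusted machine: record a hash of every file on the drive.
    find /mnt/flash -type f -print0 | sort -z | xargs -0 sha256sum > manifest.sha256
    # Later, on the Pi: refuse to proceed unless everything still matches.
    sha256sum -c --quiet manifest.sha256 && echo "flash drive contents intact"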
I wonder if it’s practical to make a laptop suitcase where all external edges are touch sensitive? Have a serial number etched into each surface as well.
Edit: the case could have additional logic and wireless charging for power.
Just get one of those fancy metal attaché cases, attach a capacitive sensor to it with some sort of ground plane inside the case, and wire it to some logging microcontroller. Add a gyro chip to boot. Honestly the embedded gyro should be enough, the MEMS chips in phones are CRAZY sensitive, and would easily detect being shifted by a cm.
He's a board member of the Freedom of the Press Foundation, a position also held by Edward Snowden and Daniel Ellsberg. Additionally, he has helped develop SecureDrop and various other tools to enable the anonymous spread of information. Finally, he works for The Intercept, an organization that has a history of receiving leaked information. Seems like a plausible target to me.
But was there any specific thing during the course of his trial run that would have enticed an attacker? And given the fact that a supposed attacker would ostensibly be able to detect the existence of honey before the attack, would they not also be able to detect the absence of it? (And perhaps its simultaneous absence during the course of the trial)
This is a journalistic experiment not a scientific experiment. Experimental bias isn't a large concern, they're trying to record something happening not prove something.
Or it shows that it was enough, and the tester was not able to detect it. It's also possible he was already compromised in a different manner, making the entire ordeal useless.
>What the author actually forgot to do was to add some honey into the honeypot. I.e. become an attractive target.
Precisely. All the author had to do was use his best broken English and pretend to be a member of The Shadow Brokers.
The article would have been far more exciting had it involved speculating which three letter agency compromised the laptop the most, or the finer points of writing an article while being physically hunted by a snatch and grab team.
>The article would've been far more exciting had it involved speculating which three letter agency compromised his laptop the most, rather than simply mulling over a bunch of "what if" scenarios.
If you would find this more exciting, there's no shortage of fiction already available on similar subjects. This article was about detailing the current risks, and the author's attempt to catch the attack in action. Not as exciting, but important information for people that may be at risk of a similar attack.
I was trying not to be overly rude, but your post read like "this journalism would be better if it had more wild speculation rather than reporting proven facts."
If your security model relies exclusively on the integrity of information processing systems, it's going to fail. Failure is only a function of how much interest there is in, and thus investment in, breaking that security. Therefore any decent security model will rely heavily on non-technological methodologies for preserving security compartmentation and protection of vital assets.
I believe it might be possible to replace the HD firmware in a way that effectively hides code, even from the HD firmware itself, that can only be accessed via a backdoor. For all we know, it comes that way from the factory. Just thinking, while we're being all paranoid here :)
I would think a fully encrypted OS partition would be harder to sneak a backdoor into? Infecting everything before the OS boots is outside my scope of knowledge, but if your partition is unencrypted, it's definitely much, much easier to hijack your OS with physical access.
Before your "real" encrypted partition boots, you must decrypt it, and the system that decrypts it isn't encrypted itself, because you must run it somehow.
On most Linux setups, that system is the initramfs--if you've ever installed Arch or similar, this is what the `mkinitcpio` step generates--and if you peek in your boot partition, it'll probably be named something like `initramfs-linux.img`.
The initramfs is a (often gzip compressed) ramdisk image for a full-blown tiny Linux system, complete with its own set of coreutils (if you want to see what it contains, run `lsinitcpio -x` on it). It's what handles your boot process, like setting up your keymaps, mounting disks, and of course, decrypting encrypted partitions.
By unpacking, modifying, and repacking the initramfs, it's possible--even trivial--to run whatever code you want as root, or intercept the user's encryption password when they type it in to do the type of conventional unencrypted backdooring you have in mind.
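A sketch of what that looks like on an Arch-style system (the path and the gzip assumption are illustrative; some distros use other compressors):

    mkdir /tmp/rd && cd /tmp/rd
    zcat /boot/initramfs-linux.img | cpio -idm     # unpack the ramdisk
    # ... edit the init scripts here, e.g. wrap the cryptsetup call
    #     so it logs the typed passphrase somewhere ...
    find . | cpio -o -H newc | gzip > /boot/initramfs-linux.img

Which is exactly why an unencrypted /boot is the weak point.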
But after doing a standard LUKS install, you can move /boot to an SD card. You can also back up the LUKS header to the SD card and wipe it from the system.
Now the machine cannot be booted without the SD card and without restoring the LUKS header first. And even if an adversary creates a new /boot on the machine, you can check for that and nuke it before booting from the SD card.
If you're detained, you can just chew up the SD card and swallow it. Maybe a little hard on the teeth, but hey.
But of course, that SD card must never leave your body. Except that you probably want to hide copies somewhere, in case you lose it or whatever.
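The header-detach part is a couple of cryptsetup invocations. A sketch: /dev/sda2 as the LUKS partition and /mnt/sd as the mounted SD card are illustrative, and the 16 MiB figure assumes LUKS2 defaults.

    cryptsetup luksHeaderBackup /dev/sda2 --header-backup-file /mnt/sd/header.img
    # Overwrite the on-disk header; the disk alone is now undecryptable noise.
    dd if=/dev/urandom of=/dev/sda2 bs=1M count=16
    # Before booting: put the header back from the SD card.
    cryptsetup luksHeaderRestore /dev/sda2 --header-backup-file /mnt/sd/header.img

(cryptsetup can also use a detached header directly via --header, which avoids rewriting the disk every time.)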
So put malware in the BIOS itself, or one of the other chips or ROMs available.
I think I remember reading a story recently about Thunderbolt or maybe USB being connected to an Option ROM over PCIe (must have been Thunderbolt, I guess) that allowed an attacker to simply plug in a device and permanently, irrevocably pwn the system -- right down to patching the flaw that allowed flashing the ROM over the PCIe connection in the first place. I think the malware also cleared a bit that permitted any further writing, so even attaching a physical chip-flashing device to the ROM wouldn't remove it. The machine was effectively permanently compromised and could only be thrown away.
Yes, it was Thunderbolt.[0] FireWire had the same issue. As does PCIe. But maybe USB 2.0 can be secured. If so, just fill the other ports with epoxy, and use metal-flake nail polish to tamper-protect the seams. If USB isn't securable, give up, I guess.
My Linux laptop is fully encrypted and I've signed and enrolled my own Secure Boot keys. I'm just following best practices and now wondering where this leaves me vulnerable. I think it should prevent tampering with the bootloader, but I'm not sure about HDD/NIC firmware etc.
Get some physical tamper detection on opening the case. Something like the old spy trick of leaving a hair on the door handle.
There are some nice tips here about spraying glitter on a seal and nail-polishing it, then taking a photo. That way, anyone who breaks the seal has to reproduce the same glitter pattern.
Because that only works for a small subset of attackers. If it's a state actor that's hacked your computer they're unlikely to be interested in any reasonable sum of money.
I think you are advocating something like putting a diamond ring on view behind one of your house's windows.
If the ring gets stolen by someone breaking in via the window, then you know it has gone, but you do not know whether the thief, say, changed the locks in some way. Now they can come and go with impunity.