The M1 boot process is very different from that of Intel Macs. In fact, they do not support external boot disks at all, by design. This is because the built-in firmware is extremely minimal and does not contain drivers for anything but the internal SSD. It doesn't even have a keyboard driver, or any kind of GUI beyond showing the Apple logo, the "Entering startup options" text, and some error screens. All of this is by design, for security: UEFI is an enormous hairball of code, especially on Intel Macs, and almost impossible to secure properly for that reason.
When you boot into the "Startup Options" menu, you are booting into a special macOS partition on the internal SSD. M1 Macs use macOS as the moral equivalent of the UEFI setup menu. That boot picker that looks like the UEFI boot picker on Intel Macs? Yeah, that's a full-screen macOS app made to look similar.
So how does external boot work?
The "blessing" process done by the Startup Options screen involves copying the entire macOS Preboot partition - iBoot2 OS loader, Darwin kernel, auxiliary CPU/device firmwares, device tree, and some additional stuff - to the internal SSD. It then creates a local, signed boot policy that allows system firmware to boot this macOS install.
You aren't booting from an external disk. You are booting from the internal SSD, with the root filesystem on the external disk.
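If you want to poke at that boot policy from a running system, a minimal sketch (hedged: `bputil` is the boot-policy utility Apple ships on Apple Silicon Macs, and I believe `-d` displays the current policies, but check `bputil -h` first):

    sudo bputil -d    # display the signed boot policies, incl. the LocalPolicy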
Additionally, the integration of the macOS user credentials with the SEP means that you can't "just" take an install made on one machine and use it on a different one. It involves importing user credentials into the local machine. This process isn't implemented properly yet.
As you might expect, this entire mechanism introduces a ton of corner cases around updates, boot selection, etc., and it is still very buggy and broken. The M1 launch was, when you look at details such as this, very obviously rushed.
I'm actually eagerly waiting for Apple to fix this in future system firmware releases, because my plan for installing Asahi Linux for end users with a minimal amount of fuss is to abuse the mechanism to adopt foreign macOS installations. This elides the current need to have a completely wasted, 60+GB macOS install as a dummy to actually launch Linux (we could clean it up and resize it to get the space back, but it still makes for a very annoying install process that I'm hoping to avoid).
On the plus side, this dual-booting mechanism is very cleverly designed to separately secure different OSes, so you do not need to downgrade security on your machine to install an unsigned OS like Linux. It is a separate OS with a separate security policy. You can keep your macOS install fully secure, and capable of running iOS apps and other actions that require secureboot, and install Linux (or another macOS with a custom kernel, kexts, etc) in parallel, completely unintrusively. So kudos to Apple for designing this whole thing, and for opening it up for us to use :)
> You aren't booting from an external disk. You are booting from the internal SSD, with the root filesystem on the external disk.
So this means that the life of all M1 Macs is locked to the life of the internal storage, as the (inevitable) drive failure will render the whole computer unusable?
Yes. This is also the case for just about every other computer out there: chances are most PCs end up writing UEFI variables on every boot for something or other, and those are stored in flash memory too, which has finite endurance. That should be NOR flash, which should have higher endurance than the NAND in SSDs; but we also don't have any numbers on Apple's SSD endurance, so it would be premature to speculate that this is going to be a real problem in non-pathological use cases, and that the machine won't in practice die of other causes first.
Strictly speaking, the things can boot off of DFU (USB device mode) too, but to make that useful for regular boot you need to ask Apple, as currently you cannot boot a normal OS like that as far as I know, only their signed restore bundles (which is how you fix an M1 Mac if you wipe the SSD).
While technically speaking yes, eventually the firmware flash will fail, I'm not sure bad UEFI variables will actually prevent the system from booting, since the rest of firmware should be fine. I've gotten systems to boot with totally borked UEFI flash space.
I'm being nitpicky, but I think we're reaching the point where these machines could be expected to last long enough for SSD wear-down to become a real issue, even with current flash tech. While this has theoretically been a problem for the past 5 years now, it's a bit disappointing to learn that you now can't use an external drive in the event of the (irreplaceable) internal drive failing.
That depends on the implementation. For example, I've seen many PCs exhibit a behavior where after a major hardware change, usually CPU or RAM, the system boots, shuts down, then boots again automatically. Presumably after discovering the change it wrote some variables describing it, then rebooted. Without the ability to persist that information, I would expect it to loop forever.
A counterpoint would be around 2012-14 (can’t remember exactly), when secure boot was hitting the mainstream, the Ubuntu installer (and systemd) didn’t know that UEFI anything was stored on the internal disk and would wipe it out, causing many machines to hard brick (which then made Lenovo say “uh, we don’t support Linux, tough”, which wasn’t cool).
Raw EDK2 doesn't really need variables at all (in fact if you use it as a coreboot payload, persistence is not implemented whatsoever, at least as of... some time ago).
Various crap bolted on by AMI and the OEMs… well, YMMV – remember those laptops where someone did rm -rf with efivarfs mounted on Linux and bricked them, because the firmware just refused to boot without the variables :D
I can believe that sadly :/ UEFI implementations range from pretty good to absolute crap. My newest Dell has at least been mostly decent, but an older Dell has so many issues with the firmware, it's unbelievable.
Not that it (worn out flash) is gonna be a problem in practice, anyway. Neither will the M1's inability to boot without the internal drive. Like Marcan said, it's not really likely to die in realistic use within a pretty long lifespan, I just find these sorts of shortcomings/regressions ... disappointing. I get the various reasons why, it's just annoying.
The 2015 MBPs were the very last with removable storage[0]. The flash controller and NAND chips are soldered directly to the board on every newer MBP and are essentially non-replaceable, short of desoldering the raw NAND flash I suppose.
[0] The low tier 2016 and 2017 touchbar-less MBPs do, quixotically, have removable storage, but using a proprietary format vs the NVMe sticks in previous models. All touchbar MBPs (and M1 Macs) have soldered flash chips, and the TB-less ones were totally discontinued in 2018 IIRC.
Hey, I just wanted to say, I really enjoy your livestreams! Although it's way over my head for most of the time. Beyond the impressive depth of knowledge and insights, it's a very chill vibe and calm talking, nice clickety-clack ASMR-ish soundscape.
If you'd spent some time lewding your *nix (I mean window managers and color profiles and such, not half-naked anime girls), it would be perfect relaxing, nerdy ambient art, all for the format itself. (But I would settle for just the Linux on M1 thing too still ;D)
This may be fine for the current M1 Macs, since they are most likely not used by professionals or people who will do potentially installation-damaging things, so a trip to the Apple Store will suffice. But this better work once the pro machines come out. It's already hard to recover data from the newer Macs when your system installation has issues; not being able to boot from another drive is going to suck.
You can simply boot to the internet-based recovery image and reinstall or access your data from there, so there is no concern about trashing your installation.
Furthermore I don't see any reason to think that OP's issues are to do with the hardware or that we will have to wait for the next hardware model to have these issues resolved. It seems like a simple OS-level software bug to me.
Oh I did trash my installation. I somehow didn't have permissions to reinstall from internet recovery. I had to plug in another Mac and use Configurator to restore it.
Try erasing all the existing volumes and create a new one before starting the installation next time. That sounds like the message you would get if you are trying to install onto the recovery partition.
EDIT: According to other posts there are in fact some volumes on M1 Macs where if deleted, you will need to do a DFU mode recovery. So perhaps don't actually delete all existing volumes on an M1
Yes, admittedly there do seem to be some recovery concerns on M1 that don't exist on previous platforms.
However I still don't think DFU mode was necessary in your case. If it were necessary, then you would not be able to boot the system at all with the normal method.
Trashing just your OS partition should not require DFU mode. Although yes, it seems like trashing the new boot data partitions might.
M1 Macs don't have Internet Recovery. You need to use another Mac with Apple Configurator 2 to restore the firmware using DFU mode, which unlike with T2 Intel Macs, includes the whole OS.
Parent doesn't mean machines with "Pro" in their branding name, they mean the models actually used by technical/CPU-demanding pros in video/graphic design/audio/3D/etc. (non-technical/CPU-demanding professionals, e.g. an office worker just needing web and Excel, can use a MacBook Air or whatever too).
And those aren't out yet; those would be the 16" MBP, the Mac Pro, the iMac and iMac Pro, and so on.
Those kinds of professionals won't be comparing it to other MacBooks, but to other computers of the same form factor. The M1 Macs still aren't the leader in multicore performance or multicore performance per watt.
If you're a programmer that mostly relies on multicore performance, the current M1 Macs aren't the gold standard. Unless of course you are drawn to the OS or something else.
Which computer in the same form factor as the Air is as capable or better? All of them are throttling messes. AMD is better than Intel but they still throttle.
The high end Ryzen 5000 series processors even when throttled to their lowest continuous speed still outperform the M1 Macs in multicore. Having twice the cores makes a huge difference.
>Those kinds of professionals won't be comparing it to other MacBooks, but to other computers of the same form factor.
Not really, since most of those kinds of professionals concerned already used Intel Macbooks. It's not like they're gonna win the "build some tower yourself crowd".
The new AMD Ryzen 5000 series laptops are faster multicore than the M1 MacBooks. I'm not comparing towers, a powerful desktop computer would absolutely murder an M1 MacBook in multicore, it wouldn't even be close.
That's not really the same form factor. It's at least 20% heavier and much thicker and larger. The same form factor would be something like a Dell XPS 13 or Surface Pro. But those are not offered with Ryzen CPUs.
It is less than 10% heavier (1.4kg vs 1.29kg) and is 0.9mm thicker. It is about 22mm wider and 6mm deeper, with a screen 18mm larger in diagonal.
Unless you're really being pedantic the two are of the same form factor. Maybe you are comparing the Intel ThinkBook 14? That one is significantly heavier and a bit thicker.
If an increase in size of around 5% is too much, there are smaller laptops with the same processor family.
But why not call out the specific niche then? Pro does not equal high demands. And audio has very different demands than 3D work, for example. In fact, for audio work the M1 is perfect hardware-wise.
The M1 is fine for a lot of workloads, but the current M1 Macs are not suitable for heavy work. Regarding niches: video work, audio work, ML, compiling anything larger than simple projects, etc. The current M1 Macs start to break down on a lot of medium-sized workloads due to a lack of RAM. So in my view, pro machine = >32GB RAM, better GPU performance, better multicore performance.
For Apple, the core of "pro" is audiovisual professionals.
I mean, really, it just means "the expensive one", as in AirPods Pro, but sticking to Macs for a second: if your job is video editing, or publishing, the bigger and nicer screen is going to trump a faster processor and better battery life. The (much) better speakers are also a big plus.
Smart users in that category are holding off new purchases, if they possibly can, until the M2 (presumably) 16" MBPs drop. Especially if the rumors hold and it comes with an onboard SD card reader.
I don't see it mentioned in the article, but it could be a driver issue for the enclosures of these disks. Had that happen when I was using shitty cables for an SSD on my Raspberry Pi.
Otherwise it's entirely possible it's for security reasons and somehow related to the secure enclave stuff.
But I might be way out of my depth here.
What is the use case for using an external boot disk with a mac?
I once worked with a team that worked three 8 hour shifts and flexible seating, so there were about 3 times as many workers as there were desks and each team member might be sitting at several possible computers. Rather than worry about syncing personal files and profiles across a network, each team member had their own external drive that they used to boot the Mac mini wherever they were sitting. There may have been better solutions for that particular team, but this certainly worked and I’d hate to throw away the option for doing something like that in the future.
And I’ve booted Linux from an external drive a zillion times on a PC. It’s fairly handy to have the option.
Performance kinda sucks unless you pay a lot of attention to buying the right model of drive. Most drives are simply designed for storing a few TB of photos and videos. As soon as you are doing low-latency paging to the drive and millions of flush() operations, things bog down pretty badly, even on a modern SSD.
Simple things like USB not having any decent way to do shared memory with the host are a big part of it...
I don't even think it should have to be justified. Booting from external media is one of the most basic computer maintenance tools we have. It's not like the computing industry has been giving two shits about user freedom anyway. The cynic in me wants to say that the insecurity of modern computing was intentional, so as to eventually require such user-hostile lockdowns. However, the rationalist in me understands that it most likely just ended up happening this way by chance. Either way, it's unfortunate that everything is being locked down more and more in the name of security. At the very least, such walled gardens should be opt out.
One thing that comes to mind is that as a developer you don't have to pollute your main partition with a beta if you can install it on an external drive.
APFS has containers that let you install betas on an additional volume on the same partition, so the main volume is not polluted (and the additional volume(s) can be removed at will). cf. https://support.apple.com/en-us/HT208891
I'm not saying that it's "normal" not to be able to boot from external devices, I'm just answering your point
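Adding the extra volume is a one-liner if you prefer the terminal (the container identifier is illustrative; check `diskutil list` first):

    diskutil apfs addVolume disk1 APFS BetaVolume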
This is quite risky - previous beta updates (such as Big Sur) did break installing updates on older OS releases in the same APFS container, and some beta updates have in the past done incompatible APFS updates that broke the ability to boot old OSes.
The fact the errors occur before the "reboot" part of the installation tells me it isn't a driver issue. The "unable to set startup disk, SDErrorDomain 104" suggests the error is a check in firmware preventing this.
It's probably simply unimplemented, and rather than try anyway there is a check somewhere in the firmware that the drive being marked bootable meets some criteria. Clearly the internal drive meets that, and some external drives do. Running a tracer on the commands sent to the drive would likely identify it.
- Booting to recover your data when the system has trashed itself, typically due to a failed software update, to the point that it won't boot off the internal drive but the drive is still working. This sort of thing has been seen with older MBs.
- Continuing to be able to use your laptop when the internal drive has failed. Very helpful if you can't afford a replacement, or if you need to carry on without waiting for a replacement and you have a recent backup.
- Booting older versions of MacOS (that don't know about APFS, or if you have reason not to trust APFS partitioning) for testing, compiling for an older target, running software that doesn't run on the current version of MacOS, or other reasons.
(But old MacOS won't work with the M1 so that doesn't matter at the moment.)
I don't use an external boot disk but I certainly understand why people do. And yeah this needs to be fixed.
But is anyone else shocked at how well the transition to ARM has gone for Apple? Because I am. I figured this would be something I would avoid as long as possible but honestly I'd probably be happy with this year's MBP M1 refresh.
> But is anyone else shocked at how well the transition to ARM has gone for Apple?
I totally get why some folks would want to wait for the next-gen of these initial machines, but I'm not surprised at how well it's gone.
I chose to buy the initial M1 MacBook Air model right off the bat. The reason for this is that I went through the last two Mac CPU arch transitions (68k -> PPC, PPC -> x86), and the way Apple was able to handle those was truly impressive to me. Plus I really needed more built-in storage ASAP :)
Does anyone have any info on why the actual macOS installation isn't bootable by both architectures? I seem to have lots of different universal and x86_64 binaries and kexts on my M1 machine.
Also the kernel (that I thought was the actual kernel in use) in /System/Library/Kernels reports "Mach-O 64-bit executable x86_64" so I'm kinda confused about all this.
On your M1 machine, the used kernel is /System/Library/Kernels/kernel.release.t8101.
(for T810x processors, A14 is T8101 and M1 is T8103, both share the same kernel)
The installation should be bootable by both, but I'm not sure the flow to bless an install done on one machine so it's bootable on another M1 is fully functional yet. (On M1 machines, you need to bless external volumes before booting from them.)
Booting an install done on an M1 on an x86 Mac should work already though.
That's not the kernel. I mean, it's the kernel, but not the kernel that the machine boots from.
The built-in bootloader actually boots iBoot2 from /System/Volumes/Preboot/(UUID)/boot/(long hash)/usr/standalone/firmware/iBoot.img4, and that then loads the Darwin kernel from /System/Volumes/Preboot/(UUID)/boot/(long hash)/System/Library/Caches/com.apple.kernelcaches/kernelcache.
However, the system firmware can only boot this from the internal SSD, not from any external storage. When you choose an external disk to boot from (the "bless" thing), those files get copied to the internal SSD, and it boots from that instead.
But note that Intel Macs use a similar but completely different system, also redone in Big Sur, with a set of "kext collection" files in a different location and format. Because that makes so much sense.
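To see this on a live system, something like the following works (the UUID and hash differ per install, hence the globs):

    ls /System/Volumes/Preboot/*/boot/*/usr/standalone/firmware/
    ls /System/Volumes/Preboot/*/boot/*/System/Library/Caches/com.apple.kernelcaches/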
So you literally cannot boot an M1 Mac with a nonfunctional internal drive, even if you’re successfully ’booting’ from an external drive before it fails? The SSD is integral to its functioning?
Good to know- glad I got a terabyte drive then, since it will less likely suffer a wear failure in its usable lifetime, but sad that a wear part guarantees a finite lifetime for a machine with no moving parts.
Slightly off-topic, but I remember when system updates on Macs were handled much better than on, for example, Windows. I think that's no longer the case: you can't download standalone updates (the .pkg) anymore, and installing them feels like you're installing the entire OS (even a minor release like 11.2.1 needs a several-gigabyte download and takes a long time to install).
I believe you're right that you can't download "delta" updates, but standalone OS installers/updaters can be downloaded.
Go to https://apps.apple.com/app/macos-big-sur/id1526878132 (for Big Sur), click "Get", use Software Update to monitor the download progress, then "rescue" the installer ("Install macOS Big Sur.app") from Applications folder.
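If you prefer the terminal, `softwareupdate` can fetch the full installer directly (the `--fetch-full-installer` flag has existed since Catalina; the version string is just an example):

    softwareupdate --fetch-full-installer --full-installer-version 11.2.1
    # leaves "Install macOS Big Sur.app" in /Applications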
I think we're just a few steps away from having our own devices in such a deeply locked state that boot disks will eventually stop working on any sufficiently modern system.
Apple expended a significant amount of resources designing this bespoke boot chain so that it isn't locked down and does in fact let users run completely unsigned operating systems (as long as they have physical access and give consent with their local user credentials). In fact, the design goes further, allowing individual installed OSes to have different security levels, thus allowing users to get the benefits of both a completely opened up OS install and a fully Apple-vetted, secureboot install.
They aren't going to backtrack on all that. That'd be silly.
So please stop the FUD about Apple locking their devices down; this is a Mac, not an iPhone :-)
I can assure you here that it works on some drives but not others because the boot chain on Apple Silicon Macs is written from scratch, not using UEFI.
Bugs are being worked out slowly, and there's no reason to assume bad intentions here. (I have an Apple M1 MacBook Air, and it boots from a Samsung T7 external SSD just fine)
IMO this line of thinking implies a lack of contextual understanding about Apple's culture, strategy, and history. Of course we can never predict 100% what big companies will do, but we can be pretty sure Apple isn't interested in doing that.
SBSA-compliant systems boot with UEFI+ACPI. Server customers care about standardization, so boot was made "boring" and basically identical to x86.
It's just the embedded junk (which includes Apple) that doesn't care. Apple has vertical integration, they only care about the full experience including their own OS. Embedded SoC vendors only care about having some "BSP" fork of Linux because that is enough to make a crappy device with it. There's just, unfortunately, zero motivation for these vendors to embrace standards.
OT: one of the nice little things with MacBooks is that you have a disk cloning thing built in. It takes literally one step and half an hour to clone my internal 512GB SSD to an external Samsung T5 and have it booted up. No problems whatsoever, and I successfully copied the same system back and forth about 10 times last year.
One thing I do recommend though: remove AirPods from the Apple account before cloning and pair them again once the same system is running on new hardware. Sometimes this causes problems.
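For reference, the clone can also be driven from the terminal with Apple Software Restore, which I believe is what Disk Utility's Restore uses under the hood. A rough sketch (volume names are examples, and asr's handling of bootable APFS clones has been in flux on Big Sur):

    diskutil list                                            # find the target volume
    sudo asr restore --source / --target /Volumes/T5 --erase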
External SSD is my default setup. I encourage it in the office as well. Even my gaming machine at home is set up this way. When popping over to a mate's I often just pull it out and stick it into their machine.
It's very useful to be able to separate data from the hardware. If something is wrong with a machine I can just plug straight into another one. An SSD is much easier to carry than a laptop.
The other problem with internal drives is that they're stuck inside, e.g. with MacBooks being sealed these days. External SSDs can offer great flexibility and a trade-off worth considering. I can imagine people having security concerns, etc.
Disappointing to hear this about the new M1s.
I recommend that people interested in dipping their toes into Linux gaming try this approach.
> When popping over to a mate's I often just pull it out and stick it into their machine.
This was one of the touted technical uses for the first (FireWire) iPod. Apple had a notion that people could carry an iPod with them, and then boot any Mac from it and start working right where they left off.[1] This is why Xserves had a FireWire port on the front.
[1] As a point of interest, this used to also exist in the radio industry. Certain brands of control boards had "personality cards" that could be tuned[2] to a particular D.J. Then when the D.J. had to work from another studio, or remotely, he could pop his personality card into the control board and it would be preset to him.
[2] And by "tuned," that meant tuned for his voice[3], to make it sound the way he, or his program director, wanted it to sound.
[3] Even more interesting: Though these were circuit boards, the tuning was done mechanically, and by hand. They were a series of potentiometers covering different frequency ranges.
Oh the glorious offline-first software and hardware. I miss it. Today this would've been handled through "the cloud", requiring an account, a consent to tracking and analytics, a monthly subscription, and a reliable high-speed internet connection.
"Home on iPod" wasn't a full bootable OS X, just the user's Home directory. The feature was also canceled before release, likely due to performance or durability limitations of the iPod's micro hard drive.
What about performance concerns? This sounds unbearably slow, especially for someone accustomed to an NVMe SSD. Not sure why you would do that to yourself when cloud backups are an option.
I have been using bootable Thunderbolt 3 SSDs (on Linux) for several years.
As Thunderbolt 3 is equivalent to 4 PCIe 3.0 lanes, exactly like the internal M.2 type-M SSD slots, there is no performance difference.
Only the latest laptops with Tiger Lake, and the laptops with Ryzen 5xxx to be introduced soon, have PCIe 4.0 M.2 slots, so they will be able to have faster internal SSDs.
I expect that the interconnect standards for external devices will also continue to be improved, restoring parity with the internal devices.
I tried this for a bit and noticed a huge performance drop. Also noticed system freezing.
I have a 2TB NVMe drive in a USB-C enclosure, and while fast as an external, it is not fast enough for booting and running the dozen or so apps I keep open on my MBP.
Keep in mind that USB-C != Thunderbolt 3. Yes, the latter also physically connects to a USB-C form port, but it's totally different. It's quite confusing and sad.
USB-C is super slow for this. You need to use Thunderbolt NVMe enclosures, which can run the NVMe at PCIe 3.0 x4 speed. I have 5 and RAID them together, which handily beats the internal drive for speed by about 2x.
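The striping itself is built into diskutil; roughly like this (device identifiers are examples, and AppleRAID sets come with their own boot caveats):

    diskutil appleRAID create stripe FastArray JHFS+ disk4 disk5 disk6 disk7 disk8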
I haven't noticed any performance degradation, which is counterintuitive because I really expected there to be one. The Samsung T5-T7s have been great in my experience. As for the cloud... well, that has its own challenges too. Currently I'm trying to deGoogle myself and not doing a great job at it.
To address any conventional SSD vs NVMe SSD performance gap, maybe consider an external NVMe SSD!
PCIe costs you less than a microsecond of latency. A good SSD has 60 microseconds of latency. You're not going to notice any difference from moving the controller.
Not so long ago SSDs didn't exist and people were using their PCs just fine. Today's high-performance portable SSDs are miles ahead of the 10,000rpm "fast" internal HDDs from back in the day.
Software has bloated so much that users having an SSD is an assumption. In fact, a 2.5" SATA SSD is almost unbearable on a modern machine for gaming, etc., since M.2/NVMe showed up.
Kinda yeah, but just to set expectations -- if you want to do an external hard drive as your main drive, just get a thunderbolt drive. You might be able to massage a usable experience out of USB, but thunderbolt is more or less equivalent to an internal drive.
The technical details may differ between my data being stored on an SSD I'm accessing locally that I booted from, vs my data being stored in the cloud and accessed via a web browser, but at the end of the day I have access to my files. The technical differences have many practical ramifications (e.g. disk speed), but they are sufficiently equivalent in practice for many different use cases. We see this with Chromebooks, which will never be a full replacement for every Windows machine. There is a large reason they've gained market share, however.
Not that that means anything for performance. NVMe just means that it uses PCIe.
Since the external port is USB, the internal use of NVMe is actually a downside. The drive actually has two separate circuit boards inside. One of them takes up space and power just to convert NVMe to USB, and wouldn't exist in a better design.
Computers run mostly from RAM and network. Unless you are doing heavy disk work, nvme isn't a big deal. You can use internal disk for temporary projects (compiling, video editing) and then copy the snapshot to external at end of day.
Tell that to my work iMac with an HDD (not my choice) and an OS new enough to require APFS. The only programs I typically have open are Chrome and a music player, and it freezes randomly, and Chrome is constantly waiting on cache.
It's not that bad … for what I'm doing, and I can do something else when the computer stalls. I've thought about an external TBT SSD, but doesn't seem worth it yet.
I dunno man, ssds are pretty basic fare these days. They have been standard for about a decade now. If an employer refuses to give their developers ssds to work off, that's a massive sign of disrespect IMO.
I agree with that! But I'm not a developer. I'm doing hardware repairs, and this machine is to read ticket notes and enter repair info and send emails (and play my music :-).
I've never done this, so my concern might be a total non-issue.
But how do you prevent the external SSD accidentally becoming unplugged when you're doing this with a laptop? And what would actually happen if that did occur?
It would be annoying, so you'd better avoid that.
Currently this is the weak point of using external bootable SSDs. Because of that, I prefer the SSDs where the Thunderbolt/USB cable is captive at the SSD end, as then there are fewer connections that can be unplugged accidentally.
What I would like is a CFexpress memory card slot in the laptop.
The CFexpress memory cards have a PCIe electrical interface, but they are mechanically identical with the older Sony XQD cards, i.e. they are solid, rugged, not flimsy like the SDXC memory cards, and they allow a very large number of insertion/extraction cycles.
If laptops had such card slots and SSDs were available at a reasonable price in this format, then there would be no reason left to use non-removable SSDs. The current CFexpress specification is based on PCIe 3.0, but future versions can be updated to PCIe 4.0, like the internal M.2 SSD slots of the latest laptops.
Not quite the same, but there's a product being developed to adapt M.2 2242 NVMe drives to ExpressCard, the other hotpluggable PCIe standard. Most ExpressCard implementations are only a single lane of PCIe v2, though some of the newer "P" series ThinkPads support v3.
Being a single lane limits it compared to CFexpress, but it's still a bump over AHCI drives.
ThinkPads up to 2013 also had a tool-less removable drive bay that allowed hot-swapping 2.5" SATA drives in seconds while keeping them safe inside. This was abandoned with the obsolescence of optical drives, though.
> The CFexpress memory cards have a PCIe electrical interface, but they are mechanically identical with the older Sony XQD cards, i.e. they are solid, rugged, not flimsy like the SDXC memory cards
I agree, though with SD Express becoming standard soon there is a much more realistic possibility of future laptops having PCIe microSD slots, which might be good enough if you can find a durable model.
We're well past having card slots on laptops in the Apple realm. Rumor mill suggested possibly getting ports back on a refreshed MacBookPro, but I am not holding my breath.
For those not using Apple laptops this might be a thing, but I still doubt it's something manufacturers will see a lot of demand for.
My favorite has been the USB memory sticks that are the same physical size as a Bluetooth dongle. Since they are so low-profile, the chances of them coming unplugged are pretty damn slim. The only drawback was that they are pretty small in capacity for trying to run an OS.
> If laptops had such card slots and SSDs were available at a reasonable price in this format, then there would be no reason left to use non-removable SSDs.
A gentle reminder that you live in a world where SSDs on some laptops are not removable at all, as are batteries. And I’m leaving out minutiae like RAM.
I’m just not seeing it as possible, how would you plan the obsolescence of such devices?
If it's easier to take your data drive from one laptop to the next, it's no longer an end-user problem when the laptop needs to be thrown away at the end of each season.
Yes, it's a shame you can't charge a 10x markup on the storage the end user takes with them, but you can still charge a 10x markup on the built in storage that's non-removable and whose failure still blocks booting (necessitating replacement of the whole thing, naturally)
Wow, I didn't even know these existed, thanks for mentioning them!
Having these CFexpress cards as an option instead of or next to an SD card reader on laptops would be great, for compact backups alone. But looking up details and pricing I get a huge DAT vibe: excellent product, but a limited user base makes for expensive media and even less adoption...
I'm using a Samsung T5 via USB-C on my MacBook and I'd confidently say it's impossible to accidentally unplug. Not sure if they made the plug like that on purpose or not, but it's definitely harder to remove than most other USB-C plugs, even after more than a year of use.
What I've also seen some companies do is simply duct-tape the SSD to the laptop lid so people can walk around freely to meeting rooms etc.
Are you using it as an external boot drive or just as additional storage? I've seen people use it for storage, but the parent comment is the first I've heard of potentially using this as its own OS drive.
Perhaps useful for dual-booting between personal and work installations, considering MacBooks are sealed with one drive?
Yeah exactly. I had to get my work MacBook repaired a few times last year and since I’m working from home I just mirrored the internal ssd to the t5 and booted from it on my private Mac.
This way I kept all the company settings and programs working in a way that a Time Machine restore couldn't do.
Works just the way you would expect from an internal ssd and switching takes less than two minutes if necessary
My company's MDM software would freak out if I did that. So much of it is tied to the specific serial number of the machine, as well as various baked-in management of push notifications for profile updates and the like being tied to the device itself through Apple.
Yeah, I could only do that because it's essentially a BYOD setup. Both MacBooks are mine and there is no company MDM involved. Just some software installed & configured by others that would be quite annoying to restore via Time Machine.
"It's very useful to be able to seperate data from the hardware."
I have been doing this for decades, but I do it with USB sticks, not SSDs, with NetBSD, not Windows. After boot I can pull out the stick, freeing up the USB port for something else.
I would be interested if I could do the same with Windows, but it does not seem as straightforward. I know it is possible, but I also know there is a good chance I would never get it to work.
For example, the EasyUEFI link you provided suggests I need to purchase software, the software only runs on Windows, Windows 7 is not truly portable and the authors are not clued in to the idea of separating data from hardware:
"We recommend that you use Hasleo BitLocker Anywhere to encrypt Windows To Go drive to keep your data safe. If you want to upgrade Windows To Go to Windows 10 October 2020 Update (Windows 10 20H2), please go to Windows To Go Upgrader."
If only there was a way I could buy Windows on a stick instead of having to buy a computer with it pre-installed.
A "miniroot" is stored in the kernel, indeed the entire OS is loaded into memory. Other BSDs and Linux have used, and may still use, similar approaches in some cases. (For example, BSD install media.) This is old hat but reliable. What is amazing is that you still see people in 2021, when most computers have ample amounts of memory, complaning about "disk" issues, e.g., people try to use SD cards as a "hard drive" for RPis.
The Raspberry Pi foundation wants you to use Visual Studio Code, which means you want to free up RAM. A file system on SD that's mounted read only on boot might be a good call, with /home on an external USB.
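A minimal /etc/fstab sketch of that layout (partition identifiers are examples; a truly read-only setup needs a few more tweaks for logs and the like):

    # root on the SD card, mounted read-only
    /dev/mmcblk0p2  /      ext4   defaults,ro,noatime   0  1
    # /home on the external USB drive
    /dev/sda1       /home  ext4   defaults,noatime      0  2
    # scratch space in RAM so the SD card never gets written
    tmpfs           /tmp   tmpfs  defaults,size=128m    0  0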
It can be done with Linux or any OS that has a RAM-based file system. And with machines coming with gigabytes of memory, the installation supported by a RAM FS can be very complete indeed. See Puppy Linux as an example of a distro that can run from RAM.
On x86, that's the reality. I recently went from an Intel CPU & AMD GPU to an AMD CPU & Nvidia GPU and just booted straight into my old Windows system, no problem at all.
But on anything ARM based? The whole ecosystem doesn't give a fuck. Everything is hyperspecific for one platform exactly. At one point, the Linux kernel had thousands of board source files that specified exactly what hardware your cursed ARM board had. Then they invented device tree but frankly the situation has barely changed, don't expect to ever build a generic ARM image for anything.
You are describing specifically "embedded junk" (which includes the M1, sorry Apple) not literally "anything ARM based". SBSA-compliant platforms boot with UEFI+ACPI, using fully generic DSDT descriptions for PCIe (ECAM) and USB (XHCI). Speaking of GPUs, quite common for them to have QEMU in firmware to run the x86-compiled EFI GOP driver straight from the card's ROM and get video output before the OS.
It's cheaper to use custom ARM boards than it is to build hardware that is SBSA-compliant, so consumers can expect that most, if not all, of their ARM devices are not SBSA-compliant.
I actually started doing this on my TVs. They have USB ports. I bought a 16GB USB stick, and I just used youtube-dl to download a bunch of workouts. And then I use the Roku app on the TV to play them. Saves time, and no YouTube ads.
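e.g. something like this, with the stick mounted (the output path and playlist URL are placeholders; the format/output flags are standard youtube-dl options):

    youtube-dl -f 'best[ext=mp4]' \
      -o '/media/usbstick/%(title)s.%(ext)s' \
      'https://www.youtube.com/playlist?list=PLxxxxxxxx'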
I'd only bother to encrypt work machines -- I don't see why it wouldn't work. My gaming setup -- I'd definitely not bother.
Not every machine is a guaranteed boot. It depends on MBR/GPT etc. and whether UEFI is disabled in the BIOS, and other configuration. However, generally yes, they will. I've even had setups where the SSD had macOS/Windows/Linux. Much older motherboards, pre-2012, generally don't like this setup and it can be very slow.
I have enough trouble dual-booting encrypted internal disks on the same machine... any changes to the TPM, or say booting one disk as a virtual machine from within the other disk's OS running natively, breaks all AAD and Windows Hello authentication for me. I have to remove accounts and add them back in from Settings on Windows 10. So I'd be quite surprised if you've figured out a seamless way to boot an encrypted disk on multiple machines...
For example, with bitlocker, won’t you need to enter the recovery key when trying to boot from a new machine? And have to sign out and back in to all relevant OS level accounts? Even then I face authentication issues at times
I really would like this to work seamlessly because moving my internal SSD work disk to an external one would be far safer than lugging it around inside my personal laptop all the time. But the work disk has to be encrypted...
Also, for hardware compatibility's sake, I'd think Linux would be a far superior daily-driver OS for "multi-hardware boot", considering relevant drivers are loaded from the kernel at boot rather than selectively pre-installed at OS creation time for that one device, increasing plug-and-play compatibility.
If you cannot have a fully encrypted setup, then you can consider encrypting at least most of your data.
At work, due to remote work, I cannot have a fully encrypted disk with Windows, as I have to reboot remotely. So I left a small enough partition for Windows, then created another partition for my data that I encrypted with a strong password in BitLocker. Then I symlinked my user directory from C:\Users\UserName to a directory on D: and created an extra account that I use after reboot to unlock the encrypted disk with my data.
This is not ideal, as Windows may still store my data on C:\, but if one disables virtual memory, it is a reasonably secure setup.
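For anyone replicating it, the move-and-link step was roughly this, from an elevated cmd prompt (paths illustrative; copy the data across before removing the original):

    robocopy C:\Users\UserName D:\UserName /E /COPYALL
    rmdir /S /Q C:\Users\UserName
    mklink /D C:\Users\UserName D:\UserName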
Yeah, for unlocking it within a host OS as a data disk, not for booting from it... you can set auto-unlock at boot or a PIN to boot, both of which use the TPM.
But try move a bootable bitlocker encrypted disk to new hardware and you’ll have to enter the recovery key
I would really like to be wrong about this since it would make my life much easier, but this understanding is based on experience using multiple work machines with encrypted boot drives every day :(
Ya, I ran for a long time with a passphrase in a system without a TPM. I recently got a TPM for it so I could have it restart without me being present.
I used the `manage-bde` command rather than PowerShell; roughly like this (the drive letter is illustrative, and the no-TPM password protector needs the "allow BitLocker without a compatible TPM" group policy):
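    manage-bde -status C:
    rem password protector on a system with no TPM:
    manage-bde -protectors -add C: -Password
    rem after fitting the TPM, switch to TPM+PIN plus a recovery password:
    manage-bde -protectors -add C: -TPMAndPIN
    manage-bde -protectors -add C: -RecoveryPassword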
The GUI for BitLocker doesn't provide access to all the functionality that manage-bde provides (IIRC: if a TPM is present, the passphrase options aren't presented in the GUI. And it used to talk about a "PIN" instead of a passphrase/password, but the "PIN" can, with some GPO tweaking, contain letters/spaces/punctuation as well as numbers.)
Oh wow, thank you! Should’ve known to head straight to the docs instead of searching phrases online. Now to spend the next day backing everything up and configuring passphrase on boot...
A couple of years ago I used to run a qemu Linux-host/Windows-guest with PCI pass-through for GPU/SSD/etc.
I could either boot the Windows install directly from the SSD, or I could boot Linux and then run the same install in qemu, without issue.
I think there used to be more problems if you changed the hardware too much from the original install. Like, if you changed your motherboard (which the above would do of course), then you couldn't run the same install. But I don't think that's as much of a problem anymore?
This was of course not an external SSD, so I don't know how that changes things, if at all.
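For the curious, the shape of that invocation (PCI addresses and the OVMF path are illustrative; the GPU and NVMe controller must be bound to vfio-pci before launch):

    qemu-system-x86_64 -enable-kvm -machine q35 -cpu host -smp 8 -m 8G \
      -drive if=pflash,format=raw,readonly=on,file=/usr/share/OVMF/OVMF_CODE.fd \
      -device vfio-pci,host=01:00.0 \
      -device vfio-pci,host=01:00.1 \
      -device vfio-pci,host=03:00.0
    # 01:00.0/.1 = GPU + its audio function; 03:00.0 = the NVMe controller
    # holding the Windows install, so native boot and the VM share one disk.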
Cool! I used to run a similar setup, when I started out I passed through the hardware. But when things were running smoothly I never booted back into windows natively (Wasn't planning on this use-case).
Nice to hear that it is possible. I feel like MS essentially gave everyone the wrong idea about OSes dealing with hardware changes.
Folks doing this need to be careful not to plug their disk into a machine with a different OS.
If the disk is plugged into an already booted system and the OS doesn't recognize the filesystem format, I believe macOS and Windows display a prompt to format the disk, which someone may accidentally click.
I would love to do that; but Windows (yes yes I know:) seems to intentionally block normal installation to external SSDs; and all the workarounds I've tried so far have removed all the convenience factors :|
I would have expected that as well but found in general the process to be pretty seamless.
I should probably mention I just stumbled onto this approach rather than having forged it through clever thinking -- it was originally due to a faulty motherboard burning out internal HDDs consistently. So I HAD to opt for an external drive, and once I did, going back made little sense as it just imposed restrictions I didn't want anymore.
Between Macs with macOS, this usually works fine (except with older hardware that is not supported by the OS on the external drive). Can't say for Windows.
This might be the case for Windows, but with Linux, most drivers are already distributed with the kernel and can be dynamically loaded and unloaded depending on what hardware is discovered while booting or afterwards.
I've been using an external disk as my main macOS drive too, and the only downside is the worry that the cable would snap out, which it did a few times when I moved the laptop around, crashing everything. Other than that, it's a great way to upgrade my MBP with a 1TB disk without needing to buy a new MBP.
It works nicely if you don't use commercial apps that use device identifiers for activation.
For such scenarios, common among media creators, external disks fail to "just" plug in.
Some apps use dongles or allow activating to USB sticks. But still, it's the exception.
One of the major advantages of the M1 hardware is that the storage controller for the SSD is directly integrated into the CPU. You would be losing a big part of that performance advantage by using an external OS volume.
That isn't true. There wouldn't be a performance advantage to that at all (the limiting factors in storage performance have nothing to do with motherboard trace length, which has a negligible impact on performance in general), and the M1 Macs have fairly ordinary SSD performance.
I agree with your reasoning, especially since external SSDs have made it safer to carry them around physically, as opposed to external HDDs.
But I mirror my internal NVMe SSD to an external SSD, as the performance benefits of the former matter to me most of the time, and when needed a bootable external SSD is at my disposal.
On macOS (not M1), Carbon Copy Cloner could create bootable clones of the disks. But I'm out of macOS for good, as the changes in Big Sur were too much for me. No offense to those who like it, but it seemed like unicorn poop to me. Mojave was the last version which preserved the essence of macOS, IMHO.
Now I'm back on Linux; hopefully btrfs (snapshots) + Timeshift should recreate the same workflow.
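The snapshot half is pleasantly simple even without Timeshift (assuming root is a btrfs subvolume; the target path is illustrative):

    # read-only snapshot of the root subvolume, named by date:
    sudo btrfs subvolume snapshot -r / /.snapshots/root-$(date +%F)
    # list existing subvolumes/snapshots:
    sudo btrfs subvolume list /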
Load a Windows 10 ISO and click Windows To Go, wait a couple of minutes. Full Windows on a USB, and it'll automatically rediscover hardware if you move it to a new PC.
Look up WinPE. It's a lightweight "live CD/USB" version of Windows that can be run entirely in RAM. A full "portable Windows" install can be made from it. AFAIK all the third-party tools do is automate the process of copying thousands of system files and also act as a boot manager.