Back at university one lecture included an infographic about how CPU and operating system features like MMU, increasing register width and the like all started at mainframe-scale installations and trickled down to desktop scale systems and later to handheld devices at a surprisingly consistent pace. It was the time w2k was trying to make NT features mainstream and J2ME arrived on phones. I extrapolated a little and made a joke about multi-user concepts arriving on phones and a few years later Android was right on schedule (when that happened, repurposing Linux users as units of app isolation was the headline feature in tech news).
By that measure, virtualization is long overdue, but I really can't claim that I'm not surprised.
You can't claim you're not surprised? So you can claim you are surprised? You're surprised by this. I feel like I'm trying to understand double negation logic in code haha
In English, there is a sentence structure like ‘I ain’t telling nobody’, which means ‘I won’t tell anyone’, but for me it’s difficult to decipher as well. Why isn’t it like ‘I’m telling nobody’ or ‘I’m not telling anyone’? Why does the double negative — ain’t and nobody — still mean a negative?
The same issue is here. I don’t understand whether they were surprised or not. I assume they are not surprised. But the difficulty of the phrasing makes me wonder about the meaning behind it.
I think that's saying they're surprised. The saying in other contexts: "I can't say I'm not impressed" -> I'm impressed. "I can't say you didn't try" -> you tried.
I think you mean "I can't say I'm surprised" -> this is not at all surprising. But that's just one negative.
That's a common enough error that it's become well-known slang. People are used to it and can figure out the intended meaning by intonation and context. Although it's still one of those things that can confuse folks who aren't fluent in English.
"I can't say I'm surprised" or
"I'm not surprised"
would be much clearer here if the intention is to say this is NOT surprising. "I can't say I'm not surprised" is confusing enough that the intention is not clear. Logically it implies surprise.
That's an ironic mistake: it plays with the idea of negation being additive instead of multiplicative, enabled by the subtle redundancy encoded in no/any/some.
Makes me wonder if it might be a linguistic eddy echoing from the clash of Germanic and Latin-based French, where negation contracted with the word is also very common (no idea if n'est and friends had been a thing in French at the time that clash happened)
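To put the additive-versus-multiplicative point above in code terms, here's a throwaway Kotlin sketch (nothing Android-specific, just the two readings of stacked negations):

    // Standard logic ("multiplicative"): every negation flips the value,
    // so an even number of negations cancels out.
    fun multiplicative(p: Boolean, negations: Int): Boolean =
        if (negations % 2 == 0) p else !p

    // Negative concord ("additive"), as in "I ain't telling nobody":
    // one or more negations all collapse into a single negation.
    fun additive(p: Boolean, negations: Int): Boolean =
        if (negations > 0) !p else p

    fun main() {
        // "I can't claim that I'm not surprised" stacks two negations on "surprised":
        println(multiplicative(true, 2))  // true  -> surprised (the literal reading)
        println(additive(true, 2))        // false -> not surprised (the concord reading)
    }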
The multi-user part for Android OS is not an extrapolation, it is inevitable.
Fun fact: the Unix name is a joke at Multics' expense, where the "Multi" stands for multi-user, and everyone knows what soon happened to the single-user indication in the Unix name.
Since Multics was meant to be the modern, next-generation time-sharing OS of its time, it had to support multi-process and multi-user operation. That capability is reflected in the "multiplex" terminology: in the early days of analog and digital communication, multiplexing was the scheme for transmitting and receiving multiple users' information in the time or frequency domain.
If you think about it, much of the complexity of Multics comes from its multi-user requirement: an overly complex access control matrix, a multitude of file types designed around multi-user support, and so on. The Unix name, as metaphor or pun, points at keeping the latter simple by requirement and design. Remember that Unix started as a skunkworks project, and even the original PDP-7 it ran on was donated by another department of AT&T (if I remember correctly, the sound/signal processing department) [1]. Had it been an official project, the multi-user requirement would have been there from the start, since AT&T was arguably the largest technical company at the time and would have wanted multi-user from the get-go.
But after some time and the considerable success of Unix, the naming probably looked childish, since they did introduce multi-user support at a later stage, so the designers toned down the exact meaning of Unix. What is the opposite of multi? Uni.
Your reference doesn't claim that the OS was single-user, only that it was developed for a single person (Thompson). There is no evidence that it was ever a single-user system or that it was named after that.
> Because the new operating system supported only one user (Thompson), he saw it as ...
My previous reference was for the skunkworks part, not for multi-user. Since it was a skunkworks project, there was no requirement for the OS to be multi-user, so Ken was basically free to design the Unix system as simply as he wished. That's why Unix was initially single-user, with a flat file system, etc. [1]. It wasn't only single-user; initially it also supported only a single task or process. This original Unix design is the antithesis of Multics (thus the name Unix), which was designed from the start, per its requirements, to be multi-process and multi-user, hence the inherent complexity. Ken is still alive today; perhaps you should ask him directly about this, and I have no reason to believe otherwise.
Looks like something absolutely overengineered and unnecessary. Why do you need a virtual machine with a separate kernel? Why do you need to protect it from the kernel? I guess it is made mostly for playing DRM content?
A use-case I can imagine is e.g. a password vault, a banking app, or a secure messaging app that you want isolated from everything. Even when running. And where "everything" includes infected apps, an infected host or even physical access.
Not sure if this architecture can handle that, nor if it's the best architecture to solve this problem, though.
Yes, yes, yes... those are all "legitimate" and likely good uses of this technology, but they're most likely just additional/bonus tertiary use cases to the main use case which motivated Google to expend effort on this feature: DRM.
It's much like how web browsers' incognito/private mode is really useful for web developers and certain kinds of troubleshooting, but those are tertiary uses to the primary consumer use case for which it was originally built: browsing porn without leaving history behind.
I'd love to be able to use a Qubes like OS on my phones. There's so much vile garbage I need to run on my phone yet at the same time, I want my phone to have access to my passwords and email. Segregating apps is long overdue.
It is. I'd like to believe that the android team is removed enough from Google's shenanigans that they aren't doing it specifically for them, but there are a lot of corporate app developers (including Google) who want exactly this feature. This means much higher difficulty hacking in multiplayer games (yes haha mobile games, but they're huge in china for example), increased DRM for Netflix et al., and I'm sure the chrome for Android team is salivating at the prospect of running your browser in a trusted VM. Your bank obviously would also enjoy the added security but in reality the current safeguards work well enough for these purposes. This is about protecting apps from adversarial users, not protecting apps from unwittingly infected users.
Run an older/newer version of android in the VM, assuming the host is light enough?
Maybe another OS, if someone does the groundwork on that. Or, fully suspend and move running instances across devices, which I think xen can already do.
It looks like the host kernel is not in full control – there is an EL2-level hypervisor, pKVM [1], that is actually the highest-privilege domain. This is pretty similar to the Xen architecture [1], where the dom0 Linux OS in charge of managing the machine is running as a guest of the hypervisor.
No, KVM is also a type 1 hypervisor but it doesn't attempt (with the exception of pKVM and of hardware protection features like SEV, neither of which is routinely used by cloud workloads) to protect the guest from a malicious host.
KVM is a type 2 hypervisor as the "Dom 0" kernel has full HW access. Other guests are obviously isolated as configured and are like special processes to userspace.
It gets a bit blurry on AArch64 with and without VHE (Virtual Host Extensions): without VHE (< ARMv8.1) the kernel runs in EL1 ("kernel mode") most of the time and escalates to EL2 ("hypervisor mode") only when needed, but with VHE it runs at EL2 all the time. (ref. https://lwn.net/Articles/650524/)
No, "type 2" is defined by Goldberg's thesis as "The VMM runs on an extended host [53,75], under the host operating system", where:
* VMM is treated as synonymous with hypervisor
* "Extended host" is defined as "A pseudo-machine [99], also called an extended machine [53] or a user machine [75], is a composite machine produced through a combination of hardware and software, in which the machine's apparent architecture has been changed slightly to make the machine more convenient to use. Typically these architectural changes have taken the form of removing I/O channels and devices, and adding system calls to perform I/O and and other operations"
In other words, type 1 ("bare machine hypervisor") runs in supervisor mode and type 2 runs in user mode. QEMU running in dynamic binary translation mode is a type 2 VMM.
KVM runs on a bare machine, but it delegates some services to a less privileged component such as QEMU or crosvm or Firecracker. This is not a type 2 hypervisor, it is a type 1 hypervisor that follows security principles such as privilege separation.
KVM is pretty much the only hypervisor that cloud vendors use these days.
So it's true that "most cloud workloads run on type 1 hypervisors" (KVM is one) but not that most cloud vendors/workloads run on microkernel-like hypervisors, with the exception of Azure.
Then can you explain how cloud workloads are the revenge of the microkernel, since there is exactly 1 major cloud provider that uses a microkernel-like hypervisor?
By not running monolithic kernels on bare metal, but rather virtualized, or even better with nested virtualization, thus throwing out the door all the supposed performance advantages regarding context switching from the usual monolithic vs microkernel flamewar discussions.
Additionally, to take it to the next level, they run an endless amount of container workloads.
I don't know about Android, but AMD CPUs support encrypting regions of physical memory with different keys, accessible only to one particular running VM and not to the host:
Bare metal runs a tiny L0 hypervisor making use of hardware support for nested virtualization. In turn, the L0 can run an L1 hypervisor, e.g. KVM or "host" OS, or minimal L1 VMs that are peers to the L1 "host"-guest of L0.
You can inspect their hypervisor code and verify that the host kernel cannot access the VM after creation, but if you are running as root then you can obviously inspect whatever process is under host/hypervisor control.
You make the various hardware modules security context aware. You then give the host a separate security context from guests. You need a trusted hypervisor to bootstrap it.
Possibly one cybersecurity-related thing you could do is run a headless browser inside this VM, and bridge the network requests to the host network (a little bit like Docker).
Using my open-source BrowserBox^0 project, you could have a "bit more isolated" browser running on your Android device that would add "VM escape" to any zero-day exploit chain that might be a risk.
This is speculation tho, I don't know if it's actually feasible based on the Android reality right now, but assuming the capabilities that are provided are like a regular headless VM, then it should be. :)
Why do you need a VM for isolation? The ARM architecture already provides tools for isolation, like the MMU and privilege levels. Why do you need another kernel and emulation of hardware devices? It is completely the wrong method.
For sure, ARM's MMU and privilege levels are solid for base isolation. But consider the VM for a headless browser as an extra layer of defense, especially in the wild and crazy web threat landscape. Yes, it involves another kernel and some hardware emulation, but modern VMs, particularly with Android's AVF, are designed to be lightweight and efficient.
With AVF, we're looking at tailored isolation, where a VM can be as minimal or as comprehensive as needed. This flexibility means we can create a highly controlled environment for the browser, enhancing security against web-specific exploits. It's about using ARM's strengths and adding a VM where it makes sense for focused, web-centric security. The idea's to mix and match security layers for the best defense, especially with Android's new AVF making VMs more streamlined.
I guess you could say the goal here is to tailor the security approach to the specific risks associated with web browsing, making the system resilient against a broader range of exploits.
The kernel simply has too much code in it and too wide of a boundary to consider a good security boundary. VMs have a much smaller surface by comparison. I don't think reaching for a VM would be necessary if you had a smaller kernel.
How does the video get out? That implies a strong connection to the screen, which has a big attack surface.
This is the classic problem with isolation via virtual machines. To do anything, they have to talk to something, and that's where the security breaches occur.
DRM already uses a trusted execution environment (TEE), which provides more robust isolation than a VM. Thus I doubt the needs of video streaming apps are the main motivation.
The DRM TEE on Android needs to be baked in at the factory. If an app brings its own DRM then it's not able to use the TEE. If this enabled apps to use TEE-like functionality, it'd be good for that use case.
The use of the word "privileged" seems to imply that only system apps will be able to use this - i.e. no installing virtual machines off Google Play anytime soon. Bleh.
> On the Pixel 7, the most configuration you'll need to do is similar to Shizuku. You connect to your own phone over wireless adb, configure the maximum container size, and then choose your Linux distribution. It'll download, configure, and then execute the virtual machine.
It is still baffling that root is so shunned in the Android communities. Imagine not having root access to your Linux laptop. Magisk users are persecuted and punished by Google for getting root access, which is the bare minimum for a device you own.
As a big root user, I don't lay the blame on Google. They don't prevent much on rooted devices, mostly just gpay, but they do provide the tools for other apps to do detection. Google devices are also the best for modding from a technical standpoint.
Sure, you shouldn't always get to be root on other people's computers. But you absolutely should get to be root whenever you want on your own computers.
It was an example; I could have picked something else, and it doesn't make it less relevant to how normies mishandle their computers. And if we're being pedantic about this specific example: unless you're using Firefox, installing such extensions can be disabled in Safari and Chrome, at least on corporate computers.
No, I think the reason is that not letting the user have full control over hardware is more profitable for manufacturers, government and the economy as a whole.
And what is "getting shown the door" with your own device? A Fullscreen message telling you you are now banned from the device you spent a months wages on?
>> It is still baffling that root is so shunned in the Android communities.
I haven't seen this personally (not saying it isn't a thing to be clear, just I haven't seen it).
My major issue is that for some reason there are carriers in the US that seem to think a user having control over their own device is something to frown on. I get the potential support and warranty nightmare involved, but on the other hand it's not a super easy process that one would do accidentally.
This is already possible if your phone ships with the KVM kernel module, like on some Pixel devices, but reading the article suggests that KVM will become standard on all Android devices to enable this.
edit: according to this[1], yes, the pKVM functionality that's standard in Android exposes KVM functionality so that you can run VMs on Android.
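For what it's worth, a rough Kotlin sketch of how an app could probe for this. The feature string is my reading of the Android 14 PackageManager docs (FEATURE_VIRTUALIZATION_FRAMEWORK), and SELinux will normally hide /dev/kvm from untrusted apps, so treat this as an assumption rather than a recipe:

    import android.content.pm.PackageManager
    import java.io.File

    // Assumption: "android.software.virtualization_framework" is the system
    // feature a device declares when AVF/pKVM support is present; nothing in
    // the article says ordinary apps get to use it.
    fun deviceAdvertisesAvf(pm: PackageManager): Boolean =
        pm.hasSystemFeature("android.software.virtualization_framework")

    // Whether /dev/kvm is even visible is a host-side detail; SELinux will
    // typically block untrusted apps from stat()ing it, so expect false here.
    fun kvmNodeVisible(): Boolean = File("/dev/kvm").exists()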
Depends on the kind of acceleration you want. VirGL is available on Linux host/Linux guest setups with recent kernels, not sure if QXL/SPICE will be available or can be added to the userland. Can't imagine a hardware passthrough situation making sense on a phone/tablet, either.
> pKVM is built on top of the industry standard Kernel-based Virtual Machine (KVM) in Linux. It means all existing operating systems and workloads that rely on KVM-based virtual machines can work seamlessly on Android devices with pKVM.
It sounds like it will become common eventually. I just wish that there were a more supported pathway to running full VMs like that. These devices are powerful enough to do it pretty well now.
So on desktop, if I spin up a VM with networking disabled I feel pretty confident I can run anything safely, even malware is not going to escape.
What's the current state of the art for Android virtualization? Let's assume we're talking about the newest Pixel and newest Android version. Is there any way to safely run malware or the Facebook app in some sort of air-gapped container and throw it away when you're done?
> if I spin up a VM with networking disabled I feel pretty confident I can run anything safely, even malware is not going to escape.
You are putting too much faith in your VM monitor to keep you safe. There's a lot of attack surface in (for example) QEMU peripherals, and there's plenty of examples of VM escape [1]. CrosVM is probably the only publicly available VMM I'd be willing to trust, and even then I'd be nervous running state-sponsored malware on a machine with important data.
While QEMU uses C, which is not great, it has on its side 15+ years of hardening by the KVM developers. The problem with QEMU is not so much insecurity, it's that it contains the kitchen sink.
However, most of the exploits you'll find in QEMU are against configurations that are never used in real world virtualization scenarios where guests are untrusted. You can recognize them because hardware not commonly used with untrusted guests does not get a CVE.
For a while, slirp was the remaining major issue because it was used way beyond the original intention. But now it's been tamed and there's also passt, a much higher performance and much more secure implementation of user-mode networking.
> You can recognize them because hardware not commonly used with untrusted guests does not get a CVE.
This is not true. Even a non-default configuration of any software or hardware that contains a security vulnerability can get a CVE. It has in the past and will again in the future.
Source: I have assigned over 2000 CVEs for the kernel.
Yes, and the policy of QEMU is to not assign CVEs for bugs that would generally be hit only when QEMU is used as a development platform, as opposed to using it to offer virtualization services.
QEMU doesn't have to assign CVEs, but any other CNA can. I do not believe that it's good security or even good practice to negotiate your way out of exploitable flaws. It's a disservice to users.
I don't have enough skin in the game to change upstream QEMU's mind on this; systems in exploitable configurations are just as exploitable with or without a CVE assigned. People with exploitable configurations now just can't find out there is a problem.
The question is whether something is exploitable or just a crash. It is also a disservice to user to worry them about having to do an immediate update and evacuation of all hosts because of an out of bounds access in Gravis Ultrasound emulation.
Would any crash in GCC be a vulnerability because compilers are fed untrusted source code? Perhaps, but in practice godbolt.org is going to be the only case in which you care.
Crashes are classified as a denial of service, which is a CVE. Imagine how mad any cloud host would be if they found you could crash the host from the guest.
> Would any crash in GCC be a vulnerability because compilers are fed untrusted
> source code? Perhaps, but in practice godbolt.org is going to be the only
> case in which you care.
"Untrusted" is one those other fine lines that makes assigning and rating difficult and not something that is taken lightly. Compiling software as a user with additional capabilities, could escalate an attackers position assuming they can inject code into the tree to be built. It would be easier to abuse 'make' to execute code, however this is different than the qemu use case.
The QMEU "development" case could (and likely is) someones regular runtime use case. I dont see a clean way for the qmeu team to communicate this, and even if they did, privesc is privsec. Until we as an industry have a clear definition of what we will and wont "support" and users are familiar with the expectations, we're stuck with the hand we've been dealt.
Hopefully that all makes sense, none of it is said to antagonise or draw hate.
Crashing the host kernel is DoS. Crashing QEMU from the guest is bad because a use-after-free could be a possible avenue for privesc. But if an assertion failure can be triggered from the guest kernel, in the end it's just another way for a virtual machine to terminate itself. It sucks but it is not security sensitive.
> Is there any way to safely run malware or the Facebook app in some sort of air-gapped container and throw it away when you're done?
User profiles can be used in this exact way. Use the Guest user if you intend to install and wipe right away (though this will prove cumbersome eventually, due to having to reinstall the app every time, etc.). There is a significant isolation benefit to them, though not currently virtualized. With the isolation can come usability issues, like transferring files from one profile to another.
They can be very slow, however (slow to load/set up and to switch between, I mean; when you're inside, it's effectively a separate, fresh OS install).
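For reference, the only profile plumbing an ordinary app can reach is read-only; creating a guest or switching users needs system privileges (Settings or adb does that for you). A minimal Kotlin sketch of the public UserManager API:

    import android.content.Context
    import android.os.UserManager

    // Lists the profiles associated with the calling user. Creating a new
    // user/guest or switching to one requires MANAGE_USERS-level system
    // permissions, so that part is deliberately left out here.
    fun listMyProfiles(context: Context) {
        val um = context.getSystemService(UserManager::class.java)
        val profiles = um.userProfiles            // List<UserHandle>
        println("Profiles visible to this user: ${profiles.size}")
    }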
You're thinking of ChromeOS I think, which uses a combination of containers and virtualization (via the same VMM in this article) for Linux and Android apps.
But you and I both know that this feature was designed for the DMCA-lovin' Hollywood types and the control-freak enterprise IT BOFHs, not for your cool hack.
Let's use their tools of oppression against them! (fist emoji)
Media and Enterprise already have TrustZone, Knox, Intune, etc. which work "enough".
Newer markets include cashless (CBDC) payments and digital identity anchored in human biology, demanding more security than legacy content.
> Biometrics: By deploying biometric trusted applets in an isolated virtual machine, developers will have the isolation guarantee, access to more compute power for biometric algorithms, easy updatability regardless of the Trustzone operating system, and a more streamlined deployment.
Fortunately, OSS can enable N-party transaction transparency; we don't have to settle for one-way mirrors and WeChat clones.
Although this is very exciting, surely performance is not the benefit here? It won't perform better than an Android app not built on top of the virtualisation technology?
Android apps are already running on top of a "virtualisation technology": both the current ART (Android Runtime) and the previous one, Dalvik, are virtual machines, process-level virtual machines, but they do bytecode translation/JIT nonetheless.
If AVF allows running native code, it might actually be cheaper than the current arrangement.
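To make that concrete, here's roughly what launching a native payload through AVF looks like. Caveat: the class and method names below are my loose reading of the AOSP android.system.virtualmachine module, the API is restricted to privileged/system apps, and the exact signatures may differ, so read it as a hypothetical sketch, not working app code:

    // Hypothetical sketch only: names approximate the AOSP
    // android.system.virtualmachine system API, which regular apps cannot call.
    import android.content.Context
    import android.system.virtualmachine.VirtualMachineConfig
    import android.system.virtualmachine.VirtualMachineManager

    fun launchPayload(context: Context) {
        val config = VirtualMachineConfig.Builder(context)   // assumed constructor
            .setProtectedVm(true)               // pVM: memory shielded from the host
            .setPayloadBinaryName("payload.so") // native code runs inside Microdroid
            .build()
        val vmm = context.getSystemService(VirtualMachineManager::class.java)
        val vm = vmm.create("demo-vm", config)  // assumed method name
        vm.run()                                // assumed method name
    }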
Literally, the title: ‘Virtual Machine as a core Android Primitive’ and the very first sentence: ‘The Android Virtualization Framework (AVF) will be available on upcoming select Android 14 devices.’
I'd love the easy ability to run confidential computing loads with fine grained control over the data it gets access to. You can do this now on the desktop using SGX (etc) but on mobile it's really hard.
As a specific example of this, it'd be great to be able to run Whisper continually and have strong, system level guarantees about what can read the data.
The threat model you have in your head seems to imply that you don't trust your OS to not peek into what Whisper is doing? There are very few workloads that need or can operate under that model.
It's not really a matter of need, more a matter of good hygiene.
Do you trust any modern OS not to accidentally include sensitive information when it generates a crash report for an app and sends it off to some remote server in the background?
Isolation is a useful tool. In an ideal world it can be done perfectly at the OS level, but we don't live in that world.
I agree that being able to isolate things that have different security domains is a useful tool. That said, I am not really seeing how pKVM provides useful primitives for much other than DRM, which has historically been the primary usecase for trusted execution that isolated VMs seem to provide.
Consider that I want to be able to use a regular Android OS, but I don't completely trust it; either it's purposely malicious or it's just accidentally going to leak info. So isolation is good in this case; it's much easier to audit the mechanism of isolation than the whole OS.
The problem with the DRM and "trusted computing" part is that it's under someone else's control, some central authority, etc. From my reading of the docs on this, that is not the case with pVM. From https://source.android.com/docs/core/virtualization/security:
> Data is tied to instances of a pVM, and secure boot ensures that access to an instance’s data can be controlled
> When a device is unlocked with fastboot oem unlock, user data is wiped.
> Once unlocked, the owner of the device is free to reflash partitions that are usually protected by verified boot, including partitions containing the pKVM implementation. Therefore, pKVM on an unlocked device won't be trusted to uphold the security model.
So my reading of this is that it is under the user's control, as long as they have the ability to unlock the bootloader and reflash the device with their own images.
I'd love someone who is more knowledgeable to weigh in, but this tech, to me, doesn't seem that close to TPM/DRM type chips where there is no possibility of user control.
It is control in the sense that you can run your own applets I guess, but it is not control in the sense that you can necessarily inspect what the programs are doing, because once you reflash the device I'm sure the DRM programs will refuse to run.
As I said in my other response, I make heavy use of trusted (confidential) VMs for machine learning in cloud environments.
There are also vendors that are doing smart contract execution in trusted computing devices so you can get the benefits of trusted execution without the overhead of everyone executing the same code.
The issue here isn't the technology though, it's imagination.
Think about gaming in VR. You might want to make a game where the ML can adapt to the physical peculiarities of a person (think like personalized audio for airpods) but want to guarantee it isn't giving the person an advantage. Even simple things like setting up a VR system (or any physical computing device) can give an advantage to someone if corruptible.
At the moment there are lots of "anti-cheat" technologies that attempt to solve this, but really it needs trusted execution.
I'd love for my banking app to be completely isolated from the rest of my phone OS, in case I get malware. I'm sure journalists at risk of being targeted by NSO and its ilk would appreciate isolation for their messaging apps.
This is an interesting usecase (basically Qubes) but it has high overhead and I don't really see the framework as being designed to support this, at least yet. You'd need to move all sorts of services into the VM to support the app (like, for example, someone needs to pass touch input and network traffic into the VM), and at that point it begins to look like an entire OS running in there.
AFAIK Qualcomm's implementation does include passing touch input / display into the VM and is marketed in similar terms ("Trusted User Interface") to TEE-based techs, except they are not in S-EL0/1.
I've only seen this used in some really obscure scenario (cryptocurrency wallet) though.
So I'm most familiar with using this in cases like machine learning on private data in cloud environments where you want to make it impossible for the cloud operator to see the data you are using.
I think there are usecases like this outside the mobile _phone_ that are interesting. For example on-device learning for edge devices where the device is not under your control.
See the thing here is that if the device is not under "your" control ("you" being a company or something, and the device being owned by a user) I don't think they will really appreciate you using their hardware to train your model in a way they don't get to see. Why would I want to support this on my own phone?
> I don't think they will really appreciate you using their hardware to train your model in a way they don't get to see.
This absolutely isn't the case. I know a number of vendors who are deploying edge ML capacity in satellites where the use case is for "agencies" to deploy ML algorithms that they (the vendors) cannot see.
I used to work at Google adjacent to this stuff and A) you wouldn't boot up a whole VM for this, on a phone, that'd be very wasteful B) there's much simpler ways to provide the same guarantee.
So in general, I would just avoid labeling the quality of other people's takes. You never know who is reading yours.
I agree there are currently better ways of doing this (because as you mention the resource/protection trade off for this technology on this application is sub-optimal), but the context here is as an example on HN where the data privacy is obvious so I didn't have to write a whole paper explaining it.
Its "not even wrong", if you had a million monkeys on a million typewriters with a million trillion millenia, still, none would come up with a paper long enough to explain how that'd help anything (ex. trivially, microphone)
Trusted computing can be used for DRM. I'm much more interested in it as a privacy enhancing technology: the fact that you can have strong guarantees about what can be done with data in the enclave is useful for a lot of applications where you have sensitive data.
(Putting aside the fact for the moment that most - if not all - trusted computing platforms have some security vulnerabilities. Obviously this is bad, but doesn't preclude their utility)
Not really. ARM TZ has been repeatedly blown open, in part because it’s not really a separate core or virtualized workload, but a different “mode of operation” that the standard CPU cores switch into temporarily. Basically going back-and-forth between TZ and your OS if I understand correctly. Turns out that’s a side-channel attack nightmare.
This seems like an excellent tool for digital ID cards, banks, government authentication apps, maybe 2FA apps, cryptocurrency wallets, you name it. Anything that's more important than a calculator.
DRM and remote attestation already use a separate secure environment, so I don't see what would change by adding virtualisation.
Websites will require digital ID just to use them, along with remote attestation. They will also be able to ban or block you in an actually effective and comprehensive way.
There will be a chilling effect because people won't want to upset their Google/Microsoft/Apple/Meta etc overlords by saying or doing the wrong thing, and then get locked out of services they need to exist in society, do their job, spend money, etc.
Digital ID exists and is widely used, yet I only need to use my digital ID to authenticate with government services. Remote attestation is the norm for many types of apps already yet I can use my bank app on my rooted phone just fine, or use my phone to authenticate with my government's SSO system.
I'm no fan of the modern dependence on Play Services or Google's attempts to kill adblockers through remote attestation, but none of these technologies are inherently bad. Business devices authenticating to business websites should allow remote attestation to verify that their hardware has not been tampered with, just as an extra security measure.
Maybe your government is more evil or incompetent than mine, but bad governments aren't going to be limited by technological concepts like these.
I'm not worried about the government, I'm more worried about inscrutable decisions made by companies like Google, where their automated systems decide that you're an anomaly, and thus malicious, and choose to ban you.
Instead of just losing your account, you (or at least both your machine and your digital ID) are banned for good. This already happens with phones, where the entire device gets banned by apps for good, adding a layer of digital ID on top of it worsens the consequences of such decisions by platform owners against users.
> Remote attestation is the norm for many types of apps already yet I can use my bank app on my rooted phone just fine,
Many people can't on their rooted phones, and this cat-and-mouse game will eventually be won by the parties with million/billions to throw at it.
Digital ID is safe from abuse by our ad overlords as long as it only happens in insular implementations for markets smaller than California. Things would look wildly different if digital ID was a thing in the USA (I find it rather amusing how they claim to have no ID at all, yet a decade in Europe seems to involve less presenting of ID than a month in the US involves presenting their driver's license ID substitute)
But I don't disagree, I'd rather have a rooted phone with a few islands out of my (and the software that I run!) control for sensitive authentication use cases than a phone where I'm not in control at all. Or than two phones, because only one of them can be rooted.
In an ideal world, you could opt-out of isolation without giving the code in the container a way to know. You wouldn't want to opt-out your banking TAN generator just like you wouldn't want to put the password in your email footer, but a Facebook client would likely be a popular target (despite the hypothetical risk of an attacker destroying your reputation by posting in your name).
This nonsense means I just use them in the browser. There is no functionality the apps would provide me that makes it worth fighting with their superstitious nonsense.
Why do US banks do that? I’ve never had a UK or EU bank call me to verify a transaction.
Do you have the IdentityCheck/SecureCode/3-D Secure stuff (2FA for online transactions and at certain terminals)? Are these calls for transactions without chip + PIN?
I’ve had some transactions declined while travelling but maybe about 1/1000, and still no call, and nothing the bank support could do to allow them if I called. I’d just have to use a different bank with a vendor. It’s very much a “computer says no” situation then. Otherwise, the payment just goes through in the 99.9% of all cases.
But the banks in central EU, the Nordics, and the UK don’t seem to monitor the transactions I make while travelling to the point that there would be an actual person involved (calling me or reaching out in some other way).
I’m mostly curious about what problem these bank calls are solving. Is it for credit card fraud? In that case, I wonder why this seems to not be a practice in Europe. Is it because we do chip & PIN in physical payments, and 2FA for online/some kiosks?
> I’ve never had a UK or EU bank call me to verify a transaction
That probably just means that you never made transactions that crossed the banks' suspicion threshold. Which might be quite high if the bank is confident that it won't be on the hook for credential abuse and does not care if their customers lose money to identity theft. That confirmation call would be an indication of good service, not of bad service.
I'm not saying that calls would be preferable to better authentication schemes like chip+PIN (skimming is very much a thing though); calls are just another second factor after all. And not even a particularly safe one. But defense should be layered, and that layer stack should absolutely contain a form of confirmation call on some level if you are a bank.
Yep, you need only look at the number of server providers offering confidential computing (pretty much only the big 3) and the premium they charge for it (10x, except AWS “trust me bro” Nitro)
Confidential computing is cool and useful when you’re the one controlling the VM, but scary when you’re the one blindly running it on your hardware
Hopefully this gets (publicly!) backdoored like SEV, SGX, etc
> Confidential computing is cool and useful when you’re the one controlling the VM, but scary when you’re the one blindly running it on your hardware
Important point.
> Hopefully this gets (publicly!) backdoored like SEV, SGX, etc
From my reading this doesn't need to be backdoored: if you have the ability to unlock the bootloader, you are not reliant on Google's root of trust to be able to use this feature. You can go ahead and become your own "vendor" by signing your own images, or use your choice of vendor, then relock the bootloader and have the same security guarantees.
I'll admit this is only from a cursory glance over the documentation and a vague understanding; happy to be corrected, but it seems a lot of the arguments in this thread are about your first point, who has control over the OS.
I'll also add that the EU is being quite proactive in people having control over their own device, and who is their 'choice of vendor' so while I understand concerns people bring up, I'm a bit more optimistic that it can be a more useful tool than not.