Qubes Air: Generalizing the Qubes Architecture (qubes-os.org)
183 points by andrewdavidwong on Jan 29, 2018 | 65 comments



I’m a heavy user of Qubes OS. The “Convert to Trusted PDF” feature is something I use almost daily.

My use case is examining, cleaning, and possibly distributing application letters and CVs. If you have to read job application letters, the advice to only open files from people you trust just doesn’t work. The amount of untargeted malware we receive through this channel is considerable, and we have had targeted attacks too.

I’ve known about Qubes OS for a long time, but interestingly, the advice to use it for all processing of application letters didn’t come from my tech circles but from a recruiter.

Given the strict laws about data retention in my jurisdiction (Germany), a cloud solution (short of homomorphic encryption) probably isn’t going to work for me. The idea of using discrete devices sounds interesting, though.


Have you considered a jailed PDF reader application instead? I'm curious which decision factors were important for you.


So everyone downstream of weinzierl (1) has to be aware that the PDFs he hands them may be full of malware, (2) has to use VMs, (3) must open said malware-packed PDFs in a disposable VM, and (4) must strictly adhere to the disposable-VM usage protocol.


Or weinzierl could print them and possibly rescan for further distribution.


Which is basically what Qubes "Convert to trusted PDF" does.


Convert to DjVu.


My first solution would be improving reader security by starting with one with decent code (Espie suggested MuPDF), compiling it with something that makes it memory-safe, and running it in a sandbox on a separation kernel (e.g. Genode or Muen). Then, a memory-safe conversion tool turns it into something more trustworthy. This might even be batched on simple hardware, which itself has a lower attack surface. Later it could run on secure hardware like a CHERI CPU, although that can happen today if you have an FPGA board and the skills to run their HDL code.

For fun, though, I'll dust off an old concept since you're talking printing. One might start by printing them to a virtual screen like in Nitpicker GUI with the untrusted reader. Aside from isolation, there could be a feature to convert what's on the virtual screen or page into a compressed image. A PDF with N pages becomes a zip of N images or a single image of some size. That itself could be distributed to run in the trusted, safe viewers we already should have, right? ;) It might also be run back through similarly deprivileged OCR to turn it into a safer format. Gotta eyeball it if doing it that way. That said, if OCR is the goal in the first place, the images could be produced in a font that works well with OCR.

Could be a fun little project teaching folks about a number of topics at once.
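For anyone who wants to play with the rasterize-and-rebundle idea above, here is a minimal sketch in Python. It assumes poppler-utils' pdftoppm is available and is only an illustration of the concept, not the actual Qubes qvm-convert-pdf tool.

    #!/usr/bin/env python3
    # Sketch only: rasterize an untrusted PDF into plain page images and
    # re-bundle them as a zip. Assumes poppler-utils' pdftoppm is installed.
    # This is NOT the actual Qubes qvm-convert-pdf tool, just the concept.
    import subprocess, sys, zipfile
    from pathlib import Path
    from tempfile import TemporaryDirectory

    def pdf_to_image_zip(pdf_path: str, zip_path: str, dpi: int = 150) -> None:
        with TemporaryDirectory() as tmp:
            prefix = Path(tmp) / "page"
            # Rasterize every page to PNG; only pixels survive, so any active
            # content in the original PDF is discarded.
            subprocess.run(
                ["pdftoppm", "-png", "-r", str(dpi), pdf_path, str(prefix)],
                check=True,
            )
            with zipfile.ZipFile(zip_path, "w", zipfile.ZIP_DEFLATED) as zf:
                for png in sorted(Path(tmp).glob("page*.png")):
                    zf.write(png, arcname=png.name)

    if __name__ == "__main__":
        pdf_to_image_zip(sys.argv[1], sys.argv[2])

In a Qubes-style setup, the rasterization step would run inside a disposable VM, and only the resulting images would be copied back across the trust boundary.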


Your "first solution" would be to take a de novo PDF implementation written in C, "compile it with something that makes it memory-safe", and then port it to an L4 microkernel. Maybe bust out some HDL and get parts of it deployed directly on to FPGA.

Got it.


I said a separation kernel, like the FOSS projects and commercial products dating back to 2005 that I told Joanna about on the Qubes mailing list, which were compartmentalizing things on security-focused kernels. Aside from a small TCB, they have optional mitigations for storage and timing channels. Beyond isolation, a standard practice on the embedded side was including safe subsets of Java or Ada running right on the kernel to implement specific components more safely. So, basically just what was standard, deployed practice in high security over a decade ago.

I also pointed out that people interested in developing solutions have options available now for safety or security on the CPU side, too. They can do software, hardware, or a mix of both, whatever suits their purposes.


Oh, and a safe subset of Java or Ada in the kernel. Well, that clears it all up.


"safe subsets of Java or Ada running right on the kernel to implement specific components more safely. "

"a safe subset of Java or Ada in the kernel."

Done here, since you're arguing against points I'm not making. For anyone your strawman confused: the specific components are user-mode apps running on a separation kernel to minimize privilege, not just piles of extra code in some kernel.


> For fun, though, I'll dust off an old concept since you're talking printing. One might start by printing them to a virtual screen like in Nitpicker GUI with the untrusted reader. Aside from isolation, there could be a feature to convert what's on the virtual screen or page into a compressed image. A PDF with N pages becomes a zip of N images or a single image of some size. That itself could be distributed to run in the trusted, safe viewers we already should have, right?

Which is literally what Qubes "Convert to trusted PDF" does.

> My first solution would be improving reader security by starting with one with decent code (Espie suggested MuPDF), compiling it with something that makes it memory-safe, and running it in a sandbox on a separation kernel (e.g. Genode or Muen). Then, a memory-safe conversion tool turns it into something more trustworthy.

It would of course be preferable to have a secure PDF reader to begin with, but the complexity of the PDF format isn't really conducive to that.


Oh, that's neat that it's what they're doing. As far as a secure PDF reader goes, you can definitely reduce the risks it poses with mitigations, which reduce headaches even when they don't reduce attacks. The ones I was thinking of are doing it with acceptable overheads these days. On the far end, the CPU solution already compiles legacy C to run capability-secure on FreeBSD, with the OS and CPU available to download and run. You just gotta buy the board, which has other uses.

So there are more possibilities to explore on top of these existing solutions.


> It would of course be preferable to have a secure PDF reader to begin with, but the complexity of the PDF format isn't really conducive to that.

pdf.js exists!


I thought pdf.js was a JavaScript application in a browser on a full OS, with all the risks that come with that, versus memory-safe native code in a deprivileged partition or container. Web tech isn't my strong area, so I could be wrong. Do correct me if it's not browser or JS tech built in an unsafe language.

And it's a little strange that your reply to memory-safe code for a PDF reader is "an unsafe one exists, just use it" when you or your colleagues are currently applying my recommendation to the browser hosting it, via Rust and Quantum.

You're doing one thing that matches the language part of my recommendation while saying we should do the opposite about a type of program that's similarly high risk. Quite the contradiction.


JavaScript is a memory-safe language!


This is really confusing for me, since you keep implying JavaScript is all we need for safe, secure, efficient, and/or low-TCB apps like this one parsing and rendering PDFs. Yet you aren't rewriting Firefox parsers and renderers in JavaScript: you are using a new language with the properties I just named. Properties shared with the safe C/Java/Ada subsets used in embedded systems, but with even more safety added (the borrow checker). That's probably because you didn't trust JavaScript to do the job efficiently, securely, and without leaks.

Now you do in this thread, if it involves a risky format attackers love. I don't. I think complex languages running in large apps increase attack surface. So I still recommend strongly sandboxing whatever parser/renderer one uses, plus developers in security-focused projects (e.g. Qubes) using compilers or languages offering safety if they have the resources to spare. Everyone contributing a little gives us more building blocks over time.

And as far as your other comment, new ways to make C code safe or secure are always being developed. C++ might also be able to use them via a C++-to-C compiler, but it has stuff like SaferCPlusPlus to help. For C, options to attempt include SoftBound+CETS, SAFECode, Code-Pointer Integrity, and data-flow integrity. At least three are FOSS, with one I haven't checked yet. So they exist. They could also be in even better shape if security tool builders put more time into them.

That's all I'm saying on this, since you seem set on JavaScript for efficient, secure apps. We aren't going to agree on that premise.


I'm not saying pdf.js is fast. I'm saying that it's fast enough to be a useful tool to read most PDFs securely (which in fact millions of Firefox users do!), and it has the large advantage of actually existing, unlike complex schemes involving vaporware compilers and Ada in the kernel. (If you care about fast secure PDF viewing, write a new PDF renderer in Rust or Java or Go or whatever. This doesn't have to be a complex problem.)

By the way, SaferCPlusPlus is not memory safe, and porting a PDF rendering code base to use it would be about as much work as rewriting the renderer in a safe language.


> This is really confusing for me, since you keep implying JavaScript is all we need for safe, secure, efficient, and/or low-TCB apps like this one parsing and rendering PDFs. Yet you aren't rewriting Firefox parsers and renderers in JavaScript: you are using a new language with the properties I just named. […] That's probably because you didn't trust JavaScript to do the job efficiently, securely, and without leaks.

JavaScript is a memory-safe language thanks to a well-known runtime trick called a «garbage collector» … Until Rust came along, GC was the only viable way to have a memory-safe language. Unfortunately, it has important performance drawbacks, which make it unsuitable to write a browser in a GC-ed language. But for 99% of the code written every day (including a PDF renderer), GC is a good enough solution for writing memory-safe code.

Also, Rust has been designed to make parallel code safe, something a GC can't give you.

> So I still recommend strongly sandboxing whatever parser/renderer one uses, plus developers in security-focused projects (e.g. Qubes)

Browsers are probably the most exposed pieces of software nowadays, and the vendors already do a lot of work to provide secure sandboxing. When using JavaScript, you're using a memory-safe language in a sandboxed environment, which means you need two exploits to get out of it (a bug in the JS VM and a sandboxing bug). There's no guarantee that using another sandboxing system instead would offer better security, especially because you'd just have one layer of security.

> And as far as your other comment, new ways to make C code safe or secure are always being developed. C++ might also be able to use them via a C++-to-C compiler, but it has stuff like SaferCPlusPlus to help. For C, options to attempt include SoftBound+CETS, SAFECode, Code-Pointer Integrity, and data-flow integrity. At least three are FOSS, with one I haven't checked yet. So they exist. They could also be in even better shape if security tool builders put more time into them.

If there's an easy way to give C or C++ code an acceptable level of memory safety, why aren't developers using it? (Don't tell me people already do, because that would be proof that those tools aren't able to reach the «acceptable level».) Note that if such a tool were invented tomorrow, it would also benefit browsers and increase the security offered by JavaScript.


compiling it with something that makes it memory-safe

Isn't it true that there's a bit more work involved in making a program memory-safe than just recompiling it?

Like if the original is in C, recompiling it in C++ won't whisk away unsafe memory access without significant architectural rework, no?


It's nonsense. If it were possible to make C++ memory-safe with a special compiler, it would have been done long ago.


Or, you know, you could use pdf.js, which has two advantages: (1) it already exists; (2) it can exist, unlike your proposal, which involves using a nonexistent memory-safe C++ compiler.


I wonder about their progress on integration [1] with ReactOS.

[1] https://github.com/QubesOS/qubes-issues/issues/2809


I think the Qubes team desperately needs more funding. Give them some of your money. Tell them your priorities.

Qubes + Whonix has been a tremendous success.

I'm so happy to have Qubes as my daily driver at home. However, because so little money is coming in, development seems much more limited than it could otherwise be.

It's a wonder what they've been able to do so far with no budget and few external developers contributing.


Me too


That would be amazing... especially for helping transition people who feel uncomfortable with Linux-type environments.


This was an awesome read. These guys are doing some of the most groundbreaking work in computing right now.

The idea of having an "operating system" made up of components dispersed across the globe seems like a fantasy that is too good to be true.

If Qubes can finally provide a method for passing through NVIDIA GPUs with this kind of architecture, Xen or not, that would be incredible. It's the only reason I had to leave Qubes.


It did not help Plan 9 conquer the world.

I found a good introduction to Plan 9 architecture in this comment:

https://news.ycombinator.com/item?id=15989697#15990077


Why would you want something like that, though? What are the benefits that are worth the huge overhead of having to go through the internet to connect components?

I mean, I get cloud computing and such, but this seems to be aimed at being a consumer OS, which is very sensitive to delays and whatnot.


One reason: the prospect of an endless stream of unpatchable Spectre-like hardware vulnerabilities.

The "real problem" exposed by Meltdown and Spectre is running untrusted software on the same hardware where sensitive information resides. Moving away from physical coupling defends against potential sidechannel attacks.

The Qubes approach of "careful decomposition of various workflows, devices, apps across securely compartmentalized containers" seems to point a way forward after this sobering assessment:

http://robert.ocallahan.org/2018/01/long-term-consequences-o...


> The "real problem" exposed by Meltdown and Spectre is running untrusted software on the same hardware where sensitive information resides.

Well obviously. Which is why people try to avoid that as much as they can when they are handling actually sensitive information.

> Moving away from physical coupling defends against potential sidechannel attacks.

... is a correct deduction, but using cloud VMs hardly qualifies as following the principle (except if they are only used for lowest-privilege stuff, but even then, the system now requires connectivity). Now you don't know who else is on your hardware, and you don't even control the hardware in the first place.


Qubes aims to provide both consumer and enterprise workflows, and everything in between. People use computers for all sorts of things. A full Qubes system delivered to a tablet/phone interface is also a fine tradeoff for millisecond GUI delays.


Passing through GPUs is problematic, as it's a massive attack surface.


Could you elaborate on the 'massive'? Let's say you let the VM see the GPU. What kind of attack would that enable? Let's suppose that a virus inside the VM manipulates the GPU outside of what applications are allowed to do. What's the worst thing that could happen?


The GPU is a PCIe/AGP "bus master", so it can usually initiate DMA transfers from host memory and read anything it likes. The IOMMU blocks some of this, but it is not a perfect defence. https://security.stackexchange.com/questions/150386/does-iom...
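As a practical aside, before passing a GPU through you can at least check how the platform groups it with other devices under the IOMMU. A rough sketch, assuming a Linux host with the IOMMU enabled:

    #!/usr/bin/env python3
    # Sketch only: list IOMMU groups and the PCI devices inside each one,
    # so you can check whether a GPU you plan to pass through shares a
    # group with other devices. Assumes a Linux host with the IOMMU
    # enabled (otherwise /sys/kernel/iommu_groups does not exist).
    from pathlib import Path

    def iommu_groups():
        root = Path("/sys/kernel/iommu_groups")
        if not root.exists():
            raise SystemExit("No IOMMU groups found -- is the IOMMU enabled?")
        for group in sorted(root.iterdir(), key=lambda p: int(p.name)):
            devices = sorted(d.name for d in (group / "devices").iterdir())
            yield group.name, devices

    if __name__ == "__main__":
        for name, devices in iommu_groups():
            print(f"group {name}: {', '.join(devices)}")

A GPU sharing a group with its own HDMI audio function is expected; sharing a group with unrelated devices is a bad sign for isolation.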


Thanks. According to [1], it seems DMA is quite a 'backdoor', bypassing any memory management the kernel would do. But it is not clear to me whether this also allows the attacker inside the VM to write into forbidden regions of memory and thus either modify the behaviour of the hypervisor or send information out via the Internet.

[1] https://en.wikipedia.org/wiki/DMA_attack


GPUs can definitely write to host memory. In some situations, this is the only way to get at the results of some operations the GPU performed (e.g. grabbing the framebuffer for screenshots or video recordings). Usually, it's the job of the driver to check for illegal copy target addresses.


The primary objective would be exfiltration; executive control is only secondary. If you can exfiltrate keys or hashes, then you might not even need to use DMA to gain access to a system.


It's unavoidable for me.

It provides no drawback for users who don't utilize it, and the alternative is me remaining with KVM, which is a vastly larger attack surface, so your argument defeats itself.


> The idea of having an "operating system" made up of components dispersed across the globe seems like a fantasy that is too good to be true.

Read up on Amoeba and Sprite in the 80's.


Qubes runs mostly on computers with Intel CPUs.

It's good of them to admit that the layers-upon-layers approach just doesn't bring any additional security if you have buggy/insecure hardware.


Amen. A huge silver lining to Meltdown has been the raised awareness of what a mess our hardware is.

As long as we're in Intel x86 land, the Plan 9 service-per-box approach is probably about the best we can do, and I'm not saying that with any joy, or as an endorsement.

Or perhaps we can claw our way back to the 1960s and reclaim working memory protection? As obvious as that sounds, I wouldn't take it for granted. People already accept all sorts of half-broken proprietary bullshit for GPU performance, bootloading, AMT, etc. From the mailing lists, it looks like Intel is trying to normalize that for CPUs as well.


I think the cloud aspect is also quite interesting for its potential as a much cheaper remote desktop. Most applications don't require a beefy CPU, so run them on something cheap and run anything demanding on a more powerful node.


With the Qubes Air architecture, the unpopular Intel ME/AMT could be repurposed as a VNC server for web browsing on a dedicated device, e.g. an old laptop. The AMT VNC client could be run in a thin Qubes VM. This would isolate the web browser (main x86 CPU), VNC server (Intel ME CPU), and VNC client (Qubes device CPU) on three physical processors. Usability would depend on the performance of the AMT VNC server.
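As a first sanity check for that idea, here is a hedged sketch that probes the TCP ports Intel documents for AMT, just to see whether a machine's ME is reachable at all. The host address is hypothetical; adjust it and the port list for your own setup.

    #!/usr/bin/env python3
    # Sketch only: probe the TCP ports Intel documents for AMT to see
    # whether a machine's ME is reachable before trying the VNC idea.
    # Port numbers are from Intel's AMT documentation; the host address
    # below is hypothetical, so adjust both for your own setup.
    import socket

    AMT_PORTS = {
        16992: "WS-Management (HTTP)",
        16993: "WS-Management (HTTPS)",
        16994: "redirection (SOL/IDE-R/KVM)",
        16995: "redirection over TLS",
        5900:  "KVM on the standard VNC port (if enabled)",
    }

    def probe(host: str, timeout: float = 2.0) -> None:
        for port, label in AMT_PORTS.items():
            try:
                with socket.create_connection((host, port), timeout=timeout):
                    print(f"{host}:{port} open   - {label}")
            except OSError:
                print(f"{host}:{port} closed - {label}")

    if __name__ == "__main__":
        probe("192.168.1.10")  # hypothetical address of the old laptop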


Qubes would make for a great startup and, given the time, prob a very successful ICO.

I was positively and at the same time negatively surprised reading about the 30k users. None of the reported issues/obstacles seem insurmountable if one imagines focusing on one specific hardware platform with a good marketing team. I understand this is beyond a research project, but it would make for a great startup.


Why would Qubes do an ICO?


For the cloud, it would be an incentive to run the nodes, and you’d pay for the resources you’re using. I’m not saying they should; I’m saying it could be a good startup or, in this case, a good model for a blockchain-based company.


I think the idea is you just buck up and pay the cloud fees.

If you follow Qubes / ITL, you would know that, as a decade-old company, they are hardly a "startup", and they have been experimenting with enterprise-level support. If they can find a home in the enterprise market, it will at least give them enough cash flow to continue developing Qubes for the foreseeable future.

Besides, I imagine Joanna Rutkowska's opinion of the ICO scene isn't a very positive one, and I don't think she wants to complicate her company by pivoting to a blockchain model that has absolutely no relevance to developing secure operating systems.


I don’t know how to write it better. I never said Qubes should change their biz model or do an ICO. Full stop.

All I said is that their product would be great in the hands of a startup that focuses on a single hardware platform and does more marketing, and/or does an ICO, because I see a great incentive to run nodes of the cloud (because Qubes itself provides all the building blocks for trust).

> If you follow Qubes

I do, since 2010, when I was doing very similar research on trusted cloud computing.

> If they can find a home in the enterprise market

I wish them that, but this doesn’t mean someone else can’t try a more aggressive consumer route, or an alternative to their cloud model.

I respect their work a ton, and as I said in my very first comment, I think they should focus on the research part, and someone else could provide capital and grow the consumer product.


I don’t know how to write it better. I never said Qubes should change their biz model or do an ICO. Full stop.

All I said is that their product would be great in the hands of a startup that focuses on a single hardware platform and does more marketing

That's strange, because the first sentence in your OP is:

Qubes would make for a great startup and, given the time, prob a very successful ICO.


Oops, you’re absolutely right, and now I understand all the downvotes.

In my mind I was thinking of Qubes as the product(s) that could benefit from a startup, similar to how Kafka has Confluent or Druid has Imply.

And I fully agree that Qubes the company should continue to focus on research.


FWIW, none of them came from me.

And I see the confusion now: Qubes isn't a company; the company is Invisible Things Lab.


To start, only because it's the best model to fund an open source project, ever?

Of course they'd also have to pick their launch during a bull market, because nobody cares about ICOs in a bear market. If there's one innovation the "blockchain" has brought, it's extreme liquidity and high funding potential for any type of startup, including open source projects that would never even be considered by VCs and would otherwise have to beg for individual donations.

The blockchain is a P2P Silicon Valley (you heard it here first).


OK, I'll bite. You're begging the question: why is it the best model to fund an open source project, ever?

And don't list "extreme liquidity" as a reason.


Given trade volumes, cryptocurrencies are not very liquid at all.


At the same time, I don't want such a project anywhere near shady investors and founders seeking to get rich and destroy the company, like most startups.


This is interesting. However, I don't get how stuff in the cloud can be considered secure. Unless you trust the provider, anyway. Also, I'm reminded how little privacy seems to matter to the Qubes devs.

Edit: OK, I take it back. Replacing VMs with discrete devices on local networks is very cool. I just wish that they'd emphasized that, and then talked about using cloud resources. Indeed, what boggled my mind is that someone would go through the hassle of learning Qubes, and then put some of it in the cloud.


The cloud stuff seems incidental to the article's main point. At least that's how I read it.

Rather, it sounds like they are trying to properly abstract the isolation technology away from any specific implementation. They then realized that this would also allow "Qubes on the Cloud" with relatively little extra effort.

From a personal choice standpoint, it seems we will still have the option of avoiding cloud zones completely if we so desire, so no harm there.

If we think about the sociology of security however, lowering the barrier to entry seems like an overall win, assuming we believe in the Qubes security model.

It's a lot like fingerprint readers on phones. Sure, they're not nearly as strong as a high-entropy password, but they're convenient enough that people who previously never locked their phones now use a fingerprint lock.


I agree. I liked the diagram that showed separate machines on the same local network running qubes. Physical separation is stronger compartmentalization than Xen provides.


Yes, I agree. And I wonder what a hybrid with Tinfoil Chat might look like. That is, using opto-isolators to make some device-qubes read-only.


You can put the untrusted VMs in the cloud, to get better isolation between them and more important stuff. This, e.g., is a way of preventing two colluding VMs from communicating.


That's a very interesting take: you can run a local network for trusted qubes and put riskier/untrusted qubes on "cloud" VMs. That way you strongly mitigate colluding VMs (same machine & same network) and VM escape attacks on your hypervisors & physical machines.


Uh!? What’s the issue with privacy? Qubes is great for making sure you don’t get malware while you watch “youtube” that then gets access to your bank account. I feel privacy is kind of out of scope here; not that you don’t need it, but you can plug it in with ease. There’s nothing in the Qubes design that prevents privacy.

The cloud is just a way to distribute computation and make sure storage is always available to you. Everything should be assumed to be protected: I mean, they protect video memory among processes/apps, so you’d bet they protect your data in the cloud.


There is currently no way to keep cloud stuff private. Maybe one day homomorphic encryption will be usable. And without that, the cloud provider can see everything. You can, of course, encrypt stuff locally first. But that's only good for static data.
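For the "encrypt stuff locally first" case, here is a minimal sketch using the third-party cryptography package's Fernet API; the file names are hypothetical, and as noted, this only helps for data at rest.

    #!/usr/bin/env python3
    # Sketch only: encrypt a file locally before it ever touches cloud
    # storage, using the third-party "cryptography" package's Fernet API
    # (pip install cryptography). This protects data at rest; anything
    # the cloud actually computes on is still visible to the provider.
    from cryptography.fernet import Fernet

    def encrypt_file(path: str, key: bytes) -> bytes:
        with open(path, "rb") as f:
            return Fernet(key).encrypt(f.read())

    if __name__ == "__main__":
        key = Fernet.generate_key()                  # keep this key local, never upload it
        ciphertext = encrypt_file("notes.txt", key)  # hypothetical file name
        with open("notes.txt.enc", "wb") as f:
            f.write(ciphertext)                      # only the ciphertext goes to the cloud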


That's all assumptions, though. I'm assuming they're protecting stuff. I'm assuming they're not looking at stuff. I'm assuming their underlying systems are patched. It's just assumptions that they're doing the right thing.



