Hacker News | plam503711's comments

Xen provides a great security design and a hypercall protocol that makes sense (unlike KVM+virtio, which is DMA all the way: a plus in terms of simplicity, but bad on the isolation front).

If I wanted to caricature the situation: KVM is simpler to work with from a dev perspective (you get results fast), but kind of "fuck security".

Xen is hard from the dev perspective, because it's more of a microkernel in itself, and you can't cheat your way into memory access: you have to use grant tables (see https://xcp-ng.org/blog/2022/07/27/grant-table-in-xen/ ).
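To make the grant-table point concrete, here's a minimal kernel-side sketch of the granting path in Linux (a hypothetical helper for illustration, not from the linked post; exact signatures vary across kernel versions, and this shows only the sharing side, not the backend's mapping side):

```c
#include <linux/mm.h>          /* struct page */
#include <xen/grant_table.h>   /* gnttab_grant_foreign_access() */
#include <xen/page.h>          /* xen_page_to_gfn() */

/*
 * Sketch: share a single page of frontend memory with a backend domain.
 * Instead of the backend being able to touch arbitrary guest memory,
 * we grant access to exactly this frame; the returned grant reference
 * is what the backend passes to GNTTABOP_map_grant_ref to map it.
 */
static int share_page_with_backend(domid_t backend_id, struct page *page)
{
    unsigned long gfn = xen_page_to_gfn(page);

    /* Third argument: 1 = read-only grant, 0 = read/write.
     * Returns a grant reference on success, negative errno on failure. */
    return gnttab_grant_foreign_access(backend_id, gfn, 0);
}
```

The point is the explicit, per-frame nature of it: nothing is shared unless you granted it, which is exactly what makes it more work than a DMA-everywhere model.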

So if a part of the industry took a shortcut, that doesn't mean Xen isn't still relevant :)


(XCP-ng founder here)

I'm more optimistic about RISC-V. All the devs who have worked on it here (at https://vates.tech) tell me it's very easy to work with, since it's close to many Arm design principles.

That's why I believe it's important to prepare the platform today for those future machines. I think it's a great opportunity not only to get an alternative to both x86 and Arm, but also to really open up the design choices, letting new players master both hardware and software (I have to admit that's something I'm considering for my business at some point).


I think a lot of the open source community, and even ARM itself, assumed there would be some inflection point at which everyone would jump ship to RISC-V. I could be wrong, but at this point it seems more likely to be a gradual change, unless there is some key game-changer piece of hardware like the M1 chip.

Most of the big moves in the RISC-V space seem to be coming from the low end and from China. They might have trouble making the jump to a true x64/M1-like competitor, given the geopolitics around cutting-edge chip manufacturing (TSMC etc.).

I think other players are going to have funding issues taking on the big players, as the open nature of RISC-V seems to make it harder to build an IP moat. I've noticed a lot of the Chinese chips come with stuff like NPUs to set themselves apart and presumably get some lock-in on their "platforms". But that's just my naive impression from reading some blogs and looking at releases. (i.e. I have no idea what I'm talking about)


I don't think it will be an "instant change". I agree it will be gradual, but I think it might be faster than we expect. Yes, it also depends on the ecosystem, but I think the world is more ready for ISA diversity than ever.


>I think a lot of the open source community, and even ARM itself, assumed there would be some inflection point at which everyone would jump ship to RISC-V. I could be wrong, but at this point it seems more likely to be a gradual change, unless there is some key game-changer piece of hardware like the M1 chip.

I don't know why. It's not inherently better than any of the current architectures in any meaningful way; it just doesn't require a license.

IIRC the royalty per ARM chip was somewhere in the single-digit percentages, which is basically meaningless for everyone but the chip company, or a few big companies making super-low-margin stuff.


The expensive part of an ARM license isn't the money; it's the fact that you have to deal with it at all (and be highly reliant on a third party that doesn't share your interests).


Unless you design the RISC-V core yourself, that doesn't change with RV. You still need to find someone who will license you a core (remember, the ISA is the free part, not the implementation).

I guess if using one of the open cores is enough for your use case, that would make it easier.


It also means you know you can switch vendors without rewriting code. You still need to license the core, but you have options on who to license it from.


I'm interested to hear what you think the structure of the market for licensed RISC-V cores will be. How many vendors will there be, and where will the revenue to support all these vendors come from? At the moment, licensing cores is not a massively lucrative business.


I don't know, but it seems likely to me that it should be able to sustain a few companies. All the really high-end stuff will probably be first-party developed (e.g. by Nvidia) and not sold to third parties, but there are a lot of places where flexibility is more important than raw performance. It's possible that market will be filled by open source cores, but I wouldn't be surprised if some medium-sized companies make a business out of third-party custom chip design.


It's not a custom Xen: it *is* Xen (AFAIK, with possibly their own patch queue on top for some specific needs). What's custom is the toolstack around it :)


Yeah that's what I thought, but I couldn't find definitive proof besides old posts from the early days.


Nitro is KVM-based.


The Xen Project is far from dead (there's a lot of activity on the mailing list, and now, thanks to new contributors like Vates/XCP-ng, there are also more initiatives to get decent project tracking; see https://gitlab.com/groups/xen-project/-/epics?state=opened&p... for example).

Regarding nested virt, you are mostly right: it's only "working-ish" for basic things, and indeed it breaks when you start to run anything heavy in your nested VM. The main reason nobody has fixed it is that it's not really used: as with any other open source project, you find what you need if you contribute. Obviously, as soon as someone needs this and is willing to contribute, it will change :)


"The reports of my death are greatly exaggerated"

Regards,

Xen.


It must be Delphi's brother then: not quite dead, but calling them alive too would be a stretch. What a wonderful world full of undead technologies we live in.


You can read the original conversation from 2018 here: https://lkml.org/lkml/2018/1/21/192


Previous discussion (2018): https://news.ycombinator.com/item?id=16202205

I get the sense that IBRS wasn't used because Linus 1) didn't want to take on the responsibility of fixing something that needed to be fixed in hardware by Intel, and 2) didn't want to help Intel avoid responsibility for shipping insecure hardware. The same way he's been antagonistic towards Nvidia for being a bad participant.


Sounds like Linux/Linus figured they had enough clout to call bs on the patch, while Xen just focused on doing the best with what they had. I honestly don't blame either.


> Linus (...) didn't want to help Intel avoid responsibility for shipping insecure hardware.

Whatever the saying about good intentions, you can argue he ended up doing Intel a favour here, by shipping a patch that took away only a couple percent of performance. Surely it would've hurt them more, reputation-wise, if the default workaround had taken a larger chunk of CPU speed.


Possibly.

It also seems like the changes would've added an amount of technical debt that wasn't acceptable to him ("garbage MSR writes" in and out of the kernel). I could understand why he'd want to avoid a bad solution (that would also have to be maintained long-term) just because Intel is unwilling to fix it on their end.


The real golden "I told you so" (the one that triggered the idea to write this very blog post) comes from a tweet by David Woodhouse last July: https://twitter.com/dwmw2/status/1549042968320811008



Hah, I wondered why there was an uptick of attention to that old tweet.

We should be slightly careful — while I can't deny that there's a small element of "ITYS" about that tweet, should it really count if I then left it up to Intel to follow up?

I whined that we hadn't shown that it was safe. The credit really does need to go to the RETbleed folks who put in the work to truly demonstrate that it wasn't.


A spectre silently fixed the spelling, thanks for the feedback ;)


I see you fixed rewrite as well, just as I was getting ready to point it out :P


Yes, sorry about that; I was really more focused on getting the story details right than on the spelling (plus I'm not a native speaker, as you probably guessed).


It wasn't obvious at all when reading - your written English is excellent. Only after revisiting and re-reading more carefully did I notice you used the construction "So ..." slightly more than a native speaker would have.

Thanks for the great article.


Thank you, both for being kind and for providing constructive feedback. "So" is a very common trap for French speakers :D


This. This is VERY true. Pricing is crucial if you want to get some respect. The worst "customers" are often those paying nothing or the cheapest price.


Xen Orchestra is not proprietary at all (it's fully AGPLv3!). It's just a different approach, mainly for historical reasons, from when it ran only on top of Citrix Hypervisor/XenServer.

So the concept is that you can either build it yourself from GitHub with all features (see https://xen-orchestra.com/docs/installation.html#from-the-so...) OR deploy a turnkey VM with everything pre-installed (targeting companies), with support on it. Yes, the turnkey VM has tiers, but that's logical since there are various perimeters in terms of supported features. And it's flat-priced :)

Obviously, the turnkey VM (called XOA, the Xen Orchestra virtual Appliance) is only meant for businesses, not individuals.

I hope that makes it clearer.


Yeah, but it feels like there's a conflict of interest. Xen Orchestra is made more difficult to deploy than it needs to be, since it'd reduce sales of XOA if it were easy. It's also not obvious on https://xen-orchestra.com that it is actually open source; it seems like it's intentionally made confusing to get you to use XOA instead.


How difficult is it to run `yarn && yarn build`?
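For context, per the from-source docs the whole thing is essentially the following (a sketch, assuming a recent Node.js/yarn setup; the linked docs remain authoritative):

```shell
# From-source build sketch (steps may change; check the xen-orchestra docs):
git clone -b master https://github.com/vatesfr/xen-orchestra
cd xen-orchestra
yarn          # install dependencies
yarn build    # build all the packages
```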

Also, it's important to have a validated/tested environment to deploy your code into (that's why a VM is really handy).

Regarding the XO website, that was again for historical reasons, from when we sold mostly to Citrix customers (among whom the words "open source" were feared).

Now we have plans for a better split between the org's projects and the whole "stack", to be able to compete with VMware (e.g. more commercial stuff on the Vates website).

Again, most of what you see now is a transition toward a clearer model, now that we control the whole stack :)


> How difficult is it to run `yarn && yarn build`?

Not difficult, but not necessary either. I could also figure out how to build the XCP-ng ISO myself, but I think everybody uses the builds you provide instead.

> Again, most of what you see now is a transition toward a clearer model, now that we control the whole stack :)

Good to hear, looking forward to seeing what you guys come up with.


I understand your point. However, anything else would require time to maintain (a script, a container, a VM, whatever you want): people will obviously expect it to work, so it needs time/effort to test, unlike simple documentation. Do we want to maintain that? I'm not sure, given the huge backlog and higher priorities.

But I'm eager to have more resources to change that in the future :) (all things considered, an official "community" container might be the easiest path, but we'd like the maintainers to be outside Vates)

