Hacker News
Windows 8 to feature stripped-down kernel, built-in virtualization (itworld.com)
97 points by jfruh on Aug 3, 2011 | 63 comments



Where did I hear this before...ah, yes, I remember now:

Stripped-down 'MinWin' kernel to be at the core of Windows 7 and more http://www.zdnet.com/blog/microsoft/stripped-down-minwin-ker...


MinWin is not so much a product as a continual effort, starting with Vista, to untangle dependencies and move unnecessary cruft out. It's still the NT kernel, only progressively cleaner with each release of Windows.


Here's a great Coding Horror article on shrinking Windows XP. Jeff got it down to a 243 MB compressed image for an XP virtual machine.

http://www.codinghorror.com/blog/2006/07/creating-smaller-vi...

Really cool stuff.

edit: s/XP/NT/


Before Vista came out I managed to cut down an XP image to ~200MB in ISO form and less than 300MB on a "fresh" install on real hardware ("fresh" because it was Service Pack 3+ slipstreamed into the install ISO).

That was basically the bare minimum needed to run the OS on my laptop with wifi and sound.

I put something similar on my EeePC 701 when I got it; having a 6-7 second boot time on Windows was quite nice.


Virtualization makes a lot of sense as the basis for an OS. Instead of organizing projects into files and directories and trying to manage those with a master OS, just run a new instance of the OS for each project and give it access to just the relevant resources; most of the stuff on a typical user's desktop machine is irrelevant to the task at hand. A new VM for a specific project allows the user to save state: work on the draft of the TPS report for an hour, then shut down that virtual machine; come back a week later and everything is just as you left it - same open files, same running programs, everything just the way you want it.

In some ways it's the equivalent of tabbed browsing or application windowing - virtualization of the OS creates efficiency by dealing with multiple divergent contexts at the same time.
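
For what it's worth, that workflow is scriptable today. Here's a rough sketch driving VirtualBox's VBoxManage CLI from Python (the VM name "tps-report" is made up for illustration; savestate and startvm are real subcommands):

    import subprocess

    def suspend(vm):
        # Save RAM and device state to disk; resuming later restores open
        # files and running programs exactly as they were left.
        subprocess.run(["VBoxManage", "controlvm", vm, "savestate"], check=True)

    def resume(vm):
        subprocess.run(["VBoxManage", "startvm", vm], check=True)

    suspend("tps-report")   # put the project on hold
    resume("tps-report")    # a week later: same open files, same programs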


Except that, while working on their TPS report they opened a browser window to check some figures. And they bookmarked the site they found.

And then they moved onto a different project, and their browser bookmark wasn't there.

And now they're not happy, because Windows lost their stuff.

(Because very little of what most people do can be compartmentalized that much - and Windows is great because it allows you to multitask, which you can't do if everything is in a different VM.)


Some things will, as they are today, be stored in the cloud [edit: or its local equivalent using WCF]. Bookmarks may be one of them, but many of the items I bookmark when I'm actually working on a project (rather than procrastinating on HN et al.) are pretty project-specific. And of course, having HN et al. in its own virtual machine would allow all those idea bookmarks to be stored in one place relevant to their useful context, rather than in a file system which I have to keep organized (assuming I bother with organization).

Most people don't keep their filing system very organized, and when it comes to bookmarks, even less so. Context is often a more efficient way of recalling what you did than a directory name, particularly over longer periods of time such as several months.

And of course, you can multitask across virtual machines - I often have two or three open at once because I need access to software which runs on a legacy version of Windows, and I run Facebook in its own exclusive VM.

At the same time I will be working on a project on the host OS. And there is no need for interconnection between any of them.

And VMs solve a lot of legacy and cross-platform compatibility issues (e.g. Windows Phone apps), and Microsoft has already developed methods of integrating VMs with the host (see Windows Virtual PC and XP Mode integration).


Why are their bookmarks not cloud-synced? Virtualization makes even more sense when you push as much as makes sense to the cloud, including things like personalization settings.


I don't know anyone with bookmark sync outside of geeks and the friends/family that geeks have set up, let alone people in large businesses where their desktops are run by the IS department.


I don't disagree. But as we move to a world with more virtualized environments and multiple devices, I think it will be more common. At the very least iCloud will make it standard in the iOS/OSX space. I can't help but believe MS will do the same for Win8.


I hope MS do that, although they'd have to allow internal syncing servers for it to be acceptable to IS departments.


We are talking VMs, not multiboot. You can have many VMs (applications) open at one time. Those applications have access to the shared disk and other resources. Have you seen Parallels? The user would not change workflow _at all_. They don't know their app is now in a VM.


That's not the way people are talking about it - they're talking about complete contexts.

If we're talking about having them all having access to each other, then I can't see what you gain by putting apps in different VMs.


Again, saved state. In my work, projects often go on hold for months or years - I might need to come back to a project and pick up where I left off even though in the interim I replaced my primary computer, browser, and production software.

And in my writing side projects, I may leave a project for several months. So that's where I realized the value of VMs - the ones I use have survived an upgrade from XP to 7 with the same open windows and without any software reinstallation (and of course without any recreation of bookmarks). I'll add that they are also descendants of previous virtual machines used for the same purpose but different projects. It is more efficient from a workflow perspective to have six copies of OpenOffice, each pointing to the relevant context, than to reconfigure one copy each time the context switches.

To put it another way: developing software projects from a custom starting point, and having references persist across IDE sessions thanks to saved state, are not needs unique to software development. They are indicators of the features which make project execution efficient.


None of those features require OS virtualization.


Exactly. We were sold "process isolation" and "virtual memory" back with the 386 chip and Windows NT. But the actual effective security was squandered for the sake of convenience and compatibility. OSes didn't really want to share the hardware in any meaningful way.

The current demand for virtualization is, to a significant degree, an attempt by admins to get control of their own hardware back from Microsoft. Putting MS back in charge of the lowest layer hypervisor seems like it could sort of defeat the purpose. Or maybe they'll play nice this time?


> But the actual effective security was squandered for the sake of convenience and compatibility. OSes didn't really want to share the hardware in any meaningful way.

What do you mean?


>OSes didn't really want to share the hardware in any meaningful way.

Interestingly, I view the current state of the world as too much sharing -- VMs are just super process isolation =D


Computers are designed to do more than one thing, but traditionally many servers were purchased per-role. Mission critical apps would only run on one version of Windows, or apps might not play nice with others or with OS upgrades.

It turns out that one of the apps people really need to run multiple instances of is Windows itself. This is largely Microsoft's fault for bundling every app including the kitchen sink in the OS platform itself. As a condition of using their clean little high-performance kernel, you had to accept a web browser and home-user-friendly userspace.

Little surprise that people are kicking the whole package off of Ring-0 and substituting something like VMware on their five-figure server hardware.

It's that super-isolation that actually allows multiple apps/roles/data categories to finally share the same hardware.


> but traditionally many servers were purchased per-role.

This tradition started around '98, with Microsoft. Before that, when servers were Suns, IBMs, and Digitals, every server had lots of roles.

Somehow, Microsoft convinced the world that it's better to have one server per role (and pay them for some more licenses).


They'll just pay, not play nice.

We should be virtualizing the software, not the machines. Oh wait, we already are: JVM, CLR, Python RT, good-old-fashioned processes etc...

Virtualization is just snake oil. I don't see a real use for it, TBH, and I work at a place that drinks the VMware Kool-Aid. All it does is cost money, use up resources, and excuse incompetent administrators from having to plan properly up front.


OS X Lion (10.7) in fact does implement something similar. http://arstechnica.com/apple/reviews/2011/07/mac-os-x-10-7.a...


Placing no specific requirements on how something is implemented is the very nature of Turing machines.


Would this type of virtualization prevent me from needing to have multiple copies of the OS for each VM? I'd love to have multiple projects completely separated, without having 10 Ubuntu server VMs. Maybe I'm just better off using virtualenv.


You probably want Linux containers (LXC). Or you can use KVM and get some memory sharing benefit by enabling KSM.

http://www.ibm.com/developerworks/linux/library/l-kernel-sha...
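
(If you go the KSM route, it's toggled through sysfs. A minimal sketch, assuming root and a kernel built with CONFIG_KSM; QEMU/KVM already marks guest memory as mergeable on its own:)

    # Start the ksmd scanner thread so identical pages across KVM guests
    # get deduplicated; requires root.
    with open("/sys/kernel/mm/ksm/run", "w") as f:
        f.write("1")

    # Read back how many kernel pages are currently being shared.
    with open("/sys/kernel/mm/ksm/pages_shared") as f:
        print("pages shared:", int(f.read()))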


Just use virtualenv. It's what virtualization should be!

Virtualization should not be at the hardware level - that's just misguided and another unnecessary product.
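
Something like this is all it takes (a sketch driving the virtualenv CLI from Python; the env path and package are illustrative):

    import subprocess
    from pathlib import Path

    env = Path("tps-report-env")              # one environment per project
    subprocess.run(["virtualenv", str(env)], check=True)

    # Packages installed with the env's own pip stay isolated to this
    # project (bin/ on Unix; Scripts\ on Windows).
    subprocess.run([str(env / "bin" / "pip"), "install", "requests"], check=True)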


To some extent I agree that virtualizing at the hardware level (particularly PC hardware!) is the wrong level. Ordinary Unix processes are another sort of virtualization, and one which is far more efficient.

However there are three facts that get in the way of doing the right thing:

(1) Does it run Windows?

(2) We've got Intel and AMD making PC virtualization acceptably fast nowadays, so it makes sense to just use that.

(3) Isolation between processes is not very strong [not sure about virtualenv], but isolation between machines is a great deal stronger, because people have a lot more experience protecting networked machines from each other.


>Ordinary Unix processes are another sort of virtualization, and one which is far more efficient.

... and backed by hardware!


1. You can virtualize fine on Windows. IIS does a wonderful job of virtualizing network services (as of 2008, that is). And Windows services are a form of secure process-level virtualization, are they not?

2. It's acceptable until you whack several kernels and OSes on a machine, all running lots of processes, at which point everything suffers. Cache locality goes AWOL, cache utilisation is shared, bus traffic goes up, so does latency, and performance suddenly plummets.

3. It depends on the environment. Virtualenv just provides a consistent Python software environment which can be isolated from everything else on the machine. As for other things: look at FreeBSD's jails - that's as far as it should go[1]. Linux has ulimit and decent security (sketch below the footnote). Windows has NT, which is actually damn good and provides very good process isolation with respect to memory, CPU, and IO no less.

If we needed virtualization, we wouldn't have processes.

[1] http://en.wikipedia.org/wiki/FreeBSD_jail
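
(The sketch mentioned above - per-process caps via the same rlimit machinery ulimit uses, no hypervisor involved. The limits and command are arbitrary, and this is Unix-only:)

    import resource
    import subprocess

    def run_limited(cmd, mem_bytes=256 * 1024 * 1024, cpu_seconds=10):
        def set_limits():
            # Applied in the child just before exec; enforced by the kernel.
            resource.setrlimit(resource.RLIMIT_AS, (mem_bytes, mem_bytes))
            resource.setrlimit(resource.RLIMIT_CPU, (cpu_seconds, cpu_seconds))
        return subprocess.run(cmd, preexec_fn=set_limits)

    run_limited(["python", "-c", "print('isolated, sort of')"])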


Not "does it run on Windows", but "does it run Windows". Seriously if you can't run full, real Windows on it, forget it, you'll never get anywhere in the market.

Anyway, you can't compare "virtualenv", which seems to be some sort of Python interpreter hack, with full virtualization. They are completely different things, with completely different use cases.


Qubes OS strives to accomplish something very similar to what you mention. Joanna Rutkowska is one of the project founders.

http://qubes-os.org/Home.html


Have you seen Qubes OS?


The problem with Windows has never been with the kernel. The NT kernel is (mainly) fast, secure, and modular. In a study of pros and cons, it will compare favorably with other kernels of today.

Windows' problem is, has always been, and promises to continue to be, purely in userland. The userland is bloated and without focus; it reeks of design by committee, is riddled with security issues stemming from the design of userspace and user-interaction requirements, and by and large simply lacks a unifying theme/purpose for the OS itself.


If the performance is acceptable, I'd love to do all of my daily Windows work from a VM... on a removable drive I can boot with no fuss on my desktop, laptop, or friends' computer.

If they can really isolate all the hardware this is a game changer for me. I'll just move my SSD seamlessly between various PCs at home and work. Epic win.

Don't dismiss this as a power-user feature just yet. Imagine going on vacation and taking one computer for your family, not 4. Get a new computer? Don't "migrate" with current solutions; move everything with full fidelity. Keep your own personal OS for use at internet cafes. Change the form factor of your computer at will.


My development machine is Windows7. I do web and systems development in Python, PHP, etc.

All of my development is done on Ubuntu and CentOS VMs running in VirtualBox.

My actual IDE is often Eclipse (or Eclipse-based), and I just make an SSH/SFTP connection to each VirtualBox VM; the coding is treated as a "remote project" in Eclipse.

When I retire this Sony Vaio, I'm really thinking I'll get a 15" MacBook. Just for variety. There was a time when I'd never have dreamed of that. I bill a high hourly rate; I cannot be fumbling around because I'm on an unfamiliar OS.

But I could switch to a Mac in 60 minutes. Download a JVM and VirtualBox for OSX, move over all my VM images, and I'm done. Back to work.

I chose Windows 7 because IMO it's the best OS available for me. I enjoy it. I never have reliability issues. I find I usually go 30, 40 days between restarts. It's nice I have this option: the chip and OS makers have really done a smashing job on virtualization technology over the last decade.


> I'll just move my SSD seamlessly between various PCs at home and work

Even better: you just plug your phone into the current workstation. Also, your phone has the ability to read all your documents.


I had a dual-boot setup like this on an external drive with Windows and Linux when I was in high school. It was pretty useful whenever I needed to use one of the school computers. Not seriously game-changing, mind you, but somewhat convenient (more pleasant and useful than portable apps).


The cynic in me says that this is just hype that'll go the way of Longhorn/WinFS by the time they ship.


They already have a lightweight core running, they've reduced resource usage for idle machines, and they've had virtualization in server versions for ages.

Hopefully it will stay, at least in the higher-end consumer packages. Definitely looking forward to Windows 8.


The cynic in me says: See XP Mode.


I wonder if more pervasive OS-level virtualization for Windows would allow them to "virtualize" some of the legacy APIs and provide a means for moving their APIs forward (cleanup, etc.) while still maintaining backwards compatibility.


This is explicitly mentioned in the article:

> In theory, by running all legacy applications virtualized you'd get rid of legacy components and security issues that plague Windows OS today

(There's no mention of the "but" that one senses coming when a sentence starts with "In theory"; perhaps they're just referring to the possibly infeasible effort it would take to implement this.)


That's what XP mode does out of the box. I run my really stinking old version of Quickbooks through it.


Don't they already do this partly? Especially when running old 16-bit programs.


Ah. Mayhaps. I thought they did that with some custom API translation, but I could indeed be wrong.


I really like the new look of explorer with the ribbon: http://www.itworld.com/sites/default/files/HyperV-11-600.png

Wish I could get that on Win7.


So my question is: why not throw away Windows, start from scratch, build a new state-of-the-art OS, and then use a hypervisor for backward compatibility?


Sounds battery-expensive.


What does this allow me to do that I couldn't before? (Serious question - this article is over my head).


It would be much better for Windows if it were based on a BSD kernel instead of using a newly developed kernel. Legacy apps could then be executed using virtualization. That way Microsoft would have a lot of capacity to improve their windowing system and desktop on top. This would also open Windows up for serious software development.


This is not a new kernel, and there is absolutely no way in which moving to a BSD kernel would solve anything at all.

The NT kernel in and of itself is very small, simple, and remarkably well designed. The Win32 subsystem is what is horrid, and that's 99.9% in userspace. There's no reason that they can't create a whole new subsystem alongside Win32 where they improve everything, but I don't see that happening.


> The NT kernel in and of itself is very small, simple, and remarkably well designed.

Is there current documentation on that? I remember NT3 had a very microkernel-ish design, but I have also read that a lot of its elegance was compromised since NT4.


As said in a sibling comment, Windows Internals is a must-read. However, you are correct -- NT4, 2000, and XP saw the addition of lots of stuff inside of ring0. However, most of that was independent of the actual NT kernel, and since Vista the trend has been reversing in a huge way. There's more in ring0 than there was back in the day, but a lot of stuff has been moved out, e.g. many drivers, even video drivers. The new (relatively speaking) usermode driver framework makes it trivial to write drivers that don't run in ring0, and the kernel now has fewer dependencies than ever.

NT has had some growing pains architecturally speaking, but it's been handled remarkably well. Probably the best thing to ever come out of MS, especially when you contrast it to the mess that is the Win32 subsystem.


The best documentation are the Windows Internals books by Russinovich/Microsoft Press. This book is the "Design & Implementation of BSD" for the Windows (NT-lineage) OS.

http://technet.microsoft.com/en-us/sysinternals/bb963901


A BSD kernel would make a lot of devs happy to switch back to Windows.


Why? The vast majority of developers never touch the kernel. If you believe a BSD kernel would get you better compatibility with other OSes, I suggest you look at the subsystem model in NT.

There's absolutely no reason for Windows to switch kernels.


ZFS support would be a huge improvement over NTFS.


Totally, totally agreed. I wish the patent situation weren't such that an interested party (e.g. me!) can't go and implement this. ZFS and DTrace (the ultimate reverse-engineering tool) on Windows would make me the happiest man on earth.


MinWin is a new kernel and the old NT kernel has nothing to support virtualization.


MinWin is NOT a new kernel. It is a subset of NT. Drop the binaries into IDA and look at it yourself, or dig around for Russinovich's talk on the subject of how they disconnected all the dependencies to build MinWin. Even Wikipedia disagrees with you: "MinWin is not, in and of itself a kernel, but rather a set of components that includes both the Windows NT Executive and several other components that Russinovich has described as "Cutler's NT"."

As for virtualization support, you're wrong on two counts: 1) with the creation of Hyper-V, a number of facilities were added to NT to support something akin to paravirtualization, and 2) MinWin has absolutely nothing here that NT doesn't have, being a subset.



Thanks politician, that is the link @daeken was talking about. I have to say that I was wrong.


Some points regarding your comment:

1. Some of us have the preposterous notion that the NT kernel is superior to, and more modern than, the BSD kernel.

2. The NT kernel architecture is not new by any means. It's mature and robust.

3. What's wrong with their windowing system? The DWM is pretty speedy, efficient, and stable.

4. "This would also open Windows for serious software development"?! It's currently the number one consumer (some, including me, might argue iOS actually is), and the number one business and server OS. How much farther ahead of the pack does it need to be before you'd consider it "serious"?



