>The most amazing aspect of all this is that the core of Windows, its kernel, remains virtually unchanged on all these architectures and SKUs.
Is that really such an exciting concept? Maybe I have a poor understanding, but I've been under the impression that nearly every competing kernel (Linux, Darwin, *BSD) is exactly like that. The only slight departure I can imagine is Solaris and its SPARC architecture, where SPARC had certain features designed specifically to tie in with the kernel, but I'm sure it was still compiled for foreign architectures from the same codebase.
Well, it is an exciting concept, but it isn't, as you point out, unique.
It was fairly clear in the 80's and 90's that there were two 'streams' of systems programmers: those who started at mainframes and mini-computers and were making things work on smaller and smaller versions of those machines, and those who started at the ROM-monitor or DOS level and made things work on larger and larger versions of those machines.
It was stark how differently the two 'types' of people approached operating system problems.
Windows "crossed over" from being more of a monitor-type interface to being an executive interface with Windows NT, at the hands of a former VAX software architect (who was firmly on the mainframe-getting-smaller path), and that crossover happened for the masses when Microsoft merged the NT kernel with the Windows "old tech" kernel in Windows XP, as I recall. By the time Windows 7 came around it was pretty much unified for life, and it has evolved since then.
If all you knew were Microsoft OSes, then yes it is an amazing journey. If however you had already been experiencing systems like UNIX that ran on everything from mainframes to PDP-11/70's you would not have been quite so surprised.
That's the core irony of the PC revolution. PCs started out with the explicit goal of being something different than mainframes and minis. But when people started networking them together, the only way to keep them viable was to essentially turn them into mainframes and minis, just in different shaped boxes. So we've gotten to live through 20 years of PC vendors reinventing things that big-iron users had access to long ago.
(The original idea behind) PCs continued to exist long after networking, in the form of video game consoles. A Gameboy Advance isn't all that different from an Apple II or an IBM-compatible PC running DOS: both focus on running one executable and giving it complete control over banging bits toward "rich" I/O devices, where instead of abstractions over ugly low-level interfaces, the peripherals have been designed with nice ABIs that are their canonical APIs (and, until the ARM era, the CPUs themselves were also designed such that their ISA, as written in macro-assembler, was the canonical "high-level" language.)
Of course, eventually game consoles became networked too...
I think it was Casey Muratori who made the argument that we could return to having nice ABIs that are canonical APIs, and still have more than one program running via hardware virtualization features. Then any OS would just be like a hypervisor managing all of the OS-less VMs that, from their perspective, are interacting directly with hardware. It was an interesting idea.
Heck, you can get away with that today, by just creating a virtualization platform like Xen, but with much more carefully-thought-out exposed virtio hypercalls; and then writing unikernels against that interface.
Though, from another perspective, this model is exactly what the "transitional" era of game consoles did. The Nintendo Wii U, for example, normally [i.e. during a game] has a unikernel running across two "application processor" cores, that sees those as the only cores and has direct ownership of the peripherals; but then the OS [running on a third, separate, low-power core that the game VM has no knowledge of] can, at any time, force the game VM into the background with an interrupt, whereupon it must relinquish one of its two cores, some memory, and not touch the peripherals at all until the OS says it's okay; and then the OS can use the borrowed high-power application core, memory, and peripheral-hardware ownership to temporarily foreground an OS modal GUI, like a game manual or a web browser or an instant message.
Sadly, this was all abstracted, for the game developer, behind an HLL [that is, C] SDK, which is a "requirement" in the modern era, given that devs are going to be porting their games to/from PCs.
IIRC he mentions that he actually presented the same content in person to staff at Intel, and discusses how it's very hard to get any buy-in even if the engineers consider the approach to make sense.
One interesting part was that each piece of hardware had to become 10x to 100x cheaper before it was viable on the PC platform: FPU, MMU, hardware virtualization, vector processors (aka GPUs), etc. This was possible for quite some time thanks to Moore's law.
At the time this switchover was happening, I remember being pretty excited about things like protected memory. But in retrospect, I think it’s a shame there hasn’t really been a continuation of the “monitor” type interface. I could imagine them winning at things like minimising latency when responding to user input.
> If however you had already been experiencing systems like UNIX that ran on everything from mainframes to PDP-11/70's you would not have been quite so surprised.
Back then there wasn't one UNIX; rather, everyone had their own flavour of it for their kind of machine. I don't think source compatibility became mainstream until the late 80s to 90s; before that it was all ports.
Maybe something happened with xp too, but "windows 2000" was the departure to "NT kernel" land for desktops/home users AFAIK. (I guess xp was really 2000 with windows 95 heritage bling added).
Maybe they did try to force Windows Millennium (ME) on poor saps in parallel to W2K - and didn't quit that racket until XP?
The fact that you did is completely beside the point. I used 2k and NT at home too. Doesn't matter. It was never sold as or intended to be a consumer/home OS.
But I think 2000 introduced the "Pro" licence/label, meaning it was aimed at business users. I can't recall there being a lot of desktop PCs or laptops with NT OEM licenses aimed at "business" users.
So it's probably correct that xp cut over and abandoned the win95 codebase - but I also think 2000 marked the start of that effort.
I know of non-technical users using PCs that came with Win2k at the time, so there was some level of Win2k usage in machines intended for consumers, though maybe this was after ME acquired a bad reputation.
The Windows NT kernel always was Cutler's neat and tidy core with ugly parts bolted on because they somewhat made sense at the time. These parts are integrated pretty deeply, so it's quite hard to use this kernel in a system that's not Windows with a desktop.
So although it's a race of Microsoft against Microsoft, it's still somewhat of an achievement that they ended up with this result.
The evolution of DOS and Windows over the last 30 years makes me wonder if Windows is really even Windows.
It started out as a Windowing environment that ran on top of DOS but with Windows 95 it switched to a Windowing OS that also ran DOS. Later all of that was replaced with an entirely different OS (NT/XP) that basically supports the legacy OS in emulation.
You mean this one? [1] The shells in 3.0/3.1 and NT 3.1/3.5 were identical. Except the latter wouldn't hard crash the machine every time a program did something naughty, which might indeed be confusing to a consumer 3.1 user...
Although there are differences, many keyboard commands are the same, and some other stuff is also similar. Double-clicking the icon to the left of the title closes the window, and double-clicking the title maximizes the window, in both versions of Windows. ALT+F4 still closes the window, too. DOS commands are mostly the same (Windows NT changes this somewhat, but it is still close enough that you may be able to use it without too much difficulty). All of this despite them moving all of the menus around everywhere to make that stuff confusing.
Solaris doesn't scale down to cell phones, and Darwin doesn't scale up to 896-core servers. Linux does scale like that, but Linux is pretty unusual in that regard too.
That is actually a really good point. At the end of the day there are many, many kernels out there and only Linux, BSD, and NT scale from all the way at the bottom to all the way at the top. I actually even recall my scepticism when MS announced Windows for IoT platforms (not that this scepticism has abated, though).
If you ever bothered to watch Sphere talks, the reason has nothing to do with scale, but rather with OEMs, licensing, and access to source code for customization.
Perhaps you had 16 megabytes of memory? It's a bit tough going but my research seems to show that in-period, people commonly ran Pentium 75 systems with anywhere between 8 and 32 megs of RAM.
I don't remember any issues. WP7 backward compatibility was very good on WP8; WP7 didn't support any native code, it was all .NET, so the kernel change didn't matter.
Solaris hasn't been scaled down to cellphones, but remember that today's cellphone is yesterday's high performance workstation. A Raspberry Pi outperforms a SPARCStation20: http://eschatologist.net/blog/?p=266
Windows also used to scale down quite well; I'd love to know how much really was common between NT and CE.
I'm skeptical. In the 90's, Solaris ran on boxes with 16 megs of RAM, just like NeXTSTEP. It could easily scale down to run on today's phones... if there was a reason.
Are there servers with that many cores? I only know about Xeon Phis and 8-socket Xeon servers that go up to around 160 cores / 360 threads/logical cores.
Skylake-SP will do glueless UPI to 224 cores (8x28). HP SuperDome Flex scales up to 32 sockets (896 processors) using a custom interconnect based on SGI’s NUMAlink.
There are still a lot of people out there who don’t understand that there are computing environments outside of Microsoft Windows. Not as many as there were in the 1990’s, but still quite a few.
This seems like it's essentially confessing to something we already knew. Back at the turn of the century Microsoft liked to claim that Windows Server and the ordinary client NT were different in some important way beyond just the branding and price. People would even try to argue that Linux couldn't be a "real" desktop OS because it lacked some kind of secret sauce not found in a server OS... Hackers eventually found out how to tell a client NT kernel to behave exactly like Server, because of course they are the same kernel, duh.
Microsoft's reaction was to stop selling the two next to each other. There is no Windows 10 Server and Windows Server 2019 doesn't have a client version. So the exact same thing continues, but now there IS no equivalent product to prove the point that they're identical bar branding and price.
I've worked (outside MS) for 20+ years with NT, beginning with the 3.1 beta. It is news to me that anyone believed the kernels were different. I can't imagine why anybody in the least bit clueful would believe that. Furthermore I've never heard anyone suggest or argue that they were different.
I believe the main point of contention was a 10-client limit to the services of NT Workstation. MS argued that Workstation was not able to handle more due to fundamental differences, but those with a clue suspected it was merely a registry setting or two.
Not to mention it could handle that many web clients (at least) without breaking a sweat, but licensing prohibited it in theory.
Related, there's been a switch in the System control panel to prioritize services/interactive use, though it has been papered over by two layers of cruft in recent versions so is hard to find.
>MS argued that Workstation was not able to handle more due to fundamental differences
I've never heard that argument made. Memory is hazy but I don't believe the client connection limit was present in older versions of NTW.
Back then MS was chasing market leader Novell which had a business model dependent on charging for each client connection to the Netware server. MS attacked their dominant position by adopting a CAL model where the charge was for the client access license, conveniently bundled with your copy of Windows. So you could buy a $500 (or whatever it cost back then) copy of NTAS and use it to serve your 1000 employees with Windows desktops. Meanwhile Novell would want to charge you $$$ for a Netware server to handle that same user population. Hence Novell went out of business.
The connection limit in NTW was probably added to stop people going around even the very low $500 cost of NTAS.
> The connection limit in NTW was probably added to stop people going around even the very low $500 cost of NTAS.
Yes, you could install third party servers on it and get the same performance for a fraction of the cost. That's when we knew MS' differences argument was BS.
Yes - NTW was tuned for interactive desktop users and NTS for server type workloads. But everyone knew these were just tuning sets for the same kernel.
> It is news to me that anyone believed the kernels were different.
Indeed. At boot time, the version of the kernel was displayed on the blue screen above the progress dots, and it was the same on Workstation and Server. If I recall correctly, the important distinction was between the uni-processor and SMP kernels.
Microsoft kinda gave up on this ruse back in the Longhorn and Vista SP1 days.
Longhorn was a fork of the Server 2003 codebase, and Vista SP1 was a backport/new fork of changes made to Server 2008. (Server 2008 and Vista SP1 were both released Feb 4, 2008.) They were absolutely released in lockstep. Vista SP2 was released April 28, 2009 and Server 2008 R2 was released Oct 22, 2009. At that point the consumer product became the testing ground for the server kernel.
It's easier to think of Vista RTM as a giant public alpha, and Vista SP2 as an insider preview for Server 2008 R2 / Windows 7.
Windows is one of the most versatile and flexible operating systems out there, running on a variety of machine architectures and available in multiple SKUs.
Linux, BSD, Darwin... You support a dozen architectures? How adorable. condescendingly pats head
And they only support four (x86, amd64, ARM, ARM64)! This is quite a cute thing to mention after saying you're "one of the most versatile and flexible operating systems". Ignoring the obviously qualifying language of "one of the most", I would argue it is still false -- virtually every mainstream operating system I can think of supports more architectures than this. Here is my quickly-put-together list:
What an achievement. Now -- don't get me wrong, Windows has done a lot of things, but they have set the parameters of this comparison and they don't fare well in it (even if you get past the qualifying language over a comparison that they decided to make).
To be fair, NT shipped on x86, PPC, Alpha and MIPS. There was a never-released version for SPARC. It was originally developed on the i860. Wasn't Windows later available on Itanium too?
MS cut down on their list of supported CPUs for commercial, not technical reasons.
Linux also supported more architectures in the past that I haven't included in my count (same for the BSDs too). If you updated the count with all architectures ever supported by the platform then I have doubts that it would change the ordering significantly.
My very quick googling said that Darwin doesn't currently support 32-bit ARM (or 32-bit x86 -- but this is a bit hard to separate since the architectures are quite similar). But I could very well be incorrect.
It seems to me like Oracle has figured out a sort of secret formula for making a lot of money through shrewd business practices while managing to avoid the public hatred Microsoft and IBM attracted when they did likewise.
Perhaps because they market to business leaders who could stand to lose a lot if Oracle unleashed the legal hounds?
> Perhaps because they market to business leaders who could stand to lose a lot if Oracle unleashed the legal hounds?
I think so. But that doesn't seem like it's going to remain a viable business model for too long. Those who are forced by their bosses to work with Oracle today, are going to be the bosses of tomorrow. I personally know a company where anything oracle is explicitly blacklisted because of this reason.
Eventually, Oracle is going to run out of their oblivious-executive-type customers. (inc. govs)
Oracle isn’t just good at convincing oblivious executive types: once they are at an organization, they start spreading “oracleness” everywhere. When a huge portion of the org’s DBAs, developers, and IT managers get their training and expertise from Oracle, they become advocates too. Oracle gets REALLY hard to get out when an org’s DBAs refuse to provide access to non-Oracle tooling (yes, this happens.)
Also, I probably throw enough hate at Oracle to make up for 100 average consumers
> Those who are forced by their bosses to work with Oracle today, are going to be the bosses of tomorrow.
Yes, but if it’s still the same company, they’re likely subject to a lot of vendor lock-in. It can be very hard, technically speaking, to move away from a large Oracle investment. Lots of code to rewrite.
Oregon won a lawsuit against Oracle a couple of years ago after the disaster of trying to implement their state ACA marketplace. The damages were paid in free Oracle licenses. This seemed so wrong to me.
>Eventually, Oracle is going to run out of their oblivious-executive-type customers
Perhaps some are oblivious, but not in general. At some point, when one gets busy enough, convenience and reliability trump bargains.
I know I can replace the backlight on my macbook, but I'm still going to the Apple Store. I know I can save 15¢/gallon at a gas station a mile away, but is it really worth the time and energy to do it at that price? That's why I buy Oracle and just about everything else.
Modern cloud hosting usually provides all of the above you need. Is a PostgreSQL DB in Azure really any less reliable or convenient than an Oracle DB? Perhaps if you’re already heavily invested in and comfortable with Oracle, but I don’t think so otherwise.
Oracle still runs circles around postgres for high availability, backups and management tools.
However most big companies are slowly migrating all their small Oracle instances to pg.
I happen to work in one of those companies that's had enough of Oracle's outrageous pricing schemes and audit threats.
Aka "how deep are your pockets?". The flip-side to this comes at quarter-end, when the question is turned around on your sales rep - "how far are you from quota?". Some amazing deals when they just need a few deals to make their number...
~$59.2M USD in the first year (includes maintenance), assuming:
- Xeon processor core factor
- OEE without any options (no RAC, no partitioning, etc.)
- List price
With 22% yearly maintenance, about $10.8M in the second year and onward.
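For anyone curious about the structure of such an estimate, here's a quick sketch of how Oracle-style per-core licensing math works. The core factor and per-license list price below are illustrative assumptions (Oracle's actual price list varies by edition and year), so the totals deliberately won't match the figures above:

```python
import math

def license_cost(cores, core_factor, list_price_per_license):
    """Processor licenses required = physical cores x core factor, rounded up."""
    licenses = math.ceil(cores * core_factor)
    return licenses * list_price_per_license

CORES = 896            # the 32-socket SuperDome Flex mentioned upthread
CORE_FACTOR = 0.5      # assumed factor for Xeon (see Oracle's core factor table)
LIST_PRICE = 47_500    # assumed USD list price per processor license
SUPPORT_RATE = 0.22    # yearly maintenance, as quoted above

license_fee = license_cost(CORES, CORE_FACTOR, LIST_PRICE)
support_fee = SUPPORT_RATE * license_fee

print(f"licenses needed: {math.ceil(CORES * CORE_FACTOR)}")
print(f"first year:      ${license_fee + support_fee:,.0f}")
print(f"ongoing support: ${support_fee:,.0f}/year")
```

The key structural points are that you pay per core (discounted by the core factor), and that the ~22% annual support fee recurs forever on top of the one-time license fee.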
The Windows kernel is an insane engineering feat by itself, so getting to see how the inner components mesh in articles like these always fascinates me.
Is it possible to "backport" the Kernel (and other parts, such as the WSL, with it) to Windows 7?
I'd like to have a modern Windows, but with the "classic" UI and no forced ads/bloatware/crapware. And yeah I am aware of W10 LTSB but it's not legally available for 99% of users.
edit: Looks like the post attracted a bit of a flamewar. Originally, I just wanted to know if this was technically feasible or not.
I've not looked into it in detail (I used to be a "hacker" who spent lots of time disassembling, patching, and otherwise modding Windows in the 98/2K/XP era) but I read that the telemetry stuff is deep in the kernel too, so you probably don't want a "stock" Win10 kernel. Vista and above have various DRM-ish obstacles that get in the way, and the near-continuous updating of the newer versions also frustrates any efforts. Probably against the EULA too.
That said, there exists leaked source of an XP/2k3 kernel and people have managed to compile and use it with an XP userland. Anything is possible if you have the time, skills, and the complete lack of concern for EULAs. ;-)
MSFT themselves open-sourcing large parts of Windows in the future, including the kernel, would not be surprising either.
Another, possibly less legally questionable approach, would be to try getting the ReactOS/WINE userland to run on the Win10 kernel.
Of course it's possible. The Windows UI is just a set of userland programs. Back in the Windows XP days, there were plenty of Windows shells which completely changed the way the OS looked. I'm not sure if those programs are still in development, but I don't see any particular reason why it would be impossible for a determined developer to replicate the Windows 7 L&F.
I had a startup attempting to build a Windows shell. It’s no longer possible. You could do it with 7 with some very clever work. Geohot came on around the time Win8 was in beta and figured out some extremely clever ways to make it work on that. But by 8.1 a few key elements to make a decent shell became signed functions and you’d have to perpetually hack the UWA sandbox which is not feasible.
That front page does have a link explaining that one of the reasons the author stopped working on it is that they break the ways it works too often, by removing functionality it needs.
Barnacules just told me that he worked on the Windows 8.1 start menu update just before he left in ~2014 before it was cancelled in favor of Windows 10. Anyone remember this, BTW: https://news.ycombinator.com/item?id=7774476
When XP came out, people were moaning that it looks like shit and how they wanted the XP kernel, but the 2000 look.
When Vista came out, same thing again, all that Aero looks like shit, disable it so that it looks like XP classic.
Now 10 comes out, same story, looks like shit, we want it to look like 7.
You should think deeper to see what drives this desire of yours. I bet that when Vista came out with the Aero look, you felt like it's shit forced on you, and you wanted the XP classic look. Now you want the Aero look that you once hated. Ironic, isn't it?
It appears that MS made a major design effort when they designed the "Windows 95" look and feel. They produced a number of coherent concepts and tried to stick to them everywhere. They kept the amount of fluff at bay.
It also appears that since then, they've never made an equivalent re-design effort to produce a coherent design and stick to it. XP was mostly an incremental change + re-skinning. Win 10 looks haphazard, unfortunately, and, most of all, the new UI is only applied sparsely, so you can't expect it to be equally usable everywhere.
A Linux desktop circa 2000 could have a hodge-podge of GUI programs based on GTK, Motif, and Athena widgets. But I would not have expected the same in 2018 from Microsoft's flagship desktop OS.
> It appears that MS made a major design effort when they designed the "Windows 95" look and feel. They produced a number of coherent concepts and tried to stick to them everywhere. They kept the amount of fluff at bay.
The key thing is it wasn't just a design effort -- they did a huge amount of human design research, and they published at least some amount of documentation about that. I haven't really seen anything similar since, but I would love to be pleasantly surprised with something to read.
Nothing since feels like it had that much research behind it, and clearly nobody was committed enough to fix up nearly everything. Yes, Windows 95 left a couple of old-style dialogs in the depths, but it was pretty complete.
Every desktop environment is a hodgepodge of inconsistent UI design at this point. Windows has never been completely consistent; IIRC even Windows 95 had some settings dialogs that came along from Windows 3.1. Linux has GTK, Qt, and several toolkits aping them with various levels of success. macOS comes closest to uniformity, until you start paying attention to title bars or run popular 3rd-party applications on it.
On Linux (and, generally, under X), you at least have copycat skinning engines, so that you can have your Qt programs mostly copy the look of the current GTK theme, and vice versa.
Under Windows, no such thing existed and exists, unfortunately. One thing that I had hoped when installing Win 95 back in the day is that old binaries that use standard window-drawing functions will automatically upgrade to the new look and feel. Not all of them did, alas.
Agreed. Windows 95 was really well thought out and mostly consistent. I think MS went off the rails when Office started introducing its own UI concepts like the ribbon that were not available anywhere else. Then Win 8 had that weird split between Metro UI and regular Windows.
I still find it hard to believe that they couldn't produce a consistent control panel for Win 10. I never know where to find certain settings. They could be hidden in the new UI or the old. Who knows?
I also don't like the look of the UWP apps. Their rendering doesn't look very polished to me.
UWP programs seem so damn unresponsive to me. I think part of it is excessive animations. Compare the right-click dropdown in Edge to that in Firefox, for example. Maybe if there were some setting where I could set the speed of those animations it wouldn't be so bad. But, for now, these UWP apps suffer from a very sluggish feeling.
I don't think there is much you can do in the new panel. I always pop back into the classic panel. But then there are other things that only the new panel can do :)
> You should think deeper to see what drives this desire of yours.
Decades of using it (21 years by now), plus a total dislike of any kind of advertising IN A PRODUCT I PAID FOR. Additionally, people had the CHOICE to keep Windows looking, basically, like Win95 - until Windows 8/10 came along. With Win10 that's not possible any more. It treats me like I'm either a child or using a touch device, and hell no. I want a clear optical distinction between a mobile and a desktop/laptop OS, if only because the operating mechanisms (mouse vs touchscreen) are so different.
Who in their right mind thought people would touch on their big-time office PC screens and forced the UI to this absolutely crappy (for desktop!) tile stuff?
Oh and don't get me started on the forced telemetry crap and the forced / silent updates and the reverts that these updates "accidentally" do once you managed to render your PC free of this crap. Microsoft has completely lost my trust by now, I switched happily to an Apple environment. At least it doesn't force Candy Crush upon me or injects ads in the Start Menu and Apple vendors (well, to be fair, there is only Apple itself) don't preload my machine with crap I don't need (a dozen updaters punching security holes in the system, "free" virus scanners, spyware)!
From Windows 8 it is no longer possible to disable the Desktop Window Manager, presumably they made too many UI changes to be able to continue supporting it, except for the Server Core mode in the Server releases. Trouble with Core is that it's basically 3.1 with no program manager or any capability of running additional Windows applications.
> From Windows 8 it is no longer possible to disable the Desktop Window Manager, presumably they made too many UI changes to be able to continue supporting it
Hmm. Sounds sensible. But... shouldn't the kernel be backwards compatible? Like, take a Server Core and install the old Win7 user-land on it?
Is it worth it to rape your customers when you could just give them a clean, simple version of the product, which would be much more popular?
It's the same reason WhatsApp went free, and why many ad-funded sites refuse to let the user buy a no-ads experience. Simply put, the demographic with cash to spare is the one that advertisers crave the most. When that demographic stops being tracked and doesn't see any ads anymore, the remaining pool (which does not have spare cash) drastically shrinks in worth.
Win2K to this day remains my favorite Windows release. It was just so clean and practical. It had its frustrations (setting up a Win2K/WinXP/Win7 system from scratch was always a task and a half, with all the driver installs and updates involved) but once you got it set up it was great.
XP/7 aren’t bad but they definitely felt more gimmicky.
GUI-wise I agree completely. I remember how both Windows 3.0 and Windows 95 were a huge leap forward in usability. Win2k combined this with the NT kernel. The main piece missing was decent DirectX, which came in XP.
But there is another important change after Win2k: the attitude Microsoft has toward its users. Windows XP was the first Windows where actively user-hostile technologies like DRM were introduced.
I’m still carrying a Win2K Pro image around in VMware. It runs all the windows software I need (and this is the right way to choose an operating system IMO) and if it gets screwed up I can always blow it away and revert to the snapshot that’s got all the service packs installed.
What software would that be, if you don’t mind my asking? I was under the impression that Win2K support had been dropped from most software several years ago.
Of all the Windows versions, Win2k still has a special place in my head. Way better/stabler than Win95/Win98. For a long time it was my go-to OS; I finally considered switching to XP SP2, but then Linux entered my world.
Windows 10's UI is objectively bad on a lot of points, like consistency, search from the Start menu, etc. At least up until Win8 you could switch to the classic theme to get something looking like 98's UI. Now you have no choice but to suffer the big-button dialogs made for tablets on any laptop/desktop without a touchscreen.
It also inherited a ton of bad habits from Windows 8, such as shiny, fancy-looking but overly simplistic configuration screens, with the original configuration dialogs still there underneath. This is indicative of a bad development methodology: devs/teams work on useless small projects (jazz up these dialogs so that grandma can use them) which can easily be executed and completed within a year or so (you know, a review cycle), while leaving all the old mess little changed underneath (because of course the new shit is horribly incomplete and doesn't actually fully replace the old controls). They have a corporate/dev culture which actively discourages long-term incremental improvement of core components and focuses instead on trivial window dressing that can be packaged up in a SMART-goals format so that people can tick off their commitments and get their bonuses, raises, and promotions.
When 7 came out people said "hallelujah you listened!" even though it wasn't perfect. When 8 came out people said "why did you do this to us?" When 10 came out people said "at least it's better than 8." Windows 7 was probably the high point of actually doing the work to make the OS usable and clean instead of bloated and annoying. Before that Windows 95, for all its flaws, was built on a ton of actual real world usability research. Unfortunately, MS has a long history of ignoring fundamental research and common sense and dictating "the way things will be" with the OS UI/UX. They think this is what people want since Apple does it and gets away with, but MS isn't Apple.
Also, the fact that people keep complaining doesn't mean that those people are wrong, it's possible for a company to continue fucking up year after year for decades. I could explain exactly why MS makes these mistakes (part of which is due fundamentally to bad corporate culture and a long legacy of bad incentives and performance review systems) but that's another conversation.
But Windows XP, Vista and 7 _could_ look just like Windows 2000. Starting with Windows 8, that is no longer an option.
I can't speak for anyone else, but I thought every version of Windows since 2000 looked like garbage by default. The difference was that you used to be able to do something about it.
I'm starting to see 8 nostalgia. Very few these days will defend the start screen, but some people are starting to miss the rest of the OS. Which is understandable. It was the last version of Windows without many of the things some consider objectionable about 10: no telemetry, no forced updates, no big feature updates which might break things, etc.
There are a lot of 10/7 fans around, thus they pine for the Vista/8 look.
First you complain loudly: "what is this shit?" Then when the new one comes, which is basically the same as the previous but a bit more polished, you say: "I love this, you should always skip one Windows version."
I dispute this. Vista focused heavily (on launch) on the sidebar and transparency (golly gee!). 7 and XP look very similar out-of-the-box.
Honestly, I mostly like 10. If they finished converting the settings menus to the new style, it'd feel a bit more complete, though that would also likely hide many of the settings useful to "enthusiasts". It would also help if you could control updates, so it doesn't use huge amounts of memory/storage at boot when the machine has been off for a week or two. Avoiding tracking would be nice too... but I don't know much about that.
Edit: my views of 10 are likely shaped by the fact that I use 1607 LTSB, which doesn't have a lot of the new Windows 10 crap (app store, etc.)
Microsoft sent me like $300 worth of paper plates, cups, and utensils, and gave me pizza vouchers and a bunch of free licenses because I signed up to host a Windows Vista launch party. No one came, and I have yet to ever install Vista on a machine either. The pizza was good though, and my roommates and I used the plates for weeks.
Actually they are the first to complain about the registry. The problem is that an app will make changes all over the registry, so it's hard to uninstall it cleanly or to control the state of the machine. They are considering exposing a virtual registry to each app and saving the diff the app applies, and things like that.
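The overlay-plus-diff idea is easy to picture as a copy-on-write layer. A toy sketch in Python (this is purely illustrative; it is not any real Microsoft API, and the key names are made up):

```python
class VirtualRegistry:
    """Toy per-app registry overlay: reads fall through to the shared
    system hive, writes are captured in a private diff."""

    _DELETED = object()  # tombstone marker for keys the app deleted

    def __init__(self, system_hive):
        self.system = system_hive   # shared hive, treated as read-only here
        self.diff = {}              # this app's private changes

    def get(self, key):
        if key in self.diff:
            if self.diff[key] is self._DELETED:
                raise KeyError(key)
            return self.diff[key]
        return self.system[key]

    def set(self, key, value):
        self.diff[key] = value      # never touches the shared hive

    def delete(self, key):
        self.diff[key] = self._DELETED

    def uninstall(self):
        self.diff.clear()           # a clean uninstall is just dropping the diff


hive = {r"HKLM\Software\Example\Version": "1.0"}
vreg = VirtualRegistry(hive)
vreg.set(r"HKCU\Software\Example\Theme", "dark")
print(vreg.get(r"HKLM\Software\Example\Version"))  # falls through: 1.0
vreg.uninstall()
print(hive)  # the shared hive was never modified
```

The point of the design is in `uninstall`: the app's entire footprint lives in one diff, so removing it cleanly is trivial.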
How about a file system? Because that is precisely what the registry is: a file system in a file. The keys are just paths to values, which can be text or binary.
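That "paths into a tree" view is simple enough to sketch in a few lines (toy Python, hypothetical key names, values-only with none of the real registry's types or ACLs):

```python
class RegistryTree:
    """Toy model of the 'file system in a file' view: keys are
    backslash-separated paths into nested dicts, leaves are values."""

    def __init__(self):
        self.root = {}

    def set_value(self, path, value):
        *subkeys, name = path.split("\\")
        node = self.root
        for k in subkeys:
            node = node.setdefault(k, {})   # like mkdir -p for keys
        node[name] = value                  # the "file" contents

    def get_value(self, path):
        node = self.root
        for k in path.split("\\"):
            node = node[k]
        return node


reg = RegistryTree()
reg.set_value(r"HKCU\Software\Example\FontSize", 12)
print(reg.get_value(r"HKCU\Software\Example\FontSize"))  # 12
```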
And people who suggest SQL or other monstrous abstractions as an implementation need to take a step back and apply some philosophical simplicity to their work. I honestly don't understand why people have this tendency to over-engineer simple problems. I suppose it helps during review time.
It has a few more value types than that [1]. While the registry is not exactly super elegant and suffers from issues, I think it's a pretty simple answer to a complex problem, especially one rooted in history and rife with backwards-compatibility concerns (Windows still supports the legacy format used in Windows 2000!).
People who sit back and say "replace it with text files/sqlite" don't really understand it or the problem it solves that well.
Using the actual file system brings in file-system concerns: how do you handle registry transactions or concurrent updates?
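Even the easy half of that question takes care with flat files. The standard trick for a single file is write-to-temp then atomic rename; a transaction spanning several files has no equally simple equivalent. A sketch (assuming `os.replace` semantics, which are atomic on both POSIX and Windows):

```python
import json
import os
import tempfile

def atomic_write_config(path, settings):
    """Replace one config file atomically: readers see either the old
    or the new contents, never a half-written file."""
    directory = os.path.dirname(os.path.abspath(path))
    fd, tmp = tempfile.mkstemp(dir=directory)
    try:
        with os.fdopen(fd, "w") as f:
            json.dump(settings, f)
            f.flush()
            os.fsync(f.fileno())       # make sure the bytes hit the disk
        os.replace(tmp, path)          # atomic rename over the old file
    except BaseException:
        os.unlink(tmp)
        raise

# This protects ONE file. Changing two settings in two different files
# "transactionally" -- all or nothing -- is exactly what this trick
# cannot give you, and what a registry-style service can.
```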
From Wikipedia: "The Windows Registry is a hierarchical database that stores low-level settings for the Microsoft Windows operating system and for applications that opt to use the registry."
Are we still talking about a configuration archive or a production database for an application? That's when you have to start asking yourself: "is this a sane way to do things?" Because the more you argue for it, the less attractive it becomes as a solution for something that stores configuration data.
We're talking about the configuration registry. The one that may require multiple configuration options to be changed in a transactional manner. The one that may have multiple threads or processes reading and writing to the keys (because who knows what stupid devs will do). The one that's a bit more advanced and granular than a config file in /etc/ because it needs to be.
Oh yeah, that reminds me, you'd also need to handle subscribing to registry update notifications in this flat-file 'who needs anything other than text files' fantasy.
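For comparison, here is roughly what change notification looks like when the store itself owns it, instead of every client polling files (toy Python, not the real `RegNotifyChangeKeyValue` API):

```python
from collections import defaultdict

class NotifyingStore:
    """Toy key-value store where the service, not each client,
    handles change notification."""

    def __init__(self):
        self.data = {}
        self.watchers = defaultdict(list)  # key -> list of callbacks

    def watch(self, key, callback):
        self.watchers[key].append(callback)

    def set(self, key, value):
        self.data[key] = value
        for cb in self.watchers[key]:
            cb(key, value)                 # push to subscribers, no polling


store = NotifyingStore()
events = []
store.watch("Theme", lambda k, v: events.append((k, v)))
store.set("Theme", "dark")
print(events)  # [('Theme', 'dark')]
```

With flat files, each listener has to reimplement this with its own file-watching loop and hope it gets the edge cases right.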
Again, just because you don't grok the problem space doesn't mean it's empty.
> Oh yeah, that reminds me, you'd also need to handle subscribing to registry update notifications in this flat-file 'who needs anything other than text files' fantasy.
> Again, just because you don't grok the problem space doesn't mean it's empty.
File-system change notification has numerous problems as an API, is fiddly, and requires every app that wants to listen to implement it correctly.
Look. Yes, you can hack together the registry using flat files, using FS notifications, using the insane transactional FS APIs, magically working out how to handle concurrent writes, etc., and have every process that wants to interact with it implement this correctly.
But why? In every sense it is less clean, more complex, and generally worse. Wasn't the idea to make it better and cleaner?
So now every program that interacts with the registry in that way needs to implement transactional IO operations correctly and sanely, and if they don't bad things will happen? A thing that is so complex Microsoft is considering deprecating it due to that alone?
You also didn't mention concurrent access, or think about random code writing completely invalid data into files, or consider how you would evolve the on-disk format.
That sounds awfully burdensome for a simple key-value API. Perhaps you could use some kind of service to shield clients from that complexity, so you don't have one dumb program messing the entire registry up for everyone else.
That would be cool. You could call it the registry service.
Random inconsistent locations; random inconsistent rules for inclusion and imports; random ill-documented ad-hoc formats; no standard interface to add/remove/set config values; no ability to put ACLs on individual config entries; basically no way to export and re-import or auto-merge partial config without at least screwing up whitespace and comments or risking duplicates; no ecosystem-wide single "source of truth" pattern.
I've never seen anyone using ACLs on registry keys.
I have, however, seen several config files for the same program with different permissions. Think .ssh/config or /etc/ssh/sshd_config.
For your partial config / single source of truth, please refer to Ansible.
I think that's a fallacy. Just because it works somewhere (and not that well, as every bit of software has its own damn arbitrary format that only it can parse) doesn't mean it works here.
It is conceptually a bit simpler, however that doesn't mean it is better.
After all, I've only been administering Linux/Unix/Windows boxes for 15 years, so how would I know!
Installing by untarring a package, versioning the config files, adding comments...
I'd rather have a bespoke format than deeply nested keys in various locations (HKCU/HKLM/HKCR and all that mess).
Having to hunt for leftover keys, or to use procmon to find which ones are accessed, is such a horrible experience.
I'm not sure why you are uninstalling software so often, nor why you are so anal about cleaning the entire registry.
But hey, it's a one click thing to deploy a registry change to a fleet of 10,000 AD boxes, one that can change the configuration of the 1Password app as well as MS Word, and have it done reliably.
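The deployment artifact for that kind of fleet-wide change is just a .reg file (importable with regedit or reg.exe, or distributed via Group Policy). A hypothetical example, with made-up key paths and values:

```reg
Windows Registry Editor Version 5.00

; Hypothetical example of the kind of change you can push fleet-wide.
; The key path and value names here are invented for illustration.
[HKEY_LOCAL_MACHINE\SOFTWARE\ExampleCorp\ExampleApp]
"AutoUpdate"=dword:00000001
"ServerUrl"="https://updates.example.corp"
```

Because every app's settings live in one namespace with one format, the same mechanism configures anything from 1Password to MS Word.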
Has never happened to me since perhaps the XP days.
Why are you uninstalling software so often through AD? Not that it absolves this issue, but it's not common to have huge software churn like this. And hey, isolate the issue and push a registry change to every single machine in the network, instantly and consistently.
I'd prefer going in the direction where most of our filesystems get replaced with databases. I imagine that if we didn't have to serialize stuff all the time to send it over the network, we would have been there ages ago.
Yeah, the storage format isn't the problem (MS has its own tech, like LocalDB or some variant of SQL Server, that could be used).
The problem has always been that the data is shared system-wide, even when it doesn't need to be. That, and there is no easy way to clean up the registry if uninstalled programs did not do so properly.
> The problem has always been that the data is shared system wide, even if it doesn't need to be.
If you do have a use case where that matters, you can create a new user and apply the permissions you want at the level of a single key. Available since Windows 8 (or maybe before?).
I agree, that's more of the issue. But how would you solve it? What is 'uninstalling'? Via what mechanism? Should 'rm -rf program/' clean up the registry? How?
The uninstall procedure should clean the registry. "Uninstalling" is whatever the OS standardizes as the correct procedure. Windows has one already, but it's reminiscent of the way people did packages back when Linux software was distributed as tgz archives.
Android is way more tightly controlled; uninstalling an application's scoped data is just removing a directory.
In Windows it's up to the app to clean up. It's not as great as on *nix, but nobody is saying it is. Apt and friends just do the cleanup for the app, in a standardized way. If they miss something, it's still there.
Most applications today store their preferences in an AppData directory that can be per user, per machine, or per network. The registry is used largely for OS settings.
> What would replace it? Thousands of individual config files? That's a mess to manage.
That's not necessarily true.
In the 90's I worked at a large telco where we'd gotten good user roaming functionality along with package management using third party tools, for about 1600 applications, on Windows 3.
Uninstalling a package demonstrably left no remnants behind.
The migration to Windows 95 was troublesome, partly because it was much harder for roaming users, but mostly because the registry was much more difficult to manage. Office applications were notably impolite, with thousands of registry entries, many of which lingered after an uninstall, and there was no way to sanitise the registry other than reinstalling the OS.
If you're using a computer without a package management system I could see how you could be convinced config files are harder to manage (by hand) than an opaque blob that poorly hides its ugliness, but you'd have bigger problems.
"This is a picture of Windows taskmgr running on a pre-release Windows DataCenter class machine with 896 cores supporting 1792 logical processors and 2TB of RAM!"
Reminds me of an UltraSPARC T5 server with 3072 processors I used to work on... from five years ago. Old tech, eh?
Do you have any basis for this claim? The Windows kernel is very efficient and well designed and doesn't have any more overhead than other kernels out there as far as I'm aware. This is also the kind of machine that won't be running many extraneous processes or features so I'm not sure what would be "bogging down" the system.
I want to upvote this because it made me laugh... and then I realized that it's shit like this that made me flee reddit (98% of the comments being snarky jokes).
> For a community that prides itself on the hacker mentality and the curiosity that comes with it, you lot certainly don't show it. Idiots.
This is my first post in the thread. The blog is cool and all. Though, all I can say is that it’s hard to get excited about software whose source you can’t look at if you’re curious.
Forgot to add the line count of backdoors and stupid absurd shit like processing fonts in the kernel rather than userland, so people can hack your 896-core machine and mine bitcoin.
Your claim of backdoors is nonsense of course, but the kernel mode font handling was indeed a security risk. However on recent releases of Windows, they changed font parsing to run in a user mode sandbox: see the "Mitigating font exploits with AppContainer" section of https://cloudblogs.microsoft.com/microsoftsecure/2017/01/13/...
I wish the Windows kernel were optimised for a given architecture. It is one of the examples where "one size fits all" doesn't work. Even if Intel or AMD bring out another groundbreaking generation of CPUs, giving a mighty 5% increase in performance, it is all for nothing, because it gets eaten up by Windows' poorly written guts. I don't know what kind of monster machine you need nowadays to watch a YouTube video without the audio breaking up, or without having to dedicate one core just to handling the mouse.