A dream of an ultimate OS (1995) (okmij.org)
246 points by animalcule on Aug 21, 2019 | 175 comments



Unrelated to the article, but the author (Oleg) is one of the stars of the typed functional programming world. For example, one of his research projects (BER MetaOCaml) is about generating typesafe OCaml at runtime (mostly to improve performance). I strongly recommend checking it out (http://okmij.org/ftp/ML/MetaOCaml.html) if you are interested in the field.


Indeed. For example, the Sydney Paper Club (for discussing CS papers) was sometimes jokingly called the Oleg fan club.


Yeah, it's always interesting to see what makes it onto the front page of HN from the blogs I read regularly. He's the sort of person I'd love to be around so I could ask him questions, but if I were with him in a room and could ask him 10 questions, none of them would be about operating systems.


All I want is an OS that has as few weaknesses as possible and all the best things about the current major operating systems; examples: Flagship gaming support from Windows, Stability and flexibility from Linux, and Design principles from MacOS.

Each of those three operating systems has some weaknesses that make me jump between them quite often. Things like: Forced updates in Windows 10, certain lack of hardware, application, and game support in Linux, and MacOS "requiring" Apple hardware (I know about hackintoshes, built some in my day, but they always felt like a house of cards).

There are plenty of other strengths and weaknesses to each of them; I just feel as if I'm always giving up something awesome and gaining something painful when switching. One would think by now we would have more competitors (and I consider the various distributions/desktop environments of Linux a single competitor to Windows and MacOS) trying out new kinds of user experience, and/or trying to bring together what makes each of them great in a single place. Of course, so much is driven by ROI and developer buy-in, so I understand the reality and complexity.

The above is my dream :)


> Things like: Forced updates in Windows 10, certain lack of hardware, application, and game support in Linux, and MacOS "requiring" Apple hardware

I'm pretty sure you've discovered a simplex (a thing like a CAP triangle) here! These all trade off against one another:

1. highly-integrated first-class hardware support

2. support for all third-party components under the sun

3. low security-vulnerability surface (and thus an infrequent security-critical update cadence)

Windows picks 1 and 2 at the expense of 3. Linux picks 2 and 3 at the expense of 1. Apple picks 1 and 3 at the expense of 2. (Hackintoshes trade some of 1 for some of 2, ending up fully satisfying neither.)

I'm pretty sure you can't have 100% of all three, no matter your resources. They're trade-offs, not in time/effort, but in design-space.


Is linux really 2 and 3 at the expense of 1? I've had several issues where hardware doesn't work on windows, but does work on linux.


I think "highly-integrated" and "first class" are the key terms here.

Linux supports much more consumer hardware out of the box than it did a decade ago, but if you ever do encounter a driver issue, you'll need to know your way around the terminal and be able to google-fu your way around some forums. The people on this site might be able to get through it without too much pain, but we're in the minority.

Windows definitely supports more hardware out of the box, and even when something doesn't work, it gives you some nice GUIs to manage the drivers. At the end of the day writing drivers sucks, and it's tough to provide support for the virtually endless supply of components and periphs without paying people to write those drivers.


I'd argue that Linux is more so 1 and 2 ("highly-integrated first-class hardware support" and "support for all third-party components under the sun").

You rarely have to install any drivers. The kernel comes with drivers for almost every device under the sun.

On the other hand, there are a lot of CVEs for Linux, and while they do get fixed quickly, the security vulnerability surface area is quite large. Especially considering the sheer size of the kitchen-sink kernel that it is.


Re: #1, when I said “highly-integrated”, I was referring to base-OS features that depend tightly on things that (maybe third-party) drivers do. I.e., having more kernel or userland code, on the OS side of the fence, that calls into untrusted driver code through specialized (rather than generic “device-class” API) interfaces. Things like kernel serial-debugger-over-USB support, or how Windows elevation prompts interact with screen-reader software. Another example, not from desktop OSes, is how Android/iOS treat being plugged into a car.

In all these cases, a third-party driver is essentially being given the reins to the system, intentionally, as part of the user’s expected workflow. Windows and macOS (and, I would suspect, any OS solving for enterprise requirements) both have places where they allow third-party drivers to participate in these elevated APIs, while Linux doesn’t really (Linux only has third-party driver-blobs that fit into specific isolated in-kernel sandboxes.)

My thought re: #3 was that:

• with Linux (and disregarding the few large corporate binary blobs for wi-fi and GPU), mostly corporate-sponsored FOSS devs write PRs for the kernel, and then the kernel maintainers “take receipt” of that code by merging it, taking all further responsibility for that merged code, making it essentially equivalent to code they wrote themselves (and at that point, the code is now owned by the kernel project, such that new development should not occur out-of-tree against the original development, but as in-tree patches against the merged code);

• with Windows, corporate third-parties (but often fly-by-night Shenzhen ones, if that’s the hardware you buy) maintain most drivers, and Microsoft just certifies them, like apps in an app-store (which ensures mostly that they don’t crash Windows, and isn’t a security audit);

• with macOS, Apple writes most drivers. Of the drivers third-parties do write, Apple takes ultimate control and responsibility for QAing that code and customizing it for macOS—Apple does their own builds, hardware-matrix testing, packaging/deployment, etc., essentially making each particular point-release of a driver into a part of the base OS. However, they don’t take ownership of this code; the driver will rot unless the third-party sends them a new version to start integration on all over again.

• with Hackintosh’ed macOS, random FOSS hobbyist developers write the (extra required-for-your-build) drivers, and nobody is guaranteed to be maintaining or QAing them.

A security-vulnerability surface isn’t just about surface area, but also about how much control and visibility—essentially, trust—the OS maintainer has into that surface area.

Windows forces updates as often as it does, because Microsoft doesn’t have good visibility into which third-party drivers will be affected by which updates (which they could use to bunch together all driver updates addressing a particular CVE); nor do they have control of those third-party drivers enough to structure them such that their updates can be woven into a seamless, non-restarting update process. Security-critical updates just... show up, on their doorstep, coded in arbitrary silly ways (to deal with the arbitrary silliness of the original background services and/or tray utilities that talk to the driver) and they have to deal with that.

Linux—at least where the base OS is concerned—doesn’t have any such “opaque-to-the-maintainers” surface area, so Linux can get away with fewer “restart required” conditions. They have both visibility and control.

macOS—because Apple demands ultimate control of all macOS drivers (to the chagrin of companies like Nvidia)—can demand that drivers take a particular form (i.e. microkernel daemon servers with activation ports) where Apple are easily able to just in-place upgrade many drivers as part of a package installation, without restarting the OS. The fact that they’re already in relationships with these driver developers also gives them visibility into the progress of addressing CVEs.

(macOS does want to restart for point-release updates pretty often, but that’s down more to Apple’s BSDish philosophy of “the base OS is a whole that should be replaced with a new whole, rather than getting into a state where both old and new components are running.”)


Well, this is very similar to the Haiku project's goals, though of course on many of these fronts we're only partway there. Care to come join us? :)


Join Haiku Project / Write lots of code to make a / Unified OS


Last time I checked it out, I was very impressed. Especially the HaikuDepot. Pretty cool collection of software including KDE stuff, and latest python. I had some problems (mainly with booting, I had to do it in a roundabout rEFInd way) and thought I'd check it out again in October. Thank you for your work on it. Nice to see you here.


Hey, thanks for sticking to it. You're doing important work.


I’ll be happy to give it a go in a VM to start :)


Please do! You can find us in Freenode#haiku; feel free to join there and send us feedback, etc. :)


> Stability and flexibility from Linux

I might have been using linux wrong. Because as a daily driver media consumption device, it has been anything but stable. I suspect it is because it was a laptop with a discrete graphics card, which for some reason completely trips linux up.

> Forced updates in Windows 10

I think this has been solved in a recent update


Linux is extremely stable as a headless server. We have instances running for years with no issues.

That is where Linux's reputation for stability most likely comes from.

However as a desktop machine mainly used for media consumption I would take windows over linux any day. No doubt due to poor graphics and audio driver support.


> Linux is extremely stable as a headless server.

No question. I love my linux VMs. They are rock solid.

> for media consumption I would take windows over linux any day. No doubt due to poor graphics and audio driver support.

This is me right now. You boiled it down to the essence of it.


This. The Nvidia drivers are getting there, but they're nowhere close to Windows. Tend to mess with audio, have to restart to use a dual monitor setup (yes, I've googled). It's a real hassle for anything but a basic setup.


> The Nvidia drivers are getting there, but they're nowhere close to Windows...have to restart to use a dual monitor setup

I have a desktop with a discrete Nvidia card, and on Linux I'm easily able to turn on my second monitor after the system is already booted and have it be recognized. I'm super curious to hear what issues you ran into with this sort of thing, because it all works without hassle for me.


The issue I'm having is that Nvidia driver 418 doesn't support my HDMI out (still haven't figured out why), but 390 has atrocious power management. So when I'm mobile, I switch to 418 (restart), and at my desk back to 390 (restart again).


Ah, yeah, that does sound extremely painful. I'm guessing you've already tried using nouveau to see if that works better?


> No doubt due to poor graphics and audio driver support.

Dell Precision with Intel, Ubuntu Mate, rock solid. VLC, 4k monitor, great media tools. No telemetry or UI shenanigans.


You might have been using Linux wrong, but there are also better and worse hardware choices and distro choices. There are laptops with friendly hardware that you can run anything on; there are some that will fight anything but their original OS.

I have been running Linux daily at home because it's been rock solid for years on my home laptops. And that has involved plenty of media consumption.

I used to dual boot, but on the latest I couldn't untangle Windows from 100% disk usage and failed Windows Updates -- this on a laptop that was literally a few weeks old. So long, Windows.


I use Ubuntu Desktop as my primary machine and find it very stable. I use it primarily for development with WebStorm, Atom, VirtualBox, Chromium, Node, and Bash. I rarely play games on it, and my taste in games is not fancy. I handle mail, videos, and writing chores in Chrome and Firefox. I listen to music via Spotify. I use the video card built into my motherboard. The most demanding thing I do with it is record videos that capture what I do on the screen.

Perhaps I am not a typical user, but nonetheless, I rarely have trouble with Linux. Everything just works.


> Flagship gaming support from Windows, Stability and flexibility from Linux, and Design principles from MacOS

I would love such an OS too! Sadly I'm not aware of anything that ticks the three, but at least ElementaryOS [1] attempts to tackle the last two. I enjoyed it three years ago, though I had to move to MacOS for work so I'm unsure about how it is now.

[1] https://elementary.io/


I switched to Linux in June 2018 with the intention of playing fewer games, simply because Linux couldn't run most of them. It worked pretty well. Then a month later Valve released Steam Play and Proton, which ruined my perfect plan. I spent far more hours playing Windows games on Linux than I had intended.


I've been using osquery (https://github.com/osquery/osquery) for a while. It is neat and I can appreciate the idea of 'exposing OS interfaces as databases'.


osquery is cool. But, as far as I know, it doesn't expose the filesystem as a database, it is closer to /proc-as-a-database. (osquery can monitor specific files, in particular security-sensitive files, and expose events related to those files in SQL tables; but I don't think that facility is scalable from certain specific files to the entire filesystem.)
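
For a flavor of what that looks like today, osquery's real processes and listening_ports tables join like any other relations (exact columns vary slightly by version):

    -- which processes are listening on which ports?
    SELECT p.name, p.pid, lp.port, lp.protocol
    FROM listening_ports lp
    JOIN processes p ON p.pid = lp.pid;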


Indeed it isn't; in order to make file-level querying performant at all, you really need support for that at the filesystem level. Which is exactly what BFS, the BeOS filesystem, had, and of course Haiku reimplements it: https://www.haiku-os.org/docs/userguide/en/queries.html

As that page describes, the "query" command (or its equivalent GUI) can be used to write filesystem queries, e.g.:

     query ((MAIL:from=="*joe*") && (MAIL:when>=%2 months%))


> But it does not have to be this way. If a database engine is implemented as a core system service, along with simple tools to browse and modify database records, the gordian knot of system configuration files disappears. MacOS comes very close to this ideal, with ResEdit as this universal database editor.

Or AS/400, which is more or less a SQL database.


Tandem had a real database OS. They didn't have files. Just blobs in the database. Their distributed, redundant database system had proper ACID properties. It owned the disks; there was no file level below the database.

This made a lot of sense for a system intended for high reliability transaction servers and nothing else. Banks loved Tandems.


ResEdit was an interesting idea. It suffered from a major problem - terrible data integrity. The original version had to work on floppies, with very slow seeks and writes. So it didn't reach a consistent state until you were all done and closed the resource fork of the file.

Attempts were made to use it as a database, but stuff was always getting corrupted.

The original MacOS suffered badly from being a cram job into 128KB with no hard drive or MMU. No CPU dispatcher, no memory protection, brittle ResEdit and TextEdit. The trouble was, that architecture persisted for 17 years, long after the hardware improved.


And it's tragic when you consider that LisaOS had multitasking, process separation, and I believe a hardware MMU.

The Mac grabbed most of the good UI aspects of the Lisa, but stripped out all the decent OS concepts. It worked to sell the first Macintoshes, but didn't scale out later, and by that time they'd killed off the Lisa.


Yeah, and the Lisa cost $10,000 and the Mac 128k was $2500. You can debate the merits of the hw/sw tradeoffs that got to those price points, but it was pretty clear at the time that Apple wasn't going to take over the world with the Lisa.


> Or AS/400, which is more or less a SQL database.

People keep saying that, but how true is it really? Many operating systems come with bundled relational databases–e.g. most Linux distributions ship more than one relational database implementation. Does that make Linux "more or less a SQL database"? How deeply integrated is DB2/400 into the OS/400 kernel (or equivalent term, such as "System Licensed Internal Code")? There is a paucity of public information about the actual nature or depth of this integration.

I think an OS with a truly deeply integrated SQL database would expose things like a table/view of all files in the filesystem, a view of running OS processes, etc. As far as I am aware, OS/400 (or "IBM i" as they are calling it nowadays), doesn't have any such features.
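
To sketch what I mean (entirely hypothetical syntax and catalog names; nothing OS/400 actually offers, as far as I know):

    -- a filesystem you can query like a table
    SELECT path, size, owner FROM sys.files WHERE modified > NOW() - INTERVAL 7 DAY;

    -- processes as a relation, killable via ordinary DML
    DELETE FROM sys.processes WHERE name = 'runaway_job';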


Long time AS/400 programmer here. I think the confusion arises from mixing the terms "SQL database" and what IBM refers to as the "Single-level Store" [1]. The AS/400 has true orthogonal persistence, much like a database engine. But for a few specialized exceptions, there are no "files" anywhere on an AS/400 like on a typical operating system. Everything is an "object" (yes, extremely overloaded term) and there are specific ways to perform actions against objects.

To add to this confusion, IBM ported DB/2 to OS/400 atop the native OS/400 data object model. You can use SQL but that just gets compiled to native operations against what are known as "physical files" (confusing name, yes) and "logical files". Physical files are fixed-length record files with data in them. Logical files are indexes atop physical files. A PC analog to this would be FoxPro, Dbase, Paradox, Alpha, etc.

Many OS/400 programs access these files using the native model; not SQL. Tandem and HP Non-Stop work in very similar ways.
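
For context, the mapping is roughly: an SQL table becomes a physical file, and an SQL index becomes a keyed logical file. A sketch in ordinary DB2 for i SQL (object names invented):

    -- creates a physical file under the covers
    CREATE TABLE ORDERS (ORDER_ID INT, CUST_ID INT, TOTAL DECIMAL(9,2));

    -- creates a logical file (a keyed access path) over it
    CREATE INDEX ORDERS_BY_CUST ON ORDERS (CUST_ID);

Native RPG record-level access then works against these much as it does against DDS-described files.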

[1] https://en.wikipedia.org/wiki/Single-level_store


So, my impression is that many applications for AS/400 are written in RPG or COBOL, languages which in their AS/400 implementations treat "files" really no differently than they do on any other platform. Or, similarly, DB2/400 stores database tables as these single-level store objects underneath, but to a programmer writing SQL queries, it doesn't really make much difference – the experience is pretty similar to writing SQL queries for DB2 for z/OS, DB2 for LUW, or DB2 for VSE/VM (and what differences do exist are more due to the divergent code bases of the different products than due to the single-level store.)

I get the impression that S/38 and OS/400 have some really interesting concepts at the core of the OS, but it is questionable how well the higher levels layered on top leverage those concepts.

"Everything is an object" would be a lot more powerful if IBM let customers/ISVs define their own object types, when as far as I am aware they don't. Yet IBM will define dozens upon dozens of object types for all kinds of obscure requirements, many of which are no longer even relevant today [1][2].

[1] https://www.ibm.com/support/knowledgecenter/en/ssw_ibm_i_74/...

[2] https://www.ibm.com/support/knowledgecenter/en/ssw_ibm_i_74/...


Interesting history in your [1]:

> The thinking at the time was that disk drives would become obsolete, and would be replaced entirely with some form of solid state memory.

What were they smoking? Bubble Memory?


TPF might be a better example. It's pretty much a distributed nosql database delivered as an OS.


If we are going to talk about NoSQL databases as an OS, we could also talk about early implementations of MUMPS and PICK, each of which was once upon a time delivered as a standalone operating system. (It's different now; at some point each were ported to run as packages on top of more mainstream operating systems such as Unix or VMS, and more recently Linux and Windows.)


People who used it characterized it that way.


I used it, and never characterised it that way - I simply saw it as an OS with DB2 installed!


I know they did/do. But when they characterise it that way, are they just repeating IBM marketing assertions without challenging them?


That's a good point. And as you say, it's hard to know with closed-source stuff.


Or the Object Data Manager (ODM) on IBM AIX.

https://www.ibm.com/support/knowledgecenter/en/ssw_aix_72/ge...


I used to work with an AS/400, and never once thought of it as a database - sure, it had DB2 pre-installed, but I don't see how that makes the OS itself a database?


I knew it only as the source of SQL-like data dumps. And people told me that it was more than an OS with DB2 as the default database. But maybe they were wrong.


Or Windows with the registry. Wait...


But is “uses a database” really the same thing as “based on a database for everything” as outlined in the article’s vision?


The registry is basically a hierarchical filesystem with some tiny size limits.


> If a database engine is implemented as a core system service, along with simple tools

Windows has it, though user-level tools are lacking. https://docs.microsoft.com/en-us/windows/win32/extensible-st...


From a sufficiently abstract point of view, one could argue that classic DBMSs are already looking like small operating systems:

- are composed of multiple processes; can include process management code

- can access hardware directly (for efficient arrangement, caching and retrieval of data)

- include support for multiple users, concurrent sessions

- stored procedures require compiler tooling and runtime (VM)

- can be platform-independent, with an abstraction layer connecting them to the underlying OS


You raise a point - we have a bunch of small operating systems running on top of our existing operating systems. DBMSes, editors (emacs), browsers, and more. There's a great amount of inefficiency being added because everyone is re-inventing OSes and/or DBMSes at every layer.


I often wonder how much more efficient computers would be if we were capable of writing optimal machine code for whatever task we wanted to accomplish. No layers, just efficient code written at the lowest level for everything.

It obviously wouldn't be worth the effort, but how many times faster would our computers be?


I think it'd be possible to automate that at some level: generatively code just the minimum needed at the machine level for the task, and no extra fluff.


A lot of things are starting to work like operating systems for performance and/or security reasons. We might need to eliminate the distinction in our thinking to get better results on new systems. Quite a few programmers and researchers already have, making things like you describe. Others are building from hardware up to try to make their implementations more 1-to-1 with how hardware works.


Yes this has been true of major RDBMS for a long time. A conventional OS bootstraps Oracle (for example) and Oracle handles its own memory management, process scheduling, IPC, talks directly to storage, handles logins and interactive use, etc etc. Same with SQL Server. The AS/400 takes it to the extreme.


This sounds like my nightmare operating system. Reusing the same command to operate on very different kinds of objects is a bad idea. A file is very different from a process. A database table is very different from a file. Whenever you try to homogenize operations on different kinds of objects under a single operation name, you inevitably lose flexibility or you have to add a lot of if-then-else statements inside the implementation code which increases cyclomatic complexity and leads to bugs.

Also I don't think it makes it easier. What if I want to delete a process but I accidentally end up deleting a file because I mistyped the name of the process (or the file has the same name as the process; which one should be deleted?)? I think we need separate commands because the user needs to be in a different mindset when doing these operations.

I'm actually not a huge fan of the Unix philosophy for that reason; you end up with a lot of general-purpose commands which work together in theory and you can combine them in an infinite number of ways... But in practice, they are too general, and this means that combining them becomes too slow for a lot of scenarios... If commands are too small and too general, you will always end up having to write long sequences of multi-line commands chained together in order to do anything useful, and performance will be bad; you might as well just write C/C++ code.


I would make a distinction between querying and mutating the operating system state.

For querying information, I think the idea of representing everything as the same data structure really makes sense. The Unix philosophy or a relational view is perfect for this. However, for more semantically complex things that have possibly surprising side effects, such as starting a service, setting up a new user account, connecting a drive, installing a printer, I think we need separate commands or procedures to encapsulate all the complex behavior.
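
A sketch of that split in hypothetical SQL (table and procedure names invented):

    -- reads: plain relational queries over OS state
    SELECT pid, rss_bytes FROM os.processes WHERE state = 'running';

    -- writes with complex side effects: explicit, named procedures
    CALL os.create_user('alice', '/home/alice');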


He probably meant having the same internal interface; it would still be doable to have various aliased commands that call the same interface.

As for UNIX philosophy of stringing together lots of general purpose commands to get the desired result - the nice thing about that is you don't need to know programming, and it's very convenient for one off scripting.


> As for UNIX philosophy of stringing together lots of general purpose commands to get the desired result - the nice thing about that is you don't need to know programming, and it's very convenient for one off scripting.

I beg to differ. It's a workflow by and for programmers, or at least people who think like programmers.


> Why not to trade a multitude of "custom" database managers for a single well-designed distributed database manager?

It's not clear whether the author is arguing that all existing OS data structures should be replaced with a single database system, or if they simply want all OS data to be capable of being queried/updated via a standard database style interface.

The former is simply hubris ("I know the perfect data structure for EVERYTHING!") while the latter is pretty much implementable as an interface layer on top of an existing OS.


That's the beauty of the relational model: it doesn't specify data structures. Those can be added later via indexes.

For more info check https://www.researchgate.net/publication/2364452_Data_Struct...
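
A concrete illustration (plain SQL, invented names): the query text never changes; adding the index later is purely a physical tuning decision.

    SELECT * FROM events WHERE user_id = 42;

    -- added later, without touching any query:
    CREATE INDEX events_by_user ON events (user_id);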


This is the key. You can even say that we only have a small number of abstract "shapes" of data:

Scalars, lists, tables (relations), trees/graphs.

You can build generalized operations on them, and they will work even if the internal structure differs (i.e., a list can be implemented on top of vectors or linked lists or tables or trees).

This is what I'm working on in my own little language (http://tablam.org). It certainly requires specializing per structure, and some stuff is hard to generalize, but the idea holds.


The set of indexes is a form of data structure though, with all the state juggling and heuristics that usually go along with them (What access pattern requires this index? When do I have too many of them? What locks when I add/drop an index?). This adds complexity and unpredictability for the user.


I would argue that dropping and creating indexes is much less work than refactoring large parts of your codebase just to change a data structure. Also, you can have multiple indexes on a table and the RDBMS ensures that they are always synchronized, which is something you won't achieve if you code the data structures yourself.


There's another aspect to this that seems almost tangential as well. The author goes into the idea that program imports are directly referenced at the OS level. I suppose this replaces compiled system calls with some kind of foreign key reference?

The author mentions this removes the need for ids, but really it would just be hiding them in a non-serializable way.


Oh, it's the every OS sucks story again.

It's all about the user experience. The system should be a friend and still enable the user to get ever more out of the computer. It also needs to be extremely responsive.

AmigaOS got a lot right back in the 80s. BeOS (and now Haiku as its spiritual successor) took a lot from that (system kits vs. Amiga's default set of shared libraries, and the concept of datatypes) and added some concepts of its own.

Meantime, the mainstream systems have only gone backwards.

My dream of a general purpose OS is definitely an open source microkernel, multiserver RTOS with capabilities and a user experience that builds on those two systems. It is important for it to be an RTOS, as unbounded response times would make the system fail as a general purpose system and as a personal computer system.

The closest existing system would be Genode. It currently only meets the technical side unfortunately, but it could meet the user experience requirements with some work.


Being an RTOS is only necessary if your application has a requirement for real-time response and is also itself designed for bounded response time.

Ignoring drivers for the moment, your typical desktop application does not have a need for a bounded response time. Desktop applications typically do not fail due to a dialog box being popped up a few milliseconds late.

The underlying drivers, on the other hand, do tend to need bounded-response time, but this is typically performed by the use of the interrupt handling subsystem, whether directly handled or split into upper/lower.

Even non-RTOSs should be able to provide real-time response to drivers in this way and the trade-offs taken by an RTOS scheduler are not always appropriate or optimal for a desktop, user-facing system.


Audio.

A great many people work with audio or want to work with audio, and Windows and OSX are lightyears ahead of Linux in terms of ease of use and overall quality.

You'd think the PulseAudio fiasco would have improved the situation, but I believe the developers thought as you did, and so built a system that's woefully encumbered with latency. Jack improves the situation WRT latency, but it comes at the cost of an altogether brittle and broken integration with PA, and thus with all major Linux desktops.

Bounded response times may not be needed for your word processor, but there's plenty of folks out there who would be better served by it. Hell, I'd love a snappier editor on account of predictable response times to keypresses.


> You'd think the PulseAudio fiasco would have improved the situation

PA is in rather good shape now & it has capabilities its predecessors didn't. But it's very hard to get people to update their notions about something once it sets in.

No matter how much Microsoft tries, Windows will always be 'insecure, virus-ridden', macOS will always be super stable, beautiful, no matter how much Apple screws up & Linux will always be unusable on the desktop no matter how many decades I'll spend enjoying it on the desktop at home and at work.

Sadly, it seems a person's mind tends to naturally lean conservative and it takes a herculean effort to regularly update one's preconceived notions.


PulseAudio remains practically unusable for an audio workstation; latencies in the tens of milliseconds are unacceptable.


If you want realtime, there's JACK. PA is intended to replace/supplement ALSA, it has features like network transparency, per app volume control, EQ, virtual outputs etc. Not every SW has to have every feature, but PA has useful features ALSA doesn't.


PA can't supplement ALSA, because PA can't cover all cases. It is not suitable for realtime.

It's an exercise in self-delusion to pretend PA can replace ALSA as long as it has this blatant issue.


In my experience, jack only manages to avoid xruns on reasonably small yet still too large (5ms) buffers when running as SCHED_FIFO or SCHED_RR, and only on linux-rt.

Patch sets like -ck simply don't help with this.

Try running cyclictest from rt-tests for a while to see the true horror of Linux behavior.


> Desktop applications typically do not fail due to a dialog box being popped up a few milliseconds late.

No they won't fail but a responsive user interface is critical. A process running on an RTOS won't complete any faster (probably the opposite), but displaying some sort of acknowledgement that a key has been pressed or a button has been clicked without any human perceptible delay is really nice. I'd love for the user interface portion of an OS to run with real time constraints.


Real-time response and an RTOS are not the same thing. An RTOS only allows processes with a known (or computable) runtime to run so that it may make guarantees about timing of execution. In some RTOSes scheduling may be entirely static and determined at compile time. This type of OS is used in critical systems (aircraft, hospital equipment, nuclear bombs). It's not really the type of system you'd want for a general purpose operating system.


>An RTOS only allows

That's just the easy way to achieve hard realtime.

It is, fortunately, not the only way.


There is some benefit to real-time ability in an OS. Mainly, to prevent one application from freezing up the system. That's happened to me both on Windows and Linux. Took work to wrestle it back into my control again.

I remember people who used the QNX demo disk on old machines describing how they could do big compiles in the background with the system still responsive to input. I'm guessing the parts interacting with the user had higher priority.


That's priorities at work... not bounded response time.


Good point. A little out of my element here. :)

Real-time might help in another way: mitigating covert channels. Gotta know the timing of secret-handling code. Then, make the observed timing fixed-length or random. Don't always need the timing to do that, though. For instance, the separation kernels just need timing for the partitions, with the whole thing stopping regardless of where the code is.

One last benefit might be easier performance monitoring and diagnostics. If it's predictable, one might be able to assign a range or profile to it. Then, raise alarm if anything goes out of profile. Just speculating here: never built one.


> Desktop applications typically do not fail due to a dialog box being popped up a few milliseconds late.

Not a few milliseconds, but a few dozen seconds might be a different story. While it might be hard to predict or measure an exact number, your deadline is however long it takes for the user to equate it to "forever" and kill the process / leave the website / uninstall the app / etc..


A 'real time' OS does not mean fast. It simply means that the OS can promise an application/driver that it will get the processing time that it requests. e.g. a nuclear power station controller might use an RTOS to guarantee that the emergency shutdown process can be completed in under a certain number of seconds, regardless of whatever else is going on.

In general, being able to make these kind of promises means that the OS has to be pretty conservative about the code it allows to run, and a 'best effort' OS or some 'soft' RTOS is probably going to be faster or more efficient.

In terms of UI responsiveness, I'd bet that crappy software running on a 'hard' RTOS can still manage to feel sluggish, because the UI delays probably aren't actually in the OS itself.


I've never tried it, but isn't there a kernel for Linux that's been more optimized for desktop, but not quite real time?


>Being an RTOS is only necessary if your application has a requirement for real-time response

Or, in other words, the operating system is not general purpose, because it is not suitable for these uses.

Do note these uses do include the likes of audio. If there's no way to guarantee latency, then a spike will surely eventually cause an overrun, which will be perceived as an audio hiccup. Sometimes, this is just an annoyance. On a live performance or in pro audio work, it can be fatal.

A good microkernel will allow mixed criticality tasks to share the system. A lot of recent research in the topic has brought SeL4 to this point.


> My dream of a general purpose OS is definitely an open source microkernel, multiserver RTOS with capabilities and an user experience that builds on those two systems. It is important for it to be an RTOS, as unbounded response times would make the system fail as a general purpose system and as a personal computer system.

This sounds more like a complete version of the Fuchsia operating system, which has the technical side covered and is more of a user-facing OS. That probably has more of a chance of meeting these requirements at the rate that it is being developed.


Genode is really great, check it out: https://genode.org/documentation/genode-foundations/


You might want an exokernel, not a microkernel?


I can sorta agree; a database with tables and folders would be a much more useful basis for an operating system. After all, most programs will read and parse small configuration files. With a database, that becomes unnecessary.

Windows sorta does that with the Registry but A) the registry isn't the filesystem, B) the registry is a filesystem on top of NTFS and C) the registry sucks.

If the Registry had the power of, say, PostgreSQL to back it up, it would suck a lot less. Of course, for large files, you should still be able to rely on more conventional filesystems, as well as for backwards compat.


The Registry could have been a lot better.

It has no schema. Imagine if it had some sort of schema, declaring what sub-keys and values are allowed under each key. Imagine if the schema was self-documenting, with each key/value declaration in the schema having an associated description explaining what it was for.

Imagine if it had richer data types. For example, a "link" type, in which a value actually has the name of another key. If you try to delete the target, either it doesn't let you, or it sets the value to some sort of null value. (Basically, something like foreign keys in a relational database.) And an index to quickly find "back-references" (show me everything that points to this key.)

The registry should have included a database of installed packages/applications, and every key should have been marked with what package/application owns it, along with some sort of indexing to make it quick to find everything a package/application owns. Application uninstalls would have been a lot cleaner, and issues with apps leaving behind junk in the registry avoided.

Transactions: This was added in Windows Vista. But it could have been there from the beginning.

OTOH, remembering the registry was originally implemented in Windows 3.1, which had minimum system requirements of a 286 with 1MB of memory, maybe my suggestions above just wouldn't have been feasible.
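
Squinting at it, though, most of that wish list is just a small relational schema (hypothetical, for illustration):

    CREATE TABLE packages (name TEXT PRIMARY KEY);

    CREATE TABLE keys (
      id            INTEGER PRIMARY KEY,
      parent_id     INTEGER REFERENCES keys(id),    -- the hierarchy
      name          TEXT NOT NULL,
      description   TEXT,                           -- self-documenting
      owner_package TEXT REFERENCES packages(name)  -- clean uninstalls
    );

    -- "link" values become foreign keys, so back-references
    -- ("what points at this key?") are one indexed query away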


I think it could have been; it just required a little more imagination. NewtonOS was released in 1993, so around the same timeframe, with lower memory requirements than 1MB, and it could do this stuff:

http://preserve.mactech.com/articles/mactech/Vol.09/09.11/Ne...


I always wonder where things would have gone if they had kept developing NewtonOS. It had so many interesting concepts that may have worked really well with a little more computing power.


The perfect OS is no OS. An OS is just specialized software that provides services, but by providing those services and only those services, you're locked into that OS, and all those services whether you want them or not.

And we have to do that because we don't want all our software to have to provide the basic services, but we don't really get that: we still have to provide all the libraries it needs, some of which emulate an OS anyway by providing a big, complex runtime.

So I don't think this is an OS problem, it's a build problem. If I can ask a machine what services it has, and ask the software what services it needs, it should be possible to find a minimal solution.

I think NixOS gets very close to this ideal, even though it's obviously built on a traditional OS. But if we're going to talk about the "ultimate OS" I think it's worth it to ask, "how much can we take away from this?"

And the reason to want that is simply driving towards the goal of making software components that are able to work together reliably rather than building these monolithic devices that only a handful of veterans can understand.


What do you think of what we are building @ https://github.com/nanovms/nanos ? We are all about taking things away from the general purpose operating system and only focusing on what is needed to run a given service in prod.


That's definitely in the spirit of what I was getting at.

I'll have to give it a spin, not sure when that will be, but I'd love to explain to a security audit that we don't patch an instance because there's nothing on it but the application.


what you're interested in sounds a lot like a unikernel/library OS


So Plan 9, then? But maybe with a database like interface instead of a file like one.


I don't really know why you're getting downvoted. Plan9 is relevant to this space as it takes "everything is a file" to the logical conclusion in the same way that this takes "everything is a database" to the logical conclusion.

> The hierarchical organization of file systems manifests itself in nesting of directories (folders). On the other hand, directories are merely named views of a certain subset of files, selected according to some criteria. Thus, a directory can be thought of as a "database view," a named database query. It follows immediately that a file may appear in as many "directories" (views) as one wishes to. For example, one "folder" may show all files tagged as "sales reports", while another directory contains files modified within five days. Searching a file system and creating and populating a folder becomes therefore the same activity. Since saved views are database objects themselves, one can reference views within views if one so wishes. There is no required hierarchy however: one may create two views that refer to each other, or any other network of views that best suits the problem.
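
That "directory as a saved query" idea maps directly onto SQL views; a hypothetical sketch with an invented files table:

    -- one "folder" shows everything tagged as a sales report...
    CREATE VIEW sales_reports AS
      SELECT * FROM files WHERE tags LIKE '%sales report%';

    -- ...another shows files modified within five days
    CREATE VIEW recently_modified AS
      SELECT * FROM files WHERE modified > NOW() - INTERVAL '5 days';

The same file can appear in both "directories" at once, with no copying.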

I built koios to solve this problem, but I didn't take the fuse approach because it seemed like a lot of extra work for the user, and puts databases first rather than files, which is un-UNIX.


So AS/400 then?


I think there is a lot to dislike in today’s lineup of OSes, but I don’t think it is worth doing a full redesign until we have open source hardware and persistent memory. Persistent memory will force a redesign of the OS anyway, and open source hardware will enable new dynamics in the community. Linux or whatever you have on the client side will have to do until then. Hopefully we will get an OS that takes security seriously: the actor model for robust scaling, and capability-based security.


Sounds like a lot of the same goals as Windows Longhorn. Ironically, .NET Core has made a lot of the goals they were trying to achieve more achievable now. Trying to build an OS in .NET in 2005 was too slow; the database filesystem has sort of kinda materialized, but not like the design as I remember it. In my view, the ultimate OS is something that is:

- Modular

- Consistent in API across domains

- Skin-able


It's also interesting to see what MS does offer these days. PowerShell returns objects, so you can actually get at a Process object with proper typing instead of using sed to parse out some info.

It can be really nice if you have the time to look up the documentation. It doesn't have the WYSIWYG simplicity of files, though.


Can someone describe the NEGATIVE aspects of having a database-OS?


First enumerate all of the constraints and guarantees your database provides. All the basic minimum services it provides for every access. Look at the memory it takes to provide those, and the processor utilisation overhead to perform all those checks and services: guaranteeing ACID compliance, guaranteeing referential integrity, enforcing a global type system, etc, etc.

Every single access of every single piece of data, for anything, will incur all of those overheads every time. There is no way to opt out or choose a different balance of guarantees or trade-offs without implementing them on top of the OS provided ones.


You're describing the fact that OS APIs are typed, and check state for validity. That's already the case, and OS system calls already incur those costs.

In a database, preparing a query compiles the access plan (ie all the above) on the server side. There's no reason it would be any less efficient than an OS call today.

Could in fact be more efficient, since you would probably be able to leverage set-based operations on the server (ie OS) side rather than iterating everything on the caller.
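
That's roughly what SQL prepared statements already give you (PostgreSQL syntax; the processes table is hypothetical):

    PREPARE get_proc (int) AS
      SELECT * FROM processes WHERE pid = $1;

    EXECUTE get_proc(1234);  -- planning cost paid up front, not per call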


If OSes are already doing all these things, to the same degree required, and with the same overhead of a DB based OS, why do we need a DB based OS? What more is it giving us, and how is it providing that at no additional cost over the current model?

'On the server side' isn't magic. You don't get server side operations for free and these 'server side' operations would be occurring on the same computer.


Even my database doesn't incur all overheads for every operation. I don't know why an OS designed around the concept would. It's interface, not implementation.


It makes the conceptual model of the OS more complex. Filesystems with basic permissions are easy to understand because of the forced structure. IMO, ACLs are a bad idea for the same reason: they make it harder to see what is going on while not solving a problem I actually have. I guess they do solve problems that some people have, so that makes them more attractive if you are one of those people, but IMO there must be better ways to solve those problems.

Personally, I'd like to see version control and file compression integrated in my filesystem. I do not want my operating system to be distributed or rely on network access.

I also think a separation between administrative access to the full system and views of applications may be a good idea. It seems to me that the ideal OS would heavily restrict the file and network access of applications and there would be a separate administrative domain with full access but limited complexity. The operating system then manages the sharing of needed data between applications and network access that the user allows. I think having the system parse a standard configuration file format for applications would make sense in this model, but that configuration file can still just be text. Also having a sub-user permissions structure similar to the overall system users seems like a good idea to me. Many applications that users run are hostile and dealing with that requires major changes vs current desktop operating systems. Database vs. filesystem is a minor issue in comparison and personally I don't see the appeal of using a database.


I remember people used to say, "Emacs is a nice OS, what it lacks is a good editor". While reading the Ultimate OS paper I was thinking about Emacs. It already exists.


I was thinking about the Oberon operating system, which also already exists.


Same. I've been reading through the Project Oberon book and really like reading about the design decisions that went into it. I also think its UI is a great starting point for reasoning about a touch-based IDE.


Already done! AOS was written in 2002, well before Android, and included a multithreaded ZUI by default (along with FAST native module compilation and dynamic linking).

Shame Google didn't use it instead of Android; they had to invent Go to get back some of the AOS advantages.

https://en.wikipedia.org/wiki/Bluebottle_OS https://www.research-collection.ethz.ch/handle/20.500.11850/...


Even things like USB are kind of database-like. It's kind of silly that we have so many ad hoc databases with their own query APIs (and different on every OS).


Just a general question: Do you think there is a market for a new operating system? The cost of building something better or equal to mainstream systems seem too high to even consider challenging the status quo.


I think there is, but it'd need to be very different to existing operating systems to justify itself, and in ways that other OS's can't simply adopt for themselves. In practice that means radical architectural change, and even then you'd re-use a lot of code.

For such a project you don't really want to just adopt existing ideas and implement them. I don't really understand why Google is doing Fuchsia for this reason: architecturally it's nothing special. Is the GPLd Linux so bad? It took them this far.

If I were to do a new OS I'd explore ideas like all software running on a single language-level VM with a unified compiler (e.g. a JVM), a new take on software distribution ... maybe fully P2P, rethinking filesystems and the shell, making it radically simpler to administer than Linux, etc. You'd need to pick a whole lot of fights and make a lot of controversial decisions to justify such a lot of work: you'd need to be right about things other people are wrong about, not just once but multiple times. Then maybe if people see it working well they'd join you and port existing software, albeit, the porting process would inevitably involve some rewriting or even deeper changes, as if you can just run existing software and get all the benefits you probably didn't change anything important.


Build a new operating system, or build an operating environment that runs on top of an existing operating system, like how e.g. Windows used to run on top of DOS? Or User Mode Linux on top of Linux?

An operating environment means you don't have to worry about stuff like device drivers. It can run inside a Docker container. (At most companies, try suggesting "let's run a brand-new OS that nobody has heard of", and you'll get an emphatic "no"... say "here's this app we want to run, it is packaged as a Docker container", and often nobody will even ask what is inside that Docker container, even if it contains your operating environment with the app running on top.)

You can start out using facilities of the host OS like the filesystem and networking stack. Later, if you want to, you can implement your own filesystem (create a 100GB file on the host filesystem, pretend that is a block device and your custom filesystem exists within it). Or your own networking stack (which can run in user space using facilities like Linux tun/tap drivers.)

An operating environment can always evolve into a standalone operating system at a later date.


Well, there is always a market for a new OS as they all are so complex and have so many baked in assumptions, that they cannot possibly do well on the variety of use cases that people need computing for.


Can you elaborate? What kind of use case are you considering?


Making a new OS doesn't have to be more complicated than making any new program. See http://unikernel.org/


Split the OS into different services. The file system, the run-time (EXE) management system, the UI manager, the coordinator, etc. For one, you could mix and match and pick the best part for your needs. That's a more webby and cloudish view of things, I would note.

I do agree our file-systems need to be more RDBMS-like. I'd like to see "Dynamic Relational" experimented with more. You don't necessarily need to pre-wire columns: add them as you need, unless not permitted for a given table.
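
A sketch of how Dynamic Relational might feel (hypothetical semantics; a standard SQL engine would reject the second INSERT):

    INSERT INTO services (name, port) VALUES ('sshd', 22);

    -- keepalive_secs springs into existence on first use;
    -- existing rows simply read it back as NULL
    INSERT INTO services (name, port, keepalive_secs) VALUES ('httpd', 80, 30);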


I'd say Genode fits the bill. The cool thing is that parent processes have complete control over the services their children access and can block them, replace them, delegate them, etc.

https://genode.org/


Plan 9 was trying to do this.


What went wrong?


This article doesn't define what an OS is, so it suffers.

The layers that sit directly above the hardware need to be simple and efficient (think like OSI layers 1 and 2). Actual hardware beneath this needs to be even simpler so it can focus on what good hardware should be: high performance. For example, having hard drives implement database-like concepts in hardware is bad, because then you have to change hardware if your concepts evolve, which is expensive. So let the hard drive do what it does best, which is get data off of a platter or NAND, and let a layer in the OS abstract that for higher-level layers.

The UNIX API is the best we got so far I think, for OSes that are actually useable on a wide variety of hardware platforms. There's a reason why files are byte streams and "type" information is not part of a file - it's not a storage device's job to do anything but store and retrieve data fast and reliably. And it's not the job of the immediate lower layers of an OS to do anything but facilitate that and interface with a higher layer, like an SQL daemon.

The user facing layers high up, like the shell, are technically not OS facilities, they are "default applications" that, in a perfect world, would work under any OS.


> The user facing layers high up, like the shell, are technically not OS facilities

Even directories, file metadata and caching are too high level and not part of an operating system if you want to go that far.


Ryan Marsh @ryan_marsh · Jul 19

Prediction: Linux kernel will be replaced in the Data Center, but its replacement will be developed by the cloud providers. Eventually they’ll need something custom fit to running isolated Node, JVM, and other runtimes as efficiently as possible for Serverless

https://twitter.com/ryan_marsh/status/1152236961660362753


Operating Systems are commonly viewed from two major standpoints: managing computing resources, and hiding hardware idiosyncrasies while putting a friendly face for a user.

The important hardware in this age is the 'collection of datacentres'. Things like Kubernetes and whatever Amazon calls their thing these days are the proto 'operating systems' for this hardware. What we currently think of as Operating Systems are more like threads.


> whatever Amazon calls their thing

Depending on what "thing" you're talking about, it might also be Kubernetes. See: EKS.


My dream is for a computer OS that has all the good new stuff (CPU/GPU acceleration mainly, OS level HW support) and none of the bad new stuff (bloated UI/my every mouse click being transmitted to the cloud, web app infestation).

I don't care how it's built, as long as we have above.

Bring back the Windows XP GUI (with GPU accelerated window theming and a few select other things) and I will be a happy bunny.


An operating system with a built-in, default database provides tremendous advantages. Hewlett Packard's MPE (running on the 3000 series) came with the Image database. Every third-party app could count on Image being available. Apps inter-operated and complemented each other; you didn't have to reinvent the wheel. Very effective.


For user/app data at least, this sounds a lot like Fuchsia's Ledger: https://fuchsia.googlesource.com/peridot/+/master/docs/ledge...


Pick? hello.. are you back from the dead?


My first job involved doing some basic system administration on a Pick system. It was a really interesting architecture.


My dream is a simple OS that works kinda like DOS:

* Micro OS - boot the base from a floppy drive if need be

* Let me edit a text file to put in just the pieces I want instead of a hundred pieces I don't

* Includes a BASIC language that can build a binary

* Includes a text-based GUI for a text editor

* Allows me to pick the drivers I want

I like the concept of some Linux OSes, but the ecosystem has gotten so huge it takes an expert to truly understand it. For a server I really only need network drivers, an SSH client, and the server software. I may be nostalgic, but I miss the good old days.


It’s not like Multics or ITS or the Lisp machines or the IBM OSs were simple at all. In fact, they probably were more complicated. What you’re talking about are the toy OSs designed for extremely limited hardware for people who probably couldn’t tell the difference in the first place.

But if you want something simple, try out the base OpenBSD install. No magic, no complexity, just simplicity and elegance.


Try these systems: Plan 9, Inferno, Oberon.


Old comments on an old post about basically the same topic: https://news.ycombinator.com/item?id=14542595


Those would be really cool improvements to see. And, in my experience, wise are those who have already built personal tooling to get some of this level of convenience functionality working for them. I'm still blown away by what can be accomplished by system scripts or simple apps _just_ in the realm of personal productivity.


First thing that came to my mind when reading this is Windows' WMI with its WQL: https://en.m.wikipedia.org/wiki/WQL
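
For the curious, a quick sketch of what that looks like in practice. This assumes Windows plus the third-party Python "wmi" package (pip install wmi); the query string itself is plain WQL:

    # Hedged sketch: list running processes via a WQL query.
    # Requires Windows and the third-party "wmi" package.
    import wmi

    conn = wmi.WMI()
    for proc in conn.query("SELECT Name, ProcessId FROM Win32_Process"):
        print(proc.ProcessId, proc.Name)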


Can we get (1997) added on this? (revision date)


Nit, but PLEASE date your essays. I have no idea if this is from 1998, 2008, or 2018, and it matters to how I read it. A lot.


I'm always sad when I see *nixes using the FS as a datatype.

conf.d directories representing a list of entries as files, for instance.


To me, the author is basically asking for OpenVMS with the RMS and RDB file types.


> Everything is just editing

not if you look at average consumer patterns (sadly)


I do like the idea of weaving database functionality into the OS, but I also have concerns. Oracle has, on a few occasions, taken to extending their database technology to areas like this and the result was obtuse and perverse. I am thinking of a large system I worked on forged in early Oracle Forms and Reports. Each form (window) would have trigger procedures stored in the package that represented it, and you were left with the kludged-in features of PL/SQL for basic things like looping and control flow. SQL was never intended to define software. And attempting to take it and mutate it in that direction is starting from a false premise I think. Sure, with anything that is Turing Complete you can build anything. But that doesn't mean you should.

If that danger is avoided, the broad concept itself might have merit, most especially in replacing the way we deal with filesystems, which should have changed decades ago. I believe that storage should be separated into mutable and immutable categories. Immutable data (software packages and the like) should be managed by a universal networked content-addressable versioned immutable data store. Something like Perkeep mixed with the Internet. If you did something like 'apt-get install firefox', it would look up the current version's GUID from a DNS-esque service and check the local store, then the local network store, and then hit the Internet store of data if necessary to pull down the content. It could be thought of as extending storage from registers, L1 cache, L2 cache, L3 cache, main memory, and disk outward to the network, with data traversing each level only when necessary as a kind of overgrown cache control. Very storage-constrained contexts could purge anything immutable with confidence that they could pull it back in when necessary. Call it 'swap to the cloud' if you're a marketer who doesn't realize how horrific that sounds to technical folks.
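
To make the lookup chain concrete, here's a toy Python sketch; every name in it is hypothetical, not any real package manager's API:

    import hashlib

    # Toy content-addressable cache hierarchy: local -> LAN -> internet.
    class Store:
        def __init__(self, name):
            self.name, self.blobs = name, {}

        def get(self, digest):
            return self.blobs.get(digest)

        def put(self, data):
            digest = hashlib.sha256(data).hexdigest()
            self.blobs[digest] = data
            return digest

    def fetch(digest, tiers):
        # Walk outward; on a hit, backfill the nearer caches.
        for i, tier in enumerate(tiers):
            data = tier.get(digest)
            if data is not None:
                for nearer in tiers[:i]:
                    nearer.blobs[digest] = data
                return data
        raise KeyError(digest)

    local, lan, internet = Store("local"), Store("lan"), Store("internet")
    digest = internet.put(b"firefox package payload")
    fetch(digest, [local, lan, internet])  # pulls the blob down and caches it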

Mutable storage would be handled as a local database: all of your personal content and the like, stored and indexed in ways that are efficient for data which can change rapidly and which must be preserved, as it may be the only copy. Individual disks should, ideally, be presented to the user as a unified whole with reliability information baked in and managed by the OS in the background. Whether and which data is replicated, striped, etc. should just be something people set, with their hardware providing the maximums (if you have 3 drives, you can choose 300% reliability as the max, and based on drive reliability, age, SMART status, etc., the system will rebalance what is stored where in order to best preserve the user's desires, only bothering the user when things cross a certain threshold of uncertainty). Data is important. Not just for corporations, but for everyone.

Software Transactional Memory should be built in at the hardware level and supported by the OS; it can be implemented in software first, before hardware has a chance to catch up and accelerate it.
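
As a flavor of what a software-first version might look like, here's a minimal optimistic-transaction sketch (not a real STM, just the commit-if-unchanged idea; all names are made up):

    import threading

    # A versioned cell: a commit succeeds only if no writer raced us.
    class TVar:
        def __init__(self, value):
            self.value, self.version = value, 0
            self.lock = threading.Lock()

    def atomically(tvar, fn):
        while True:
            snapshot, seen = tvar.value, tvar.version
            new_value = fn(snapshot)          # pure; may be re-run on conflict
            with tvar.lock:
                if tvar.version == seen:      # nobody wrote since our snapshot
                    tvar.value, tvar.version = new_value, seen + 1
                    return new_value
            # conflict: retry with a fresh snapshot

    counter = TVar(0)
    atomically(counter, lambda v: v + 1)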

The OS should present itself to the user sort of like Smalltalk "images" operate, albeit with most of the moving pieces covered by a thin veil unless you launch a 'system edit' mode that makes the running objects editable and presents live development tools. Code editors should not simply be text editors, but tools which know they are operating on code and treat it as such, allowing one to operate on an AST view for certain operations.

Vector graphics and the concept of graphics 'surfaces' rather than a simple framebuffer should be the 'step 0' of supporting moving to a more modern system. Take advantage of modern pixel densities on high-resolution screens.


k8s is like a big OS


(1995)

    <META NAME="Date-Revision-yyyymmdd" CONTENT="19971128">
    <META NAME="Date-Creation-yyyymmdd" CONTENT="19950521">


Thanks!


When someone initially looks at all the bespoke interfaces on an OS that could easily be provided by a single API, one’s first and natural response is to wonder whether all this complexity has a non-historical justification for it. On Linux/BSD, I would argue that these choices are mostly historical and do not originate from some theoretical model of how an OS should be structured.

But let's say we sit down and write a new OS: should the API for sockets be the same as for local files? Should we be able to write and read from processes by catting to and from some synthetic process file on disk? Should I be able to mount the internet on a directory and interact with sites by ls'ing their directories? Maybe I should be able to mount remote CPUs and pin tasks to them? We can even take this further: registers as files, memory addresses as files, pixels and windows as files, etc.

All of these things sound super nice, and in many ways, they are. The everything is a file concept can be taken further than even Plan9. And fundamentally, this is what the post is arguing for. Except instead of everything is a file on a file system, everything is a table on a database.

The advantages of this approach are pretty obvious: we provide a single, consistent interface (read/write for files, or select/insert/delete for tables), which makes the development surface appear very simple and straightforward.
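
As a toy illustration of the table-shaped interface, here's a sketch that snapshots the Linux process list into an in-memory SQLite table and queries it (Linux-only, since it reads /proc; the schema is invented):

    import os
    import sqlite3

    # "Processes as a table": snapshot /proc into SQLite, then query it.
    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE processes (pid INTEGER, comm TEXT)")
    for entry in os.listdir("/proc"):
        if entry.isdigit():
            try:
                with open(f"/proc/{entry}/comm") as f:
                    db.execute("INSERT INTO processes VALUES (?, ?)",
                               (int(entry), f.read().strip()))
            except OSError:
                pass  # the process exited while we were scanning
    for pid, comm in db.execute("SELECT pid, comm FROM processes ORDER BY pid"):
        print(pid, comm)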

The problem is that this simplicity is an illusion, just a black-box abstraction over what's really going on. In many ways it actually makes things more complicated, as read/write become incredibly polymorphic. Maybe the API (the syntax) stays the same for everything, but the actual semantics remain complex, maybe even more complicated than distinct APIs.

Even when the complexity of the semantics between the two approaches is similar, there are other problems. There are arguments to be made that heterogeneous resources should not all be provided through a homogeneous interface. For example, an operation that is O(n) probably shouldn't have the same interface as one that is O(n^2). It makes it very easy for a developer to write incredibly inefficient code.
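
A small illustration of that hazard, in plain Python: the same indexing syntax hides very different costs, and the innocent-looking loop at the end is quadratic:

    # A linked list exposing the same __getitem__ interface as a list.
    class LinkedList:
        def __init__(self, items):
            self.head = None
            for x in reversed(list(items)):
                self.head = (x, self.head)

        def __getitem__(self, i):  # O(n) walk on every access
            node = self.head
            for _ in range(i):
                node = node[1]
            return node[0]

    ll = LinkedList(range(1000))
    total = sum(ll[i] for i in range(1000))  # looks O(n), is actually O(n^2)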

At its core, this dichotomy is best epitomized by Richard Gabriel's "Worse is Better." In the essay he talks about the difference between the New Jersey school and the MIT school. One of the differences is how the two schools think about APIs. The MIT approach is to design opaque and complete interfaces that solve the problem correctly at the expense of underlying complexity. The New Jersey or Unix approach is to value simplicity of the system at the expense of a more complicated API.

You can see an example of this in the read() system call. read() in Unix is hard to use and annoying as hell. Many bugs stem from its misuse. The system call can (and does!) return less data than asked for, even if it didn't hit an EOF. Making read() always fill the buffer except at EOF is a very hard problem and would have created a lot of extra complexity in the system. The MIT approach would be to implement that complexity anyway, because a simple interface is more important.
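
The standard application-side workaround is a loop that retries until the requested byte count arrives; a minimal sketch:

    import os

    def read_fully(fd, n):
        # Loop over short reads; raise if EOF arrives before n bytes.
        buf = bytearray()
        while len(buf) < n:
            chunk = os.read(fd, n - len(buf))
            if not chunk:
                raise EOFError(f"wanted {n} bytes, got {len(buf)}")
            buf += chunk
        return bytes(buf)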

As you can probably imagine, there’s pros and cons to both approaches. Maybe the MIT way is better because many more people are going to be using read() than actually hacking on the OS. Or maybe the Unix way is better because the underlying simplicity of a system allows developers to attain a mental model of what’s going on.

If anyone is interested, look at the OpenGenera source code (MIT school) vs., say, Plan 9. OpenGenera is undoubtedly a sublime and beautiful system, but the code required to do it is just absurd. Plan 9 is maybe less sublime, but the code itself is dead simple. You could also compare the GNU userland vs. the OpenBSD one.

tl;dr: There are costs to homogeneous APIs for heterogeneous things: complexity, difficulty forming a mental model, and ease of unintentional misuse. There are also benefits: consistency, beauty, easy programming.

Personally I like to take a balanced approach and decide on a case by case basis on how to trade off simplicity and correctness. Being a dogmatic programmer isn’t a good thing and it certainly doesn’t help your employers.


Yeah, I think an API of that sort would quickly turn into a headache if it subtly did code dispatch through a common interface on objects that are hardly related in reality. From a programming perspective, that kind of opaqueness is more of a mental burden than a relief.

I can see a database interface being useful if you constrain the scope of what an entry is and don't allow those operations to actually modify the objects.

The extent of the scope could start at what are traditionally considered OS resources: not random blocks of text, newsgroups, or pixels, but things that would generally be more useful to query.

Not being allowed to actually modify an entry through 'delete' presents a problem. The database and the code that is allowed to modify its state would have to be synchronized/notified on an update.

In this way, the database is a front-end to the OS.


At "ultimate", I thought the author was going to aim for consciousness... But what, databases?!


Likewise. TBH this has all been done with NewtonOS: http://preserve.mactech.com/articles/mactech/Vol.09/09.11/Ne...

It used soup unions to manage removable storage, and worked really well within the 512k of memory available.

These days I'd recommend an embedded SQLite database for program configuration and data - somewhat standard, easily recoverable, backups are easy, uniform access from a program's API etc.
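
For instance, a minimal sketch of SQLite-backed program configuration (the file name and schema here are made up):

    import sqlite3

    db = sqlite3.connect("myapp-config.db")  # hypothetical path
    db.execute("CREATE TABLE IF NOT EXISTS settings"
               " (key TEXT PRIMARY KEY, value TEXT)")
    db.execute("INSERT OR REPLACE INTO settings VALUES (?, ?)",
               ("theme", "dark"))
    db.commit()
    (theme,) = db.execute("SELECT value FROM settings WHERE key = ?",
                          ("theme",)).fetchone()
    print(theme)  # -> dark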

Lastly, I'd say the ultimate OS would have a bit more than this surely - minimal formally verified TCB, Arrow datastructures, Managed (but not GC'd) memory, native scripting language that blurs the distinction between users and developers, fully scalable auto-generative but skinnable UI, just to name a few.

I guess the trick with all of this is a suitable language which forms the OS base (think LISP OSes, Micropython, Squeak, AOS, NewtonOS), you get that correct and the rest can be layered on top.


Actually, Microsoft tried this idea themselves with WinFS. It was planned for Vista, but eventually the project was stopped because it was too much effort. I could imagine this being really useful if it were possible to add arbitrary custom attributes.


Does anyone have the story on why this project was actually halted? By all reports it sounded like it would have been a giant leap at the time, had it actually landed.


Probably because it was a dumb idea.

There's no sense storing videos, baby photos and "quarterly report new (2) backup newest 08.2019_john.xlsx" in a database.


If the filesystem itself were modeled after a database (which, it can be argued, it somewhat is), it's not much of a stretch for WinFS to have been a thing.

I was looking forward to seeing it in action based on reports of what it was.


File systems are databases in a small way. They use B-trees and such, but they don't enforce a rigorous global type system, don't impose a table structure where everything has to have a schema, don't have the overhead of a query language interpreting every file access, and don't insist that all file accesses be ACID compliant, etc.

WinFS imposed all of those things and the associated overheads and much, much more every time you accessed anything.


It was very slow and resource-hungry, and also very time-consuming to develop things on. Imagine if you were only allowed to write or run applications that ran on top of Oracle. Even if you try to write a program that works on text files, those text files are stored as blobs in an Oracle database.


> Even if you try to write a program that works on text files, those text files are stored as blobs in an Oracle database.

Ever heard of Oracle Database Filesystem (DBFS)?

https://docs.oracle.com/en/database/oracle/oracle-database/1...


Isn't that the sort of problem you solve once, though? If you want a "this is just a stream of characters I can do POSIXy things to" interface, that's a library with fread(3) et al in the front, and a query for a single field in the database at the back.
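
Something like this sketch, where a stream facade sits over a blob column (the table and column names are invented):

    import io
    import sqlite3

    # POSIX-ish stream reads over a blob stored in a database.
    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE files (path TEXT PRIMARY KEY, data BLOB)")
    db.execute("INSERT INTO files VALUES (?, ?)", ("/notes.txt", b"hello world\n"))

    def open_blob(path):
        (data,) = db.execute("SELECT data FROM files WHERE path = ?",
                             (path,)).fetchone()
        return io.BytesIO(data)  # read()/seek() now behave file-like

    f = open_blob("/notes.txt")
    print(f.read(5))  # b'hello'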



That discussion doesn't contain any high-quality posts; why not link the actual article?

http://hal2020.com/2013/02/14/winfs-integratedunified-storag...


My ultimate OS is Linux, but it's rock stable, supports all hardware and runs OSX and Windows software natively. Heh


You can already run a substantial amount of Windows software natively on Linux with Wine, and there is also Darling, which is working on bringing macOS software to Linux.

With that being said, my ultimate distro is Linux with a desktop environment that is less braindead than GNOME.

https://www.winehq.org/ https://www.darlinghq.org/


Yes... "run"... a "substantial amount"...

Nah man, sadly most of the really useful software is quite unstable under wine, if it even runs :(


My ultimate distro is very similar to Linux, with a desktop environment similar to OS X but more open and configurable.


elementary OS is heading that way, if you haven't heard of it yet. But the ecosystem is nowhere near close to macOS's. Sadly for me, since I mostly run Linux. :P


As an OS I agree, but the desktop experience leaves a lot to be desired.

I'm currently trying to figure out how to properly drive a 4K monitor with my laptop and not have it end up lagging. Supposedly hardware acceleration is enabled in Firefox, but trying to play even a 1080p YouTube video drops frames. On Windows it is buttery smooth, so it's not a hardware issue. Don't even get me started about scaling...

(Currently using GNOME on Wayland)



Linux is a kernel


I'd just like to interject for a moment. What you're referring to as Linux, is in fact, GNU/Linux, or as I've recently taken to calling it, GNU plus Linux. Linux is not an operating system unto itself, but rather another free component of a fully functioning GNU system made useful by the GNU corelibs, shell utilities and vital system components comprising a full OS as defined by POSIX.


> A dream of an ultimate OS

I want linux with:

— Wayland

— KDE and only KDE

— Development SDK based on Qt (other toolkits strongly restricted; GTK and GNOME have gone the wrong way)

— Packaging based on something like Android's APK or macOS's DMG (no repos by default, except maybe for the base system, but other installation methods not restricted).

Snaps and Flatpak are trash because their goal is so-called security and isolation, when a normal human being just needs convenience (I hate repos on the desktop).

Yes, it's the macOS model, but with Linux you can have different filesystems.


I don’t think very many people agree with you. I appreciate Unix/Linux because they are fundamentally libertarian. I don’t have to suffer using some default crappy and bloated desktop environment (KDE certainly fits that definition), I get to choose (I personally like cwm).

You could certainly make a distribution in which using anything but Qt would be very very difficult. But who would want to use such a thing? Besides you, of course.

Moreover, your comment isn’t a cogent response to the post. Did you even read it? He’s talking about a new paradigm of OS, something fundamentally different. What you’re talking about is the same old same old, but worse.



