Hacker News
Mercury OS (mercuryos.com)
162 points by sagelemur on May 1, 2023 | 159 comments



This looks like the exact opposite of what I would do if I were to redesign operating systems. The job of an operating system is just to run applications and get out of the way, and most importantly to be deterministic and reliable. I don't need my OS to "predict what I might need"; I know what I need myself.


It pretty much embodies everything I hate in modern OSes. I know what I want to do and I know how I want to do it, so get out of my way. Meanwhile, modern OSes can't even handle a simple "maybe don't steal fucking focus from the window I am currently writing in just because some other app decided to open another window."


God, the fact that focus stealing still exists in 2023 is maddening. Please just let me open windows and apps in the background while I continue to work uninterrupted in the main focus window in the foreground. Even web browsers let you open tabs in the background; why haven't OSes caught up to this yet?

That said, I like the concept of this OS design. It reminds me a little of the theoretical concept behind Star Trek’s LCARS interface, where the interface adapts to the user and the task at hand. As a power user I doubt I’d use this myself, unless it could be installed on top of NixOS, but it might be more intuitive and accessible to a more general user base.

It will be hard to get rid of files and folders though, even for general users. As messy as that can get, it’s a simple abstraction burned into just about every human’s brain at this point.


I have the opposite problem. If I open too many Windows, the Windows window manager starts glitching out. When I open an app, it doesn't always come to the front and I have to go alt-tab through dozens of windows to find it. (Sometimes it ends up at the bottom, so alt-shift-tab switches to it... but sometimes it ends up in the middle...)

Also if I open too many windows, opening Windows explorer takes 1-2 or more seconds. Something about exhausting GDI handles?


How many are we talking about here? I'm curious how many windows is too many for Windows.

I don't often have more than 30 open, and at that point I start thinking it's excessive and that I should probably tidy up to make sure I actually complete a task rather than get distracted.


Asked ChatGPT to write me a script to count the open windows :) I have 164. Although I have quite a bit of RAM left at the moment. The OS getting sluggish is usually what prompts me to close a few open apps. So having lots of RAM is bad for my productivity...

The annoying part is just that Windows Explorer gets really sluggish -- to a comical degree. For example, I'll do a file operation (copy, rename, etc.) and it'll take that same Explorer window 5 seconds to register the operation that it itself performed. Meanwhile, Sublime Text open in the background detects it within milliseconds.
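For anyone curious, a script along those lines only takes a few lines of ctypes. This is a hedged sketch (not the commenter's actual script): it is Windows-only, and counting only visible top-level windows is my assumption about what "open" should mean here.

```python
import ctypes
import sys

def count_top_level_windows():
    """Count visible top-level windows using the Win32 EnumWindows API."""
    if sys.platform != "win32":
        raise OSError("this sketch requires Windows (user32.dll)")
    user32 = ctypes.windll.user32
    # EnumWindows invokes the callback once per top-level window
    EnumWindowsProc = ctypes.WINFUNCTYPE(
        ctypes.c_bool, ctypes.c_void_p, ctypes.c_void_p
    )
    count = 0
    def on_window(hwnd, lparam):
        nonlocal count
        # Skip hidden helper windows; only count what the user would see
        if user32.IsWindowVisible(hwnd):
            count += 1
        return True  # keep enumerating
    user32.EnumWindows(EnumWindowsProc(on_window), 0)
    return count
```

Note that this counts more than the taskbar shows, since many visible top-level windows belong to background processes.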


What are you doing with them? Even with browser tabs I usually stop when I have a full tab bar (I have a vertical-tabs addon, so it is on the side of the screen).


I still have no fucking idea how alt-tab logic works in Windows. It's definitely not in order of recency...

A search function over window names would also be nice... I swear that shit hasn't progressed in Windows since '95.


Why are you still relying on alt+tab in 2023? I haven't touched Windows for more than 5 minutes in a long time; isn't there a function to show all windows, like in most Linux DEs and macOS?


Real question is why I'm still using Windows ;)

I just looked it up, there's a thing called Task View (Win+Tab) which is this big scrolling view of open window thumbnails.

Curiously, Alt+Tab itself is very slow (half a second of lag?) but if I kill explorer.exe I get a different Alt+Tab screen which is extremely responsive.

(And Win+Tab is, of course, much slower than even the slow Alt+Tab...)

I wish I could just leave explorer.exe killed, but it makes it a bit harder to do things. Maybe I should do that, to force me to program my own alternatives in Python. (They'd still be faster than what Microsoft made...)


I disable that functionality on Gnome. Alt-tabbing to a terminal and alt-tabbing back to the editor is much faster than displaying all windows and clicking on the one I want to go to. Even two or three alt-tabs are faster than that. Furthermore, alt-tab doesn't move the screen; it only raises a window on top of the others. Everything is still. The Gnome way is to move everything on the screen and then move it back to its original place. I don't suffer from motion sickness, but windows should stay put where I placed them.


> God, the fact that focus stealing still exists in 2023 is maddening. Please just let me open windows and apps in the background while I continue to work uninterrupted in the main focus window in the foreground. Even web browsers let you open tabs in the background; why haven't OSes caught up to this yet?

The i3 tiling WM handles this reasonably well. I use it pretty much in an "app per workspace" model (with config auto-assigning each started app to its designated workspace), and if an app on another workspace wants focus it doesn't get it; it just lights up that workspace and the app's window. There are also config options to explicitly prevent all or some windows from taking focus.
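For reference, that behavior can be expressed directly in the i3 config. A minimal sketch, where the app classes and workspace numbers are placeholders rather than anything from the comment:

```
# Auto-assign apps to their designated workspaces on launch
assign [class="Firefox"] 2
assign [class="Slack"] 9

# Windows requesting activation mark their workspace urgent
# (lights it up) instead of stealing focus
focus_on_window_activation urgent

# Never give initial focus to matching windows, even on the
# focused workspace
no_focus [window_role="pop-up"]
```

The `focus_on_window_activation` values are `smart`, `urgent`, `focus`, and `none`; `urgent` is what produces the "app wants something but doesn't get focus" behavior described above.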

> That said, I like the concept of this OS design. It reminds me a little of the theoretical concept behind Star Trek’s LCARS interface, where the interface adapts to the user and the task at hand. As a power user I doubt I’d use this myself, unless it could be installed on top of NixOS, but it might be more intuitive and accessible to a more general user base.

That idea seems to work only where you either build the interface for yourself, or build it for a specific narrow set of tasks.


Focus stealing 100% has a place, just not on app launch and while you're typing. Since switching to Wayland, which doesn't allow focus stealing, I keep thinking apps have frozen when a popup shows up in the background.


Not for me. It's by far the most annoying thing any OS can do, especially since (I think?) not a single fucking one has figured out that it is NEVER desirable to steal focus from the app the user is currently typing into.

In i3, at least, it just highlights the window and workspace if it is not on the currently active workspace, instead of switching focus, so you know the app wants something but don't get diverted.

The only thing that should be able to steal focus is the app you're currently using opening another window. There is no case where focus stealing by a different app is desirable and couldn't just be a notification.


The window popping up anyway, but in the background and without focus, and in a way that freezes the foreground, isn't the answer to focus stealing.

It's more like a rebellion in a jail block doing time for focus stealing. :)


Totally agree. I'm sick and tired of apps "helping me": constantly interrupting with tips and "did you know..." popups, rearranging my windows, and so on. I want to get stuff done, not start a journey and a relationship with new code. And nobody knows better than the user what stuff they wanna do.


I think this is the best argument against "telemetry" in apps. If there is no way to measure people's reactions to such annoying prompts, there would be no way for a product manager to justify it by saying "metric X went up by Y% when we showed this popup".


No, no, we need to connect microphone to telemetry so we can hear user yelling "fuck off" at the features


It's like the corporate OS developers think they know better and need to constantly remind you how pathetic and insignificant your existence would be were it not for their omnipresent magnificence.


It's disgusting and patronizing honestly.

It's like a perpetual chase of the "new, inexperienced user", at the cost of people who have been using the OS for the last 5-10-20 years, with no actual benefit to anyone using it seriously to do work.


In KDE Plasma, a window manager, you can configure "focus stealing prevention", as well as activation and raising of windows to be on click or on hover in a fairly precise way.

Still that is fairly far from the OS layer. Microsoft and Apple sell the 'OS' as a bundle with everything (a window manager, file explorer, etc) that you cannot really configure a lot, but that is a problem with their products.


How does it embody stealing focus when this design goal is specifically about focus?

"Focused. The clutter we take for granted in today’s operating systems can be overwhelming, especially for folks sensitive to stimulation. Mercury is respectful of limited bandwidths and attention spans."


That's because this OS is a speculative puff piece by a UI designer looking to drum up some business.


I don't know why you people are so worked up about somebody's attempt at redesigning an OS.

Here's the thing: You are not the target audience of this OS and you are by no means required to use it.

In my opinion, this is a solid attempt at creating an operating system for non-techies. Way fewer concepts to learn, way easier interaction.


I think the entire concept of "non-techies" is condescending to the people you're referring to. Computers exist, the cat is out of the bag -- computer literacy is now simply a facet of literacy in general.

I would much prefer a world where we empower people to actually learn about the tools they use, rather than one in which we jump through hoops in order to try and design around predictions about what we think "they" want when, in reality, it's like trying to hit a billion different targets with a single dart.

We should be reaching for designs that give tools to and enable people to achieve that which they can think up with their own free will and imagination, not predictive trash that more often than not simply results in annoying the user with something that is, at best, close but not close enough to what they want to actually be useful, leaving them with the frustration of not being able to do anything about it.


> I think the entire concept of "non-techies" is condescending to the people you're referring to

It's not, stop being offended on behalf of others. "non-xxx" is as neutral as it can be to refer to a population.

And it does make sense to give different end products to techies and non-techies, because the needs are wildly different.


They’re not redesigning an OS. They are describing a theoretical UI.

This is about as close to designing an operating system as drawing a front cover is to writing a book.


That thinking is why Linux failed as a desktop OS. The UI part is harder than the hardware part, because it deals with diverse, non-logical, and unpredictable meatware, instead of simple and predictable silicon.


You've shared quite a number of misconceptions there (not least of all your definition of "failed" -- Linux is used by millions of consumers every day, so if that's a "failure" then you have some rather unrealistic expectations of what is considered a success. But ultimately "Linux" isn't an OS anyway, so it isn't really fair to compare it as one).

> That thinking is why

What kind of thinking? Acknowledging that writing an OS is a massive undertaking? I have written hobby OS kernels so I'm not speaking hypothetically here. I wasn't suggesting that the UI isn't important either (which is what I believe you were alluding to with your "that thinking" snub), just that the UI is literally just the surface layer in a much deeper stack of technologies -- all of which is important.

> The UI part is harder than the hardware part

Writing device drivers is much harder. The failure modes are much more extreme. No amount of UI polish will improve the user experience if your OS frequently trashes your file system because it doesn't handle buggy storage devices correctly. No amount of UI polish will improve the UX if your platform frequently crashes, or even just doesn't support your hardware at all. And to compound matters, the vast majority of hardware isn't documented either.

So yeah, UI is important but your OS isn't going to win any fans if you don't nail the stacks beneath too.

> [UI's] deals with a piece of diverse, non logical and unpredictable piece of meatware, instead of simple and predictable silicon.

Hardware is often anything but predictable. The fact that it seems that way is a testament to just how well written most operating systems are.


It definitely failed.

Having lived through 20+ "the year of the Linux desktop"s, I can tell you the goal was clearly to become the dominant desktop OS that everyone used.

About that last part... the OP was talking about people, not actual hardware; you seem to have missed that. Hardware is in fact predictable, unless it's failing or just that poorly designed.


> Having lived through 20+ "the year of the Linux desktop"s, I can tell you the goal was clearly to become the dominant desktop OS that everyone used

Again, Linux isn't an operating system like Windows, macOS or even FreeBSD. Even the broader definition of Linux has it as multiple different collections of different individuals sharing different groups of packages. What one team might aim to do with Linux is very different to what another team might want to do. And "the year of the Linux desktop" wasn't a universal goal by any means. Case in point: I've been using Linux (as well as several flavors of BSD, Macs, Windows and a bunch of other niche platforms that aren't household names) for decades now, and I've never once believed Linux should be the dominant desktop. I'm more than happy for it to be 3rd place to Windows and macOS just so long as it retains the flexibility that draws me to use Linux. For me, the appeal of Linux is freedom of choice -- which is the polar opposite of the ideals a platform would need to hold if it were aiming for dominance.

So saying "Linux failed" isn't really a fair comment. A more accurate comment might be that Canonical failed to overtake Windows, but you still need to be careful with the term "failed", given that millions of regular users would be viewed as a success by most people's standards -- even if it didn't succeed at some impossible ideology of a desktop monopoly.

> About that last part… the op was talking about people, it actual hardware, you seem to have missed that.

No, I got that. It's you who missed the following part:

> Hardware is in fact predictable, unless it’s failing or just that poorly designed.

Don't underestimate just how much hardware out there is buggy, doesn't follow specifications correctly, is old and failing, or just isn't used correctly (eg people yanking USB storage devices without safely unmounting them first). The reality is that hardware can be very unpredictable, yet operating systems still need to handle that gracefully.

The market is flooded with mechanical HDDs from reputable manufacturers which don't follow specifications correctly, because those devices can fake higher throughput by sending successful write messages back to the OS even while the drives are still caching those writes. Or cheap flash storage that fails often. And hot-pluggable devices in the form of USB and Thunderbolt have only exacerbated the problem, because now you have devices that can be connected and disconnected without any warning.

Then you have the problems that power saving introduces. Your OS now has to power hardware on and off gracefully, even hardware that was never meant to be connected and disconnected (otherwise your OS is borderline useless on any laptop).

...and all of this is without even considering external conditions (ie the physical nature of hardware -- the reason it's called "hardware"). From mechanical failures, hardware getting old, dusty, dropped etc. Through to unlikely but still real world problems like "cosmic bit-flips".

Now imagine trying to implement all of this via reverse engineering - because device manufacturers are only going to support Windows, maybe macOS and, if you're lucky, Linux. And imagine trying to implement that for hundreds of different hardware types, getting each one stable. Even just testing across that range of hardware is a difficult enough problem on its own.

There's a reason the BeOS-clone Haiku supports FreeBSD network drivers, SkyOS's userland was eventually ported to Linux, and Linux (for a time at least) supported Windows WiFi drivers. It isn't because developers are lazy -- it's because this is a fscking hard problem to solve. And let's be clear, using another OS's driver model isn't an easy thing to implement either.

Frankly put: the fact that you think hardware is easy and predictable is proof of the success of Linux (and NT, Darwin, BSD, etc).


Today I learned that humans are predictable and consistent, but hardware is not....


I wasn't making any comment about the predictability of humans. However there have been plenty of studies that have proven humans are indeed predictable. If we weren't then dark UI patterns for email sign ups, cookie consent pop ups, and so on wouldn't work. The reason UI can be tuned for "evil" is precisely because of our predictability. But this is a psychological point and thus tangential from the discussion :)

To come back on topic. I wasn't saying hardware is unpredictable per se -- just that it often behaves unpredictably. And I say that because there are a set of expectations which, in reality, hardware doesn't always follow.

However, the predictability of humans vs hardware is a somewhat moot point, because that's only part of the story of why writing hardware-interfacing code is harder than human interfaces. I did cover a few of those other reasons too, but you seem fixated on the predictability of the interfaces.


This is yet another thing that non-techies sincerely don't care about. For them, it is the OS.

Now you can walk around telling every grandma that they should not refer to it as OS, but you will unlikely succeed in that mission. Accept it...


Non-techies wouldn't even have heard the term OS before, let alone know whether the post above is an OS or not.

But we are technical people talking about OS design. So I don’t really understand your point.


"Operating system" is a frequent term in the news. I don't know where you are from.

My point is that you produced dozens of paragraphs about terminology but added zero value to the actual discussion and all that while totally understanding what others actually meant.


> “Operating system" is a frequent term in the news. I don't know where you are from.

Tech news, sure. But not in average people's news. Or at the very most, only in special tech segments. Either way, definitely not frequent.

> My point is that you produced dozens of paragraphs about terminology but added zero value to the actual discussion

I was discussing the complexities of writing hardware drivers after it was stated that it’s easier than UIs.

Where’s the value in your comments here? You are adding nothing, just arguing for the sake of arguing.

> all that while totally understanding what others actually meant.

“Misunderstanding” I assume you meant (given the context of your comments). Either way, and as I said to the other commenter (though I’m not convinced you two aren’t shill accounts for the same person given your writing styles are the same), I did understand. I just disagreed and cited some experience I had to back up my points of view.

I don’t really understand your problem here. This is a tech forum and we are talking about operating system development. Yet you’re begrudging me having a technical conversation about operating system development.


I didn't understand how to interact with anything using Mercury OS. One of the first things I'll do today will be opening a terminal, git pull, then edit some files in my editor, start a Vagrant VM with Django, check the result. Nothing of what I saw on this site gives me an idea of how to do it.


Not sure about the non-techies part. He wants to replace icons with a command-line interface and voice recognition.


That may be why I thought "this might be cool as what a video game character interacts with on a tablet in a futuristic-themed RPG, like for the menu screen or something."


I would go in a different direction too.

Personally, I would like some sort of "personal workspaces". Sort of virtual desktops that were project-oriented.

I would like to switch to a project and find all my notes, emails, lists, files and more (terminal windows? web browsing/tabs? apps?) in that environment.

And when I run out of time, shut it down. Next time I switch to it, all my context is right there, ready to jump in.

It could be some combination of fast user switching, virtual desktops, containers, VMs -- I don't know -- but unified.


macOS can do this; it is called Stage Manager. https://support.apple.com/en-us/HT213315

I tried it, but it is just not there yet. My use case is switching between 3d cad design and printing apps (browser, slicer, notes) and between coding (jetbrains ide’s, terminal, browser, draw.io app)


But in the spirit of the parent article, it would be nicer if the OS knew I wanted to do 3D CAD printing and then organized my windows with all the CAD/3D-printing-related modules -- to let it create my perfect workflow because it gets my intents.

And when I switch from design to the production stage, it should make the slicer module more prominent. CAD design is usually just tweaks after this.


+1 to this, it's crazy to lose all (or most) contexts across a reboot or when switching between machines. Each app has some sync capabilities (e.g. PowerPoint, web browsers, Google docs, chrome tabs, Firefox tabs), but there is still no universal solution.

Fuck.


macOS is actually pretty good about this. It restores applications on restart, and its provided applications pick up where they left off.

I don’t use the workspace feature on the Mac, I assume they’re recovered as well.

Obviously applications need to be restart aware as well. I really like how the provided Mac apps (Pages, Numbers, etc. ) work with the first class document model in the system. I have dozens of Untitled documents across apps, some are years old. Never “saved” them. They just exist. Across reboots, app upgrades, and OS upgrades.

I wish more apps embraced my lazy house keeping.

I don’t know how the documents sync across devices, if at all.

I wanted to mimic the OS document model in Java with my own app, but that’s easier said than done.


Unfortunately not. After reboots, which are annoyingly often, MacOS places all my carefully organized windows and their layout into a messy pile on the first/primary space and I have to spend a significant amount of time to get everything back to the state I want. This is also a huge pain in the ass when moving between external and internal monitor, where my windows and their size ends up not how I want them every single time.


KDE Plasma has something like this called "Activities". Check it out.


Sounds like this is pretty doable in Linux with some tinkering


Not even tinkering, KDE has this builtin.


I presume you're referring to KDE Activities. I use Activities all the time, but they're still far too much of a static thing, and way too High Ceremony to create/curate/maintain.


KDE tried something like that in KDE 4 but it wasn't well received so they toned it down in KDE 5.


You can do that in fvwm, and you have been able to for 20 years.


I'm sure it's "well-designed" from a theoretical perspective, and unimpeachable from the perspective of Professional UI/UX Development, but it's so inflexible and non-extensible it comes off as something between a toy and a jail cell. "You will enjoy being productive in this environment focused on channeling productivity to Approved Ends. You will enjoy using Approved Technology in Approved Ways. You will ignore everything beneath the interface, for what the Approved Technology is doing is Approved, and therefore not for you to interfere with."

Cf: The Anti-Mac Interface, for a bit of a tonic

https://www.nngroup.com/articles/anti-mac-interface/


> This looks like the exact opposite of what I would do if I were to redesign operating systems

Me too but to be fair, we would probably call this a desktop environment, not an OS. It's a shell that could run on top of Windows, MacOS or Linux. Data could be stored in a standard filesystem or a database and managed by the shell and presented to users as "content and actions [...] fluidly assembled based on your intentions"


I agree, but to be fair, the vast majority of this stuff is very high level window manager and UI framework features.


What would you change compared to Android to improve "let me choose the application I want and run it?"


what could be easier than "tapping on an icon with my index finger"?


Cmd + D > Fuzzy search

... or whatever your OS application search is


Just for context: this is a 4-year-old design study by my friend and amazing product designer Jason Yuan, who also worked on Sprout.place and more recently some stuff at Apple (@jasonyuandesign). A lot of the comments currently seem to miss this and are focusing on immediate feasibility, "successful" design, or the ability to deploy it IRL. That is not the point here. Things like this are both creative exposition and cornerstones for conversations for the rest of the community.


I enjoy this kind of creative exploration.

I get a lot of value from HN, but technocratic communities often fail to understand the value of reimagining everyday things.

They sometimes get so caught up in the technical bits and pieces they don't like about something that you end up with responses like those you see in this discussion, or HN's reaction to Dropbox, or Slashdot's reaction to the iPod.


Yeah, this whole thread is why Hacker News has the "orangesite" reputation it does. I wouldn't have shared anything like this on here, and that is a criticism of HN, not projects like Mercury OS.

I thought this was an interesting project. The site is well-designed. The UI is attractive. As a designer, I really enjoy reading about these things. The fact that this isn't something you'd build your personally customized Linux distro on is not a problem for me. (And no, I don't know the designer or any of his friends. I think the challenge the guy took on was interesting, and I liked reading about it.)

I like HN because of what people share here, but I've never liked the community itself, and this thread is a pretty clear example of why.


Why wouldn't you build this on Linux?


No, I said the OS didn't look like something you'd use for heavy-duty development work.

I was being a little sarcastic, but I was just amazed at all the people who spent ten seconds looking at this and said "this is useless, it clearly wouldn't be very good for all the extremely complicated powerful things I want to do. Can it handle fifteen different terminal windows? Fail!!1"


I don't actually see any OS at all on this site. I see a UI design idea. Hence my assumption that you don't build your stuff on top of an OS.

Or maybe I'm just old and words no longer mean things the way that I am used to interpreting.


The interesting thing to me is that a compromise is totally feasible and doesn’t need its own OS to accomplish. Let me save a “workflow”.

What is a workflow? A collection of apps, all with their internal states saved and their positions saved on the screen. Let me close my current flow and open a new flow, with as little friction as possible. I could have an "email" flow in the morning, a "repo 1" flow after that, a "repo 2" flow (and a repo 1+2 flow, etc.), a documentation/paper-reading flow, and on and on. A few macros can probably accomplish this. Maybe it already exists?
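On Linux/X11, at least, the "save a flow" half of this is scriptable today. A rough sketch using the external wmctrl tool (assumed to be installed; only the snapshot side is shown, and the parsing of wmctrl's output format is the testable part):

```python
import subprocess

def parse_wmctrl_line(line):
    """Parse one line of `wmctrl -lG` output into a window record.

    Line format: <win-id> <desktop> <x> <y> <w> <h> <host> <title...>
    """
    win_id, desktop, x, y, w, h, _host, title = line.split(None, 7)
    return {
        "id": win_id,
        "desktop": int(desktop),
        "geometry": (int(x), int(y), int(w), int(h)),
        "title": title,
    }

def snapshot_flow():
    """Capture the current window layout (requires wmctrl and an X session)."""
    out = subprocess.run(
        ["wmctrl", "-lG"], capture_output=True, text=True, check=True
    ).stdout
    return [parse_wmctrl_line(l) for l in out.splitlines() if l.strip()]
```

Restoring geometry would pair this with `wmctrl -i -r <id> -e`, but restoring each app's internal state is the hard part -- there is no universal save hook, which is presumably why no universal solution exists.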

On a more fundamental level the authors are totally right about the debilitating distractions of apps and their damn notifications. But you’d need OS-level control to address that.


That's all non-obvious. As far as I can tell, this was just as serious as any other "this is the future" post. See the recent Humane keynote, for example, and the Vergecast's reaction to it.

That said, it's absolutely valid to question stuff like this. A lot of design-student-type stuff misses HUGE usability issues.


Is it non-obvious? The page goes like:

1. Title

2. Screenshot

3. "Mercury is a speculative reimagining ..."


Yeah, that sounds like pretty standard marketing copy. Definitely nothing to say this isn't a thing someone intends on bringing to market.


Speculative literally means conjectural, hypothetical, toying with an idea for its own sake, though. It's not a word I associate with product of any kind, more with a sketch than a prototype.


The screenshots tricked me.


To me there were a couple of giveaways.

1. “Art direction” is the second tab. There’s no technical components, no installation info, no GitHub.

2. Jason Yuan Design in the footer. So clearly some kind of UX person/group built this, not a hacker collective.

3. Flowery UX-heavy language and the absence of concrete features. Focusing on the “what” rather than the “how”.


I shouldn't need to be a detective to understand the point of a website, lol -- unless that is the point. It was completely non-obvious to me as well.


Standard marketing copy does not have the word “speculative” in it because it’s not… speculation


Can you really identify usability issues in something that is both a) just a concept and b) meant to shift the work paradigm substantially?

Local optimization for your existing workflow is not the intended goal of this exploratory project.


It is a well-known issue in products that require serious 'design' for human interactions that tech nerds are useless as target-audience samples.

In gamedev there are two groups of people: the first is "game designers" or "creative direction", and the second is "tech people". The best games for a wide audience happen when the first completely ignores the second on what needs to be done... and the second completely ignores the first on how the invisible bits have to implement it. Often the first cannot iterate properly without code changes at every step, and that kills the best ideas. I have seen a lot of success when design iterations can be done in a no-code or very-low-code environment.


It's worth discussing implementation feasibility of product designs. While I think a modern Canon Cat would be awesome, there are reasons why such a device doesn't exist.

For example: When you make a non-dedicated device, you are outsourcing a lot of the functionality to third parties (e.g. via an app-store). This significantly affects the costs of a product launch.


The ability to deploy an operating (observe the word!) system is paramount, and it IS the point here!

Designers should stop approaching functional tasks through vague, coined philosophical mission statements in place of pragmatism; it is like sewing the coat to match a pretty button with esoteric properties.

Forgive me for being harsh, but I am pretty fed up with the trend of our time in which product design is approached the way pompous famous fashion designers make weird clothes only suitable for the catwalk, and celebrate each other over otherwise useless pieces of textile.

We can instead express creativity and converse about ideas through actual usability. We must discuss operating(!) systems through actual usability, through something that could be realized and deployed, something actually possible to use for its purpose!


I'm going to disagree with you here. Concepts have their place in the world, one cannot just have pragmatic ideas in a society, nor can you just have idealistic ideas, there must be a balance, they both feed each other.

While those weird clothes by pompous designers can only be worn on the red carpet, they offer cues for things to trickle down to the clothes you and I wear. They're big ideas, exaggerated themes, like an extract, you don't consume the extract alone, you dilute, you mix.

Car concepts are the same; you never see a car's concept version make it all the way to production. They're the concept - the idea is to shoot big, dump everything, write it all out; you can edit later.

We've got enough people on this earth that a great many of them can stay up in the clouds as astronauts of ideas, intermittently communicating to us their findings from the vast universe of creativity, at no cost to society's advancement - in fact to its great benefit.

Just my 2 cents


Being cornerstones for conversations means that people are going to talk about things that matter to them, like usability or the ability to deploy. These are good things! That's what design studies are supposed to do!

I don't know why you're attempting to discourage this.


I don’t know how to feel about a promising designer only spending two-ish years with ADT and bolting. Perhaps if this work is paired with a personality of similar righteousness, that could be a team issue - otherwise, to have someone attempt great things and then immediately (in the relative product-roadmap sense) leave is troubling from the outside.


And after 4 years, no download link and no system requirements?


That's great context. I started having a flashback to the hype pitch around thegrid.io when I started reading the copy.


What a bunch of non statements.

They’re just designing another tablet UI and renaming everything.

Instead of applications, virtual desktops, and launchers we have “flows, modules, and spaces”

It’d be cool to see more natural language support in OS interfaces though!


I don't know if they could be more pretentious even if they tried:

"Mercury rejects the Desktop Metaphor and App Ecosystems as fundamentally inhumane. "

2 pages later...

"Every Module can be defined and redefined using its Locus (the action bar). Locus combines the power of a Command-Line Interface with the convenience of Natural Language Processing and the discoverability of GUI."

So they reject the inhumaneness of the Desktop metaphor, but replace it with another (Locus - what does that even mean?). At least I know what an action bar is.

Then, it touts itself as a complete replacement of the old paradigms, focused on touch interactivity, but:

"You can tap the Command key to toggle the display of available actions if you are unsure of where to start."

The command key? Is this built for Apple devices only? And if it's touch centric, why do I have a key?

ETA: a way more useful redesign IMO: https://uxdesign.cc/redesigning-siri-and-adding-multitasking...


Related:

MercuryOS – A speculative reimagining of current operating system paradigms - https://news.ycombinator.com/item?id=29948455 - Jan 2022 (8 comments)

Introducing Mercury OS - https://news.ycombinator.com/item?id=20043255 - May 2019 (2 comments)

Mercury OS: Modern, Humane OS Design Concept - https://news.ycombinator.com/item?id=20033445 - May 2019 (1 comment)

Mercury OS – A speculative vision of the operating system - https://news.ycombinator.com/item?id=20033328 - May 2019 (1 comment)


> The process of moving from App to App generates friction that takes you out of flow and distracts you from your intentions.

Please tell me why my workflow, which works for me, sucks more. What is this magical 'friction'?

Reads like it was written by an MBA.


Okay, consider you have 3 apps: gmail, slack and google docs

In gmail, you have a shipping confirmation, a calendar invite, and a team newsletter.

In slack you have a question from a customer, a review request from the team, and a poll for the next offsite.

And in Docs, you have a new project summary, a budget proposal, and a post-mortem doc.

You see how you actually have 9 different things to do (intentions), but the number of apps you have is just 3? Sometimes you'll have a thread of things across multiple apps. So despite switching "apps", there's still the mental overhead of piecing things together. It's easy to overlook, but switching apps can cause an effect like walking into a new room and forgetting why you went there in the first place, hurting your focus/flow.


.... so, from an MBA who also doesn't know browser tabs exist.

Also, an OS doesn't fix any of that, especially for webapps, as they tend not to integrate well with anything else.


That’s not fair. Context switching often involves a lot more than opening up another tab. And even if it’s not a problem for you, maybe it’s a problem for other people? Or you in the future?

Maybe what I just outlined doesn’t require a new OS but acknowledging the problem seems in line with accepting the most charitable interpretation.


Right. What helps (me) is 12 virtual desktops, one per app, so I am a single keybind away from anything I need. Not... whatever this idea is.

Over the years I went pretty much from full-fledged GNOME/KDE through light alternatives (XFCE), all the way to a nerded-out i3 tiling manager + tilda console for the ad-hoc stuff like a CLI calculator. And a cute little script to display the list of .pcap files in /tmp and open Wireshark on the chosen one, because that's handy for the job.

I just observed that most of my usage is "switch between a bunch of full-screen apps + start those apps", so I cut out the fat.

I don't need icons on the desktop (I don't see it most of the day), apps are run from an alt+F2 launcher, a few common apps got their own shortcuts, and I have some keybinds to move them between desktops for the once-a-month case where I need an app on a different desktop than usual.

i3 matches apps to desktops by name, so if I press caps_lock or hyper_l + 2 I always get Firefox, and if I press <mod>+4 I always get IDEA, etc.

I do realize that's customization most people won't bother with, but a "distraction-less" OS should aim to do exactly that: get out of the way of someone's chosen workflow as much as possible, not try to shoehorn in its own - especially one that seems to love wasting space and minimizing information density.
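For anyone curious, the "apps pinned to numbered desktops" setup described above takes only a few lines of i3 config. A rough sketch - the window class names here are guesses; check the real ones on your system with `xprop`:

```
# Pin apps to fixed workspaces by WM_CLASS (class names are examples;
# run `xprop` and click a window to find the real values)
assign [class="firefox"] 2
assign [class="jetbrains-idea"] 4

# One keybind per workspace: $mod+2 always lands where Firefox lives
bindsym $mod+2 workspace number 2
bindsym $mod+4 workspace number 4

# The rare "move this window somewhere else" case
bindsym $mod+Shift+2 move container to workspace number 2
bindsym $mod+Shift+4 move container to workspace number 4
```

With `assign` plus per-workspace keybinds, every app is reachable with a single chord, which is the whole "single keybind away from anything" workflow.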

I would enjoy some "omni-menu" that is smart enough to figure out whether I want to:

* run an app
* run a unit conversion / calculator
* search my stuff
* use some shortcuts to add a calendar event or todo item, or "open MS Teams (company cursed me with it) and call person X"

But any existing ones seem to care more about looks than about functionality and information density, so `rofi` is enough.

And I'd like the office-adjacent app developers (mostly Microsoft, but Google isn't blameless) to stop making integration of anything harder and harder, so that stuff that should have been nice and easy decades ago (like calendars and todos working from any app) stops being shit.


So just add all the apps' functions to emacs and we're golden. Got it.


In all seriousness, emacs is a good example of an environment where "apps" are less siloed and rigid than they are in a typical graphical OS environment. Another is the Unix shell. The idea of creating an environment that achieves the same level of flexibility and empowerment while being modern and graphical and, somehow, accessible to non-programmers… it's not a new idea, but it's a noble one. "Mercury OS" deserves points for at least trying to imagine how it could work, even if there's not much meat on the bone.


This is my approach.


According to the "read more" link in the "humane" section (1), this friction seems to stem from the fact that the guy who came up with this proposal has painful ADHD.

1: https://uxdesign.cc/introducing-mercury-os-f4de45a04289


Not an MBA, but a UX folk - an equally dangerous but different species.


I try to stay relatively positive and open minded on this site.

But I hate everything this stands for. I want tools! This whole idea feels insulting to its own users, who are seen by the authors as infinitely confused people incapable of understanding any complexity at all. They can't see every option and action of an application! They'll get confused! Instead we'll build them half-assed applications on the fly, based on their ill-imagined idea of what their goal is, explained in prose. Plain prose of course being the most accurate way to describe a problem space.

I’m glad this is just a concept and not a product pitch.


I see where they are trying to go with the "humane" approach, but the examples seem to require big companies to play along. For example, the shopping examples put the control back with the user, and companies supply basic info like pictures, pricing, descriptions.

In real life, of course, they won't cede that control easily. You'll get the same sort of mess you see on Facebook Marketplace, for example..."for sale" listings of cars that put the down-payment in the sales price spot and some wording telling you to call for more info. No actual prices are shown.

So, it would probably require some sort of unapproved scraping, collating, etc, to really work as intended.


Yeah, it would require standards.

The web lost its way after the dotcom era. In the beginning there were protocols. Now we're in the corporate-moat period, where N corporations build the same thing N times because each one wants to dominate the others. Consequently, we were left with only low-level protocols and languages (HTTP, HTML, CSS, etc.); there is no standard for buying things online, despite most businesses being online. No standard for payments, no standard for query/search/browsing, etc. Compare that with, e.g., computer graphics, where graphics vendors (except Apple) collaborate on standards like OpenGL, Vulkan, and OpenXR, and things work relatively seamlessly modulo vendor bugs and extensions.


Oh boy, I’d love to play with something new and exciting like this, and I was sad to learn it's just a design mock-up/idea. The OS space has seemed quite stale for a decade or two; is there anything super exciting and different out there from a UX perspective? I remember installing Linux in the late 90s, or rooting Windows phones and installing whatever latest Android build was available in the early days of iOS and Android - that was always quite thrilling.


Have you looked into Sailfish? https://sailfishos.org/

I'm not affiliated (nor have I actually used it) but it looks like an interesting alternative approach to phone UX, and one that you can actually use!

I'd be interested to hear any stories HNers have with it.


Says it's legal for use only in Europe.


The other hard part, besides hardware, is that an OS is useless without useful apps (or at least a browser), so the OS dev also has to recreate the entire suite of browser / email client / word processor, etc., which requires even more effort - or build some kind of Windows/Linux compatibility layer, but in that case you might as well just spin off another Linux distro.


"Once men turned their thinking over to machines in the hope that this would set them free. But that only permitted other men with machines to enslave them."

-- Frank Herbert, Dune

This has hit me hard just recently. People who hand all their thinking over to the machine really do get their heads free - but also free of thought. And thought is what keeps my mind exercised. I've really had problems with focus lately, fighting to win back my "be here and now" from my phone. This "look-I'm-humane" OS sounds like a next-gen stealer.


Could not agree more. I'm re-reading Dune and in this age of emerging AI its focus on developing human potential in light of increased technological capability is striking me as ridiculously prescient. Offloading some parts of the human process to a computer is simply efficient and an apt use of the tool, but eventually the equation flips to where it's doing more harm than good.

A computer really is best when it's a "bicycle for the mind," and it turns out that that's quite a tricky balance to achieve. It should augment us, yes, but we should always be the ones in control, both de facto and de jure.


I'm always excited to see new OS concepts. It's a shame that there's such a huge amount of work involved in building operating systems, otherwise we might have a lot more options to choose from. Back in the 80s you wrote an OS that supported one or two types of fully integrated computers. Now you're supporting any combination of hardware, much of it requiring proprietary black box drivers that depend on a specific ABI anyway. It's why I don't buy into the idea that people who wrote software in the 80s were ultra low-level wizards or something. They had a significantly easier job and could specialise.


The expectations are just so absurdly high now; it's the same problem in the browser space (and at this point, is there even a difference between OS and browser?).

There isn't a lack of attempts, though. There are many tiny OSes out in the wild (I believe there are a dozen written in Rust alone) to play around with, but many have had to dial back their ambition based on hardware support alone.


Well, you wouldn't need to write whole new kernel/drivers/etc. to implement this design.


The three main buttons for email - Reply, Forward, Delete - look exactly the same. From a UI standpoint, if all buttons look the same, usability will be low.

I wonder why this decision was made? Probably to make it more "zen"-like.

As an Asian myself, I feel like this guy is feeding into the oriental fetish of the Western market, or rather exploiting gimmicky Eastern terminology to justify his design (ugh). I mean, fog is not exclusively Japanese - why use "Kiri"? He's not even Japanese... and how did the Daoist Way influence your UI transitions again?


It's not all bad, however. I can imagine the task-focused modality and gestural language being useful if translated into a productivity app - something like Jira or Notion, but decluttered.


The dark mode in this concept looks like a complete afterthought.

I like consistency of UI over fluidity at the expense of muscle memory (like the centered Windows 11 taskbar, which makes no sense for muscle memory, with icons moving about as you open things). The shortcut system sounds laborious.

Pretty? Sure. Practical? Not really.


This is fascinating. I've long thought we needed some ground-up thinking in this area.

But I now wonder if it's also why stripped-out UIs like GNOME are favoured by so many folks in (e.g.) the Linux space... which is famously full of neurodivergent people.

15min to pick a filename?! Really?

But I've seen SO MANY desktops like the pics in that blog.

This made me think in different directions. I already knew some users find current models very hard... But that they were this trying for them is news to me.

Maybe that's why some folk like working on iPads instead of laptops?

What is horribly frustrating & limiting to me, might be their exhilarating liberation from the endless suffering of window, and file and directory, management?

Could this be why so many aspie/ADHD-spectrum techies actually like GNOME or tiling window managers & lots of xterms?

My favourite non-pointer-driven GUI for over ~30 years has been EPOC16 from the Psion Series 3. It could have lessons to teach here: no desktop, and the launcher is also a categorised columnar file manager with automatic grouping. It was an inspired design.

There's a screenshot gallery, but it doesn't give a good flavour of it.

https://guidebookgallery.org/screenshots/sibo3a


This title should have a (2019) on it. But I was happy to see this pop up here again, as it's an increasingly possible and compelling UX for AI agents, where the rest of the chrome and complexity of computing is stripped away to the core actions we really need out of the software we use. I hope they give this a fresh look and try building it. Lindy looks like a close attempt.


This is like the 60s, when they would reimagine cars without steering wheels. 60 years later and we still have steering wheels. Maybe one or two ideas will stick; that's it.


People often confuse the Shell for the OS.

The Shell is the outer layer, the interface between the user and the system.

This should not be considered a solved problem, more research and experimentation is welcome.

In practice, systems should be designed for better Shell modularity, in many cases they can hardly be swapped.


I'm not sure how this works for more complex stuff.

What about a PO in an ERP?

Accounting functions? Someone adding records to a general ledger?

Editing/viewing a 3D model?

Editing a video? Multitrack audio?


I presume that if this got beyond some pretty sketches there would eventually be some kind of way to install new "verbs" and "nouns" and "modules", maybe even a place to go exchange money for interrelated collections of these things... How this would differ from "I downloaded Plunko 3D from the App Store" remains to be seen.


Sure, but what does that look like under this OS? The mock-ups don’t show anything more complicated than messaging apps.

Other comments have stated that this is a student project of some sort. Like many other projects of this type, it has failed to consider what most people actually do during the course of their jobs. Worse, it ignores what ‘nerds’ do on their computers, while being posted on a ‘nerd site’.


Oh yeah, all of that's very much undefined. Every attempt at a "humane" interface inspired by Raskin's post-Macintosh work always stops well before getting to that point.

There may be a much better world of human-computer interaction lurking in these sorts of ideas but getting out of forty* years of refinement of the window-icon-mouse-pointer paradigm, and forty years of building all sorts of tools and toys within it that are not designed to be decomposed into separate actions usable in any context, is a huge effort.

Or there may not, I feel like people have kept trying to do stuff like this over and over again and we just keep on ending up back in the world of monolithic apps that are entire toolsheds worth of tools, each with their own way to do the same things.

* or more; I am counting from the release of the original Macintosh. Please feel free to add on sixteen more years if you'd prefer to count from Engelbart's Mother of All Demos, or forty more if Vannevar Bush's Memex feels like the place to put a stake in the sands of time and say "modern computing methods began here".


This is like seeing a new design for a custom forklift vehicle and asking what the 0-60 is.


No, it’s more like asking “sure it looks great, but how does it lift a pallet”


To me, this is more UX design than OS design. Or maybe I have a big misconception about OS vs. UX.

Anyway, this approach is interesting, if it could be implemented.


Ah, yes, the Anti-Me Interface: The kind of interface most likely to make me not use something.

I'm sure it's "well-designed" from a theoretical perspective, and unimpeachable from the perspective of Professional UI/UX Development, but it's so inflexible and non-extensible it comes off as something between a toy and a jail cell. "You will enjoy being productive in this environment focused on channeling productivity to Approved Ends. You will enjoy using Approved Technology in Approved Ways. You will ignore everything beneath the interface, for what the Approved Technology is doing is Approved, and therefore not for you to interfere with."

Cf: The Anti-Mac Interface, for a bit of a tonic

https://www.nngroup.com/articles/anti-mac-interface/


Has anyone attempted to implement this into an existing Linux DE? Or a full screen web app experience?


The latter may be a good place to start.


You've edited your doc. It's ready to go. So you fire up a browser, connect to gmail.com, click Compose, click on the Attach button, navigate through a file browser to select the file you want to send, remember that you didn't save it, so you go back to Word, click file save, tab to the browser, tab back to Word because you don't remember the path, do File Save As... so you can copy the path, and paste it into the browse dialog when you tab back, which you do. If you're lucky, you won't have to go mining in the Drafts folder for your email-to-be which still doesn't have anything in it. And then you enter the subject, and finally the addresses, which mercifully auto-completes (maybe the one thing done right so far -- semi-intelligent integration with your contacts). And click the Send button.

vs.

You edit your document. You click the Send shortcut in the place where the toolbar (or even worse, the ribbon bar) used to be. The title and the addresses have been filled in, because AI, which is smart enough to know that the title of the email will be the title of the document. You type in the covering text, most of which auto-completes: "Please find attached, the blah blah blah for Q1." And click the "Fly, fly, little email" icon.

Verbs not apps. I definitely get the problem that's being chased here.

An even more interesting study would be to see how one could extend the paradigm to the Monster apps we live in - word processors, code editors, 3D modelling packages - not just the apps that should really be completely invisible (like email, SMS, &c). Maybe a hamburger icon under which a complete menu system (and tedious trivialities like File As...) could live. (I don't think we can get rid of Filing. But File really should be a verb.)

It would be interesting to see how far context-sensitive shortcuts could go toward covering the functionality of a Monster app. Select text, see formatting shortcuts. Select a rectangle, see fill and outline shortcuts. Select a picture, see retouching, filtering, and cropping shortcuts. The context-free UI was a nice ideal back in 1984, but it hasn't really panned out.
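One way to picture "verbs not apps" with context-sensitive shortcuts is a registry that attaches verbs to content types rather than to applications, so the UI only surfaces actions valid for the current selection. This is only a toy sketch of the idea - all names here are hypothetical, not anything Mercury or Cairo actually specified:

```python
# Toy sketch: verbs register against content types, not apps.
# The "shortcut bar" then shows only the verbs valid for the selection.
from typing import Callable, Dict, List

VERBS: Dict[str, Dict[str, Callable[[str], str]]] = {}

def verb(content_type: str, name: str):
    """Decorator that registers a verb for a given content type."""
    def register(fn: Callable[[str], str]) -> Callable[[str], str]:
        VERBS.setdefault(content_type, {})[name] = fn
        return fn
    return register

@verb("text", "uppercase")
def uppercase(selection: str) -> str:
    return selection.upper()

@verb("text", "word-count")
def word_count(selection: str) -> str:
    return str(len(selection.split()))

@verb("image", "crop")
def crop(selection: str) -> str:
    return f"cropped({selection})"

def available_verbs(content_type: str) -> List[str]:
    """What a context-sensitive shortcut bar would display."""
    return sorted(VERBS.get(content_type, {}))

def invoke(content_type: str, name: str, selection: str) -> str:
    """Run a verb on the current selection."""
    return VERBS[content_type][name](selection)
```

Selecting text would offer `uppercase` and `word-count`; selecting a picture would offer `crop` - the verbs follow the content, and no single monolithic app owns them.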


I love that this concept tries to separate content from apps. Ever since I saw some of the ways early GUIs differed from today's, I've had a feeling that we've lost something with the way apps in today's desktops are completely isolated from each other. The Alto let you compose data live from multiple windows kind of like a shell pipeline does for command-line apps: https://www.youtube.com/watch?v=AnrlSqtpOkw

The Locus and Flows in this concept seem like they could enable a similar workflow, as data isn't confined to "apps" the way it usually would be.


It's like whoever designed this doesn't know the meaning of Operating System.


I think this isn't the crowd to demo this to. The majority of people here would prefer a command line. Too bad this echo chamber can't understand what might be useful to a wider audience than their own monitor display.


On the whole I agree with you on this. I'm so often reminded how user-hostile computers really are by my father, who recently sent me a message to ask what Bing is and why his computer keeps asking him to use it. Despite several attempts to explain it, he doesn't really understand the difference between a web browser and a web site, he doesn't really understand that web sites and applications are different, he doesn't understand that Chrome and Edge are different but also the same, he doesn't really follow why a shortcut in the Start menu to a website is different to a shortcut to an application, or where things actually go when he saves or favourites or flags them, the list goes on.

Somewhere along the lines, we have built computer user experiences that bully people into trying to work how computers work, with tons of abstract concepts like "files" and "folders", applications that silo their data, difficult-to-understand application lifecycles and user interface paradigms and settings and so on. Worse is that power users and developers — the same people who have the power to do better — have come to believe that having to adapt to how computers work is an essential skill and not a failing of the computer itself.

The Mercury OS design linked is arguably a massive over-simplification of where computers need to be but some of the concepts really aren't far off what I would probably want to design myself: the idea that applications are really just exposing workflow elements and sane data APIs and not a concrete "thing" you have to interact with or care about directly, so that the user thinks about _what_ they are doing and not _how_ they have to do it.


This is a beautiful looking OS, and the idea of "spaces" may be worth exploring. I was just reading about Facebook's failed Home phone project where rather than apps, the interface was focused on people.

I thought it was a real OS, and wanted to try it, as I've got a Fire tablet gen 1 that is looking for a new use.

Weird question, the credits say Mercury is dedicated to Jef Raskin. Why not have the home screen in the demos say "Good Morning Jef," instead of Jason? Who's Jason?


Jason Yuan. MercuryOS is his project.


OS design is in an interesting place right now.

I feel like it's impossible to have a device experience that caters to everyone.

The vast majority of users don't consciously register that they are using a computer with an operating system. Most don't know there are differences between tablets, laptops and phones. Most don't understand the concept of a file or a folder - "photos" are in "iCloud" and "documents" in "google docs", etc.

An OS for them, laptop or mobile, is something like iOS or Android (or the demo in this post). The spanner here is compatibility. I'd wager if Android or iOS could run Windows games/apps and fit on a desktop/laptop form factor, this would suffice for almost everyone. The iPad "Pro" is an example demonstrating the potential of this concept.

The non-power-user professionals who use their devices for productivity purposes like spreadsheets, document writing, etc. would need something that lets them multitask, whiz around, and organise/optimise their workflow. Something like Windows 7 or MacOS. Something that doesn't get in their way, maximises the use of their hardware, and is reliable and predictable.

The power user, administrator and developer are either agnostic or never happy (I fall into the latter category).

I want the Windows 7 with the classic theme experience on top of a Linux OS that features MacOS workspaces-per-monitor, with iTerm2 and where the start menu search is as good as Spotlight - but with extensions. It needs to be compatible with Windows games and where applications behave reliably, without jank.

My dream operating system is a spartan productivity workhorse born out of a vaporwave fever dream - and no designer in their right mind would ever sign off on it (I'd be willing to forgo the classic theme).

At the end of the day, I feel like we can't cater to everyone with one UI and the OS vendors are trying (and failing) to do just that.

Windows 11 is a dumpster fire and MacOS is increasingly hostile to the experience of power users. They gotta stop fooling around with "one OS for everyone"

What we need are real differences between "Windows 11", "Windows 11 Pro", "Windows 11 Linux Spartan Edition"


That productivity OS for power users does exist. It's free and you can even run Windows games in it now.


Haha I know, I run Linux dual booted 50/50 with Windows now and often game on it.

It still has a lot of janky behaviour though - like Chrome tabs on Wayland not being draggable (off/on the window), no dark mode support on Chrome, workspaces per monitor don't exist and I have yet to find a terminal emulator as nice as iTerm2.

All-in-all though, I am pretty happy with it and will continue to use it. Happy to see a growing focus on desktop usability and hope to see that continue to grow in the near future


Does this thing actually exist or is it just a pretty website?


It literally says it's a vision concept at the bottom of the website, so it unfortunately looks like it's just a pretty website.


This looks like a graphical design project with no concern for actually building something that has to work. I guess the "speculative" bit gives the artist license to do it.

I personally would have loved to see some working code because I have no idea how this new age sounding stuff is meant to actually work to get anything done. Or is getting something done inhumane nowadays?


Microsoft actually tried to do something like this in the mid-1990s with OLE 2.0 and Cairo, the operating system that was supposed to embody all of this.

Basically, instead of applications, the user would focus on documents (kind of like the "spaces" in this). Instead of opening applications, you would invoke "verbs" on different parts of the document.


> Basically, instead of applications, the user would focus on documents (kind of like the "spaces" in this).

It reminds me a bit of the Apple Lisa.

https://en.wikipedia.org/wiki/Apple_Lisa

> With Lisa, Apple presented users with what is generally, but imprecisely, known as a document-oriented paradigm. This is contrasted with program-centric design. The user focuses more on the task to be accomplished than on the tool used to accomplish it. Apple presents tasks, with Lisa, in the form of stationery. Rather than opening LisaWrite, for instance, to begin to do word processing, users initially "tear off stationery", visually, that represents the task of word processing. Either that, or they open an existing LisaWrite document that resembles that stationery. By contrast, the Macintosh and most other GUI systems focus primarily on the program that is used to accomplish a task — directing users to that first.


They doubled down on iMessage’s usability problems. With iMessage, you can tell which of your contacts you are supposedly communicating with, but you can’t tell which actual identity (iCloud account email address or phone number) is involved.

It looks like, in MercuryOS, you can’t even tell what means of communication you’re using.


The author mentions being inspired by Jef Raskin. His book The Humane Interface is fantastic, and as relevant as ever.


I remember Microsoft wanting to do something like that. The year was 1990, the software was called OLE, and the goal was to hide files and apps from users and let them use "documents" instead. It mostly succeeded on the files part, but not so much on the apps part.


I’m skeptical about scaling this approach to accomplish real work.

Could Mercury OS be used to write the HTML pages describing Mercury OS? What about the UX design? What about coding for it?

It seems that for anything non-trivial like this, the whole metaphor collapses.


This makes me think of the Lifestreams work by Freeman and Gelernter: http://www.cs.yale.edu/homes/freeman/lifestreams.html


> Mercury rejects the Desktop Metaphor and App Ecosystems as fundamentally inhumane.

Inhumane is a strange word choice, to say the least. It’s neat to think about the possibilities of future UX, but to suggest the current system is cruel just sounds ridiculous.


It is a direct reference to Jef Raskin's work, which he called the Humane Interface.

https://en.wikipedia.org/wiki/The_Humane_Interface

He is namechecked right there in the story as a pointer. You're on the WWW. If you don't recognise the name, the idea is that you Google it and find out. That's how this stuff works. It's not 1993; you don't need to try to guess what they meant...


I don't think "inhumane" is intended as cruel - more as "not aligned" with how humans are/think/work.


I knew, as soon as this "OS" title didn't lead to a Unix-like general purpose OS, that there would be a lot of shitting on it.

Stay in your lanes, designers! Humanity cannot improve on the taskbar/window configuration for any use case, ever.


Maybe designers should research why we are not redesigning things like power sockets, forks, knives, shoelaces every other month.


But we are...

I have seen quite a few redesigns of forks and knives for people with disabilities. It is also very dismissive: just because current sockets work doesn't mean there is no room for a more secure or more reliable socket, and the way we find out whether such a thing exists is with design! Like this one...


One could call the desktop environment (OS frontend) an appliance, and we certainly redesign appliances and scheduling software frequently.


I hate webpages like this (e.g. most of them :L). Javascript req'd. Also, I may have missed it under 'Architecture', but it doesn't say what (if anything) it even runs on.. it looks like just a design concept.


The premise of separating the content from applications feels very OpenDoc (https://en.wikipedia.org/wiki/OpenDoc) to me.


Yeah, it gives a (presumably) simpler (more likely just different) than usual answer to trivial matters where we basically need no computers. But what about the complex matters and tasks that computers are actually made for?


I love when design that optimizes for the humane experience is pushed forward. I celebrate this technology ten thousand times more happily than the next hyped self-serving tech framework.


I like it; these attempts to challenge the way we approach our OS user interfaces and UX are very welcome in bringing new perspectives.

This design might be a fit for narrowed-down use cases, for a particular set of users who can really appreciate it.


Top bar doesn't have any kind of background on Windows. Is that intentional?


This is not an operating system, this is a UI, and as far as I can tell, a non-functioning demo at that. That's fine, great actually, but let's at least use the correct words.

Purely aesthetically, this is great. It's incredibly Apple-ish and honestly just seems like a massively simplified alternative to iPadOS -- SiriOS, maybe? -- but that's not a knock, it's extremely polished and overall fantastic design work.

But just because it's nice to look at doesn't mean it's nice to use. The predictive nature honestly just seems annoying -- there's a reason people curse at their voice assistants so often. It's a gimmick that I doubt will ever be as useful as we'd like, at least not until AI takes a massive, massive leap in the direction of Jarvis and the like.

My problem with these kinds of "declarative" approaches to user interaction -- anything that obfuscates the "how" things are done because apparently that's too difficult and scary -- is that it tends to encourage passive consumption, not active creation. Or, at best, you get toy creation apps that piddle around in the absolute most basic of the basic such that really they're better classified as educational apps rather than creation apps.

For a totally consumption-oriented device, that might be fine for a while, but eventually I'm going to disagree with either the direction a developer has taken an app, the content it's curating for me, etc, and will surely want to deviate from its "curation" in some way or another...however, this experience looks to hijack so much control from the user that they can't configure anything. You're either along for the ride, or you don't use the device.

I said this in another comment, but it bears repeating: computers are not going away, therefore computer literacy is a facet of literacy in general. And IMO literacy is a basic tenet of being free. I for one would like to see designs that give users some agency in their devices and aren't condescending in the name of offloading the "inhumanity" of...thinking about where you put your files. Come on, that's just absurd.

I get what Jason is going for here, but it's completely jumped the shark from Silicon Valley into HBO's Silicon Valley. He should remember Steve Jobs' quote, that a computer is a "bicycle for the mind." The really important thing about that quote is that it's not a car, and it's certainly not a self-driving one. It's a bicycle -- it augments and multiplies what we put into it, but we still have to put something into it.

If we're not in control, then who is? I would argue that it's those who are feeding you the content that you're consuming. By losing the ability to "grip" your device and actually exert some agency, and perhaps use it to create something new and novel, or at least to actively explore, you're making it so that those for whom these designs become "what a computer is" will have to climb that much higher just to begin to understand how to transcend the content feed. One example: there are kids starting computer science programs today without really understanding the concept of a filesystem, because they've never had to interact with one beyond maybe Google Drive. They're mistaking the cave for reality.

Anyways, it's late and I'm not sure how to end this rant other than by simply stepping off my stool, so -- step.


Nice. Reminds me of my unfinished project “SynchronizedOS” which automatically handles securi..err, synchronicity concerns re: Cloud/Endpoints

Inmates/asylum cc: dang


