[flagged] Everyone seems to forget why GNOME and GNOME 3 and Unity happened (2022) (liam-on-linux.dreamwidth.org)
144 points by signa11 6 months ago | 243 comments



Gnome happened because KDE, which is older, was based on Qt, which was non-free, so the GNU project pushed for an actually free desktop. Actually, the GNU project followed two strategies at the same time: it pushed Gnome given that Qt was not free, and pushed for Qt to become free [edit: initially by initiating a free replacement for Qt, Harmony] [1]. It succeeded in both. If Qt had been free from the start, maybe we wouldn't have Gnome at all.

I think why GNOME eventually became more widespread than KDE, with most major distros adopting it instead of KDE in recent years (at some point RHEL was KDE-based, SUSE always was, and we had Mandrake too), is another question.

[1] https://www.linuxtoday.com/developer/stallman-on-qt-the-gpl-...


Yeah, all that, but I also remember that many would-be developers were at that time not convinced that C++ was the way to go. Gnome hence decidedly did not use C++ (for better or worse; C++ is today a nicer, much more acceptable and accepted language than it was in the mid-nineties, but basing the object system on C makes it easier for other languages to interface with it).


Do you have something I could read about this? I didn't know about this aspect and I'm interested / curious.


And a big reason for GTK’s continued proliferation (even beyond GNOME) is that GTK is easy to write/generate language bindings for, enabling it to be practically used with many different languages. Qt being restricted to just C++ and/or JS dampens its appeal.
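
To make that concrete, here's a minimal sketch of GTK 4 driven from Python via PyGObject (the application id is just a placeholder):

    # Minimal GTK4 app through PyGObject; the binding is generated from
    # GTK's C API via GObject Introspection rather than written by hand.
    import gi
    gi.require_version("Gtk", "4.0")
    from gi.repository import Gtk

    def on_activate(app):
        win = Gtk.ApplicationWindow(application=app, title="Hello")
        win.present()

    app = Gtk.Application(application_id="org.example.Hello")
    app.connect("activate", on_activate)
    app.run(None)

The same introspection data feeds the JavaScript, Rust, Vala, etc. bindings, which is why GTK bindings are comparatively cheap to produce.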


Qt has fully supported Python bindings as well, and there are increasingly capable Rust bindings from the community, too.
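
For instance, a minimal sketch with the officially supported PySide6 bindings:

    # Minimal Qt app through the official PySide6 ("Qt for Python") bindings.
    import sys
    from PySide6.QtWidgets import QApplication, QLabel

    app = QApplication(sys.argv)
    label = QLabel("Hello from Qt in Python")
    label.show()
    sys.exit(app.exec())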


Which is great, but GTK still has it solidly bested for language choice.


Having entered the Linux GUI space only recently, I don't think GTK has that much of an advantage these days. Qt has excellent language bindings for just about every language (even Rust, despite the apparent incompatibilities when it comes to memory management). QtQuick and QML are easy to work with and can easily be designed using the visual designer tools in Qt Creator.

There are licensing issues with using Qt, though. I'm fine with most of my stuff being licensed GPL but if you want to have any other license on your project, you'll probably have to use something like GTK.


> I'm fine with most of my stuff being licensed GPL but if you want to have any other license on your project, you'll probably have to use something like GTK.

What's wrong with using Qt under the LGPL? You can't link your app statically, and if you modify Qt's source code you have to publish those changes, but that's mostly it. That's what I'm using for my app[1].

[1] https://www.get-plume.com/


Modern Qt is a mix of GPL/paid and LGPL code. The LGPL part is fine, of course, that's the same license GTK uses.


It's just too bad Gtk devs have been progressively removing keyboard input features from Gtk3 and 4. The idea of Gtk as a generic interface is long dead. These days Gtk really just is whatever GNOME is. Which means things like being able to paste filenames into a file->open dialog (gtkfilechooserwidget.c) have been intentionally broken and left to rot since 2014.

If you want to make an application that uses keyboard input, don't choose Gtk anymore.


Ironic, given that in the dim and distant past (i.e. version 1.2) Gtk's (admittedly butt-ugly) file dialog could be driven very easily from the keyboard, even supported effortless wildcard filtering and shell-style tab completion.


Which of these languages really mattered though? C? (which I'm inclined to believe)


It’s less about languages “mattering” and more about meeting devs where they are. Projects, especially those of a FOSS nature which are often hobbies, are more likely to happen when a dev can write in their preferred language.


That's part of what I meant by "mattering". How many GUIs actually appeared and were based on GTK, in languages other than C, because Qt lacked bindings?

My issue is that I find the argument plausible and seductive, but do we know this is an actual reason, or is it just guessing?


I can’t answer that question definitively, but it’s moot because if you have C compatibility, bindings for anything else are trivial to create.


> Qt being restricted to just C++ and/or JS dampens its appeal.

Have you searched "Qt bindings" before posting this comment? Qt is in no way restricted to C++ and/or JS.

Python bindings for Qt are very popular and work well. Even with C++, JS and Python you end up covering most developers. But those 3 are far from the only languages you can use with Qt.

A quick look at the Qt bindings page shows bindings for other languages like Rust, Java, C#, Go and many more.


Besides Python, none of them are officially maintained, and thus are next to useless.

You can quickly verify it: there are very few Qt programs written in something other than C++/Python. As much as I dislike C, C FFI is universal, and thus GTK can actually be used from anything.
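
To illustrate what "C FFI is universal" buys you: stock Python can call into GLib's C library with no binding layer at all (a rough sketch; the library filename assumes a typical Linux install):

    # Call a GLib C function directly via ctypes, no bindings required.
    import ctypes

    glib = ctypes.CDLL("libglib-2.0.so.0")  # filename assumes Linux
    glib.g_get_user_name.restype = ctypes.c_char_p
    print(glib.g_get_user_name().decode())  # prints the current user name

Doing the equivalent against Qt means dealing with C++ name mangling and object layouts, which is exactly why Qt bindings need a generator like Shiboken instead of plain FFI.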


Many of the listed bindings are in varying states of repair and likely not used much if at all. The listed Go bindings for Qt for example saw their last commit 4 years ago.

By contrast GTK bindings tend to get maintained and used by devs more frequently because they’re easier to generate and make idiomatic. I see GTK projects written in all sorts of languages but rarely see Qt ones that aren’t C++, JS, or Python.


KDE was probably continually stymied by "it's nonfree" FUD, but part of the problem must be them blowing their foot off by releasing an alpha quality release as "4.0". (And why did distro packagers switch to it? Because users would see the version number and demand the new thing... It wasn't fair to expect packagers to hold back.)

Before then it seemed like KDE and Gnome were on roughly equal footing, but maybe that was just my perception.


Oh, the memories. I was always excited about new releases at that time and KDE 4 really had me rethinking that. It was so bad.


Yes but they really made up for it with Plasma, to be fair. They learned from their mistakes.


I really loved KDE 4, but then again, I'd just come from Vista.


The first versions of KDE4 were unstable (notably, there were memory leaks I think, and frequent crashes - maybe related to the memory leaks - in plasmoids), but the last versions were rock solid (starting from version 4.4 maybe), and it was a very nice desktop environment back then.

(Now, we have Plasma 5, and Plasma 6 in a few days, which are even better :-))


Yeah, KDE 4.0 wasn't bad because of its design philosophies or functionality. Every time people hear that phrase, they instantly jump to that conclusion for some reason. No, it was resource hungry and crashed a lot (mostly due to the aforementioned resource needs).

The later versions of KDE4 didn't change much, they just pared down resource requirements, stabilized things and let the average computer specs catch up to them. Now KDE is considered the more "conservative" DE.


I wouldn't call it conservative. Compared to Gnome probably. Compared to XFCE not as much.

But it's totally with the resources <3


Totally worth, I meant. Oops


There were also basically no applications for it when Fedora decided to ship it.


>No, it was resource hungry

Was it really, or was that just a rumor from non-technical "C++ is bloated" Linux crowd?

I used to run KDE in the early '00s, and it ran just fine on the very primitive CPUs of the time.


KDE4 was the latter half of the '00s. Oh, the bitter memories of having your CPU and memory hogged by the mysterious daemon called Akonadi. Or was it Nepomuk? At the time, many waved away the concerns: Linux was simply using the unused memory for caching stuff, which is why the resource usage viewer displayed such a small amount of free memory, or so they said. Well, that didn't explain the excessive swap thrashing that was occurring in the background.

I hunted for workarounds only to find out that disabling the above services also disables some essential desktop features, and it made me rethink whether the switch from GNOME 3 was worth it. Sure, GNOME 3 had horrible UX, but at least my PC still had resources to do other stuff then. But I sure missed GNOME 2 at that time.


I was a KDE mainliner and switched to GNOME2 because KDE4 was too resource intensive for my machine.

So our anecdotes are contrary to each other's.


I did say "early 00s".

KDE4 was released in 2008.


I think you meant to respond to the sibling comment


More recently, there's a trend of devices or newer device-centric distros tending to ship Plasma by default (e.g. Steam Deck/SteamOS, Asahi Linux for Apple Silicon Macs, a bit earlier the PineBook).


KDE is easier to pare down/minimize due to its classical componentized structure ("KFrameworks").

It's really hard to do the same with gnome-shell, without either forking it or writing a replacement.

Not that the latter is a bad thing, necessarily; gnome wants to provide a cohesive experience. That just doesn't fit as well for environments trying to go against that vision (Pop!_OS, SteamOS, etc.).


There's a timeline mismatch.

I'm speaking about the time when Gnome was created.

KDE 4 happened long after these issues of Qt being non-free (and it's not FUD, Qt was not free, it's a fact). Qt was released under the GPL in 2000, 8 years before KDE 4. At the time of KDE 4, nobody was (even) mentioning the non free Qt issue anymore. It was already old history.

The instability of the first releases of KDE 4 might have played a role in Gnome's success, but Gnome was already the dominant desktop. It was Ubuntu's default desktop, and Ubuntu was the major desktop Linux distro. I think it was still Gnome 2, with Unity about to be released.

KDE 4 was a big shiny promise and took some time to be released (because it was a huge revamp), that's why we wanted to have it.


> Qt which was non free

It was GPL licenced. If you call that "non-free" that's downright Orwellian.


That's not true. It was first released under the Qt Free Edition License, which was nonfree. Harmony was started in that time. In 1999 version 2 was released under the Q Public License, which was a free software license but not GPL-compatible. Then in 2000 version 2.2 was released under the GPL, and the Harmony endeavor ended.


It's so not relevant. All the Linux desktop wars happened when Qt was already GPL, including Gnome, Gnome 3, KDE 4 and Unity.

It was already GPL for almost a decade when the described events happened.


It's relevant to the reasons for the creation of Gnome. Gnome was very specifically created before that, and the whole ruckus over it was, if anything, one of the big drivers behind the GPL release of Qt.

The flamewars over this at the time were epic. Had Qt already been GPL'd, the chances of Gnome taking off would have been far smaller - though there were other differences as well, the GPL issue was by far the most prominent.


Gnome 3 and KDE 4 are "modern history" compared to the events we describe. Back then (way before those release) KDE was less free than Gnome.


Again, I was speaking about Gnome's creation: it was first released in 1999, before Qt was freed, and started in 1997, three years before that and a few months after KDE's first release.


> It was GPL licenced

It wasn't. That was the point here. You do not know your history well enough to comment.


Qt didn't start out as GPL. It's even in the article.


(A)GPL happens to be my favorite license, but even if I meant what you said, calling this Orwellian would be a bit much, if not outright off-topic, don't you think?

I'm all for using the definitions correctly, but people can make mistakes and deserve to be answered gently.


I remember GNOME fan base carrying this "freedom is slavery" narrative way into 2010s.


What do you mean? Do you have pointers? What freedom are you speaking about? Doesn't ring a bell.

The only related thing I can think of is the argument that permissive free software license lets people build proprietary software on top of the licensed work, but the argument is not surprising and it's still true. I also don't remember slavery being mentioned. That would be a bit much. Now I can't manage to link this matter of fact with the discussion, KDE and GNOME both being GPL software.


"Linux is only free if your time is worthless."


How is that relevant to the current discussion?

Besides, I hate this sentence. It is very wrong, or true for any OS, whichever you prefer, and I'm seeing it more and more often. This kind of falsehood doesn't need to be spread like this. Linux just doesn't get in my way, and it has just worked for how I've been using my computer for years. On the contrary, the desktop environment and tools it lets me use are very efficient for me.


> alienated a lot of people who only knew how to use Windows-like desktops

This is needlessly inflammatory. Many people simply don't like macOS-style desktops.

In particular I like my menus attached to my windows because I don't run apps maximized.


FWIW, it’s not uncommon for Mac users to also not maximize their windows. In fact, I’d say Mac users disproportionately don’t (maximizing everything is more of a Windows thing). Menus being in a bar at the top of the screen just doesn’t bother them.

I’m in this camp. The only windows that get maximized are things like IDEs with tons of panes which are unusable otherwise.

One perk of the menubar being this way that I’ve come to appreciate is that programs can’t try to remove it or sweep it into a hamburger menu in some misguided quest for minimalism. It’s gonna be there no matter what, which is a welcome bit of consistency.

I wish Linux had better support for this option. KDE lets you set up a global menubar, but sadly it’s not supported by any programs under Wayland and only works with Qt apps under X11, and the required GTK plugin has been abandoned, last I checked. There should really be an XDG standard for programs to expose their menus with, not just for customizability but as an accessibility affordance.


> One perk of the menubar being this way that I’ve come to appreciate is that programs can’t try to remove it or sweep it into a hamburger menu in some misguided quest for minimalism.

Sure, but those same applications just leave a bare and non-functional menu bar at the top, which is almost more offensive.

Windows enforces the same kinda constraints on MenuBars:

https://learn.microsoft.com/en-us/windows/win32/api/winuser/...

The only difference is that the Windows version disappears if you choose not to use it. And the same goes for KDE and GTK.


If an app doesn’t populate the menubar under macOS, I’m probably not going to use it unless I absolutely have to. It’s a strong signal of bad design, low-effort port, poor functionality or some combination thereof. There are few applications so devoid of functionality that they can’t populate a menubar.


You can take whatever stance you like. The point being made is simply that menubars work the same in both cases excepting two factors: mandatory existence and location.


> One perk of the menubar being this way that I’ve come to appreciate is that programs can’t try to remove it or sweep it into a hamburger menu in some misguided quest for minimalism

Oh trust me they can. There's some really terrible electron-based Mac apps out there. They just have a token menubar with a few options like copy/paste and the rest is the usual hamburger crap.


I've never seen more maximized (full screen) windows than people using MacBooks and swiping between them. Which I do too when using my MacBook.


To each their own, but that drives me nuts personally. I use spaces constantly but full-screen spaces are probably the bit of Mac UX I find most unappealing.


Yes I like fullscreen apps (I use them a lot on KDE). But the Mac implementation of full-screen apps is pretty terrible. Especially because the menu bar disappears.


I second this opinion. I greatly prefer Windows like desktops. I never liked the Mac UI and I used the very first one back in the 80s. It was OK with that tiny screen but it was out of place as soon as Apple made Macs with external large monitors.

My current preference is GNOME shell modded to move the top bar to the bottom, merged with a task bar, a rarely used Start menu, an often used Places menu, no Activities, virtual desktops activated by hotkeys and many other less visible things. That's much better than Windows, so thank you Microsoft, at least for me.


The move of the top bar to the bottom makes a lot of sense!

I personally use KDE with the taskbar on the left, as it allows for greater vertical screen real estate. I also changed the workspace keybindings to the same ones as Gnome, which are quite a bit better!


(You probably know this, but just in case: the rationale for having the menu on the top is specifically for apps that are not maximized, because the mouse movement to get to the top is much faster than that for accurately clicking a hotspot in the middle of the screen)


On one hand that reason makes sense, but on the other hand I never found that to be a problem when using menus attached to windows. I need to aim for buttons in the middle of the screen all the time, and that's not slower just because the mouse doesn't bounce against a screen edge. I also hardly find myself actually bouncing against the screen edge when using macOS menus.

IMHO the bigger problem on Mac is the weird 'application centric' window system behaviour instead of the much more intuitive 'window centric' behaviour of Windows (and most Linux desktops).

For instance, "Alt-Tab" flips through applications on macOS, and it's possible for applications to be running even though they have no windows open, and in that case they just present a "naked" menu bar which I find utterly confusing. The whole thing feels like a dead-end UX experiment from the early 90's.

I'm mostly using the "3-finger-swipe" on macOS now to directly pick a window instead of tabbing through, but the main reason for that is not that it's better; it's just better than the weird macOS Alt-Tab behaviour.

PS: this sounds like I just switched from Windows to Mac, but I made that switch more than a decade ago, and it still feels "wrong" ;)


The issue with window-centric behavior, I find, is that lacking the logical grouping of application-based, it scales badly. Alt-tab for example becomes increasingly unusable as the number of open windows increases.

The fact that the taskbar has grouped windows by applications by default since XP/Vista hints at this scalability problem, with the taskbar quickly getting similarly unusable without the grouping.


I see your point, but I'm firmly on "team window". I feel the scalability problem is better solved with multiple desktops. My work is rarely in one application, so grouping based on work context is a better fit for me.

This can work really well even on MS Windows. It is unfortunately not default nor obvious to the regular user. But what comes in the box sprinkled with a PowerToy is rather good.

That is the paradigm I strongly prefer.

I have always liked the tight integration on Mac (such as printing - no driver fuss) and the UI looks really sleek. But the UX of the window/app handling does not sit well with me. Not for the lack of trying.

My first GUI was GEM Desktop which might have damaged me for life.


Probably just boils down to differences in mental models but the Windows implementation of workspaces makes the problem even worse in my opinion, since without per-app grouping on the OS level there’s no way to see or manage all windows of a particular program across workspaces unless implemented by each individual app.


Alt tab stays half-usable forever because it shows windows in MRU order.


This is mitigated via workspaces. Not sure how Win/Mac does it, but that's how I work on Linux desktops.


Works reasonably well on Windows: https://support.microsoft.com/en-us/windows/multiple-desktop...

I can never remember the (easy) shortcuts so this is helpful: https://learn.microsoft.com/da-dk/windows/powertoys/shortcut...

From PowerToys: https://github.com/microsoft/PowerToys

(Which also have the (in this context) useful FancyZones)


PowerToys with WindowWalker ( https://learn.microsoft.com/en-gb/windows/powertoys/run#wind... ) is what I use a lot to search through windows - I usually have a lot of terminals open, could be like 30-40 spread across virtual desktops, and searching for a specific one based on the server name it's connected to saves me some time.

Back in the days when Virtual Desktops were not that handy (technically they've been available since Win2000, if I recall correctly), I used Blackbox for Windows and my own homemade program to quickly search through all my PuTTYs (written in VB6 + WinAPI). Now I don't need it anymore :)


nah, the alt-tab behavior on osx is just wrong, so I use contexts.app which fixes it.


About this we agree.


And yet I'm still slower at this specific thing on Mac OS because I always first have to make sure the right window is focused or I'm in the wrong context menu. Meanwhile on per-window menus that's guaranteed. Also those buttons are the same size as almost all other buttons in the system, it's not difficult to hit them reliably.

Edit: also if Apple cares so much about this, why do they make the close/minimize buttons so tiny?


> Edit: also if Apple cares so much about this, why do they make the close/minimize buttons so tiny?

I think daily users graduate to ⌘W and ⌘M (which are all-app standards) fairly quickly, while power users graduate to the window manager they prefer.

> …I'm still slower at this specific thing on Mac OS because I always first have to make sure the right window is focused…

You might find this useful: https://hazeover.com/


True when you're using a 512x342 resolution screen and a mechanical ball mouse like the original Macintosh, but less certain with modern hardware.


Technically it’s always true.

On a Mac the menu bar is always at the top. Because of that, the menu bar is at the logical end of mouse movement (it’s like a virtual wall, which the mouse can’t go beyond).

Because of this design, you can “slam” the mouse against that invisible barrier and it will stop where you need it (at least in the Y axis).

“Slamming” the mouse is always faster than trying to move it to an arbitrary point on the screen because there is no need to slow down the pointer on the screen. It just stops immediately.


That's only true for the corners. For the edge you're only constraining one axis of motion. And modern screens generally have wider aspect ratio than the original Macintosh, exacerbating the problem of unconstrained X-axis motion.


One axis of freedom is still better than two, in this case.


Fitts's law[0] says difficulty of movement is a function of both accuracy and distance. Macintosh-style menu bars reduce the accuracy required but increase the distance. When distances were always small, this was a clear win. Now that we have large wide-screen monitors, it may not be.
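
Back-of-the-envelope version, using the Shannon formulation of Fitts's law, ID = log2(D/W + 1); all the pixel numbers below are made-up illustrations, not measurements:

    import math

    def fitts_id(distance_px, width_px):
        # Index of difficulty in bits: higher means slower to acquire.
        return math.log2(distance_px / width_px + 1)

    # In-window menu: nearby (300 px) but a small 20 px target.
    print(fitts_id(300, 20))    # ~4.0 bits

    # Top-of-screen menu bar: farther (800 px), but the screen edge stops
    # the pointer, making the target effectively much "deeper".
    print(fitts_id(800, 200))   # ~2.3 bits

    # On a large wide-screen monitor the distance can triple, eroding
    # the edge advantage.
    print(fitts_id(2500, 200))  # ~3.8 bits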

The real correct solution is pie menus[1], which reduce both accuracy and distance requirements regardless of screen size. Blender has pie menus and it's one of the reasons its UI is so fast to use once you've learned it.

[0] https://en.wikipedia.org/wiki/Fitts%27s_law

[1] https://en.wikipedia.org/wiki/Pie_menu


NeXTSTEP did it best. You could bring up a copy of the menu by right-clicking on the desktop. No movement and no accuracy required.


Yeah, it does make sense. That's also the reason for the close button or start menu to be at a corner, at least on Windows derivatives. Of course, this clashes with the menubar placement on Mac, if they were placed the same way.


But Microsoft even messed that up. The original start button was not in the corner - it was near the corner. Literally just a few pixels removed. That tiny mistake made it much harder and you couldn’t “slam and click” the start button (which annoyed me to no end).

Also, the order of the window buttons for close, maximize, and minimize are incorrectly ordered, so a mistake causes you to click the opposite of what you actually want.


It’s always easier not to make fine grained control motions. Even with a touchpad not having to stop precisely means the motion requires less thought.


Although I wonder if it still works out so well with 34" ultrawide monitors (which Apple doesn't make and therefore doesn't care about, of course).


I’ve just now realized, after using macOS exclusively for 10 years, that the menus are not where the windows are but always at the top.

Quoting this as an anecdote - showing that to some people this doesn’t matter. In my case I guess it’s because I don’t use these menus that much.


On top of that, it failed to copy most of the features of macs that make them worth using.


GNOME 3 and up definitely isn’t “mac-like”, despite having been given that label by many. A few aspects have similar designs to macOS counterparts, but the resemblance is skin-deep at best. It more closely resembles iPadOS than anything.


> GNOME 3 and up definitely isn’t “mac-like”

When many people (like me) use that phrase, we're not referring to the UI/UX. It's instead a comment on GNOME's desire to have a controlled and universal design that is more focused on user ease and accessibility versus the componentized/customizable design of GNOME2, KDE, XFCE, etc.


Here are several comments from last time this was posted that completely dispute the characterization this article makes:

https://news.ycombinator.com/item?id=32258142

https://news.ycombinator.com/item?id=32259325 (lead designer of Unity)

https://news.ycombinator.com/item?id=32259074 (lead of Ubuntu Desktop product)

https://news.ycombinator.com/item?id=32258650

https://news.ycombinator.com/item?id=32258035 (founder of GNOME)

This article really shouldn't be allowed to be posted to Hacker News anymore. Anything that has this many errors and is essentially a Mickey Mouse investigation isn't worth discussing again.


I do remember GNOME3 being presented as "the future" and those of us who liked GNOME2 being told roughly that we were hopeless old duffers who didn't understand modern UI/UX design and to essentially @#$#$k off to XFCE if we didn't like it. IMO it was the desire to put Linux on mobile devices and tablets that seemed to be driving it all - something commercial in other words. Those of us "just using GNOME2" were evicted like the squatters we were.

This is why I like the fragmentation of Linux - really what's the problem? Those who want uniformity can go to windows and enjoy having everything dictated to them. I did #$%#$ off to XFCE and I am using it now very happily. Later MATE and Cinnamon came out to save the rest of us "luddites" and I realised then that I wasn't alone in my chagrin which was quite cheering.

Nobody explained the hit on GNOME2 by talking about "threats from Microsoft" and I can't understand how copying the Mac made Linux desktop safer legally since Apple isn't known for refraining from legal action either.

I always thought GNOME AND Windows copied the Motif style of menus in windows which I didn't like because I was weaned on GEM on the Atari and on DOS. I don't know though.


> I do remember GNOME3 being presented as "the future" and those of us who liked GNOME2 being told roughly that we were hopeless old duffers who didn't understand modern UI/UX design and to essentially @#$#$k off to XFCE if we didn't like it. IMO it was the desire to put Linux on mobile devices and tablets that seemed to be driving it all - something commercial in other words. Those of us "just using GNOME2" were evicted like the squatters we were.

Me too! And this mindset persists today, every time I needed something I googled it and found it was already declared "wontfix" by the devs because it's something us lowly users should not need nor want.


I remember these days like they were yesterday because this was during the peak of me caring way too much about OpenSource disagreements. None of this stuff about needing to avoid a Windows 95 paradigm was being talked about in any of the circles I was in, or publications I was reading.

I tentatively call bull, although it's possible that the most public facing narratives weren't the most urgent reasons I suppose.


I don't remember this as a legal issue either. If Windows 95 paradigms were to be avoided, it was because they were dated and deemed due to be replaced with other concepts.


What do you use today on a daily basis for a desktop OS?


At home it's almost entirely Linux (usually Arch btw), unless you count the occasional Windows VM.

I'm using KDE for a desktop, funnily enough customised heavily to look like Windows 9x.

I usually run something lighter like WindowMaker or IceWM though, but need Wayland for Waydroid atm, so KDE gives me a pretty nice Wayland experience out of the box.

Did I mention I run Arch?


Thanks for the info. The Arch OS, wiki, and community have been very impressive. I use mainly Void, but I still need to go back and read the Arch wiki for many things.


None of this is remotely true. I was there.


Many of us were "there" in the sense that we were interested in FOSS desktops at the time. What does "there" mean in your case, and would you like to share a bit more?


I was the release manager of GNOME 2, a very early employee at Canonical, and continued to be involved with GNOME during the GNOME 3 efforts.


Thank you so much for your work! Unfortunately the project is so often dragged through the mud by some loud minority, but I absolutely love it and use it every day, and I'm sure we are more numerous than the other group.


Same here. I loved gnome 3 when it came out, and I still love the modern versions of gnome and use them almost entirely stock, with just a tiling window management extension and an extension that moves the overview's search function into a spotlight-like pop-up in the middle of the screen. I'll never be able to happily use any other window manager than modern gnome, to be honest. I've tried to switch to all sorts of tiling window managers, tried to switch to xfce, and none of it felt as good, worked as well for my style of workflow, or got out of the way as easily as gnome does. And I can't even stand looking at kde, because all of its paddings and alignments and font sizes and such are just so awkwardly chosen, and no theme can easily fix that.


I hated Gnome 3 when it first came out because none of the stuff I was used to worked there. Now it's like four or so plugins for all I want, so I love Gnome 3!


I also love Gnome. It’s my favorite DE. It gets out of the way, has a consistent UI, and is beautiful out of the box with no fiddling.

Thank you, Gnome designers and devs! (Also, I love PaperWM, and am looking forward to the tiling project y’all are brainstorming about.)


OK, fair.


I was the lead developer of Unity and the creator of Ubuntu Netbook Remix, and can also confirm the article isn't even slightly true.


Please elaborate


It's difficult to elaborate on "nope", unless someone wanted to do a point-by-point takedown, which it doesn't deserve.

I'll focus on one thing: the genesis of GNOME 3.

Firstly, it felt like the right time. Many of us were looking around for ways to break out of the Windows 95 paradigm. New form-factors were both inspiring and challenging.

One of my clearest memories of early ideas that became GNOME 3 was an ongoing conversation about a "chromeless desktop". How could we get as much "chrome" (user interface elements) off the screen as possible, to make every pixel available for what the user was doing? Some of us started from "what if there were nothing on the screen at all?" and added back only what was necessary. These ideas dovetailed with the form-factor challenges posed by early tablets (including the OLPC), beefier PDA devices (Nokia 770), and netbooks (tiny, cheap laptops, like the Eee PC).

You can see how that led to GNOME 3's minimalist top bar (though we had tried even less "chrome"!) and modal approach between the app experience and the control experience.


[Blog post author here]

> It's difficult to elaborate on "nope", unless someone wanted to do a point-by-point takedown

I would love that, and I think given that I tried hard to give references for my assertions, countering it merits the same.

> which it doesn't deserve.

I am saddened to hear that. I tried to be professional and impersonal about this, and such scorn is hurtful.

> GNOME 3's minimalist top bar

Which is one of my personal primary objections to the environment: this colossal waste of precious vertical pixels, which is squandered to no point.


Given that at least 5 people who actually were there in leading positions have all refuted your story, I no longer think you are being professional, due to the total failure to admit that maybe you are wrong. Nobody except you seems to "remember" this.

Yes, the patent threats were real but nobody has proof that they affected the direction of desktop Linux.


The patent threat I remember from that era was Ballmer making a woolly claim that all Linux users were likely infringing. Not just Gnome users and KDE users. And at the same time refusing to elaborate on which specific patents those would be.

And MS making a $440m payment to Novell/SuSE for Linux support to get Novell to make a $40m payment to MS for "patents", which seems a rather lopsided agreement to announce as a combined deal if the patent threats had been anything but MS trying to scare people.


> give references for my assertions

Like linking to your own text at another place? https://lobste.rs/s/kz0jpg/everyone_seems_forget_why_gnome_g...


"I tried to be professional and impersonal about this, and such scorn is hurtful."

You've asserted that "everyone seems to forget" why something happened when a whole bunch of people who were there while it happened, and made it happen do not agree with your version of events. People tend to react badly to that sort of thing - especially when things land on HN and threaten to become accepted fact by a lot of folks who weren't there.

Microsoft rattled its patent sabers in the direction of Linux, true enough. But those patent allegations were believed to cover [1] the kernel, OpenOffice.org, the "Linux GUI", and an assortment of "other" things.

I won't claim to have been "there" myself - but I was "there"-adjacent: either working for one of the vendors (Novell), or writing about GNOME, doing an (admittedly small) amount of volunteering for GNOME around marketing/PR, attending GUADECs, and so forth.

My memories are, at this point, admittedly hazy -- but I cannot recall a single conversation or suggestion that the direction of GNOME 3 or Unity were prompted by, inspired by, caused by, or otherwise motivated in any way by Microsoft patents. Not officially in public, not behind closed doors while working at Novell, not in the hallways of SUSE's office in Germany, not during my tenure at Red Hat, not over a beer with any of the GNOME developers, nor any people I talked with between the early whispers of GNOME 3 planning to today.

It seems deeply odd that, had GNOME 3 been an "oh shit, Redmond's gonna sue us" action, nobody in all that time would've let it slip. Odder still that those vendors wouldn't have simply pivoted to KDE, CDE, Xfce, or any of the other available environments. Nor did Red Hat, IIRC, jettison FVWM-95 from any and all repos they'd have been liable for -- given that Red Hat has no love for allowing risky things into RHEL or Fedora, I find it odd that they'd have puttered along with GNOME 2 under threat of patent suits until GNOME 3 was ready.

If you can cite anyone who was involved in GNOME development who says differently, who says "yes, we had to change desktop design due to patents," I'd be curious to hear that story. But when folks like Jeff chime in to say "nope," I'd put the burden on you for proof.

What I recall was that desktop folks were trying to make the desktop enticing enough to move people from Windows. They'd heard time and again how Linux was too difficult to use and went in a direction they thought was more user-friendly.

You also assert that Red Hat wrote GNOME in reaction to Qt not being GPL. Some Red Hatters were involved in the early creation of GNOME but that's not really correct either. Red Hat provided sponsorship early on, but claiming "Red Hat wrote GNOME" is over-simplification to the point of falsification. The real history is much more complicated and more interesting.

[1] https://arstechnica.com/tech-policy/2007/05/microsoft-235-pa...


> How could we get as much "chrome" (user interface elements) off the screen as possible, to make every pixel available for what the user was doing?

A bit of a shame how that basically good idea turned into "let's just fill the screen with pointless whitespace instead".


I’m curious what you mean by this. I daily drive Fedora + Gnome. My foot terminal + neovim fills the screen entirely, except for the top bar which I can disable / hide if I wish.

Are you talking about the fat window titlebars? Those are too big, imo, but I have removed them from foot, Firefox, and Brave. I don’t use anything else often enough for it to bother me.


On Linux it's specifically a problem of the default GNOME window chrome and standard applications, yeah.

For instance, look at the settings app screenshots here: everything is surrounded by at least twice as much empty space as actually needed, for no obvious reason (except maybe to prevent fat-finger syndrome on touch displays, but I use a mouse cursor, thank you very much):

https://apps.gnome.org/Settings/

Look how few settings actually fit on one page before scrolling is necessary (and since there's no scrollbar, it's not even clear in some cases that more settings are hiding below). It looks less like a settings application and more like an example application for UI design mistakes.

I have the same beef with modern macOS and Windows UIs though (but GNOME definitely takes the crown of most weird desktop UI).

It's some sort of modern UI designer brain virus to waste valuable screen real estate with empty pixels (but not wasting space doesn't mean falling into the other extreme of filling everything with clutter; UI design is the fine art of balancing functionality with aesthetics). Currently the pendulum has swung way too far in the aesthetics direction, ignoring that UIs should be functional first and pretty only second.


Okay but counterpoint: that looks beautiful.


Counter-counterpoint: it’s not so beautiful (rather middling, IMO) that a well-considered denser design couldn’t achieve both nicer aesthetics and better usability (so much scrolling, ugh).


Beauty is the last thing I care about when digging through settings panels trying to figure out how to disable the latest annoyance


I was also there and jdub is correct. This retelling of history is nonsense. Especially this part:

> SUSE, Red Hat, Debian, Ubuntu, even Sun Solaris used GNOME 2. Everyone liked GNOME 2.

The default DE on SUSE was KDE. If you wanted GNOME you had to request it explicitly at install time. But more importantly, GNOME 2 was enormously controversial. It split the Linux and GNOME communities; it was the systemd flamewar of its day. The developers had to constantly justify themselves and received endless flames and hate mail about it. In fact it went OK because, just like with systemd, it turned out that there was a silent majority who did like the new direction of GNOME and became enthusiastic adopters, but the idea that everyone liked it is just absurd.

Also, GNOME 2 didn't have a Win95 style interface. GNOME 1 was Windows 95 inspired, GNOME 2 was clearly a reaction to macOS although it managed to establish a unique art style and personality as well.

The drivers of the new direction for GNOME 2 were Havoc Pennington and Calum Benson at Sun (who did a usability study on GNOME 1). Pennington spent a lot of time explaining to Linux hackers that more options and preferences wasn't always better. For example in this essay:

https://ometer.com/preferences.html

Both GNOME1 and KDE were basically direct mappings of the Linux CLI experience to widgets; it was common to have checkboxes in apps with labels like "Use Xrender", no further explanation provided. Pennington revolutionized the Linux DE space by arguing that the GUI should reflect what tasks people wanted to do, should try to automatically configure itself and that adding settings had a cost as well as a benefit. Some people saw this new direction as undermining the reason they liked Linux in the first place, as something endlessly tinkerable and tweakable for technical people. They didn't particularly want Linux to be approachable by non-hackers.


"The default DE on SUSE was KDE. If you wanted GNOME you had to request it explicitly at install time."

Depends on the time frame and which SUSE you refer to, to be clear. When Novell bought Ximian it made GNOME the default for SLED and KDE continued to be the default for openSUSE.


Red Hat didn't "create" GNOME (although they were one of it's earliest and strongest supporters), SuSe didn't "save" KDE (in fact, them signing a license with Microsoft had more to do with their Novell deal), KDE isn't "German", and Microsoft's legal threats were not what motivated the shift considering it came out 5 years later.

GNOME3 shifted because of design philosophies in a good chunk of the core developers and a desire to have a more cohesive/user friendly experience a la Mac OS.


>KDE isn't "German"

Sure, its contributors are from all over, but

* It was started by a German living in Germany while a student at a German university

* Its current-day non-profit backing org[1] - which owns the KDE trademark and represents it in legal matters - is headquartered in Berlin.

[1]: https://ev.kde.org


> * It was started by a German living in Germany while a student at a German university

Even at the time, he wasn't the sole contributor. He posted to a Usenet board and gathered the interest of a multinational group before commencing. Today, it's even harder to make that argument, considering only a minority of KWin, Plasma, etc. commits come from Germans (or even Western Europeans).

> * Its current-day non-profit backing org[1] - which owns the KDE trademark and represents it in legal matters - is headquartered in Berlin.

Sure, KDE EV is German. And both the Linux Foundation and FSF are headquartered in the US...yet you would probably take umbrage with GNU/Linux being called "American" for that reason. You have to incorporate somewhere and it's probably going to be somewhere in North America or the EU.


[Article author here]

> Red Hat didn't "create" GNOME

I didn't say they did.

> SuSe didn't "save" KDE

I never claimed they did.

(And it was SuSE, now just SUSE. When attempting to rebut, attention to detail is paramount.)

> Microsoft's legal threats were not what motivated the shift

[[Citation needed]]

> considering it came out 5 years later.

How long do you think this stuff takes?

> GNOME3 shifted because of design philosophies in a good chunk of the core developers

[[Citation needed]]

Or at least wanted. I'd like to know. I've met with the team, at their invitation and expense, and asked them personally, and I still didn't get a straight, coherent answer.

> a desire to have a more cohesive/user friendly experience a la Mac OS.

Well that failed then, didn't it?


> > Microsoft's legal threats were not what motivated the shift
>
> [[Citation needed]]

You made that claim; it's on you to prove it. Instead, you claim something and pretend it is true unless someone else provides proof against it, for a claim that just isn't true.

> > GNOME3 shifted because of design philosophies in a good chunk of the core developers
>
> [[Citation needed]]

A release team member mentions that this is true. What more do you need? I was also a release team member for 10+ years, and I've never heard of anything like what you claimed. I spoke to loads of people. Further, you're confusing GNOME 3 with gnome-shell. Also, GNOME 3 took ages to come out, while you're pretending it was something quickly developed.

Lastly, GNOME classic. As shipped with RHEL 7. GNOME 3 with a more classical interface. Meaning, pretty close to GNOME 3.


Did you mean to write “pretty close to GNOME 2”?

Either way, it doesn't really matter. RHEL 6 was released in 2010 with a traditional menu for starting applications, long after the alleged cutoff date that, according to the article, should have made this impossible.


> I didn't say they did.

You sure as hell heavily implied it then:

>> so Red Hat refused to bundle it or support it, and wrote its own environment instead.

Unless by "wrote its own environment" you mean "packaged GNOME instead of KDE".

> (And it was SuSE, now just SUSE. When attempting to rebut, attention to detail is paramount.)

Got me. Except I'm not the "journalist".

> How long do you think this stuff takes?

Considering development on it didn't begin until late 2008, about two years after the supposed "drastic need to shift"? 2.5-3yrs, give or take.

> Well that failed then, didn't it?

And so did the Zune. Yet it was still designed as an iPod-killer.

> [[Citation needed]]

You're making the claims in your half-researched/personally-biased article.

You cite them, ideally in the "article" itself.


You could be right, and maybe that is one of the reasons.

But I also thought it had to do with VFAT(?) being the default file system on Flash Drives and a patent M/S had.

IIRC, RHEL and friends refused to sign because that M/S patent was working its way through the courts.

But as others said, that article does not seem to reflect any kind of reality :)


That was one of the few (only?) patents that MS actually admitted to.

Ballmer made a bunch of really vague public threats in which he refused to be drawn on which patents he implied Linux (not Gnome, or KDE, or Gnome- or KDE-using distros specifically) was infringing, most of which never materialised into any court cases or patent deals. Did they enter any other deals than the Novell deal, which involved MS paying Novell 11x what Novell paid MS?


https://en.wikipedia.org/wiki/GNOME does a reasonably good job explaining the history:

GNOME was started on 15 August 1997 by Miguel de Icaza and Federico Mena as a free software project to develop a desktop environment and applications for it. It was founded in part because the K Desktop Environment, which was growing in popularity, relied on the Qt widget toolkit which used a proprietary software license until version 2.0 (June 1999). In place of Qt, GTK (GNOME Toolkit, at that time called GIMP Toolkit) was chosen as the base of GNOME. GTK is licensed under the GNU Lesser General Public License (LGPL), a free software license that allows software linking to it to use a much wider set of licenses, including proprietary software licenses. GNOME itself is licensed under the LGPL for its libraries and the GNU General Public License (GPL) for its applications.


The irony of Icaza now working for M$ is thick. He's an amazing guy and I don't begrudge him at all.


Ooh, I 'member! /. is filled with stories and comments of people badmouthing Miguel because of his stance on integration between Linux and the Microsoft world. His views always appeared "controversial" to the open source world, and the sentiment on /. was that he was an M$ apologist and that he only wanted to be noticed by M$.

I remember back in the day when he started Gnome with Federico, he got a place in Time magazine's list of influential people. Being a kid from the same country, I wrote an email telling him I wanted to help build Gnome, and he replied to me! To me he always appeared a pragmatic person.


I mean, it's always kinda played off like he turned coat. But if you know the history it makes total sense.

He started off with GNOME, got interested in .net and Mono (originally as a means to integrate them into GNOME/GTK) and shifted his focus to those. He built a company around that (with others), which then got bought by Microsoft due to their obvious interest in .net (and probably internal talks about the future direction of .net Core). From then until 2022, his work was mostly on .net and its open ecosystem.

So it's neither contradictory nor counter to his roots, but is humorous when you say "the guy who created GNOME works for Microsoft". Despite the fact that he probably did some major work on bridging the two worlds together and leading to modern MS actively incubating and contributing to Open Source projects.


Do you have any insight into why the Gnome desktop never officially adopted pie menus?

I know there were a few early implementations of pie menu widgets for Gnome many years ago, but as far as I know the window manager and desktop never adopted them.

Was it because of NIH? Was it because Gnome only aspired to imitating other mainstream desktops and not innovating? Was it because the Gnome developers were tricked by Alias's FUD about their illegitimate patents? Or some other reason?

Pie Menu FUD and Misconceptions

https://donhopkins.medium.com/pie-menu-fud-and-misconception...

Simon Schneegans eventually implemented his own pie menus for Gnome, but they were never adopted into any mainstream Gnome desktop user interface:

https://schneegans.github.io/news/2015/04/18/gnome-pie-061

He later made Fly-Pie:

https://www.youtube.com/watch?v=sRT3O9-H5Xs

And he's currently making cross platform pie menus with Kando:

https://news.ycombinator.com/item?id=39206966


Hope you're doing well Jeff! Those were good days!


Please elaborate



> So, both needed non Windows like desktops, ASAP, without a Start menu, without a taskbar

For me these two were the only improvements in the user interface of Windows 95, but they were indeed very important. There was no other user interface element of Windows 95 that I considered useful and that I had not already used earlier.

The Start menu offered a much more convenient alternative for starting programs than starting them by clicking icons, like in the previous versions of Windows.

Nevertheless, I cannot understand how it was possible to patent a "Start menu" when it had only trivial content modifications in comparison with the menu used in twm, some 8 years earlier.

The second improvement was the auto-hidden taskbar, due to which the minimized windows no longer wasted screen space.

This is the only user interface element that I had not seen before using Windows 95, so perhaps it was patentable, even if it is a pretty obvious solution that could be found by anyone in the instant when it was recognized that the screen space occupied by minimized windows is a problem.

This is a general problem that affects many patents which are the obvious solutions for some problems and they do not require any kind of ingenuity for their discovery. The only reason why they are novel, is because those problems have not been encountered earlier, so nobody had thought to patent solutions for them.

So I believe that, with the possible exception of the auto-hidden taskbar, it should have been easy to fight any claims of patent infringement by other graphical desktops, because for all features the differences from prior art were trivial. If the story from the parent article is true, then perhaps Red Hat et al. were scared by the legal expenses, so they chose a workaround even though most Microsoft claims should have been invalid.


[Article author here]

> This is the only user interface element that I had not seen before using Windows 95

Then I suggest that you need to look deeper.

I did, and I analysed it in some detail nearly a decade before that article:

https://www.theregister.com/Print/2013/06/03/thank_microsoft...

> if it is a pretty obvious solution that could be found by anyone in the instant

That is not even slightly true. Just look at a series of the Win95 beta screenshots and you will see Microsoft slowly and with considerable difficulty iterating their way to this difficult and non-obvious solution.

https://www.betaarchive.com/forum/viewtopic.php?t=3329

> for all features the differences from prior art were trivial

They really are not. I have spent days looking. I do not think you have.


Did you purposefully choose a link that doesn't actually include any working gallery links? Here, here's the actual wiki for betaarchive:

https://www.betaarchive.com/wiki/

And here's a link to the first build with a fully functional start menu:

https://www.betaarchive.com/wiki/index.php?title=Category:Ch...

That's b81...there were 400+ more to follow (800+, if you include the follow-up service packs). Pretty much every person who worked on Chicago who talks about the start button/taskbar says it was one of the first things to be finalized, which the builds obviously confirm.

The "iteration" that you refer to is simply taking the combined Presentation Manager and Program Manager code and reconfiguring it into the taskbar.

For someone so "aware" of software development, you seem to expect unreasonably long timelines (5+ years for GNOME3) and unreasonably short timelines (what? 4-8 builds? to completely reconfigure a major UI component and paradigm) when they're convenient to making your wholecloth narratives work.


> I cannot understand how it was possible to patent a "Start menu"

It isn’t, this “article” is just bad-intentioned bullshit.


So prove me wrong, and when you do it, stop calling my work insulting names and accusing me of motives I do not have and things I do not do, such as calling me bad-intentioned.

This is ad hom, and it is bad and wrong, it is hurtful, it is mean, and it is against the spirit of HN.


You could prove your unsubstantiated claim.

What is the relevant patent number, and how did Gnome's design infringe on it, and how would removing it require a fundamental re-design of Gnome to avoid infringing?

Here's what I've found: they do have a patent for a "taskbar with start menu", number US5920316A. The abstract focuses on the taskbar, and the main claims are all linked to having a taskbar and to the taskbar being tied to a specific location (though changeable) and being permanent or auto-hiding in that location. Ignoring whether or not Gnome actually infringed - there are a lot of additional details to the claims - replacing the taskbar with iconified windows or using a click or hotkey to pop up a UI element instead would both be enough to avoid infringing on all the main claims of that patent. But there are so many additional details that it's likely you could easily avoid that patent while retaining a full task bar in any case.

What is clear is that it'd be ludicrous for Gnome to take as long as they did to remove the taskbar if they were actually being threatened over this patent, given how simple it'd be to work around.

The claims regarding the start menu are excruciatingly detailed in the general description, but very little actually appears in the actual claims. This is a common tactic to try to give the appearance of a far more comprehensive patent than you actually have, but what is covered regarding anything resembling a start menu in the actual claims of the patent is near nothing.

Given that you're being directly contradicted by people more closely involved, unless you have something substantially better, these claims look like total hogwash.


> So prove me wrong, and when you do it, stop calling my work insulting names and accusing me of things I do not have or do, such as calling me bad-intentioned.

You keep repeating this, but it's not up to others to prove what you claim is wrong. It's up to you to prove it is correct.


Prove yourself right first. Literally the creators behind several of these projects called out that what you are saying is outright false.

I’m commenting in multiple places because fighting disinformation is very hard (compared to producing it), I’m just trying to make it right, before people believe the falsehoods you state.

Your intentions might be good, but that doesn’t change the outcome.


The legitimacy of this information seems very suspect - all based on guesses from public statements, not any real source.

My own memory of ~20 years ago isn't great, but I recall the only big legal issue regarding a Windows-like UI was the distro called "Lindows", aimed at making the transition to Linux easier for users, which had to be renamed Linspire.


It's OP's wet-dream reality. I'm not even sure how old he was when all this happened. Maybe he just collected a bunch of references and created a narrative in his head. Good for him.

The way Microsoft played at that time was through SCO: the lawsuit, the patent-pool deals with IBM, Groklaw and Pamela Jones.


I wrote it. I stand by it.

If you can refute my points, please do so, with evidence and citations.


You made your claims without any "evidence and citations". Even when you provide links they mostly don't back up the assertions in the article.[1]

At any rate, demanding extensive "evidence and citations" to "prove you wrong" when you haven't done the same yourself is profoundly hypocritical.

Surely as a self-professed "atheist skeptic" – according to your profile here – you must know how this works? You can't just claim all sorts of controversial things with no evidence and shout "prove me wrong!" when people object.

Also, there ARE "evidence and citations" right here in this thread. Several people who were directly involved with the projects at the time are saying "that was not a consideration, you are wrong". Are they all misremembering something so large? Unlikely. Are they all lying? Even less likely – why would they? So why do you disbelieve them?

[1]: e.g. "Microsoft was threatening to sue all the Linux vendors shipping Windows 95-like desktops" isn't at all substantiated by the provided link, which just talks about generic threats to sue. It's been a long time, but IIRC one of the criticisms was that Microsoft would claim "Linux is illegal; we'll sue you!" without ever specifying what exactly was illegal. Classic Ballmer-era Microsoft FUD. AFAIK there was never any specific threat to sue over "start menu like" environments, and I don't recall it ever being brought up as a concern. A good citation to prove this claim would be a discussion on the Gnome or KDE developer lists discussing these concerns, or something along those lines.


Proving a negative against something you asserted with no evidence of your own beyond guesswork from op-eds and quips from interviews is neither possible nor worthwhile.

If you would like to update your article with primary sources and evidence that can be verified, please do so instead.


You're an idiot who doesn't understand that primary sources trump other citations.


I disagree with the sentiment of this article (I don't remember Microsoft having anything to do with Gnome 3 being... uh... Gnome 3), although I thank it for unintentionally introducing me to tiling window managers. I've been using i3 for over 10 years, and every time I try gnome/kde again, I go scrambling back to the comfort that is a TWM.


The worst thing about Gnome now is the amount of baggage the few Gnome applications I use bring with them. I keep replacing them step by step, because they're so tied to Gnome that it's really painful to depend on them if you don't use Gnome.


NetworkManager is a nightmare for this: it's so easy to pull in webkitgtk or gnome-online-accounts. A travesty.

And of course you need it if you need ModemManager. Joy.


On a lower level: some Gnome apps will hang for up to tens of seconds, with no UI or error messages, if there is no dbus user session running, unless DBUS_SESSION_BUS_ADDRESS is set to an empty string. Evince is a case in point.

I get that they're not written to run outside of Gnome (and that a lot of distros will run dbus user sessions on non-Gnome setups too), but it's nevertheless infuriating.

I don't particularly like dbus, but I don't particularly dislike it either. So it's not like I went out of my way to strip out a user session bus.

My current setup just happens not to spawn a bus for the user session, because nothing else I care to run cares whether one is running or not - until I ran into Evince.

What mostly annoys me is the user hostility of hanging like that over a feature the app can run just fine without - popping up a warning if the user happens to trigger something that is degraded without it would be far less hostile.
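For what it's worth, a minimal sketch of the workaround described above, in Python for illustration. It assumes this report is accurate (an empty DBUS_SESSION_BUS_ADDRESS reportedly stops the wait for a session bus; this is not official GNOME or dbus documentation), and "document.pdf" is a placeholder:

    import os
    import subprocess

    # Per the report above: an empty DBUS_SESSION_BUS_ADDRESS reportedly
    # stops Evince from blocking while trying to reach a dbus session bus.
    env = dict(os.environ)
    env["DBUS_SESSION_BUS_ADDRESS"] = ""
    subprocess.run(["evince", "document.pdf"], env=env)  # placeholder filename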


Out of curiosity, how do you do your i3 installation? I've become accustomed to starting with Ubuntu Server and then installing X, video drivers, i3, and eventually various system utilities by hand. It leads to a very lightweight and tidy setup, but it takes forever. I tried starting with a normal Ubuntu installation, but de-GNOMEing the system wasn't any easier.


I'm not running i3 (any more; went to bspwm and then my own) but I've always started with normal Ubuntu. For i3wm I seem to remember it was reasonably straightforward as long as you're fine with some of the tentacles of Gnome. That is, I'd still typically run the Gnome settings stuff, and run a bar that'd happily handle docking of nm-applet and the like. I think next time I might look to a distro with xfce or similar and strip down from that instead as a means to get a cleaner system without the pain of building up from scratch.


> as long as you're fine with some of the tentacles of Gnome

Ah, herein lies the hangup. GNOME is fine on its own but I find its tentacles incredibly annoying when they make their way into i3. Maybe I just need to learn how to make my own Ubuntu fork and shove my setup into that.


That explains the terrible UI, but it doesn't explain why the GNOME and GDM developers are pigheaded enough not to implement highly requested features, which could be hidden from sight via gconf.


To this day it won't let you easily pick a solid color for the background. It blows my mind this was removed.

Does macOS allow that? I haven't used it in ages. I'm asking because at some point it seemed all GNOME wanted was to follow Apple guidelines.


> Does macOS allow that?

Yes.


Huh. Gnome happened way before Ubuntu was a thing. Ubuntu has never "needed a non-Windows-95-like desktop".


Indeed, Ubuntu arrived during the Windows XP era. And that part would've been nonsense in any case, given that reconfiguring a Gnome setup to use menus in a different configuration - e.g. matching elements from designs like RISC OS, AmigaOS, MacOS or NeXT, all predating Windows 95 and giving plenty of prior art to shut down any Microsoft threats - would've been trivial.

Any threats from MS over anything - that MS might have been sabre-rattling I can buy - would need to have gone deeper than that.


I wrote the article.

I also wrote this nearly a decade earlier:

https://www.theregister.com/Print/2013/06/03/thank_microsoft...

I find your response incoherent. Do please explain.


There was nothing incoherent about it. There were two largely separate parts to it.

1. Ubuntu didn't exist until nearly a decade after Windows '95. If there was an issue relating to Windows '95, Ubuntu was never part of it.

Separately,

2. Gnome was trivially modifiable in ways that meant addressing the kind of superficial issues you bring up did not in any way require the drastic changes brought by subsequent versions of Unity.

If MS patents had been an issue relating to the specific features you listed, stripping them out or replacing them without significant other changes would have been trivial. Indeed, MS's own justification for the taskbar points out the perceived downsides of pre-existing iconification (and ironically massively overeggs the case, arguing the problem was double-clicks while ignoring that they could trivially have solved that by allowing activation with the same single click they used in the taskbar).

Your description is simply wildly implausible, especially entirely absent evidence (patent numbers would be a start) and contradicting statements from people much better placed to know.

I'll also note that I remember MS sabre-rattling about Linux, and it was not constrained to Gnome and KDE, or even directed at them, but at Linux in general. Meanwhile, the MS-SUSE/Novell patent agreement was not the agreement an MS that believed it had an actual case would make - it involved MS pre-paying Novell $440m for Linux support coupons, and Novell paying MS $40m for the patent pledge. Coming out $400m ahead when accused of patent infringement is pretty good. I'd like someone to make those kinds of patent accusations against me. Pretty please, anyone?


[Article author here]

I think you're getting your timelines and versions muddled up.

The article is not about GNOME. It is about GNOME 3.

GNOME 1.0 was in 1999.

GNOME 2 was 2002.

Ubuntu was 2004 and launched with GNOME 2 and nothing else.

Microsoft's campaign of legal threats began around 2006 and intensified around 2007.

Unity was released 2010 and first became default in 2011.

GNOME 3 was released 2011.


> The article is not about GNOME. It is about GNOME 3.

The article title literally mentions "GNOME and GNOME 3".



And what a great discussion. Apparently many people who were involved in the development and design of Unity and KDE debunk the claim of the article. Edit: E.g. https://news.ycombinator.com/item?id=32259325


They deny it. They didn't debunk it.

Denial means saying "that is not true!" Debunking means showing something isn't true.

Also, as the author of the piece, I want to point out 2 things:

1. I stand by it. Nobody's refuted it yet, including today.

2. I've rarely had so many ad-hominem attacks as for this, both last time and this time. Everyone making them, including some much bigger names than me, should feel ashamed for their poor behaviour around this.

But calling out corporate BS and FUD often gets one attacked by fanboys. It still hurts, but it is important to do.


This is funny. You say someone did something because of X reason.

That someone says "No, I did it because of Y reason", and your reply is to ask them to prove it?

Let's see, who would I choose to believe.... mhmmm


Why should anyone believe you, a third party on these things, over the first-party primary sources? The primary sources disagree with you, and any historian or journalist worth their salt should value primary sources more.


gnome saying


My impression was that Gnome 3, Unity, and Windows 8 were all a part of the "convergence" craze to use the same OS on mobile, tablet, and desktop. KDE also introduced convergence features around this time, but without destroying the default experience users expected.


I think you're right, and that phenomenon extended well beyond desktop environments, too.

The same thing happened to many existing web sites, with "responsive web design" and "mobile-first design" starting to be heavily hyped at the time.

The usability of such web sites in desktop browsers was typically severely degraded, and the mobile and tablet experiences were often still pretty bad as well.

I see the Slashdot redesign of that era as a good example of that happening. That web site became unusable on every device I had at the time.

Even today, I find web sites like that to be awkward and unpleasant to use. It was a much better user experience when separate desktop and mobile versions of a given web site were more common.


There is always a simple reason - and when you hear it, it is like a clear bell explaining why something always felt "off" - and this is one such example.

Thank you.

Not sure that it matters, as the desktop that matters now sits in our hands and is touch-sensitive, but it's nice to know.


Both Miguel de Icaza and one of the core Unity developers have said that the article is simply not true.


I wrote it, two years ago.

I am still waiting for refutation rather than mere rebuttal.

(To rebut: to say something is wrong. To refute: to prove something is wrong.)

I want to see the receipts. I want evidence.

I note that executives and project leaders from large companies and organisations subject to legal threats over intellectual property issues are required to deny infringement because to say anything else constitutes evidence: it could be construed as legal confession of the copying or other IP infringement of which they are being accused.

Or, to put it more briefly: MRDA. Mandy Rice-Davies applies.

https://en.wikipedia.org/wiki/Well_he_would,_wouldn%27t_he%3...


For anyone to have any reason to offer up a stronger rebuttal, you'd need to have substantiated your claims with something other than speculation first.

> I want to see the receipts. I want evidence.

Show us the receipts and evidence for your claims first. You're the one who started this without presenting a shred of evidence.

To then imply the people contradicting you are lying when they present their description of their experience is at best wildly distasteful.


> I want to see the receipts. I want evidence.

If the people who were involved directly telling you "no, you're wrong, that's not why I did it" doesn't constitute evidence you'd accept, then what would?

> I note that executives and project leaders from large companies and organisations subject to legal threats over intellectual property issues are required to deny infringement because to say anything else constitutes evidence: it could be construed as legal confession of the copying or other IP infringement of which they are being accused.

But saying "no, patent concerns were not involved in our decision" is not what people changing their product over patent concerns would do, because that would be lying, and the lie would almost certainly be caught during discovery. Instead the magic formulation is something along the lines of "no comment" or some other variety of "shut up". Not to mention that it can be perfectly reasonable to make a change over legal concerns while still disagreeing with those concerns, because litigation is fucking expensive.

In any case, if the developers of GNOME 3 made the changes because they were afraid of patent threats from MS, coming out and saying "no, we didn't do it because of that" would be among the dumbest things they could do.


I don't know about the other guy, but to be quite honest I'm not confident Icaza would be forthright about something that cast Microsoft in such a bad light, even before he started working for them. His admiration for Microsoft always seemed apparent to me since the genesis of Mono. I was not even remotely surprised when he got a job with Microsoft, I think he wanted it for a long time.


Oh. It seems a much more attractive idea. Odd.


This “article” has absolutely no basis in reality.


Then it should be easy for you to prove it wrong.

Go on, then.


> Then it should be easy for you to prove it wrong.

Again, prove that you are right. It should be easy, no?

Even a release-team member from that time saying you're incorrect only resulted in you saying "citation needed" instead of acknowledging what was said.


Behind every simple reason is a much more convoluted story the writer omitted.


I wrote it.

Please explain.


The fact 'troll' is not found on that page suggests they're missing some of the history.

'Debian' is mentioned twice, in quick succession, including a suggestion that Debian 'adopted GNOME 3 and systemd' (at the same time).


"The task bar, the Start menu, the system tray, "My Computer", "Network Neighbourhood", all that: all original, patented Microsoft designs. There was nothing like it before. "

Apart from, you know, NeXTStep, which had a dock serving as system tray/start menu, as well as equivalents of My Computer and Network Neighbourhood.

You can find screenshots of NextStep 3.3 showing this, such as https://winworldpc.com/product/nextstep/3x from 1992, and it's possible they're also in earlier versions - NextStep 1.0 was released in 1989.


Not the same thing at all, as I deconstructed in some detail nearly a decade before:

https://www.theregister.com/Print/2013/06/03/thank_microsoft...


"NeXTStep had its Dock, but that doesn't have menus or a status icon."

NeXTStep not only had menus, but tear-off dockable menus/sub-menus that are superior to any menus we have in apps today. They're clearly present in the screenshot I linked to.


https://youtu.be/rf5o5liZxnA?feature=shared&t=542 has a demonstration of NextStep 3 (released September 1992) that clearly shows a computer icon that Steve refers to as "My Computer", and then moves on to him hitting the "Network" globe icon, which shows other computers on his local network.


There was animosity between Red Hat engineers and Canonical engineers. The Red Hat engineers were not particularly thrilled that Ubuntu was the hottest Linux distro while they themselves did all the work on GNOME.

The theme was that Canonical was not sponsoring as many engineers to work on GNOME and other essential projects. I think Canonical also had trouble getting UI design changes past GNOME.

Canonical wanted to pursue a different UI with Unity. It was supposed to be the UI even for mobile devices. They spent a lot of money.


[flagged]


You appear to be describing an alternate universe where that comment is heavily downvoted, not to mention leaving out things like the popularity of ranting about systemd. Maybe try to clarify what you’re talking about and have an example?


It was at the time I posted ;). And obviously, we all know it's an alternate universe - a universe where the apocalypse is happening.

It's no wonder civilizations never last forever ;)


I don't think this is really right. Microsoft did pull legal bullshit, but that didn't cause GNOME 3 to happen. In fact, contributors were complaining that GNOME was stagnating years later, in 2008 [1]. GNOME 3 didn't really start development until late 2008.

[1] http://wingolog.org/archives/2008/06/07/gnome-in-the-age-of-...

Edit: Here's one of the founders of GNOME's opinion on the article from the last time this was posted:

> This is nonsense.

https://news.ycombinator.com/item?id=32258035


> contributors were complaining that GNOME was stagnating

Meanwhile users [me] were pleased as punch that GNOME was stable, useful and usable, just great all around and certainly not in need of major changes...


Why did forking GNOME 2 as MATE make it safe from M$? I understand why GNOME had to change, but how were others not in similar danger of litigation?


Smaller Linux distros could claim plausible deniability. They could just move MATE into some 3rd-party repository and claim their primary DE is MWM or the like, but they also don't mind users trying 3rd-party GNOME at their own risk.

This wasn't an option for Red Hat, whose contributions were all over GNOME.



On the VIC-20 and the Commodore 64 we had 4 function keys (F keys)[0]. The order was a bit weird and the usability was not optimal: without pressing Shift you had F1, F3, F5 and F7.

If you designed a menu with just 4 options, it would look something like:

[F1] - new game

[F3] - high scores

[F5] - settings

[F7] - credits

After seeing it just once, reading wasn't really required. You would just hammer F1 twice for [F1] new game > [F1] easy mode. You might see the second menu flash for a frame, or not at all.

If the menu got more complicated, you could choose to use the Shift+F keys for options used less regularly, or you could move the option into a sub menu, or a sub sub sub sub sub menu.

The weird thing was that if you accessed a sub sub sub sub sub menu a few times, it would get mapped into muscle memory. I remember hammering out something like [F3][F3][F5][F7][F1]: you press the second button twice, then the one below, then the bottom one, and end with the top one. If there was any hesitation, the visual menu would show up and trigger the memory before you even parsed the text.

It would almost instantly get me to the correct action out of a seemingly unlimited number of options.

Not counting the unusual even (shifted) options: if the menus have 4 options each, 4x4x4x4x4 is 1024 things!?!

I remember the awe seeing other people instantly plop things onto the screen.

Of course with such a limited computer there was not much need for such elaborate navigation but it was truly fantastic. My hands would navigate menus I didn't even know I knew.

Trying to navigate Windows with the keyboard was a big disappointment by comparison. I now had to repeatedly think about how to do things, which on the C64 was almost never the case. After finding something, my mind had to noticeably pull itself back to what I was actually doing.

[0] - https://www.nightfallcrew.com/wp-content/gallery/c64_brown_f...
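For anyone who never used one of these, here's a minimal sketch of the scheme, in Python purely for illustration (the labels and nesting are hypothetical): four odd F keys per level, each leading either to a leaf action or to a submenu, so a memorized key sequence walks straight down the tree - five levels of four options gives the 4x4x4x4x4 = 1024 leaves mentioned above.

    # Each key maps to (label, submenu); a submenu of None marks a leaf action.
    MENU = {
        "F1": ("new game", {
            "F1": ("easy mode", None),  # hypothetical sub-options
            "F3": ("hard mode", None),
        }),
        "F3": ("high scores", None),
        "F5": ("settings", None),
        "F7": ("credits", None),
    }

    def navigate(menu, presses):
        """Follow a sequence of F-key presses down the menu tree."""
        label = None
        for key in presses:
            label, submenu = menu[key]
            if submenu is None:
                break  # reached a leaf action
            menu = submenu
        return label

    print(navigate(MENU, ["F1", "F1"]))  # hammer F1 twice -> "easy mode"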


>So it went its own way with Unity instead: basically, a Mac OS X rip-off, only IMHO done better

>They are like twins, and switching between them is very easy.

>Unity, alienated a lot of people who only knew how to use Windows-like desktops

Unity alienated a lot of people because it was a mess. It was not nearly as polished as its macOS counterpart, and at the same time it failed to provide the old experience - and you couldn't do much about it.

But obviously this is just one "IMHO" vs another.


I miss Unity. I still have an old slow as hell PC that has an Ubuntu install with Unity that runs incredibly well. The global menu bar also conserves quite a bit of screen space, which I love.

KDE is close enough (and menu bars have mostly disappeared), but it's just not the same.


This is fascinating. I was a Gnome/Linux user at the time, and just remember everyone hating Gnome 3. I don't remember ever hearing anything about the background. Had more people known about this, it would have at least helped users sympathize.


And let us also not forget to praise Microsoft for their role in Linux. See http://www.groklaw.net/index.php if you don't know about that part.

Or their anti trust actions as listed here: https://en.wikipedia.org/wiki/United_States_v._Microsoft_Cor....

But any Microsoft/big tech apologist will tell you all that is in the past. Except then there is the $28.9 billion they owe in taxes which they refuse to pay.

We don't care, we don't have to care we are {Microsoft, Apple, Google, FaceBook, Elon Musk, Jeff Bezos, Bill Gates, AT&T, et al}


Wild to me that you can patent a "taskbar".


Not in the slightest. There was nothing like it before, and I broke this down, in detail, nearly a decade before I wrote this article.

https://www.theregister.com/Print/2013/06/03/thank_microsoft...

Just look at the development history to see how hard it was to invent:

https://www.betaarchive.com/forum/viewtopic.php?t=3329


A GUI itself was hard to develop, I'm sure, but no one is patenting "a GUI for an operating system".

Can I patent a steering wheel?

It's wild. Where does the line start and stop?


They absolutely did patent OS GUI designs, yes.

This is why Apple sued Digital Research -- over the GUI to its OS:

https://www.cnet.com/tech/tech-industry/apple-in-the-courtro...

https://ctrl-alt-rees.com/2023-05-13-how-apple-ruined-gem-an...

https://www.osnews.com/story/26322/apple-vs-dri-the-iotheri-...

Where does it stop? At specific, precise implementations.

Menu bar at the top of the screen? Too general.

Menu bar at the top of the screen and menus are opened by left clicking on the title, after which the menu appears (termed a "pull down menu"): yes, that was Apple IP.

So, everyone else changed their designs:

* DR moved to drop-down menus instead: they appear on mouseover, without a click.

* Commodore moved to a status bar that only becomes a menu bar on a right click in Amiga OS.

* Microsoft moved to menu bars inside the window, underneath the title bar (although, in accordance with Fitts's law, they are much harder to hit; but MS had dedicated keystrokes for them, which the others did not).

MS licensed this design to the Open Group for Motif, and today, most Unix GUIs use it, although now many are moving to hamburger menus.

There are also industry standards for this stuff, which software designers are meant to follow, and which therefore 100% will not get you sued.

https://en.wikipedia.org/wiki/IBM_Common_User_Access

This is all documented history. The fact that people today forgot this is a problem, because there are reasons things are the way they are. This stuff makes sense. It got the way it is, not by accident or coincidence or random drift, but because of multi-million dollar lawsuits.

GNOME 3 threw out a lot of good design because the designers did not know there was a difference between "stuff that is Microsoft IP" and "stuff that is an industry standard".

But the GNOME designers didn't know what bits were perfectly safe to use, and which bits infringed on MS patents... so they tried to build something new and different, and took inspiration from phone designs, which were not meant to be driven by a keyboard and pointing device.

There are reasons things are how they are. I think that matters. History matters.

As Santayana said:

« Progress, far from consisting in change, depends on retentiveness. When change is absolute there remains no being to improve and no direction is set for possible improvement: and when experience is not retained, as among savages, infancy is perpetual. Those who cannot remember the past are condemned to repeat it. »

It's usually misquoted as "those who don't understand history are condemned to repeat it."

Which is why Henry Spencer said of Unix:

"Those who do not understand Unix are condemned to reinvent it, poorly."

It is a mystery to me why the FOSS and Unix community reacts so negatively and hostilely to being reminded of its history... but that is not going to stop me doing it.


Whatever part they patented, if anything of note, would have also been very narrow, given the history of interfaces with iconification of windows/"tasks" going back at least a decade prior to Windows '95. Maybe making them textual or organizing them into a bar? It'd in any case have been trivial to work around given the number of alternatives with prior art.


If there was no prior art, why not? I know nothing about the origins, but whoever invented it, tested it on people, and improved it to make it shippable deserves some protection under the laws by which all inventions are protected.


A 1980s car dashboard is prior art.


Yeah, then my index finger is prior art for the computer mouse. It doesn't work that way. If you can't see how the idea of putting a taskbar where it is, and making it usable in an operating system, requires significant invention, I don't know what to tell you. Sure, today we look at it like it is the most obvious thing in the world, but back then it wasn't that obvious.


More closely, the taskbar is a digital equivalent of bookmarks.

20 people in clean rooms, faced with the task of figuring out a way to show all currently running applications in a small area of the screen, would have come up with the same thing in 15 minutes. The fact that someone came up with it first shouldn't give him the same patent protection as, say, the blue LED.


OK, as suspected, you are underestimating the scope of the invention and reducing it to a tiny feature of it.

The patent deals with what it is, how it interacts with running and exiting applications, how the user interacts with it, how it can be configured, in what ways it can be configured, even the Start menu and a plethora of other things. It is an invention that can be patented and was granted a patent; it was not even controversial. You don't need a blue-LED-type breakthrough to patent something. If you can't see how coming up with all of the above from a blank slate, and iterating on it until it fit with the system and user expectations, made it the most successful UI innovation ever (remember, something that "obvious" was mysteriously not present on earlier systems, even when they supported multitasking), then again, I don't know what to tell you.


I'm not sure what you mean by not present in earlier systems.

Windows 3.1 had minimized programs' icons in the bottom-left corner of the desktop, and so did NextStep, and a similar icon space can be traced all the way back to Windows 1.0.

Extending that to all running programs and/or dedicating an always-visible area doesn't appear to be as fundamental as you want it to appear. Integrating it into the OS is obviously not a meaningful part of the innovation, as the Windows approach would not apply elsewhere.

The essence of it is someone having the idea that all programs' icons should be always visible on the screen; the rest is implementation, and ideas shouldn't be patentable. I'm also pretty sure adding a taskbar to KDE/Gnome was done not by reading the patent, but by implementing this basic idea from scratch.

Going back to blue LEDs: it's as if the concept of a blue LED were itself patented, and nobody could make a blue LED with a different kind of semiconductor and doping.


Can you though?


Why does anyone actually give a damn about desktop trivia?

I can sort of understand getting uptight if you're a huge fan of tiling, for instance.


This is crap. What happened was a change of paradigm inspired by the iPhone. Windows a few releases later followed exactly the same trends that GNOME and Unity spearheaded. Same thing with macOS.


So you are claiming that your argument of "inspiration" somehow disproves my list of dates, legal threats, and corporate decisions?

Really?


You are making connections that are not there. For example, Novell, which owned SUSE, made that deal with Microsoft. But SUSE, the corporate distro, at that point was not using KDE as its default desktop, but GNOME. This stemmed from the fact that Novell had also acquired Ximian and was deeply involved with GNOME (and Mono and .NET). Corporate SUSE used GNOME, and they still do. KDE and SUSE have no relationship of that type, not even today. How could KDE be protected by a deal that did not involve them at all?


Qt had licensing terms that were less compatible with open source.


Sounds like a load of bull to me.

They could have gone with any sensible approach, and they chose an absolutely braindead one.

Luckily I was always a KDE user, so I believe anybody who refuses to switch to KDE has no footing to complain either.

> So it went its own way with Unity instead: basically, a Mac OS X rip-off, only IMHO done better. Myself, I still use both Unity and macOS every day.

Why write this post to complain then?


XFCE user checking in here. I use Plank/Docky to add a macOS-style hotbar at the bottom, so I guess I've secretly wanted a Mac for a long time. I played with Compiz Fusion in 2008 and had plenty of cool graphical-effect gimmicks to satisfy me for a while, so these days I tend toward something that just gets out of my way. XFCE does a pretty good job of that without overloading me like KDE can.


I wrote this and I use Xfce too.

The Docky/Plank thing has always mystified me, so can I ask: why?

An ordinary Xfce panel will do that, and if you add the "docklike taskbar" plug-in it integrates really well, without needing additional 3rd-party tools and the extra RAM they use.

So... just: why?


I should test whether I can cycle through multiple terminal windows by scrolling up - that's the main reason I use it. However, I'm also not generally RAM-limited in a way that forces me to run too lean a system. I will give the native plug-in a try though; thanks for the recommendation!


> Sounds like a load of bull to me.

Why? That's a harsh thing to say to me when all I'm trying to do is explain some history a lot of rich and influential people would prefer was forgotten.

> Why write this post to complain then?

I'm not complaining. I am trying to counter FUD that is obscuring documented history.


> I'm not complaining. I am trying to counter FUD that is obscuring documented history.

But you haven't actually shown any proof of your claim. You claim to "explain some history a lot of rich and influential people would prefer was forgotten" while offering no proof - all while demanding proof that you are wrong.


Let's face it: nobody remembers the Linux desktop wars. This story is long forgotten, and the rich and influential people don't have to do anything.

Both GNOME and KDE had their highs and lows but in general they ended up OK.

Apologies if I was too harsh.


Forgot? I never knew, and I can't pass the #%@*ing captcha to read a blog post so I still don't know.


Do you mean the Cloudflare captcha thing? I will gladly email you the text of the article if you'd like it, or you can try this archive link: https://archive.ph/l3yX8


Spoiler alert:

> The "why" part seems to be forgotten now: because Microsoft was threatening to sue all the Linux vendors shipping Windows 95-like desktops.


Which, last time this was posted on HN, was denied by a lot of the key people who were actually there.


Denied, but not shown wrong in any way.

Reminder: anything other than denial means admission, and that exposes them and their employers to legal threats.


Feel lucky; it's completely made-up bullshit.


You are very hostile and keep calling me names, and I don't even know who you are.

Why do you keep attacking a stranger? Do you hate me for some reason? Because it certainly feels like it.


> Why do you keep attacking a stranger? Do you hate me for some reason? Because it certainly feels like it.

I do not see that person attacking you. He's saying the article is bs, that is about the article, not you as a person.

> You are very hostile and keep calling me names

I haven't seen one reply from that person that is about you as a person. I see loads of replies saying the article is bs. I see loads of replies from you saying that a claim is true unless proven otherwise. Not sure, but the aggression is not one-sided.


Frankly, until I saw their replies, my impression was that the article was bad, and I'd have forgotten about it and who wrote it within minutes. If anyone gets a bad impression of this person over this, it will be because of their responses in this thread, with the stubborn refusal to accept that they haven't provided any evidence for claims that are being contested by an increasing number of people who were directly involved.


I wondered why a 2YO blog post of mine got 2 comments in an hour. Now I know.


If this were true, what about Cinnamon and the Budgie desktop?


That is rather silly.

Cinnamon is based on GNOME 3. So, logically, it could not have been affected, because it did not exist at the time. It is based on something that happened following these legal threats.

Budgie came along after that.

Legal threats: 2006-2007.

GNOME 3 and Unity: 2011.

Cinnamon: 2012.

Budgie: 2014.


The simple answer is that this is most likely not true at all. Nobody who was actually there remembers it.


Go on then, show that to be the case.

I was. I do. Were you?


Well... for me, "modern desktop UIs" happened for a few reasons:

- the classic desktop design is simply ridiculous. Start with icons: they are launchers organized mostly free-form, so there can't be many of them, but they also get covered any time you open a window. Like desktop backgrounds, they are an absurdity coming from the idea that the digital desktop must resemble a physical desktop, where icons are "tools laid out on the desk";

- menus are another absurdity, coming from the paper era, where people knew labels and hierarchies of hanging folders;

- generic "bars" are no different from the pens and paper clips on a physical desktop: small containers of "many small things" to work with.

That was a first timid step toward a less absurd design. Then there's the floating-window model, born clearly to resemble sheets of paper you can move around and lay out on a desktop - which is why tiling WMs have become popular again after decades.

Standard IT users at the time of Unity and the first Gnome SHell (the double capital is not a typo) had already tried various launcher bars on the left or right side, simply because the move from 4:3 to 16:10, then 16:9, and now the start of 21:9 screens left them needing more vertical space while horizontal space became abundant. For similar reasons some apps grew "sidebars", and some started the concept of vertically stacked tabs, which remain readable even when there are many of them and do not waste vertical screen space. Most standard users back then were already habituated to "quick launchers" mimicking a CLI with some added completion, which is why menus disappeared, substituted by "dashes", and why Ubuntu - rightly, though ignored by many - pushed the HUD for in-app menus as a quick search-and-narrow.

But most people are reactionary: at first they fear "losing something" in a search-and-narrow setup, even though they had already ditched "web directories" for search engines a decade or more before. So the evolution is terribly slow.

Do you want to know the desktop of the future, then? Active documents, like org-mode, where you can click links that execute code (elisp:) or specific actions (ol-* link types). Something we partially had in the past, with PostScript GUIs, or even before with Smalltalk-based workstations, and to a somewhat limited extent with the LispM UI and the Plan 9 UI, where you can compose menus, buttons etc. just by typing a name in the right place.

The modern web winning over widget-based UIs is the proof that we need document UIs; but the modern web means document UIs in someone else's hands, while org-mode is local. The future will be trying to bring back some local aspects of the modern-web style, and the desktop will become more and more a 2D REPL.

The commercial gimmick of Windows copying KDE, having earlier tried to patent some GUI elements to deter FLOSS devs, does not really count for much in the evolution timeline, IMVHO.


(2022)



