[flagged] Is there any future for the GTK-based Desktop Environments? (ludditus.com)
72 points by rohshall on April 8, 2022 | 138 comments



So much fighting over how best to draw the UI that will launch various instances of the Chrome engine.

Want to know why everything is written in Electron or as a web app? Because literally every desktop platform over the last decade has decided to play design and engineering games instead of just building a really solid, stable base to write apps on top of.


I don't think that's really the reason for Electron's popularity. While GTK and Qt are cross-platform, my (limited, to be fair) experience with building for multiple platforms was that it was a huge pain in the ass, and Windows and macOS platform integration often felt clunky and out of place.

The big draw for Electron is that we have way way way more web developers out there these days than native-code desktop UI developers, so it's easy to leverage those people and get results fairly quickly, even if the resulting apps don't really conform to any desktop's standard look-and-feel (which I guess avoids the uncanny valley effect of cross-platform toolkits that try to be native, but fail in subtle and not-so-subtle ways). People can even just (sorta) repackage their existing webapp, and end up with more or less a single code base for all desktop platforms + web.

I do agree with you that most of the desktop platforms are pretty annoying to develop for if you want a solid, stable base, but I don't think that's the prevailing reason for Electron's uptake.


My experience of building an Electron application for multiple platforms was similarly painful. When you are writing platform-specific code, you have to write platform-specific code regardless of whether you use Qt or Electron. Something has to interface with the operating system. Either you use someone else's cross-platform code, or you write your own.


It's way easier to develop and customize a user interface with the building blocks found in a browser than to do the same with GTK or Qt. To the extent that those have gotten easier, they are emulating HTML+JS+CSS.


I don't know about Qt, but GTK's docs at least are terrible. Even more so when you want to develop in a language that isn't C.


I'm not sure how they've kept up, as I haven't used Qt in some time, but back in the 4.x days the documentation was absolutely stellar. Extremely thorough and almost every non-trivial feature came with code samples. I couldn't for the life of me understand why anyone was using GTK at the time.


My point is not about lack of usability or documentation, but rather that the nature of HTML+CSS+JS lets you do things you can't do as easily in either GTK or Qt. In the latter, yes, you can easily achieve a certain kind of UI, but not deviate from it, and even formatting/displaying a significant amount of information that doesn't fit into one of the classic widgets can be a bit of a pain.


That depends a lot on what you're used to. It takes me far less time to implement a custom widget that can format/display a significant amount of information that doesn't fit into one of the classic widgets than it takes me to fiddle with CSS to get it to center things correctly ¯\_(ツ)_/¯.


Qt also has QML, which allows you to do things similar to what you get with the combination of HTML, CSS and JS.


Have you tried creating a custom date picker in those? I am fairly sure web tech would come out last, given the infinite number of divs needed here and there.


Qt's docs are excellent, some of the best I have used. They have high-level overviews, examples, and everything you could ever want.


A big part of why browser engines are so unwieldy is the implied compatibility with every website ever, right?

But the "way way more web developers" you mention are not, at least in the context of Electron apps, interested in that feature except tangentially.

So while everyone knows a full web-ready browser engine is impractical to design from scratch, maybe a simplified engine that supports a conservative subset of best-practice web-style design is in order?


There are a bunch of “small subset of HTML” UI toolkits already. The first that comes to mind is Sciter: https://sciter.com/ which bundles the QuickJS interpreter, built-in support for React-like syntax, and an HTML/CSS-like markup and rendering environment. You’ll see on the homepage that it’s used in many products you might have heard of.

The issue here is re-use - Sciter is small and very fast, but won’t run an arbitrary existing web app that targets Chrome. Maybe you could argue for a middle ground, adding more HTML features to Sciter until it can run “most” things… but you’d end up back at the whole banana.


That's a really interesting idea, but in practice you might find that the "conservative subset" ends up being different for every toolkit and developer.

Maybe something a bit opinionated, like a React Native-only runtime, could fit the bill here.

Or in a sense, is this what Flutter is?


Tcl/Tk is still both easier than Electron and will produce nicer apps. I think the Electron thing is the result of the popularity of web development more than of native API failures (other than mobile development being an incredible pain in the ass).


And then the calculator app, which is native, is wrapped up in some sort of isolating container that makes it take as long to start up as an electron app.


I sometimes, by accident, launched LibreOffice Calc. It launched faster than the calculator app :). I believe they changed that. The calculator app isn't shipped in a snap package anymore.


You're using the wrong isolating container ;) There's only one that is quite slow at startup, the other isn't.


Just imagine you need a container for the calculator.....


OpenBSD went that route, and when they introduced pledge(2) and later unveil(2), they applied them to every single program in the base system (over the course of a single release cycle!).

There's absolutely zero reason for bc(1) to accept network connections, or for grep(1) to execve(2) into arbitrary programs. But both of these programs need to process and interpret arbitrary input, which makes them potential targets for exploits.

You don't technically "need" security, just like you don't "need" seatbelts... until you actually are in an accident.

https://man.openbsd.org/pledge; https://man.openbsd.org/unveil
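
As a rough sketch of how a calculator-style tool might use them (the path and promise set here are illustrative, not taken from any real program):

    #include <err.h>
    #include <unistd.h>

    int
    main(void)
    {
        /* Make only one (made-up) read-only path visible, then lock
         * further unveil calls: the rest of the filesystem disappears. */
        if (unveil("/etc/examplerc", "r") == -1)
            err(1, "unveil");
        if (unveil(NULL, NULL) == -1)
            err(1, "unveil");

        /* Restrict to stdio plus read-only file access: no sockets,
         * no execve(2), for the rest of the process lifetime. */
        if (pledge("stdio rpath", NULL) == -1)
            err(1, "pledge");

        /* ... read expressions from stdin, print results ... */
        return 0;
    }

A pledge violation doesn't return an error; the kernel kills the process, which is exactly the property you want against an exploited parser.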


You're really comparing something like pledge and unveil with Flatpak and Snap? Completely different goals...


I don't think the security goals are that different, but you asked why I would want to sandbox a calculator. Well - why would I not want that?


Look, you gave the OpenBSD example, and that's the right way to do it.

Flatpaks are for packaged software deployment; those are two different things.

Why the need for a sandbox if you could do it much more cleanly with things like pledge? But in typical Linux fashion, just put another layer on top of the pile of garbage so it stops stinking for a while.

>Well - why would I not want that?

Then please start with the most obvious application, sometimes called the kernel.

Instead of rigorously integrating something like SELinux, they throw layer upon layer of half-baked "sandboxes", up to the point of separating applications with Xen (Qubes OS); then you find out about Meltdown, and we are back in 1990.


Pledge is a bad example: it isn't applied to a lot of packages in the ports tree, and it's infeasible to do it for every program. In the end you'll find yourself in the same situation as Linux: another layer on top, with daemons implementing blanket security policies using pledge on behalf of the programs. Kind of like... a sandbox.

SELinux is also a bad example: even if you decide to use it as the underlying technology, you still need to implement a sandbox with various rules on top of it. SELinux does nothing without those rules.


>SELinux is also a bad example

No, why? If you have the right (and correct) rules, SELinux absolutely acts as a "sandbox"; that's exactly what I meant. A sandbox doesn't need to be another layer of software. Run in your "namespace", where you can only create/access/execute/read your ports, files, and memory... that's it, that's a sandboxed application.

For example, that "namespace sandbox" is standard in Plan 9/9front... without any additional software, just the filesystem and 9p.

https://dwalsh.fedorapeople.org/SELinux/Presentations/sandbo...


Yeah, but you still have to set up the sandbox rules and maintain them, plus the tools, if you want a user-friendly sandbox or a nice GUI to manage it.

>For example that "namespace-sandbox" is standard in Plan9/9front...without any additional software, just the filesystem and 9p.

This is what I mean, Plan9 style mount namespaces are also available in Linux and are preferable to SELinux for containers and sandboxes because they're actually simpler and less trouble.
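
For the curious, a minimal sketch of that Plan 9-ish idea on Linux using plain mount namespaces (no SELinux, no extra daemons; needs root or a user namespace set up beforehand):

    #define _GNU_SOURCE
    #include <err.h>
    #include <sched.h>
    #include <sys/mount.h>
    #include <unistd.h>

    int main(void)
    {
        /* Give this process its own copy of the mount table. */
        if (unshare(CLONE_NEWNS) == -1)
            err(1, "unshare");

        /* Make mounts private so nothing we do propagates back out. */
        if (mount(NULL, "/", NULL, MS_REC | MS_PRIVATE, NULL) == -1)
            err(1, "mount");

        /* ... bind-mount only what the sandboxed program should see,
         * then drop privileges and exec it ... */
        execlp("sh", "sh", (char *)NULL);
        err(1, "execlp");
    }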


>and are preferable to SELinux for containers and sandboxes because they're actually simpler and less trouble

Hell YES...high five!! ;)


OpenBSD's pledge and unveil are intrinsic, while Linux's crappy sandboxes are extrinsic...


Just imagine getting pwned by malware bundled in a calculator...


What is the connection here? Why would you install a calculator that doesn't come from your distribution?


Because there is a push for software developers to be able to package directly for end users. Without devolving into the usual flame war of whether it's a good idea or not, once you install any piece of software you incur some security risks. It's not like distro maintainers are a 100% guarantee there won't be a backdoor in the binary, and compiling software from source doesn't free you from risks either, unless you code-review everything you install.

My point is, containerisation on Linux isn't necessarily slow—in fact it's unnoticeable if implemented correctly—and I prefer to default to having a decent amount of security by containerising as much software as I can, whatever the origin. Including, and especially software like the calculator, since it should not be able to do anything more than show a GUI and add numbers together.


That's not what I asked: why do you trust a no-name dev more than the distro your kernel comes from? And do you really think Flatpak prevents you from running packaged malware?


Why do you assume the flatpak comes from a no name dev? My calculator flatpak comes from the same people who wrote it, and I obviously trust them, otherwise I wouldn't be using their application.

So why should I trust them less than my distribution?


Ever used npm?


No, and how is that even relevant?


>So why should I trust them less than my distribution?

Just use google -> npm malware


I said I'm not using npm.

With my calculator Flatpak I only have to trust one person, and to a much lesser degree, because they declared that the calculator can't access my personal files to begin with. The same app in my distribution's repository has full read-write access to all my user's files, network access, and much more. So yeah, I trust it more.

Distribution maintainers are nothing but middlemen who don't even audit the code they package, so there's nothing I gain from them.


In my particular case: I don't want to wait months or even years for updates to arrive in my distribution. With the Flatpak version I get updates usually on the same day they are published, since the Flatpak is also maintained by the calculator developers, and I also get a calculator which can't access my ssh keys or the internet, due to sandboxing. And in case of any breakage, I can also quickly roll back to the last version.

Without Flatpak I'd need to use some rolling-release distribution, where not just a few applications but also the rest of the system gets updated quickly, which I'm not interested in.


username checks out ;)


You mean Snap packages? I think the calculator is written in Vala and was pretty performant when I used it in the past.


I once installed Linux with Gnome 3 on a USB stick and used that as my primary operating system for six months. My number one application was a web browser. I don't think I ever bothered installing anything other than code editors, language runtimes/compilers and steam. Gnome works just fine.

The only annoyance was the breakage that Wayland introduced in 2016. Suddenly you couldn't screen-capture the whole screen anymore. It's kind of the same issue as switching from Python 2 to 3, but the difference here is that it's a major upgrade that is worth the effort. The reasons why Python 3 isn't backwards compatible are incredibly petty for the most part. However, Firefox has received Wayland support for screen sharing, and some people developed a plugin for OBS. Wayland works just fine now too.


Qt has stayed the same and KDE has been excellent; this is a problem with GTK and GNOME-based desktops.


Rhetorical question? It's cheap and quick. Can be outsourced to the lowest lifeform that legitimately describes itself as a programmer.


>That KDE Plasma 5 is finally usable and stable, after having decided to stop pushing the ridiculous plasmoids on the user (were they liking the Windows Desktop Gadgets, or they were simply idiots?), is like having an old whore finally becoming a respectable woman. It’s hard to forget the developers’ idiotic decisions, though. I don’t like to be Microsoft’s guinea pig, why should I be KDE’s?

This attitude describes why there is no future for Linux desktop.


The analogy involving whores becoming respectable women is kind of messed up. I'm not sure if I'm going to trigger the anti-political correctness crowd by saying something that's politically incorrect for the anti-political correctness crowd.


That's where I stopped reading.


Not all random users with a blog and an opinion are in charge of the future of the Linux desktop, thankfully.


It’s OSS; you are no one’s guinea pig. If you don’t want to participate, you don’t have to use it. If the ones who do participate want to try other things, switch, and don’t complain like an old male whore on the sidelines.


>If you don’t want to participate you don’t have to use it

This has nothing to do with OSS though. You can switch from a closed-source solution too.

The problem is your investment in and attachment to certain things.


Yeah, the difference is that if he wants to change something for the better, he can try.


In case it is something relatively minor - yes, fork and rock. Though this is a tough decision in itself - you will have to keep supporting your changes, merging upstream changes, etc. So in many cases this is a mostly fictional option.

In any other case any person will have a long list of better things to do.


Given non-GNOME developers' frustration with increasingly GNOME-centric releases of GTK, and given some of the licensing issues surrounding Qt (such as the one-year delay for FOSS releases of the toolkit: see https://news.ycombinator.com/item?id=25748335), I believe the time is ripe for a BSD-licensed GUI toolkit for X11 and Wayland that uses server-side decorations and tries to be as unopinionated as possible in order to allow for different visions of the desktop instead of imposing a particular vision through the toolkit. In fact, starting such a project is something I've been considering for a long time.

If only GNUstep (http://www.gnustep.org/) had been the toolkit for the Linux desktop instead of the Qt vs. GTK/GNOME wars; this would have saved a lot of drama regarding toolkits. Back in the mid-2000s it appeared that the matter was settled during the KDE 3 and GNOME 2 days; a lot of people seemed to be happy with these desktops, and those who preferred non-GNOME GTK desktops and applications had plenty of options. However, ever since GNOME 3 the GNOME developers have been able to mold GTK to their vision at the expense of other GTK users who didn't share it. Qt is less opinionated, but its stewardship is up to the whims of whatever company owns it. GNUstep is free from these issues, but it never received mass adoption.

Come to think of it, maybe instead of a new toolkit we should look into GNUstep some more and consider investing our developer resources in building a GNUstep desktop ecosystem. But I'm curious about what other people are thinking.


> uses server-side decorations and tries to be as unopinionated as possible

You are contradicting yourself. Some people think decorations should be server-side; others think client-side. So it seems that which is better is a matter of opinion.


Good point. "As unopinionated as possible" should be the emphasis. Perhaps the toolkit could support both types of decorations, with flags that enforce which decoration policy is used.
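
Something along these lines, purely as a hypothetical API sketch (none of these names exist in any real toolkit):

    typedef struct ToolkitWindow ToolkitWindow;

    typedef enum {
        DECOR_FOLLOW_COMPOSITOR,  /* use SSD when the compositor offers it */
        DECOR_FORCE_SERVER_SIDE,
        DECOR_FORCE_CLIENT_SIDE
    } DecorPolicy;

    /* The application (or a user-set environment variable) decides;
     * the toolkit just honours the choice instead of imposing one. */
    void toolkit_window_set_decoration_policy(ToolkitWindow *win,
                                              DecorPolicy policy);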


Actually, it’s not really about doing it server-side or client-side. It’s about how to make the client render on the window border. CSD is obviously simpler, but that’s when you do bitmaps…


There are several "unopinionated" toolkits out there already. Nobody uses them, because it is all a matter of popularity. The most popular toolkit by far is called HTML+CSS+JavaScript. You might hate it, but the point of no return was passed ages ago. So if you want to make a difference, implement a browser with a trusted zone for local applications (which are allowed to occasionally call C routines to get native speed) that is not controlled by Apple or Google.


Eh, no. Qt 5 is THE serious cross-platform toolkit. Everything else is a turd that deserves to be shunned. Microsoft Teams is seen as a disaster even by die-hard MS users.


It doesn't have to be, though. It's not bad because it's HTML+CSS+JS; it's bad because it was poorly written. You can write poor code in any language/toolkit. You can make beautiful and fast apps with HTML+CSS+JS.


You can also make safe and reliable applications in C++, or fast applications in Python, or readable code in 1990's Perl - it's just very difficult, because those languages lend themselves very poorly to those things, and that's why good developers avoid using those languages when those traits are needed.

Heck, you can make fast code in Python by embedding a DSL and writing a native-code compiler for it. Turing completeness means that everything is possible, which means that "possible" isn't interesting - practical is.

Judging by how few fast webtech applications there are, it seems like making them is not very practical.


>readable code in 1990's Perl - it's just very difficult, because those languages lend themselves very poorly to those things

Perl is not Forth. I know y'all know Perl from code-golf contests and one-liners, but look at some games written in Perl and tell me Perl is not readable.

Better, get the free books from O'Reilly:

https://docstore.mik.ua/orelly/index.htm

The 4th version is updated enough.


Alright, maybe Forth would have been a better example for that point...



The tone of that article makes sure that I stop reading after 2 paragraphs.


I did not last even two paragraphs: I clicked through on "Ubuntu MATE cannot be trusted" and found something better summarised as "I tried getting away with calling people idiots on Ubuntu MATE forums in a bizarre social experiment and cannot deal with them not putting up with that."


Same, it's disturbing:

* OP: "If a distro with 60k packages cannot accommodate one or three extra packages for an official flavor, then someone is an idiot, and that idiot is not me."

* Moderator came and asked them not to talk like this.

* OP: "Censorship, simply because I used the word “idiot” without naming anyone in particular!", and OP continued.

* Moderator closed the thread.

* OP: "Then the little Stalin closed my thread".

This is not serious. OP didn't name anyone, but hid behind a general attack on the package maintainer (maybe this maintainer is working alone). Hiding one's attitude behind the literal meaning of words is something my 9-year-old son is starting to do, and I try to explain to him that it's not a valid excuse.

OP should revise their social expectations.

It's hard enough to keep big open source projects on the rails without childish attitudes like this.


I'm anti-censorship and post ideas controversial enough to get banned from here pretty often, but personal attacks are just stupid. There's no reason for that.


Yeah, the tone of the article makes me think that there might be a great future for GTK-based Desktop Environments ahead.


Yes. GNOME is better than ever; GTK4+Libadwaita simply does one thing: look perfect.

The HIG is excellent; without GNOME I would be maybe 40% as productive as I am with it.


It may look good --- beauty is in the eye of the beholder --- but its ergonomics are bad. All those hamburger menus, all those disabled configuration parameters, the non-modular architecture, the poor state and quality of GNOME Shell extensions. I really try to like the modern GNOME, but they make it very hard. If your goal is not staring at some graphic design but having tools that help and do not stand in the way, then GNOME feels just wrong. The primary reason why I still try to work with it is its integration with system services.


Completely different experience here. At least for my purposes everything is exactly how I want it. I really have to think hard to come up with an issue with the GNOME environment, except that some applications aren't ported to GTK4 yet.


This. Now if only people would start investing in UI/UX on the Linux desktop.


Honestly, the real hurdle now seems to be the support for paid apps. It's such a taboo in the Linux world, for some reason [1], but thankfully Flatpak and Flathub are working towards that goal [2], which ElementaryOS already explored.

I want small, high quality apps like macOS has. I want to make a living building open source software that can be supported and bought commercially—no, donations and sponsoring isn't good enough. I want companies being able to sell their software on Linux through the integrated app store, now that it's finally become good enough.

1: Plenty Linux users think FOSS means never having to pay for software, or that commercial software will kill us all and it's morally wrong. I strongly disagree.

2: https://discourse.flathub.org/t/seeking-contractors-for-work...


It's "supported" just fine now. You certainly don't need some centralized app store or weird quasi-distro. As you say, it's 100% a cultural problem. No amount of technical effort will change that Linux users expect their software to be open source, and that in practice you cannot sell open source software.


1: I don't get it either. For some reason Krita is okay, but selling your app in some sort of Store is not...


Did they quit drawing the massive client side decorations and go back to a file/edit menu with GTK4+Libadwaita? That was a massive regression.


I'm using KDE5 at the moment, and it's essentially problem-free. It's got more features than I know what to do with, but this somehow doesn't detract from it but rather enhances it.

KDE5 never seems to be the default choice of desktop environment for distros like Manjaro, but you're unlikely to go wrong picking it as your default option, if you're given that option.


Doesn't SUSE have KDE by default?

I use it on FreeBSD myself. It's an amazing desktop. I don't like gnome at all. It uses too much screen space (huge touch style window decorations) and doesn't have enough customisation for me. But KDE is perfect.


OpenSUSE doesn’t have a default. SUSE uses and supports GNOME.


According to their own page KDE is the default desktop: https://en.opensuse.org/KDE

> Default Desktop

> Plasma Desktop from KDE is the default workspace on openSUSE.


Strangely enough, from their desktop FAQ: https://en.opensuse.org/openSUSE:Desktop_FAQ

> openSUSE installer provides three officially supported desktop options. There is no default choice

Coming at this from another angle, their installer isn't super opinionated. The KDE radio button is at the top of the list on the desktop selection page.

Does that make it the default choice? I guess it does by some definition.


> KDE5 never seems to be the default choice of desktop environment

Apart from KDE Neon, and Garuda, and...


Isn't Slackware sticking with KDE as the default environment?


Yes, but in Slackware you can deselect the KDE and KDEI package series, and then XDM will launch Xfce as your DE just fine.


Writing GTK+ programs in C is rather unpleasant, in my limited experience.

Despite the set of signals supported by widgets being a fixed part of the documented API, there aren't even explicitly type-checked function signatures for the callbacks appropriate to those signals. The docs tell you what the function signature needs to be for a specific signal like "clicked", but since all the callbacks basically get thrown into a generalized dictionary of void * keyed by signal name hanging off GObject, there's literally ZERO type checking of the callbacks your program installs on signals.

Segfaults because your callback had the wrong parameter types/arity for the signal you connected are the norm, and that's arguably the best scenario. You can also install a callback on a signal that expects a gboolean return while your callback returns void; welp, no segfault, but unpredictable behavior. It's kind of awful: there won't be any compile-time errors when making these mistakes, despite this being C.

It feels like someone learned how to implement a dictionary, then made GObject and built a GUI on top of it, with dictionaries all the way down. I don't want runtime dictionaries representing any set of things known at compile time. Fine, use dictionaries for runtime-defined signals, but why the hell am I suffering this way for things fully known by the toolkit before I even started writing my GTK+ program? Sigh. I may as well just write JavaScript.
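
To make that concrete, here's a small GTK3-style sketch (the app id is made up) where the "clicked" handler has a completely wrong signature and the compiler stays silent, because G_CALLBACK() is just a cast:

    #include <gtk/gtk.h>

    /* "clicked" expects: void handler(GtkButton *button, gpointer data).
     * This takes an int and returns gboolean: completely wrong. */
    static gboolean on_clicked(int whoops)
    {
        g_print("clicked? %d\n", whoops);
        return TRUE;
    }

    static void activate(GtkApplication *app, gpointer user_data)
    {
        GtkWidget *win = gtk_application_window_new(app);
        GtkWidget *button = gtk_button_new_with_label("Boom");

        /* Compiles without a single warning: the callback is cast to a
         * generic void (*)(void) and stored by signal name at runtime. */
        g_signal_connect(button, "clicked", G_CALLBACK(on_clicked), NULL);

        gtk_container_add(GTK_CONTAINER(win), button);
        gtk_widget_show_all(win);
    }

    int main(int argc, char **argv)
    {
        GtkApplication *app = gtk_application_new("org.example.sigdemo",
                                                  G_APPLICATION_FLAGS_NONE);
        g_signal_connect(app, "activate", G_CALLBACK(activate), NULL);
        int status = g_application_run(G_APPLICATION(app), argc, argv);
        g_object_unref(app);
        return status;
    }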


Modern versions of GTK+ support Rust, Python and JavaScript. I love GNOME, but I think there's a decent group of people that would like a Rust-based GTK+ desktop environment.

https://github.com/gtk-rs/gtk3-rs


This isn't some C problem, this is a GTK+ problem.

There's no need to be throwing all this up-front known crap into a runtime constructed dictionary.

If the widget has a "clicked" signal, there should be a typedef of the callback function appropriate for that signal and the widget struct should have a clicked_cb member for a list of callbacks of that type. Then there should simply be a unique function for installing the callback on that signal on a widget. At least then the compiler can tell you when you're mixing up callbacks<->signals for API-defined signals. It's not even complicated.
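
Roughly this kind of thing (hypothetical names, just to illustrate the shape of it):

    typedef struct MyButton MyButton;
    typedef void (*MyButtonClickedCb)(MyButton *button, void *user_data);

    struct MyButton
    {
        /* ... widget state ... */
        MyButtonClickedCb clicked_cb;    /* one typed slot, for brevity */
        void             *clicked_cb_data;
    };

    /* One dedicated connect function per documented signal: pass a
     * callback with the wrong signature and the compiler rejects it. */
    static void my_button_connect_clicked(MyButton *button,
                                          MyButtonClickedCb cb,
                                          void *user_data)
    {
        button->clicked_cb = cb;
        button->clicked_cb_data = user_data;
    }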


The C++, Go, Rust and D bindings already do that, as does Vala, the information is all there to generate the necessary code in a statically typed language. It still wouldn't do anything for you if you want to use a dynamic language or use the xml builder functionality.


>Rust-based GTK+ desktop environment.

Cosmic desktop.

https://blog.system76.com/post/655369428109869056/popos-2104...


A fast Windows 95 would have been fine. Instead the devs lost their minds with design craziness. VS Code, Chrome, PyCharm, etc. mean it's not a huge deal - they are all cross-platform.


Not for people who are not mentally stuck in the '90s. I am very glad things have progressed and have left the ugliness and usability problems of Windows 95 in the past.


I'm in the opposite camp. I recall great monolithic software for media production from the 90's and think our modern web-centric view is "dumbed-down".

There's a time and place for minimalism, but the term "power user" also has a meaning: your accumulated user-base that didn't leave.

Windows post-Vista has forced me and countless others into never bothering again. A '90s MCSE certification means nothing when I blankly gaze at the visage of a "modern" Windows UI that tells me nothing of import. M$ basically told all their power users to go away.


I'm so harsh with Windows 95 here because back then I found out about other desktops (e.g. on Unix) that had better UI: several workspaces, the ability to drag windows not only by grabbing the title bar, clicking the scroll bar to jump to the place you need (instead of having to drag it all the way there), launching programs by means other than a never-ending hierarchy of menus. Not to mention that anything not Windows was more stable. Also, Unix/Linux was multi-user and solid enough to run real services on the same machine (even if it was the same little piece of junk your Windows 98 had come with in the first place).

Exploring the world beyond Windows felt like finally seeing how the real pros do it. That is why I absolutely can't understand the starry eyed reverence people have for sorry "OSes" like Windows 95/98.

TLDR: Windows 95/98 wasn't the pinnacle of UI even when it came out, so it definitely isn't today.


The discussion is about UI here, so the fact that Windows 95/98 was built on shaky foundations is beside the point.

Other GUI systems may have had this or that feature over Windows 95/98, but none matched its cleanness, consistency, discoverability, functionality, and overall usability.

> TLDR: Windows 95/98 wasn't the pinnacle of UI even when it came out, so it definitely isn't today.

I'd argue that it was, and that it still is near, if not the pinnacle today.


" left the ugliness and usability problems of Windows 95 in the past."

And I think this is why Linux on the desktop is going no where soon. The Win 95 interface / control panel / taskbar / start menu was very discoverable for many. In corporate environments it was THE go to interface forever. Yes, KDE and its plasmoids, Windows and it's "live tiles" are the new hotness, but for folks just looking to work the "cool" fly in fly out effects, secret touch points (move mouse to side or top of screen etc) etc are just losers.

You could RDP into a Win95 box set to best performance and basically not know you were remote. All the fancy animations are actually a negative in many environments.

Microsoft itself keeps on trying out new things, but user pressure gets them going back to older ideas.


> ugliness and usability problems of Windows 95 in the past.

You couldn't be more wrong. Windows 95 was built around solving usability issues.

My Fluxbox+ROX environment has a Windows 95-style taskbar, albeit 10% shorter on the left and right sides, so I can right-click for the menu just fine.


Yep, Nautilus is "unusable" without the compact view.

Idiotic hyperbole like this is where I stop reading. How can I take anything beyond that seriously?


Septembrians don't know what to take seriously. These kinds of strong opinions feel useful for people new to the scene so they can certainly influence a lot of people.

Hopefully they can see comments like 'gtk+3 wasn't even backwards compatible with gtk+2' for what they are. (i.e. no shit, hence the major version change).


I was an Xfce core maintainer from around 2004 to 2009. In addition to my work on Xfce, I also built a media player, and played around with some other projects that never really went anywhere. GTK2 had its warts, but it was generally fairly easy to build things, and it was fairly easy to work around things when something didn't work the way I wanted.

GTK3 changed a lot of this. Many things that were previously public were now considered private implementation details. Much of that was perfectly understandable (necessary, even), but occasionally this meant that there were things you could do in GTK2, but just couldn't in GTK3 (or could, but doing so was a huge pain in the ass). GTK3 deprecated or outright removed some things in GTK2, and it wasn't often clear what the supported replacement was supposed to be. The theming/styling system was completely replaced, which meant redoing themes, and any app that used theming APIs had to have that part completely rewritten.

On occasion I would report issues to the GTK folks, or would watch issues others had reported that I was interested in, and responses were all over the map, from helpful and welcoming, to actively hostile. What became very clear was that the GTK developers, over time, started only caring about the GNOME use cases. If you were using GTK for something non-GNOME, the GTK developers did not care about you, and would not reverse decisions that (incidentally or otherwise) broke your ability to do what you wanted to do.

I get it: as an open source developer, you have your own priorities, and you're usually not beholden to your users for features or any particular work. I certainly did not care for it when people not contributing to my software would demand I do things for them. But man, did it suck to constantly feel like the rug was getting pulled out from under me.

I haven't touched GTK4 yet, but judging from what I've read about it, I really don't want to. I used Qt a bit around the early '00s, and kinda liked it, but didn't love it (granted, I'm sure things are unrecognizably different now, for better or worse). Regardless, these days I have zero interest in writing C++. (To be perfectly honest, I have little interest in writing C either.) My hope is that one of the pure-Rust UI toolkits will take off, but at the same time that doesn't help existing C/C++ projects, since new-language rewrites are painful and often counter-productive. And on top of that, it just sucks to have more UI toolkit proliferation, which makes it harder to style things so the various apps on your desktop look reasonably consistent with each other.

Of course, my happy place would be Xfce rewritten to use a modern UI toolkit that is not GTK (or Qt, for that matter). (I would prefer Rust, but see above re: new-language rewrites.) But if we all thought the GTK2->GTK3 effort for Xfce took a really long time, rewriting using a new toolkit (let alone a new language) would likely take a decade. We were always understaffed when I was involved, with Xfce being a spare-time, volunteer, passion project for everyone involved (I believe one guy was paid for a while to work on Xfce-related things, but not for all that long). Not sure how much that's changed, but I wouldn't expect much corporate backing, despite Olivier (the project lead) working for Red Hat.


This really resonates with me too. I have a semi-big (but not so popular, at least not any more) application [1] that has gone through GTK+ 1.x, been ported to 2.x, then ported to GTK3, and that I feel I "should" port to GTK4 since it's out.

But I just can't be bothered to even try to prioritize that since all I read about v4 is anger, weird new libraries with strange names ("libadwaita" sounds like a Disney character library) and stuff.

It really felt back in the day like the developers bringing out GTK+ 2.x were, like, on the case and really doing good, clever work to bring out a solid platform for application development, and for a while (at least to me, but I was very biased even then) it was the mainstream/default/major choice for Linux GUI application development. Sure, some people liked Qt, but GTK felt like the home turf.

[1] The strangely-named "gentoo" file manager; I don't even think the site is up, sorry.


You can just ignore all the lame social media drama. The main reason to use GTK4 is probably the performance fixes and hardware acceleration. If you don't care about that and the idea of porting seems unfun, then yeah, don't bother.

But you're right, there were a lot more developers and funding involved in the GTK2 days.


I believe a start would be to write a Rust wrapper around Qt and GTK (and other platforms' native stuff), probably using out-of-process integration.


There are already Rust wrappers for GTK and Qt.

That doesn't really solve the problem, though. You still have all of GTK's baggage, just usable from Rust.


Just use Tk, problem solved :-) - https://tkdocs.com/


I agree with some of this, though it is perhaps a bit overly harsh at times. Frankly, I’m not terribly impressed by the direction that Qt has taken either, though. This puts the open source desktop in a pretty unfortunate place…


I'm not that enthused about where Qt is heading, but it's arguably in better hands than GTK+, that's for sure. First, Qt isn't strictly tied to KDE - it has a life of its own, without KDE libraries at all, and a vibrant ecosystem that uses it for automotive and cross-platform desktop apps. Second, while the Qt Company has decided to stop releasing LTS releases as open source, KDE has a binding agreement that allows it to release Qt under a BSD license, should the Qt Company ever intend to close it down (https://kde.org/community/whatiskde/kdefreeqtfoundation/). Since the whole LTS shenanigans happened, KDE has shown itself to be quite competent at maintaining its own set of patches for Qt5, which most distributions have now adopted.

If Qt were ever to become closed source again, it would probably face a very strong competitor in a BSD- or Apache-licensed fork from KDE. The Qt Company sells its licenses mostly to people who want to use GPL-covered Qt modules in closed source apps, so they really cannot let that happen.


Yeah, the KDE Free Qt arrangement is pretty unique and makes it significantly less likely that there will be a rapid dropoff in Qt open source. This became important because the modern Qt company is awful.

That said, I still think the direction of Qt has kind of sucked. QtWidgets is mostly stagnating, and Qt Quick is not really appealing to me…


On the bright side, PySide has finally been adopted by Qt as its official bindings, and it has made PyQt largely redundant. I find it simplifies Qt app development on the desktop a lot, and it has the potential to attract a lot of new developers to the platform.

While QtWidgets is stagnating and Qt Quick is arguably very "HUD"-centric and barebones for desktop development, there are nice portable UI toolkits such as KDE's Kirigami popping up that are very usable even without KDE.


As a consumer of desktop environments (and not a GUI application developer) I think that GTK based desktop environments are in the best place they have ever been and Linux as a DE for 'normal' people is a reasonable option in a way it never has been in the past.

Gnome on Fedora is a solid experience. It works well out of the box, it's performant, intuitive, well designed and good looking. I'm sure it rubs some developers the wrong way (there are always some) but based on my (small sample size) observation of people using it, it seems to be intuitive. It's not just Gnome - Wayland is part of the picture because it makes things like multi-monitor more reliable too. We're past the performance issues that the current version of Gnome had to start with, and Wayland is continually having the issues eliminated (once the last few screen sharing ones are dealt with there won't be loads to complain about).


Feels like an 80/20-class problem: 80% works, then it takes the same effort to close 80% of the remaining 20%, and recurse enough times and you wind up burned out. Now add rewrites, factionalism, and dissent.

They don't love and nurture each other and they don't love their users either.

I used to care. As a user I can't buy into it any more.


A preface: I use GNOME and KDE on separate devices. I have tried i3 and intend to return to it when graced with more time to customise it. Importantly, I used GNOME before GTK3 happened.

I mention the above as preface because I see these kinds of posts on this website a lot and they are very misleading, and I believe I have the credentials to explain why.

If you skim the original article you could be forgiven for thinking that it is a well-researched and justified piece of writing, but I would like to challenge that. In particular, I note that a lot of the 'evidence' for the claims of GTK's downfall hinges on the author's preexisting expectations (and bizarre tangents - who gives a shit about title bars one way or another in a conversation about software maintenance? Scope, pls).

One very revealing assumption by the author is the idea that a major release of a community project such as GTK3 should be free of bugs. This seems very obvious to an end user. 'Of course a major release should be stable! What are you saying? I need my servers to be reliable so I can feed my kids! Not everyone lives in fantasy land like you, plaguepilled!'

But there are actually two assumptions being made here. The first is that major releases of FOSS should be free of bugs and not corrected with time, and the second assumption is that this is no harder to accomplish than for a commercial entity.

However, crucially, the open source software community has had a huge shortage of maintainers for at least a decade now. Look up your favourite utility, then look up the team supporting it: often it is one person who is very, very over it. This can contribute to the phenomenon of fixes that come after the major release, not with it.

In fact, this single fact can explain why major projects that 'weren't broken' seemingly 'choose' to self sabotage. The GNOME team are not stupid. They are heavily under-resourced. The same can be said for KDE and even the Linux kernel teams. If that bothers you, it is far more productive to actually contribute your time to fixing the issue than simply describing it while waxing lyrical about how all these modern devs have simply 'lost the way'.

But I am here to contend that the idea of flawless FOSS is itself bizarre. Wealth inequality is a major issue in 2022 and proposing that the existing body of skilled software engineers should provide enterprise level software, consistently, without pay, is insane. If you care about software support and are not paying, and you do not yourself write FOSS software, you are at best a hypocrite, and at worst attempting to exploit people.

I am a fan of FOSS and want it to be better, but this article is not the way to that future.


> If that bothers you, it is far more productive to actually contribute your time to fixing the issue than simply describing it while waxing lyrical about how all these modern devs have simply 'lost the way'.

I agree with that sentiment in principle. In fact, I advocate the same myself.

However, playing the devils advocate, all an individual non-core developer can do is try to smooth out the rough edges.

If the complaint is "you removed this option", it doesn't matter if the complainer provides a patch to put it back - the powers in charge of that project already have the code (it was there before they removed it, after all), so providing a patch to add it back in is pointless.

The non-bug complaints in general (still devils advocate) are not that the software is missing a feature, it's that the direction it is going (or has gone) in is alienating the existing users.


Thanks for responding. I can see your angle but I don't think I agree. I'll try to respond in good faith but let me know if I've made a misapprehension.

In regards to the topic of user base alienation: I do think it is an important metric to keep track of, but whether it should be prioritised is contextual. In the case of GNOME, they have frequently made a point that accessibility is a large focus for them. Accessibility is not the same as making their core user base familiar with their software.

Said another way, I believe that making an argument that even with their limited resources, GNOME are not sufficiently servicing their users is ignoring the purpose they wish to serve. Their goal is not to minimise change - that is more in line with xfce, i3, or even DWM.

In contrast, I believe attending to existing users is important for KDE, because their core goals are centred on choice. Therefore, it would be antithetical to their goals to restrict choice.

To summarise my angle, alienation is sometimes important, but not here. It would be more expected if GNOME were better staffed (and so extended scope and service guarantees), which brings me to my next point.

You made an interesting point about how a cranky individual that lamented the loss of a feature would not be able to 'change' the upstream attitude. This is true, but it slightly misses what I am trying to say. I am not advocating for the submission of singular patches in response to singular grievances.

Instead, what I am suggesting is that people who are interested in the survival of projects become involved with the long-term (or even just the mid term) running of the project.

This will not only give them greater influence in how the software is written, it will also mean that, as I alluded to before, scope and service can expand.

Being involved with a project also means you will become better informed on how the code base operates. With that knowledge, you could more easily maintain a fork of the project that supports your chosen feature.

Is that a lot of work? Yes - which returns to my original point. Regular contribution is a huge difference that unhappy people should earnestly consider. It may require contributors to go along with decisions they don't like, but with time comes practice, then experience, then trust, then influence.

Or, in a sentence: you can improve these projects in the way you see fit if you take the steps required.


> Or, in a sentence: you can improve these projects in the way you see fit if you take the steps required.

Well yes, but actually, no. If upstream denies or ignores your pull requests, you're dead in the water. Just try and make a pull request to bring back a feature that Gnome or GTK dumped a few years prior.


> If upstream denies or ignores your pull requests, you're dead in the water

No, this is very wrong. You can maintain a fork. You can do that while maintaining positive relations with them to see if eventually they do decide to take the PR.

Of course nobody wants to do that because it's a lot of work. So what's really happening here is you're trying to play hot potato with a feature that nobody wants to maintain, not even you. I sympathize, I also have patches sitting around in various projects that went nowhere. There just isn't enough time in the day for unpaid volunteers to look at every patch.


Thanks for commenting: I sort of agree, as this sort of feeds into my point.

The 'meta-argument' I was making is that delivery requires coordination, and the more you deliver the more work it is.

That's all well and good, but on a singular level I agree that simply throwing a patch request without doing the due diligence won't get you far.


> Or, in a sentence: you can improve these projects in the way you see fit if you take the steps required.

When the one and only step is "hard fork the project", that's not really an easy one.


There's something missing from the analysis, which I believe is quite important: There are only a few applications complex enough to push these toolkits to their limits, and that's not a case of "developers craving such a toolkit and not getting it". Most of the things people were going to write native for in the past have been absorbed into the browser; and many new applications are written with a more ground-up concept of UI that needs to start from the beginning with a lower level framework(e.g. game engines). Without that ambient demand pulling toolkits forward, they're going to revert to the needs of desktop environments - a problem which was already solved mostly adequately back in the 2000's and now is in a product churn cycle. The complaints of Linux desktop users haven't been about the desktop UI in quite some time; it's been other parts of the stack with friction that sometimes surfaces to the desktop(e.g. input devices under Wayland, or the entire landscape of audio session management), but not the basics of windowing and presentation.

Every time Linux faces big coordination challenges to stand up a more robust overall system, a "Ship of Theseus" method of getting first a political consolidation and then replacement has appeared. This can be traced at least as far back as the appearance of udev, and more recent examples include PulseAudio, systemd, Pipewire, and Wayland. One can see this taking place on both the GNOME and KDE fronts too: both had their toolkits come up out of an environment where GUI hadn't yet been commoditized, and therefore the stakeholders were broad. Over two decades on, consolidation has taken place, and that's gradually reached a point where it does impact the "alternative desktops" like Xfce.

The overall maintenance budget for old code isn't infinite, though. We can leverage the past, but only if we're still doing the same things we needed in the past. And I don't think the desktop is staying the same; the move towards touchscreens was a fashion, but the approaches to UX are generally going away from creating a space shuttle control panel, and instead looking for a way to configure more targeted workspaces. Which means that toolkit needs are changing as a result.


Apple doesn't change. Chrome doesn't change. They both look exactly the same as when they were introduced. What makes platforms strong is stability and consistency. Linux desktop projects are more likely to get funded the more disruptive they are, and the more likely it is they'll alienate supporters and cause drama. Like GNOME's big shift circa 2014 to tablet-first interfaces. If we consider that Linux desktop users are the smartest and most technically advanced computer users in the world, then it's quite a slap in the face to force them to use a gimmicky GUI intended for toys as their workstation. The most advanced computer users deserve the most advanced desktop. Sadly that's not the world we live in. Hopefully someday we can liberate the technical class from this Orwellian treatment of continual destruction of identity.


I wholeheartedly agree. It seems that there is no middle ground these days between Web- and mobile-inspired GUIs that have taken over the desktop (even in the macOS world) and doing everything via the command line. I feel the same way about GNOME 3's shift to mobile-influenced UI/UX paradigms; sadly this shift also occurred in Windows and macOS.

What I believe is needed are UIs for power users and developers. Nobody stays a novice forever; we need UIs that facilitate the tasks of technically inclined users, something more ergonomic than CLIs but not oversimplified like modern UIs. Some examples of UI/UX that address the needs of power users are support for scriptability (such as AppleScript and Visual Basic for Applications), composability (such as OpenDoc [https://www.youtube.com/watch?v=oFJdjk2rq4E]), WordPerfect's Reveal Codes that allow writers more fine-grained control over formatting, and a demo I saw of Symbolics Genera where the CLI shell assists the user in completing the command (see https://youtu.be/7RNbIEJvjUA?t=380 for a demo of how that interface worked; while it's a CLI shell, it's much more ergonomic than any Unix shell I've seen). I would like to see more UIs that fit the needs of power users.


> Like gnome's big shift circa 2014 to tablet first interfaces.

It took a long time to work itself out, but today Linux+GNOME is the only mainstream system that's usable for real productive work on a pure tablet or palmtop device. Far more so in fact than even Apple's iPad. And it got done without having to write and test separate apps, everything has a responsive interface that works throughout the range, from a small handheld to a big desktop screen. That's a remarkable feat.


To be frank, I believe that's mostly thanks to the work put into it because of the Librem 5 project and related initiatives. Before that, GNOME didn't really work all that well on touchscreens at all. 2014 GNOME felt like it would work well on a tablet, but it didn't - as you noted, it took some time to make it work. (disclaimer: I work on L5)


> Most of the things people were going to write native for in the past have been absorbed into the browser

The use cases for native applications vs. in-browser frontends are still very different - not everything depends on a network connection. Complex applications that might rely on non-trivial toolkit features are more, not less, likely to be native.


> My ultimate prediction is that within about three years after GTK 5 is released, touchscreens will go out of vogue

Haven't they already? It was much easier to buy a computer with a touchscreen in 2011 than it is today; at some point every other model had one, but now it feels like NVIDIA's 3D Vision in its end times.

Same for tablets: when I check the analytics for my website, tablet use is pretty much zero.


Tablets and mobile phones are computers with a touchscreen, and they're more popular than ever. You might not see many in your niche, but the laptop and desktop PC are a smaller and smaller pie of everyday computing.


Many high-end laptops these days have touch screens that can be flipped over to use them as tablets. That's exactly the kind of use GNOME 3 was developed for.


Most iPads will be detected as Macs, while other tablets are… not very relevant, like you've noticed.


I think there's a future for STLWRT. All was right with the world when Linux looked like this: https://www.linglom.com/images/Linux/ChangeIPAddress/1.png I still run RHEL5 and use it every day. I don't even consider it that old.


This is also the reason why I use XFCE. It hasn’t materially changed for years and is just fine.


Hot take: only having menus is bad UX, because it forces the human to search for things instead of making the computer search. I have zero fond memories of trying to find options in the giant menu structures of larger applications. I especially don't understand the reverence many seem to have for the old-style start menus without a search field.


The way I use XFCE is to mark all the applications I use as favorites so they appear on the top level menu, then it’s just two clicks to launch anything, way quicker than click-then-type-to-search.


All I got out of this blog post is that its author appears incredibly toxic. There's no need to waste your time with it.


"Is the Linux desktop heading south?"

There is a whole world out there. I use FVWM as a "desktop environment" with MC as the file manager. I also find the idea of a "desktop environment" a bit dated. Even on Windows 10 every program behaves differently. Even MS is not able to keep a uniform look and feel in its own programs.


I use Fluxbox (slightly tweaked Zukitre theme), lxappearance, Tango2 icons, and ROX-Filer with a bunch of plugins, along with ROX-Lib under ~/lib.

90% CLI-oriented, but with common-sense GUI plugins and settings.

The best of both worlds, and a "DE" with unmatched speed.


Another Fluxbox user here. I've basically run the same configuration for nearly 15 years now. Everything is in muscle memory and never changes. Totally the best UX.


Yes, Xfce works perfectly, just as it has for the last 7-8 years.


<<So, if GTK+ 3 flopped, how are we where we are today, with GTK+ 3 suddenly being forced down our throats? It turns out it was probably started by Red Hat.>>

Systemd, GNOME 3... I think it is Red Hat that ruined the desktop experience in the last decade by imposing terrible choices on everyone else...


As for systemd... while I believe there have been questionable decisions (why was a replacement for ntpd needed?), at least it's gotten easier to declare dependencies as part of service startup - e.g. it's relatively trivial to add a dependency on an NFS mount for a server program, which was... a complex mess before, to put it lightly.

My biggest gripe with systemd unit files is that there are at least four ways of specifying dependencies (Wants, BindsTo, After, Requires) and the semantics are not easy to keep straight.
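
For illustration, a minimal sketch of the NFS-mount case (unit names and paths are made up); Requires= pulls the mount unit in, but only After= actually orders the service behind it:

    # /etc/systemd/system/example-server.service
    [Unit]
    Description=Example server that needs an NFS share at /srv/data
    Requires=srv-data.mount
    After=srv-data.mount
    # Weaker form: ask for the network, but don't fail if it isn't there.
    Wants=network-online.target
    After=network-online.target

    [Service]
    ExecStart=/usr/local/bin/example-server --data /srv/data

    [Install]
    WantedBy=multi-user.target

RequiresMountsFor=/srv/data in the [Unit] section is the shorthand that adds that Requires=/After= pair on the mount unit for you, which helps a little with the directive sprawl.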


I agree. Red Hat to me almost feels like a corporate trojan horse taking over open source projects to undermine them from within.



