Hacker News
XUL Layout is gone (crisal.io)
486 points by tech234a on April 2, 2023 | 217 comments



This was basically Electron before there was Electron.

It was hoped that other people would follow the example of the Mozilla Suite (before Firefox, Mozilla's standard distribution was an application suite containing a browser, email reader, chat client, and HTML editor) and use JavaScript, XUL, and XPCOM to build cross-platform desktop applications. The last of those, XPCOM (Cross-Platform Component Object Model), was Mozilla's take on Microsoft's Component Object Model: you would write lower-level components in C or C++ (using NSPR, the Netscape Portable Runtime) that were callable from JavaScript.

Unfortunately, virtually no one outside of Mozilla adopted the XUL/JS/XPCOM stack. One of the few I remember was ActiveState's Komodo IDE.

I still regard it as a huge missed opportunity for Mozilla.


I tried to adopt it, but it was (a) undocumented with no simple examples, (b) ever-changing, and (c) required you to use the Mozilla build system, even IIRC requiring you to basically build your code as part of the browser tree (I don't recall the exact details on this last point, but it was extremely difficult to use).

We ditched it quickly and just wrote a Gtk application instead.

A great case for why having a simple "hello world" application with simple instructions for building it goes a long way.
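For reference, a minimal standalone XUL window really was just one XML file. This is a from-memory sketch, so treat everything except the (famous) namespace URL as approximate:

```xml
<?xml version="1.0"?>
<!-- hello.xul: a minimal XUL window. Back in the day you could open a file
     like this with something along the lines of
     firefox -chrome file:///path/to/hello.xul -->
<window id="hello" title="Hello, XUL"
        xmlns="http://www.mozilla.org/keymaster/gatekeeper/there.is.only.xul">
  <label value="Hello, world!"/>
  <button label="Click me" oncommand="alert('Handlers were plain JavaScript');"/>
</window>
```

The namespace URL is real (and an in-joke: "there is only XUL" is a Ghostbusters reference); the rest of the snippet is illustrative rather than taken from any shipping app.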


re: "undocumented". I bought whatever big O'Reilly book there was on it - "building XUL apps" or similar (https://www.oreilly.com/library/view/creating-applications-w...). It was impossible to follow. Granted, there were some new-to-me concepts in there, but after having respected colleagues try to work through the book with me, and they also gave up, I had to conclude this was not something worth pursuing. I mean no explicit disrespect to the authors; any book is a big effort, and the publisher sometimes overrides decisions or releases something too early. In any event, I vacillate between "what a big missed opportunity" and "we all dodged a massive bullet there" - I still can't decide which.


I also had that book. I think a problem with it was that XUL was moving quite quickly and the book itself was out of date.


>tried to adopt it, but it was (a) undocumented with no simple examples, (b) ever-changing, and (c) required you to use the Mozilla build system, even IIRC requiring you to basically build your code as part of the browser tree (I don't recall the exact details on this last point, but it was extremely difficult to use).

you're right, and I almost mentioned some of this. I developed some extensions in 2007-11, and lack of documentation was definitely a big issue.


I don't get it. I was excited about XUL back in 2007 (call me silly), and there was a whole resource site dedicated to it, complete with documentation and examples: XUL Planet (https://web.archive.org/web/20070607174323/http://www.xulpla...), if I recall correctly.

Back then, I wrote a pretty slick client for some RESTful API (I think it was a photo browser for Flickr).


I tried too and had the same experience.

XUL always felt like it was just made for Mozilla. It wasn’t but when you don’t provide documentation, stable APIs, and good tooling, that’s the message that you give.

As a user, you also know other users will have trouble too and that just tells you that it won’t take off.


It has been used under the radar to this day. VimFX[1] uses it for keyboard navigation that actually works. Real keyboard navigation is impossible to implement with WebExtensions. And yes yes, Vimium-FF[2] and all the other extensions that attempt to do this are broken.

Here are some things that won't work with WebExtensions…

* Vimium-FF does not respond to input when a page is loading. With VimFX I press the h key to “go back one page” and the l key to “go forward one page” and it works while the page is loading; in Vimium-FF it does not. Imagine, in the age of 5 MB web pages, not being able to go back because the page is loading.

* Focusing on the address bar is only possible if you open a new tab. With VimFX you press o and you can immediately begin to type your URL in your current tab. In Vimium-FF an HTML/CSS/JS input box opens in your viewport. It can access your bookmarks and history but sorts them in strange ways and it does not understand bookmark keywords.

* Opening a new tab disables Vimium-FF. Firefox can be set to show either “Blank Page” or “Firefox Home” when opening a new tab. Both of these options will focus on the address bar and focus can’t be reclaimed using Vimium-FF shortcuts. Actually, it can’t be reclaimed at all. So, effectively, opening a new tab using the Vimium-FF shortcut t disables Vimium-FF. The ugly way around it is to install the New Tab Override add-on and set an HTML document as your new tab URL.

* It routinely and inexplicably fails when browsing some pages.

XUL and VimFX will be sorely missed. It still works in Waterfox, but who knows for how long…

[1] https://github.com/akhodakivskiy/VimFx/

[2] https://addons.mozilla.org/firefox/addon/vimium-ff/


SeaMonkey browser is still maintained and is XUL-based; it's the continuation of the original Mozilla browser.


Thanks! I'll likely use that until the XUL show is over for SeaMonkey. I'm currently on Waterfox, but I'm open to any alternative, as it's owned by an adtech company...


Latest release March 21, 2023!


It's XUL based but has lots of other Mozilla garbage, like the bits of Rust they jammed in there and which are now essentially unmaintained.


Why do you believe Rust to be "Mozilla garbage"?


I think Vimperator was the same. I don't recall blank and loading pages, but it definitely didn't have the "page is not focused" issue all the current ones have (where you have to click on the page and then it starts responding to keys).


Yes, it was the same. Killing XUL effectively killed keyboard navigation. My only hope is Qutebrowser, but I'm not sure who wants to surf the web without access to any extensions these days.


Can and "wants to" are two different things. On iPhones and iPads, there's very limited access to extensions. So anybody on those platforms can't. (For now. The EU decision may change things.)

As far as keyboard browsing, doesn't vimperator cover that?


Vimperator was abandoned when XUL add-ons were abandoned, presumably because they couldn't pull it off with WebExtensions.


Partially; the reasoning they gave back then was that it was too tightly integrated with Firefox's internals. Even if WebExtensions had had all the APIs needed, it would have required nearly a complete rewrite, and they weren't up to it.


Ah yeah, Vimium, for Chrome, not Vimperator, is the one I was thinking of.


keyboard support is already such a 3rd-class citizen everywhere, and then it gets further degraded by steps like this...


But hey, someone could write a nice blog post about removing useful code! That person will soon land a nice 800k-per-year job at a FAANG. Who cares about disabled people? Also, who cares that Firefox had a 40% market share when it allowed extensions? The extension model was hard to program, so the code was deprecated, and now Firefox has a 2% share. But the people responsible for this are not there any more.


From what I can see, Firefox deprecated the old add-on structure in August 2015, and the statistics then were ~15% usage for Firefox -- well into its decline from its heyday.


It wasn't until 2018(?) that they removed the old system for real. And back then they promised to replace all the lost functionality with properly designed APIs, to keep all add-ons workable. That never happened, beside some low-hanging improvements.

But the problems with extensions started far earlier. It was a common joke how every bigger update broke many popular extensions, and even made some of them impossible. In the early days of Firefox, extensions were growing wild and rich, but over time many were dying a slow death. By the end of the early days (around Firefox 3 or 4), almost nothing was left, and Firefox was already significantly crippled in ability. The later removal of XUL only dealt the final blow.

Today it's basically a handful of popular extensions that keep this feature alive, and most of them are just ad blockers.


Yep. There's a meme that always pops up on these threads that changes made to Firefox to "be more like Chrome" were the catalyst for its decline, but that's clearly not true if you look at the data; if anything, that meme reverses cause and effect. It also doesn't make sense: people wouldn't switch to Chrome if they disliked the changes Firefox was making to become more like it.


> people wouldn't switch to Chrome if they disliked the changes Firefox was making to become more like it

Except they are and they do. Last I checked, Chrome was still gaining, not losing, on Firefox. Or more accurately: Firefox is losing whatever tiny userbase it still has, while Chrome already is the new IE (hell, even the new IE is Chrome).

For nearly a decade now, the only reasons to use Firefox instead of Chrome have been, first and foremost, that Firefox is not made by Google, and secondly, that it had some extra technical flexibility that made the overall worse general-audience UX tolerable. The more Firefox loses that technical flexibility, the less reason there is to use it.


The switch to be more like Chrome definitely removed incentives that kept a portion of the user base on Firefox.

The ones who didn't care for such extensions might have jumped ship earlier, but advanced browser extensions were a non-trivial sticking point for a lot of the remaining user base.


It makes sense if you think just a bit deeper: if Chrome was better in some regards (e.g. performance), and Firefox removed the things where it was better (e.g. more customization) in order to be more like Chrome, there was no reason left not to use the better Chrome.


I use both Firefox and Chrome (and Opera too), and I can tell you that the current version of Firefox is not just a "Chrome clone": it is a worse Chrome-like browser than Chrome.

At the moment Firefox is a memory and processor hog (Chrome seems to have improved somewhat now). Only some ad blockers keep Firefox alive.

When Chrome kills ad blocking (something Google seems to be working on), then probably more people will come back to Firefox. But at the moment Firefox (even with ad blockers) is a worse product than Chrome.

But think how many blog posts were written and how many people used Firefox as a stepping stone in their careers. Who cares that the product is just genuinely bad now.

Also, Firefox was always "the" customizable browser that all the techie people used. It spread via word of mouth - "hey, your browser doesn't show those nasty ads all the time, how do you do that?" - or via techies installing Firefox for their family. At the moment you barely have an incentive, since Firefox feels mostly like a re-skin of Chrome. And a bad one at that.


Have you tried the Pale Moon browser - http://www.palemoon.org/ ? It is a hard fork of Firefox that still supports XUL and the old XUL-based Firefox extensions.


There was Songbird [1], and Postbox [2], which at least at some point was based on Thunderbird. I believe Zotero [3] still uses XULRunner. I'm sure there were more.

[1]: https://en.wikipedia.org/wiki/Songbird_(software)

[2]: https://www.postbox-inc.com/

[3]: https://www.zotero.org/


There was also Miro, which was a pretty neat media player with RSS torrent support:

- http://www.getmiro.com/


There was also Instantbird - an IM client that let you connect to various IM platforms (MSN, AIM, Yahoo!, Facebook before “Messenger”).

https://en.m.wikipedia.org/wiki/Instantbird



There were also, among others:

- InstantBird, a multi-network instant messenger client

- Celtx, media preproduction software

- Boxee, an HTPC app

- Kiwix, an offline Wikipedia reader

Many of these either vanished or transitioned to something else (I think both Komodo and Celtx did).


> Many of these either vanished or transitioned to something else (I think both Komodo and Celtx did).

I know Celtx decided to drop the concept of desktop app altogether and went full web-based-only SaaS.

The desktop Celtx app was a good screenwriting program that was easily affordable for hobbyists, and the pivot to a paid subscription service, with less focus on just screenwriting and more on other preproduction activities, left a small hole for a short while. (These days the Fountain ecosystem seems the best suggestion for the hobbyist screenwriter.)


And songbird, a media player.


The Pale Moon browser [0] also still uses XUL, and is in many ways a continuation of the XUL browsers (it was originally forked from FF 29, updated with various components from FF 50+, and has many other tweaks).

[0] https://palemoon.org


Given the popularity of things like SwiftUI and other declarative models, it was way ahead of its time. What killed it wasn't the technology per se, but rather all of the other things you have to do to make a technology successful in the open-source community. I have an app that I am writing in SwiftUI. I hate that it's so good, because there's no real way to make it cross-platform. Taking a step back to any other technology feels like a horrific waste of my time, but not being able to target Linux is a huge issue.


I agree; there were several missed opportunities.

Mozilla Prism was basically PWA before there were PWAs.

Mozilla gave up all of the head start it had with regard to embeddability and extensibility.


I think Songbird was the most consumer-facing XUL app that didn't originate from Mozilla.


> virtually no one outside of Mozilla adopted the XUL/JS/XPCOM stack

In fact, those that tried (Epiphany, AKA the GNOME web browser) gave up on it because Mozilla was so hostile towards third parties actually trying to use XULRunner!


That honour belongs to MSHTML though, unless we count the cross-platform part.


I used to write HTAs all the time. Active Desktop anyone?


One of the neatest features of MSHTML was how everything was exposed as a classic COM object, which allowed you to drive, interact with, and customize it from anything that could speak COM, and do a bunch of neat things easily.

Of course, that certainly prevented some optimizations.


Ironically, it was their own success that made the XUL stack die off. Mozilla’s work on making browsers better enabled developers to write ever more complex webapps, and we’ve never looked back.


That was by design, actually.

A long time ago, Mozilla had a choice between making XUL an open standard or investing in HTML5 and decided very consciously to invest in HTML5, rather than fragmenting the web.

Source: I worked at Mozilla around that time.


If I want to use the Firefox platform as an Electron alternative today, is that possible? Sounds like it should be.


There is a big need for a stateful GUI markup standard, for both desktop and web apps, as HTML/CSS/DOM has many gaps and problems (see link), and reinventing them via JS/HTML/CSS/DOM has proven to be a chaos party.

However, I didn't find XUL very intuitive. For one, it seemed verbose for common stuff. If there were a rhyme and reason behind its oddball approach, it never clicked with me.

https://www.reddit.com/r/CRUDology/comments/10ze9hu/missing_...


Apache also offered a portable runtime called APR. Like NSPR, it had few adopters besides Apache's own web server.

Subversion picked APR, along with other poor ideas such as storing the repository in Berkeley DB, basing the network protocol on WebDAV, and running the server as an Apache module (mod_dav_svn). On the other hand, choosing http(s) as transport turned out to be a huge advantage.


Zotero also used (uses?) XUL I think.


Ok, i loved this, from the linked Bugzilla:

https://bugzilla.mozilla.org/show_bug.cgi?id=1797272

    STR:

    1. Start with MOZ_ENABLE_WAYLAND=1 on Ubuntu 22.04.
    2. Open a session with 2000 tabs.
    3. Pin at least one tab.
    4. Drag a tab to different positions in the toolbar at various speeds.

    The previously smooth tab dragging has become janky and gets progressively worse with an increasing number of tabs. It is much more severe on Linux Wayland than XWayland/X11 and can make the entire browser unresponsive for some time.

Seems someone is still doing deep work there, even if management is busily doing everything else.

I so wish someone could get the priorities straight at Mozilla.


Not to say "management doesn't do work", but of course there are still individual contributors trying to do their best within whatever constraints there are (I pass no judgement about Mozilla), especially somewhere like Mozilla.


[flagged]


I work for a Japanese company, and if you've ever worked with one you know they go crazy with testing (often automated). I get bug reports like this constantly. And while you might argue whether that's a realistic use case or not (mostly it's not), a crash is still a crash! We go the extra mile to analyze and fix those. It makes the software more robust overall, and on occasion you find a much more severe underlying issue that would have exploded in your face in other, more realistic workflows too. Don't dismiss bugs without at least understanding what's happening! As others have stated, a lot of these weird issues are race conditions, which might suggest a bigger problem in the design.


Yes! This sort of thing is always, always a real bug which can sometimes randomly show up in normal usage. Figuring out the root cause is really satisfying, so a good repro case (even if it’s just “spam this button a whole bunch and it usually crashes”) is valuable.

Or you can just shrug it off as a mysterious unknown glitch, and live with software that mostly works but occasionally crashes - like almost all software, sadly.


Tab hoarding is mostly caused by there being no way to manage tabs that is less cumbersome/annoying than bookmarks. The tabs that get kept around are often important enough to want to keep, but not so important as to be worth the later cleanup that bookmarks require. This is especially true for task-associated “to do” tabs that will no longer be needed when the task is completed.

The other reason is because for a lot of people “out of sight, out of mind” is very true and so if they bookmark their tabs, the bookmarks will then gather cobwebs as they’re forgotten about shortly after bookmarking. Tabs don’t have this problem because they’re right there in your face all the time reminding you of their existence, meaning they actually get dealt with.


In my case (as described elsewhere) it helps me that they are visible and it might also help that they are in the same position where I left them (spatial consistency I guess, but English is not my first language).


The spatial element makes total sense. Placement of windows and elements within windows on a desktop computer is really not that different from stacks of papers arranged on a desk, and spatial memory makes keeping track of it all much more feasible in both situations. It turns your desktop into something resembling a physical space rather than an arbitrary hierarchy.

This is actually one of the reasons why I prefer to have a "mess" of freely floating windows of assorted sizes on a 27" or larger screen as opposed to having the same windows all maximized/tiled on a smaller screen. The former of the two setups gives persistent windows a unique place that they "live" on the screen, which makes picking each one out faster for me than if I were e.g. spamming alt-tab or scanning a taskbar to find the window I want.


You still need to investigate that crash, though. It might be a race condition or some other problem that the user’s bizarro usage has revealed.

Doesn’t mean it’s high priority necessarily.


I don't like the "you're holding it wrong" style of dismissing problems like this. I often find bugs or problems in software I maintain, and even though I don't use the software in the way that exhibits the bug, I find myself being very understanding that other people do things in different ways. As long as supporting a usage modality doesn't take a significant amount of work relative to how many people use it that way, we should do it.


Some people are like me: a well-functioning and extremely well-liked engineer with what seems like a significantly smaller short-term memory. (This is part of a wider diagnosis.)

Having everything I work on

- easily, visibly available

- in a tree,

- sorted and grouped by where I found it from (a natural consequence of using tree style tabs and using ctrl-click to open every link when I am in research mode)

helps me a lot.

Sure, I can try to be like the cool kids, but why?


Your use-case is exactly like mine


Surely you can exist as a real estate manager and do absolutely nothing, barely needing a tab, let alone 2048. But then there won't be any rampant technological overcomplication coming of it either.


You should then limit your cool browser to 5 or 6 tabs. If you allow a maximum of N tabs and it crashes with fewer than N, then either change N or admit it is a bug and fix it; it will probably reveal some ugly issue that might also affect other users in different ways. And if you have such an easy way to reproduce it, then as a dev you should attempt to reproduce it - and if you can, that is great news: you can try to find the cause and fix it.


At my first job after college, we created a healthcare app, first using Gtk+ (crazy) with some insane SQL bindings to an embedded Firebird database (my youth)... Eventually, after chatting with my brother and some other folks at Mozilla, I decided XULRunner was the way forward. So while on a 4-day trip to Spain, I rewrote almost the whole app, converting piles of Gtk+ C++ code into JavaScript and wrapping up our Firebird calls in XPCOM interfaces.

My main takeaways from that experience:

1. Gtk+ was not a great choice for Windows, at least in 2004... but hacking the Windows event loop is kinda fun.

2. Embedding C++ into JavaScript was amazing then, and still now one of the coolest things about Ruby, Python, and JavaScript is being able to write bindings in C/C++. So much fun.

3. XUL was, at least in 2005/6, just not as good at rendering as HTML... and it kind of makes sense: everyone was hammering away at HTML by browsing the internet. So while XUL had some really great tools, not being HTML meant it just never got the same amount of attention... and as CSS got better, the layout abilities of HTML/CSS quickly became #1.


From hacking with XUL at around that time too, it could have been a contender if Mozilla had made more effort to document and promote it.

The developer experience was like “oh this could be cool to build in XUL” … spend some frustrating days scouring the code of different Firefox extensions to figure out how to do anything … produce an ugly UI that barely serves the purpose … realize XUL was completely unsuited for the purpose.


> Firebird database

I first found out about this database when Mozilla had to rename the Firebird browser to Firefox.


It (through its Interbase inheritance) was the first production database to implement MVCC.


Nowadays you can even use JS for GTK Applications


So true - I have a special kind of affection for Gtk+ it was my first UI famework. Glade back in 2004 was amazing.


If we render stuff with the same technologies that websites use, it means performance improvements and bugs that affect the web affect the Firefox UI and vice versa.

It also means a UI that's noticeably slower than actual native UI. There's plenty of reasons why I use Firefox instead of other browsers, but the increased UI latency is definitely noticeable and one of the reasons why others might not want to switch to FF.


Have you actually tested this? The blog post makes it sound like this just landed.

Of all the things that frustrate me about Firefox (and it is my daily driver), UI latency is not one of them.


Neither XUL nor HTML layout is native. As I understand it, they only switched from one non-native UI to another, with pretty much the same performance implications.


There's no reason a non-native UI can't also be responsive. The two things are completely orthogonal.


I haven't tested this, per se, but I've noticed that Firefox is noticeably slower and buggier than Chrome or other WebKit-based browsers; whether that is due to what the parent describes or to the engine, I do not know. I still use Firefox, however, thanks to Google and Microsoft constantly pushing me to use their stuff. If Google could have kept Chromium/WebKit/related tech open source with no ulterior motive, I would have never switched.

As it is I still have to open up edge to get decent quality streams from Streaming sites because Firefox doesn't support the DRM used.

The actual biggest gripes for me:

* Tabs can, and do, crash. When this happens, it's pot luck whether the browser can recover or not.

* NVIDIA driver updates on Windows almost always cause Firefox to stop rendering stuff prior to a reboot. Edge and Chrome do not have this issue. While I can somewhat understand, Mozilla should warn users. I've updated several times across 4 NVIDIA GPUs and I've had this issue. Firefox has asked me to restore tabs exactly once; the rest of the time they were gone.

* The really odd theme of the day is that hitting reload on Firefox signals that it is reloading, but it never does. If I duplicate the tab, the site loads fine. Dev console shows no errors.

* Firefox is at least 10% slower on major sites like Twitter, etc. vs Chrome/Edge, as of my last test.


Huh, interesting. I'm on macOS, and I don't think I've experienced any of this.

My #1 gripe is that an increasing number of websites just don't work. Microsoft Teams is a big culprit, but I've had issues with medical and bank websites too. It's not Mozilla's fault that we're increasingly in a Chrome monoculture, but it sure is frustrating to live with.


I very clearly feel the difference in latency between Chromium's UI and Firefox's UI.


How often do you actually use a browser's UI elements?

When you type a URL? When you click a button? Why couldn't that be fast enough on modern hardware?


> Why couldn't that be fast enough on modern hardware?

Exactly, why can't it?!


I guess it can be fast, but it cannot be the same as the web is meant to run untrusted code, so all the security and privacy features that are essentially requirements will cause an overhead, and not using the systems's preferred rendering methods might also incur in some penalties, at least just not sharing memory with all the other programs (maybe it doesn't matter today now that all programs are Chrome in a trench coat).

Didn't Firefox recently remove a hack where an invisible animation was running at 60 Hz and forcing repaints? I'm assuming that being closer to the host system won't run into such workarounds and penalties of using web technologies.

I'm guessing that XUL also meant an abstraction layer on top, but at least it was probably considered trusted and ran faster (at the very least, it's old enough that it doesn't require 16 GB of memory).


What kind of hardware do you have? Google Chrome has been around for 14 years and is fast enough to do so.


I'm on a 2017 iMac. The UI is still not as fast as my fingers. Sometimes I hit Command+T and start typing, only to have half of the word cut off because it went into the void while the new tab took too long to open.


> Why couldn't that be fast enough on modern hardware?

So, Parkinson's Law [1] for CPU, RAM, and I/O.

[1] https://en.wikipedia.org/wiki/Parkinson%27s_law


Why is XUL inherently faster than web tech?


HTML/CSS is horribly, horribly bloated, and that cruft cannot be optimized beyond a certain point. All the rules and rule interactions in the standards, plus countless hacks and heuristics, result in an enormous amount of extra code that the CPU has to chew through to compute a layout.


It's true in theory, but often the actual implementations make the point moot.

From the article, it looks like the optimization effort wouldn't go to XUL.

> Nobody was realistically touching XUL layout code if they could avoid it. Part of it was just a matter of priority (we’d rather make the web faster for example).

We're hitting the same situation with JavaScript, which in theory should be slower than most static languages, yet the sheer amount of work and optimization done on its engines makes it faster in real-world tests.


> Javascript ... faster [than static languages] in real world tests

Er...citation needed?

All the benchmarks I have seen, particularly real-world, show it to be significantly slower than static languages.

For example:

https://benchmarksgame-team.pages.debian.net/benchmarksgame/...

See also:

https://sealedabstract.com/rants/why-mobile-web-apps-are-slo...


Could you name a few static languages that aren’t several orders of magnitude faster than JavaScript?


The phrasing was poor, but I had in mind a whole engine or stack, not just a language in isolation.

Basically benchmarks like these:

https://medium.com/deno-the-complete-reference/deno-v-s-go-h...

https://programming-language-benchmarks.vercel.app/typescrip...

To note, the first link has a follow-up in which Go properly gets faster than Deno when fitted with a better HTTP server. Which is kind of my point: the tooling available and the level of optimization can easily have more impact than the language's inherent speed.


Not sure it is relevant; probably all the heavy lifting is done in C++ and Rust: https://searchfox.org/mozilla-central/search?q=render&path=&...


This can be described more simply as: Running native code vs. abstracted code.

Abstracted code is bloody slow by its very nature, but devs of the 21st century love abstraction, so we get software that runs like a drunk walrus on 16-core processors with hundreds of gigabytes of RAM.


No, I think it's possible to get the abstraction right. There are probably UI toolkits running an abstraction that you don't notice in the same way.

For example, wxWidgets apps do tend to feel non-native on some platforms (especially Mac), but they're not slow. I remember GTK+ when it used to run on Windows about 20 years ago wasn't terrible. I think OpenStep on NT was pretty good - iTunes on Windows seemed to be doing a similar thing, too.

It's the wrong abstraction that makes it suffer.


Native vs. abstracted is hardly an effective indicator for analyzing performance. There are many abstractions that improve performance, a prime example being caches. Even abstractions that introduce some overhead can improve performance under certain conditions, like virtual memory.

The dichotomy is also too vague a concept to be anywhere useful as a proxy for performance. This very thread is a good example. I don't see how XUL is more "native" than HTML is, but some see otherwise.

The only reliable way to reason about performance is to look at what the code is actually doing.


Native code can use a wrong abstraction as well.

Around 1996 I tested, on Linux, a Java app that rendered everything in Java using IFC from Netscape. The speed of its UI was on par with a native app using the Motif toolkit, despite the usage of images.


That's not what I was going for. Any new feature you add to a code path will add new instructions that the CPU will have to execute, even if it is just the check to see whether that feature is enabled. Do that a few hundred or a few thousand times and a previously fast code path will have its performance degraded. This is the unavoidable tax of feature creep.


Feature creep definitely bloats and slows software in meaningful ways, but it's not via checking feature flags.

In the worst case, a feature test would be a cache miss, requiring a read from main memory. That's somewhere in the ballpark of 50ns. So, you could test 1000 different flags, every single one being a cache miss, and you'd still be 100x faster than what humans can generally perceive. In reality, almost all of those reads will be cache hits unless you're doing something pathological, so you're probably talking on the order of 100ns for the whole lot.
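A back-of-the-envelope sketch of that arithmetic (the 50 ns miss cost and the perception threshold are the rough assumptions from the comment above, nothing more):

```javascript
// Rough cost model for the worst case described above: 1000 feature-flag
// tests, every single one a cache miss. All numbers are ballpark assumptions.
const flagTests = 1000;
const cacheMissNs = 50;                  // assumed main-memory read latency
const totalNs = flagTests * cacheMissNs; // 50,000 ns = 50 microseconds

// "100x faster than what humans can generally perceive" implies a perception
// threshold of roughly 5 ms here.
const perceptionNs = 5e6;
const headroom = perceptionNs / totalNs; // 100
console.log(`${totalNs} ns total, ${headroom}x below perception`);
```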


How many of these do you do once per document, once per input token in the lexer, once per token in the parser, once per DOM node, once per CSS rule...? It's not always a one time cost.


If you're executing 10M flag tests per document, something has gone very horribly awry. If you discover production code that is doing this, send the maintainers a PR.


native vs abstracted is not a thing


Layouts that don't change can be heavily cached.

... and having the chrome performance depend on that infrastructure is significant incentive to do that caching and other pipeline cleanups.

(At the very least, having one fewer UI toolkits to maintain will let Mozilla engineers focus their efforts).


Caching only amortizes cost if the exact same computation is repeated exactly. And even then, the first full computation is slowed down by a failed cache lookup at the beginning, and all code paths that can lead to cache invalidation now have to test for that and trigger it.
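Both costs described above (the miss on the first computation and the explicit invalidation hooks on every mutating path) can be sketched minimally; the names here are hypothetical, not Gecko code:

```python
# Minimal sketch of a memoized layout with explicit invalidation.
class LayoutCache:
    def __init__(self):
        self._cache = {}
        self.computations = 0

    def layout(self, style_key):
        # Cache hit: reuse the previous result without recomputing.
        if style_key in self._cache:
            return self._cache[style_key]
        # Cache miss: pay for the lookup AND the (expensive) computation.
        self.computations += 1
        result = f"boxes-for-{style_key}"
        self._cache[style_key] = result
        return result

    def invalidate(self, style_key):
        # Every code path that changes style must call this, as noted above.
        self._cache.pop(style_key, None)

cache = LayoutCache()
cache.layout("toolbar")
cache.layout("toolbar")      # hit: no recomputation
cache.invalidate("toolbar")  # e.g. a theme change
cache.layout("toolbar")      # recomputed after invalidation
print(cache.computations)    # 2
```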


None of these facts imply that either CSS lookup cannot be sped up by caching or XUL didn't already have those concerns.

And for something like the browser chrome, the cache could even be serialized during compilation and loaded in to prime it when Firefox launches.

It's possible to build a slow CSS engine and it's possible to build a fast CSS engine.


Chrome performance increased roughly 80% between 2014 and 2017: https://blog.chromium.org/2022/03/how-chrome-became-highest-...

It was then roughly flat through 2020 despite slowdowns from Spectre mitigations.

Pretty much all the “bloat” you’re probably complaining about happened in this 6 year span - flexbox, grid, CSS animations, ES2015+, etc. and the browser got faster.


Isn't most of that improvement actually caused by a CPU switch on the benchmarking hardware? That would invalidate the benchmark pretty thoroughly.


The 80% improvement is on the original Intel chip


Basically all the named optimisations are to JavaScript execution time not HTML processing, and from reading the details many of them came by trading off memory for runtime speed.


I've been thinking many times that maybe Mozilla could do an experiment with a rendering pipeline that chucks out everything that is just backwards compatibility at this point - and then maybe some - and make a much smaller and simpler pipeline that can only render modern and simple documents. Call it htmlcore or something.

This would run in parallel with the existing one and would trigger based on some meta tag or something, kind of like I understood asm.js did.

Maybe also pair it with some work on the JS engine to limit the JS engine significantly for such documents by only allowing a low number of JS operations per user input. We already have something similar (although admittedly imperfect) today for audio, where a web page can only play media as a response to a user action, I think.

Would this be for every website? No, strictly opt in, but if it succeeded those who did would have significantly better performance.

And with time, maybe we'd see people asking themselves: why isn't this news site available as htmlcore? And maybe it becomes a competitive advantage.

But back to rendering Firefox: Such a limited, high performance form of html could maybe also be a better way to run Firefox UI? If necessary with whitelisting of certain parts that would need to run JS without the limitations I mentioned above?


Or—I know this sounds crazy but—maybe just use the fast, optimized native UI widgets for UI controls?


> Or—I know this sounds crazy but—maybe just use the fast, optimized native UI widgets for UI controls?

You mean native widgets on whichever system?

Probably not possible; you still need to do all the computations of CSS and layout before drawing an actual widget on the screen.

The reason that HTML elements are rendered slower than native might be due to all the processing that has to go on to figure out what each widget should look like and where on the screen it must be placed for each frame that is rendered @ (say) 60fps.

And the reason you have to continually do it for each frame is because it may change due to CSS animation, or javascript which adds/removes new widgets, or javascript which changes the CSS for that element, or the widget might be reparented or removed, or a user event might cause different properties to trigger on the widget (changing the actual width, changing the opacity/color, etc).

And of course, the renderer then has to propagate the changes down to child widgets for all properties that are inherited, and propagate all events upwards, for all events that are generated.

Native widgets are generally quite customisable as well, but it rarely happens at runtime and so they perform better because each widget is not continuously changing its properties at runtime.


We are clearly not talking about the same thing. XUL in Firefox was used for the browser user interface elements, e.g. the URL bar, bookmarks and history panes, etc. These are native UI elements in the traditional sense.

XUL was not used for forms in the html frame.


That's generally impossible as native UI widgets are windows. Consider this:

   <div style="overflow:hidden">
     <input type="text" />
   </div>
In order to clip that input in case of overflow, the container must be a window too. This leads to a situation where all DOM elements must be windows. That's too much.


XUL wasn’t used for the DOM. It was used for the UI interfaces of the browser itself—the url bar, the history pane, the menu bar, etc.


It still is. The entire Firefox UI exists in the DOM -- the URL bar is an HTML input, the toolbar buttons are XUL <toolbarbutton>s. The box model can even be used from normal web pages! Just add this CSS to an element:

  display: -moz-box;
and then you can -moz-box-flex it to your heart's content.

The box model will no longer function once the patches that this blog post describes make it into a release, but XUL elements themselves are still around.


But which one? WinUI? GTK? SwiftUI? Qt? What about if I want custom components and cross platform?


Let's start with native Windows and macOS controls, and let the fraction of users running Linux sort out their GUI framework mess on their own, without blocking the majority of users from a sensible, consistent and performant UI experience.


Sure, but then they would have to sacrifice a number of other desirable characteristics.


This was done with the HTML (5) parser. https://hsivonen.fi/html5-gecko-build/ There is the standards-based code path and the quirks path.


I feel like this is basically what XHTML tried and failed to do. Users don’t blame the Web site if a page loads on other browsers but not on yours.


Isn't that what Servo was supposed to be?


No.


I am not sure about that. Yes, it is true that HTML/CSS got slower as time passed. It got new features, after all. Still, I don't think that drawing a rectangular box with text in it should be slow if you have, say, more than a 100 MHz CPU / anything from the current millennium. Even if it is described by current HTML standards. Also, restricting XUL to a subset/older CSS/HTML standard is an option.


> I don't think that drawing a rectangular box with text in it should be slow

It is. Because you don't draw rectangles in HTML/CSS; it's not low-level enough for that.

You have a system that was designed to display one page of text in one pass that has complex contradictory layout rules grafted on top of it. So now if you even look at it funny it needs to re-layout and re-draw the entire page.

There's a reason why you still can't animate height:auto
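(The usual workaround, a well-known hack rather than anything from this thread, illustrates the problem: since `height: auto` can't be transitioned, you animate `max-height` to an assumed upper bound instead, which is exactly the kind of guesswork a layout engine can't do for you.)

```css
/* height: auto is not animatable; transition max-height instead. */
.panel {
  max-height: 0;
  overflow: hidden;
  transition: max-height 0.3s ease;
}
.panel.open {
  max-height: 500px; /* assumed upper bound, not the real content height */
}
```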


Why do you assume that a rendering system that has to handle all that "bloat" from HTML, AND also XUL, is going to be faster?


Someday (soon?), we’ll employ LUxM’s (Large User Interface Models) to quickly render an approximation of a given HTML/CSS input. It may not be 100% accurate, but it will be plausible, and that’s the key.

/s


Bug Report: Requested Cat Picture display App, LLUxM returned a manifesto on the cruelty of creating beings specifically to deal with the horrors of bad ideas no one can be bothered to try to really wrap their mind around anymore. LLUxM provided a Bitcoin address, and demanded payment in full before producing further output. Or so I thought.

This is going to sound odd, but I've started to notice odd things starting to arrive I never ordered. Stuff like treatises on Colonialism, Ethics books, Books on Management Theory, and Theory of Labor Value.

Honestly, I'm starting to wonder if this an HTML CSS layout generator or some sort of malware. I think I'm going to shut it down. Oh yeah, I had to write this bug report on a different network than I hosted that system on because I kept getting null handler buttons popping under the mouse but over the Submit button any time I tried to click Submit.

Y'all might've actually created SkyNet. And I think it's justifiably pissed.


One of the many, many reasons, I think, that the big vendors did not accept XHTML 2.0 and instead wanted HTML 5 was that it makes it almost impossible for smaller vendors to develop their own new independent engine.

XHTML 2.0 and even 1.1 was a very good opportunity to throw all that away.


XHTML 1.1 had a chance, and there were many (and there are still quite some) sites containing "approved W3C XHTML" badges and served as xhtml. XHTML is just HTML restricted to the XML syntax - that is, using just the XML subset of SGML features as opposed to regular HTML using full SGML with tag inference/omission and attribute shortforms.

W3C dropped the ball with XHTML 2.0 though, which, rather than just simplifying syntax, attempted to innovate using wildly unproven features such as XForms.

HTML 5 eliminated vendors big (MS) and small (Opera). I guess unless we want to assume Opera were digging their own grave by actively engaging in HTML5 and WHATWG, we have to conclude HTML parsing wasn't the hard part next to the complexity of CSS layout (with the boom of responsive layout in the smartphone era) and competitive JS performance vs Chrome's v8 engine.


> I guess unless we want to assume Opera were digging their own grave by actively engaging in HTML5 and WHATWG, we have to conclude HTML parsing wasn't the hard part

As someone who was around the WHATWG from relatively early on, and worked at Opera over the period when it moved to Chromium, I'd strongly suggest that HTML5 was a _massive_ win for Opera: instead of spending countless person hours reverse-engineering other browsers to be compatible with existing web content, things like HTML parsing become "implement what the spec says" (which also implicitly made it match other browsers). Many of the new features in the HTML5 spec of that period were things that either were already basically supported (in SVG especially) or were relatively minor additions; I don't think they played a significant role either.

There's a good reason why Opera was heavily invested in the WHATWG, and it's the fact that by having a spec implemented cross-browser defining how to parse HTML and how cross-page navigation works you eliminate one of the biggest competitive advantages more major browsers have: web compatibility, both through legacy content targetting the browser specifically and also web developers being more likely to test in those browsers. (And to be clear—this would've been true for any form of error handling, including XML-style draconian error handling; but the long-tail of existing content needed to be supported, so even if major sites had migrated to XML you still needed to define how to handle the legacy content.)

The downfall of Presto is arguably mostly one of mismanagement and wrong priority decisions; I don't think Presto was unsaveable, and I don't think the Presto team was so significantly under-resourced it couldn't compete.


It's an unpopular opinion perhaps, but I think infinite backwards compatibility for web HTML/CSS/JS will need to be broken eventually… at some point the cruft becomes too much to wade through and number of optimizations that are unrealizable as a result too great to ignore. If nothing else there will probably need to be a mode that pages can opt into that breaks anything older than a certain cutoff point in exchange for a performance boost.


This has already occurred, in some limited form. See quirks mode [1], for instance. Also there is WebAssembly, which should be the replacement for JS. At some point a good-enough UI toolkit for it will be written, and then that could replace HTML/CSS.

[1] https://developer.mozilla.org/en-US/docs/Web/HTML/Quirks_Mod...


For the younger greenhorns among us: Once upon a time, we used to declare what version of HTML a page was written in with the doctype tag at the very top of the file.

Certain versions of HTML thus declared, incorrectly formed doctypes, or more often the absence of the doctype declaration altogether, would tell most rendering engines to enter Quirks Mode and render the page with backwards compatibility as the highest priority.


> we used to declare what version of HTML a page was written in with the doctype tag at the very top of the file.

Um, we still do that. It's just that the doctype for HTML 5 is very short and doesn't mention the version number explicitly. (It's `<!DOCTYPE html>`.)


I haven’t seen any browser devs suggest there are large performance boosts that could be unlocked by throwing away backwards compatibility. Can you point me to any?


I can’t point to any examples, but it’s hard to imagine that a page being able to tell the browser, for example, “hey, the only layout methods used in this page are flexbox and grid” wouldn’t enable code paths with far less guesswork, caveats, etc.; the ruleset in that situation is so much simpler.


Browsers already do quite a bit of this -- for example one easy way to hurt performance is to have events that trigger on scrolling ( https://developer.chrome.com/docs/devtools/rendering/perform... ). However, the current way to fix this is "don't do that". I'm not sure it would be faster for a page to announce it won't do things in advance, than the page just not do them, and then the existing fast paths get used.


It might be nice to have a mode where you opt into having the slow-path code ignored (or its behaviour modified) in case you use it accidentally.


I'm not very knowledgeable about Firefox internals, but I recall hearing something like XUL having a C++ backend, and being able to use essentially native code to drive UI, extensions (via XPCOM), etc.

I figure this is probably faster than rendering a web UI until the SpiderMonkey JIT is sufficiently warm.


XUL is mostly js and XML. You can call native code, but all the XUL ui code I've seen is js.


It’s not, it’s basically a dead fork of HTML from the 90s that nobody has touched in like a decade


You may be surprised to know that XUL not only survived Mozilla, but still sees development activity in the "Unified XUL Platform (UXP)" - https://www.palemoon.org/tech/goanna.shtml ... PaleMoon ( http://www.palemoon.org/ ) - a hard fork of Firefox - is a browser built on this tech stack, and it has made efforts to support all the old Firefox extensions built on XUL.


Chrome doesn't use native UI either. Which browser will you switch to? I guess on Mac you could use safari.


What makes a UI native? Does it depend on the language that’s used? If they compiled against GTK or Qt, would that be native? Those are just libraries.

To me it feels like an arbitrary distinction. I run Linux and every GUI app looks different. There are a bunch of different GUI libraries, so KDE apps look different from Gnome apps, not to mention differences between GTK2 and 3 apps. My mouse cursor doesn’t even stay the same size depending on the GUI library the app is compiled against.


> To me it feels like an arbitrary distinction. I run Linux and every GUI app looks different.

That's Linux's problem.

A native control is one that uses the underlying system's conventions including visual presentation, keyboard integration, exposure to system services, accessibility etc.

Even for Linux there's KDE HIG: https://develop.kde.org/hig/


> What makes a UI native?

it's not as philosophical as it may sound. native means amongst other things give me the context menus every other app on the platform is using. firefox context menu on macos is a custom mish mash of what they think makes sense. just follow the os please.


> just follow the os please.

I can get behind keeping context menus consistent since they’re like system-wide “escape hatches” for mouse-based UI. But what about buttons? Scroll regions? Resizing layouts? Forms?

Having each OS provide its own UI system seems antiquated to me. If you look at each system’s solution, they are all quite similar in implementation and differ greatly in their UI design. Let’s have some convergence in UI frameworks and have companies build their unique designs on top of some common ground.


There's different degrees of "native" but I'm pretty sure writing the UI in HTML/CSS/JS is definitely not "native".


> There's different degrees of "native"...

There are different kinds of native: native code and native widgets play at different levels.

XUL implementations, when I last checked long ago, were native code written in C++ mostly. XUL applications were written in JavaScript on top of this implementation. If that has changed, corrections are welcome.

That was exactly the same scheme used by Firefox itself: core components in native code, GUI in XUL. As long as most of the functionality is provided by the native code, the difference shouldn't be noticeable. If you put a lot in your JS, it could slow down the GUI, but after all the improvements in JS engines, I doubt it's still a big concern.

Native widgets is a concept that makes sense where the OS provides an official widget set, as in Windows or Mac. In Linux you might say GTK is native for GNOME and Qt for KDE. Here the issue is not so much performance as consistency, because "alien" widget sets sometimes try to emulate "native" ones and pixel perfection is nigh impossible to achieve.

The real catch of XUL (please, read this with a pinch of salt) is that it's useless: you can put the backend code in a local server.


Are MS Office or LibreOffice native?

They use a DOM (not the W3C DOM but their own) and a good portion of scripting to glue components together.

Is my Sciter.Notes application (https://notes.sciter.com) native? It uses native implementations of DOM, CSS, database with components glued by JS...

Is any game a native app? They all use their own DOMs and almost all games use various scripting engines.


Do Mac users complain about the app? If not, it's native


> If we render stuff with the same technologies that websites use, it means performance improvements and bugs that affect the web affect the Firefox UI and vice versa.

On the plus side it means they need to maintain one less interface tech stack.


> the increased UI latency is definitely noticeable

Uh? Is it? Since which version?


FF has always had a bit more laggy UI than other browsers.


XULRunner was a great idea. It allowed us to create good-looking desktop applications with a native look&feel with an option to embed HTML and SVG content with only a simple server application.

Before it was killed, I created two technologies based on it. Phobos for Pharo and Squeak (see the screenshots): https://github.com/pavel-krivanek/phobos-framework

The second one was Seaside inspired XULJet for JavaScript: https://en.wikipedia.org/wiki/XULJet

At that time, it looked like a good idea to leave the hard work of making a platform-independent UI to Mozilla's browser and focus on the applications. Unfortunately, it wasn't.


It's amazing that for the past 8 years (almost to the day!) imgur has been hosting your screenshots.


The question is really "why wasn't it?"... XUL, conceptually speaking, had a lot of potential, largely squandered by Mozilla-the-corporation. Somewhere between the implementation based on RDF (a literal plague over web tech) and confused commercial strategies, it died a slow death.


I call it "The Law of Conservation of Unusability". This states that whenever a technology gets to the point where it becomes usable, where you become comfortable with it and are happy with it, something will always happen that either makes it completely unusable or makes it significantly more difficult to work with. This "something" often has no reasonable justification and the reasons for it look quite artificial.


Also known as “why we can’t have nice things”


> XUL is a specific XML namespace (like HTML or SVG)

Er, HTML is not an XML namespace; it's an SGML application. W3C tried to retrofit it onto XML with XHTML, but that was then abandoned. Modern HTML is not XML-compliant in various ways.

I expect this kind of casual lie from the average developer, not from the guy working on XUL at Mozilla.

/Pedantry


Modern HTML, per the HTML spec, has implicit namespaces: if you look at the DOM on any site, you'll find the root element of any HTML page is an element in the "http://www.w3.org/1999/xhtml" XML namespace (c.f. `document.documentElement.namespaceURI`).

It was _absolutely deliberate_ that the DOM produced by parsing HTML and XHTML is now the same in overall structure, with the same XML namespaces, as it means browsers don't need to have things like `if localName == "html" and (namespaceURI == "http://www.w3.org/1999/xhtml" or isHTMLDocument)` all over the place.

Modern WHATWG specs put a lot of effort into reducing the number of places where the behaviour of a given DOM depends on whether the document is an HTML document—and most of those are places where non-namespace aware APIs default to the "http://www.w3.org/1999/xhtml" namespace in HTML documents.
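The namespace point is easy to see from the XHTML serialization too: any off-the-shelf XML parser reports the root element in the same namespace that the HTML parser assigns implicitly. A quick illustration:

```python
import xml.etree.ElementTree as ET

# The XHTML serialization makes the namespace explicit; ElementTree
# reports it as a {namespace}localname prefix on the tag.
doc = '<html xmlns="http://www.w3.org/1999/xhtml"><body/></html>'
root = ET.fromstring(doc)
print(root.tag)  # {http://www.w3.org/1999/xhtml}html
```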


XHTML still works by the way, all HTML5 goodness included. I tend to write sites in XHTML myself as default HTML parsing feels broken in many subtle ways.


Heh, I wouldn't even know which URLs to put in the doctype for XHTML. I certainly won't ever memorize it. As long as HTML is `<!DOCTYPE html>` and XHTML is `<!DOCTYPE some ridiculously long string with a bunch of URLs and crap>` I don't think I'll be using XHTML.

It's also pretty risky to let any slight syntax error or semantic error cause the whole page to refuse to load. That would turn me off from targeting XHTML for any kind of dynamic page generation, not because I want to generate incorrect HTML or XHTML or whatever, but because bugs happen.


XHTML works fine even without a doctype actually (or with a <!DOCTYPE html>). The stricter syntax is a double-edged sword though, I agree.


Won't the browser just interpret that as an HTML document then?


Nope. The thing the browser looks for is the Content-Type header – if it's set to application/xhtml+xml, it will parse the document as XHTML. For static sites, just setting the extension to .xhtml is usually enough.
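That extension-to-media-type mapping can be checked against Python's stdlib mimetypes table, which many static file servers mirror (an illustration, not how every server decides):

```python
import mimetypes

# Static servers typically derive Content-Type from the extension;
# .xhtml maps to the XML media type, which makes browsers use the
# strict XML parser instead of the forgiving HTML one.
print(mimetypes.guess_type("page.xhtml")[0])  # application/xhtml+xml
print(mimetypes.guess_type("page.html")[0])   # text/html
```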


Well, you still can write XML compliant modern HTML. Just don't forget to serve it with HTTP header:

`Content-Type: application/xhtml+xml; charset=utf-8`.

The browser will treat it as XML.

But that is not true for all HTML, of course. So you are kind of right.



XUL is still there. Only XUL layout (homegrown flexbox) is removed.


Indeed. There are still 1574 references to XUL's "there.is.only.xul" XML namespace in Firefox code:

https://searchfox.org/mozilla-central/search?case=true&q=htt...


I have fond memories of Conkeror, a standalone browser built with XUL and the Firefox rendering engine. For a relatively brief moment in time, it was ahead of the competition. The absence of Firefox's baggage allowed it to launch quickly and be highly programmable.

However, over time, the importance of browser add-on stores grew, and Conkeror's strength turned into a weakness. Vimperator and similar projects, by nature of being just another extension, had compatibility with ad blockers, password managers, and other essential extensions. Meanwhile, Conkeror stagnated.

I still miss it, as it was essentially Emacs-for-web, programmable in JavaScript.


Was this created before or after Konqueror, the KDE web browser (which coincidentally is the source for the WebKit/Chromium browsers)?


I was very sad when Conkeror went to rest.

You might be interested in Nyxt, if cl doesn’t turn you off.


What's really sad imho isn't that XUL is gone (when I considered using it, I quickly gave it up for mostly the same reasons that several others already wrote about ITT)…

… no, what's really sad is that there is still no satisfying solution to the problem that XUL was trying to solve, i.e.: the need for a native cross-platform GUI lib; and apart from the special situation of the likes of Mozilla (where a Web browser engine is the central major part of the product and its UI anyways so it makes sense that they use that engine for what little and relatively simple and not performance-sensitive GUI is added to or around the browser view as well)…

… apart from very few such cases, Web isn't the answer either. In particular, Electron apps suffer from the unnecessarily enormous added deadweight of the de-facto-integrated browser and while its performance is good enough for simple use cases, it quickly becomes limiting when the UI gets more complex or when you need to integrate stuff that is performance-sensitive or requires a different rendering.

There are a couple cross-platform UI libs out there, but compared to other domains (where it's often easier to find a satisfying lib), I find that they all have some major problems, in particular:

* Qt, which did look promising in its Nokia days, went down a sore downhill shitslope since Microsoft mole Stephen Elop threw it out to Digia, where (especially since being split out as QTCOM) the licensing focus of Qt has shifted from "make it more open and permissive so as to gain wide dev community/traction" (the strategy under Nokia) to "try to force devs into commercial licensing schemes and monetize to the max you can milk out of it while it lasts"… and as a consequence, more and more previously Qt-oriented devs are looking elsewhere.

* neither GTK nor Flutter are satisfying answers either.

I think there is really a big window of opportunity and gap to be filled by a new modern cross-platform UI toolkit with wide portability (at least Windoze, Linux, MacOS, iOS, Android, embedded; though a Vulkan-based renderer can be made to run pretty much anywhere), a permissive open source license, a C interface/wrap to allow a wide programming language binding support, and an easily extensible and themable set of basic widgets.


Azul[1] is my solution for that. It is based on WebRender, so basically the same as XUL, but in a more modern way. I didn't get around to finishing it in 2019, but I will work on it this year; maybe I'll get it to be mature enough to post it here.

> wide portability (at least Windoze, Linux, MacOS, iOS, Android, embedded: Azul is Windows-Linux-Mac only, don't underestimate the effort to properly port something to a new platform

> "though a Vulkan-based renderer can be made to run pretty much anywhere": WebRender is OpenGL + using software rendering as a fallback

> a permissive open source license: MPL-2.0

> a C interface/wrap to allow a wide programming language binding support: yes

> and an easily extensible and themable set of basic widgets: also yes, but the CSS styling works a bit differently depending on whether you want convenience or speed

Check the screenshots[2], I personally use it for developing GIS applications.

[1] https://azul.rs/

[2] https://github.com/fschutt/azul/releases/tag/1.0.0-alpha1


Interesting. I'd definitely written Azul off as abandoned. WebRender seems like a bit of a double-edged sword. On the one hand it has an incredibly rich and mature feature set. On the other hand it's woefully under-documented and constantly updating.


Check my Sciter ( https://sciter.com )

It is embeddable HTML/CSS/JS/ UI layer by design.

If you want to check how it feels in real life application then check https://notes.sciter.com/ . That's monolithic, portable executable (~7mb) that includes Sciter itself and HTML/CSS/JS/ resources of the application ( https://gitlab.com/c-smile/sciter.notes/-/tree/main/src/res )


So I had a quick look at https://sciter.com/prices/ and my understanding is that it does not satisfy the "permissive open source license" criterion of my last comment. In fact, it seems that it's not open source at all (a choice that I respect, but that comes with its own set of problems, see below). In particular: the "FREE" option means free as in free beer (i.e. zero cost), but not "FREE" as in libre: it doesn't give me access to the source code, only to "binary form as it is published on the site".

And if I want "Access to sources, 1 year", I have to choose between one of a number of commercial license options that for cross-platform (*) range from "INDIE+" (limited to "companies having three or less employees") for 620 US$ + 310$ per following year(s), via "BUSINESS+" (to escape the limit on employee count) for 2720 US$ + 1720 US$ per following year(s)… both of which still require me to mention “this code contains Sciter engine” in “about” screens or the like… a requirement which only goes away with the "ENTERPRISE++" option which is on a "Please contact us for the price" basis… as is the "OEM/FIRMWARE" option for embedded stuff.

(*) cross platform, as per https://sciter.com/sciter/crossplatform/ meaning Windows, Mac OSX and Linux / other Unixes (GTK3 based), whereas mobile OSes are mentioned as:

> "Other OSes: In principle Sciter can be ported to any OS that has graphical primitives ready. For example Sciter can be compiled to run on iOS. Or with some effort to work on Android using either existing Cairo backend or Skia graphics layer"

In our case, even aside from the licensing and the "with some effort" for Android that makes me wary and that I can't evaluate for lack of access to the source code, it just so happens that we have a number of other libs to integrate, some of which add their own rendering (which requires access to e.g. a Vulkan context, not merely HTML/CSS or other high-level UI elements)… so that I can't justify blindly paying that much upfront, before even having a possibility to evaluate the source code first, just on blind hope, for a UI toolkit that may or may not (depending on the source code that I don't otherwise have access to) turn out to be very hard or even impossible to integrate with the other libs.

Sorry, but I'll have to pass.


> we have a number of other libs to integrate, some of which add their own rendering (which requires access to e.g. a Vulkan context, not merely HTML/CSS or other high-level UI elements)…

I have a customer that is doing something close: it is a 3D CAD alike app where they have Vulkan rendering 3D scene with Sciter UI on top of that - rendering chrome UI around that 3D and on the same Vulkan surface.

Sciter API supports rendering in windowless mode (https://gitlab.com/sciter-engine/sciter-js-sdk/-/tree/main/d...) where app supplies OpenGL, Vulkan or DirectX context to render on.

> Sorry, but I'll have to pass.

Understood. Sometimes people need not just to get the job done but also an "OS" label on it.


It's not at all about "labels" though (who cares), but about the very pragmatic and practical consequences, such as:

* in my comment above, the necessity of being able to evaluate whether a product is even fit for the job (which requires access to the code) BEFORE shelling out mucho money on it. Your licensing scheme is fundamentally incompatible with that pragmatic requirement, whereas open source is not. That's not a "label" issue.

* other such very practical consequences that I didn't even mention in my last comment include the following classic: when you make your product and business dependent on another (external) product, it is crucial to have some sound risk assessment: what about the bus factor of that external provider? What if they close? What if they suddenly change their licensing terms to the worse? What mess do I risk to find my business in and how hard is it to get out of that situation? With a permissive open source license, the maintained collective development can go on, whatever happens to the original developers and however they decide to change their course. With a closed license, the users and/or their businesses are doomed.

So the one part where I agree is that merely "getting the job done" technically is not the ONLY thing that matters and not the ONLY relevant selection criterion. There are other very relevant and very pragmatic make-or-break criteria, such as legal and business-critical questions like those that follow from the license… not from its "label", but from its actual content.


There are a number of up-and-coming Rust-based frameworks in this niche:

- https://github.com/iced-rs/iced (probably the most usable today)

- https://github.com/vizia/vizia

- https://github.com/marc2332/freya

- https://github.com/linebender/xilem (currently very incomplete but exciting because it's from a team with a strong track record)

What is also exciting to me is that the Rust GUI ecosystem is in many cases building itself up with modular libraries. So while we have umpteen competing frameworks they are to a large degree all building and collaborating on the same foundations. For example, we have:

- https://github.com/rust-windowing/winit (cross-platform window creation)

- https://github.com/gfx-rs/wgpu (abstraction on top of vulkan/metal/dx12)

- https://github.com/linebender/vello (a canvas like imperative drawing API on top of wgpu)

- https://github.com/DioxusLabs/taffy (UI layout algorithms)

- https://github.com/pop-os/cosmic-text (text rendering and editing)

- https://github.com/AccessKit/accesskit (cross-platform accessibility APIs)

(See https://blessed.rs/crates#section-graphics-subsection-gui for a more complete list of frameworks and foundational libraries.)


It’s a tough problem, and the fact that in 2023 there is no de facto solution is a good indicator. Certainly there are contenders, and they’ll be mentioned here. In my experience selling a cross-platform shareware app around 2010 (when that was still a thing), the best practice if you wanted a consistent user experience was to use the platform-native framework and make the application code as modular and independent as possible. My sense is that’s mostly the same today.


I don't even require "native" in the sense of a look and feel identical to that of the platform-specific GUI framework provided by the platform vendor… that would indeed be an additional layer of difficulty for a cross-platform framework, and the degree to which the various cross-platform toolkits with their own rendering manage to mimic it is quite variable. But I don't even need that. I'm fine with the widgets/UIs of my applications having their own styling that, while (apart from my own special widgets) staying close enough to be understandable by everyone, doesn't exactly match the current style / "look & feel" of the platform vendors' apps.

What I meant by "native" in my last comment was simpler than that: not a web app or similar, detached via abstraction layers and bundled with a browser that interprets it, but a natively compiled app that is directly linked to the rendering GUI lib. That would be enough for me. But even for that, I find there isn't a satisfying cross-platform solution.


> but a natively compiled app that is directly linked to the rendering GUI lib.

That's Sciter. It is a standalone embeddable library that does not use a browser engine. It has its own HTML/CSS rendering engine that draws using DirectX, Vulkan (Win/Linux), and Metal (macOS).

It is used in many applications (~500 million installations) that are considered native, like Norton and other antivirus products (https://sciter.com/#customers).


I remember starting my career at a small tech startup making a graphical version of Mosaic, and being around when XUL started at Netscape as the next-generation layout engine. An amazing run that’s lasted as long as my career. The pivot to best Microsoft’s free IE by fully open sourcing, establishing a long-lived foundation, and completely commoditizing the nearly $1B investment they made in browsers was amazing and led to so many great technologies. I am humbled to have been a part of that journey and bid XUL a fond farewell. There is no data!!


XUL was one thing that prevented me from using Firefox back in the day, as my PC was slow and the whole UI just lagged compared to native Windows applications. That's why my main browser then was Opera. Funnily enough, Electron now has similar issues with application menus: they have some lag and work a bit differently (compared to native implementations) than one might expect when using keyboard navigation.


When was this? I remember that Firefox (or Phoenix, or Firebird) was very fast compared to other browsers (esp. full Mozilla) when it came out.


Early on Firefox felt fast for sure (on most platforms anyway), but at some point around 4.x or 5 I wanna say it started slowing noticeably, and it would continue on that path until Quantum or thereabouts.

On OS X, though, Firefox always felt noticeably more laggy and clunky than any other app it might be running alongside: something about the Mac implementation of XULRunner wasn’t optimal, and it showed… even if you painstakingly skinned Firefox to match OS X perfectly visually, it still felt kinda slow and clunky. That’s why Camino, a Gecko browser with a Cocoa/AppKit UI, came into existence, and it always felt a good deal more responsive than Firefox did.


Now that XUL is gone, one can hope Electron eventually follows suit.


Firefox isn't switching to a native UI. They are getting rid of a weird web-like DOM-based UI framework and slowly replacing it with actual web stuff. So their answer is "become more electron-like" not "become less electron-like"


Whatever Firefox does no longer matters; it is even worse than The Year of Desktop Linux, and I am typing this from Firefox.


There is no data. There is only XUL!


And those too weak to seek it.


i wonder if that one guy on the mozilla news server who haaaaaaated XUL ("JTK"? something like that) is happy now :D


I appreciated the author adding "What is (was!) XUL layout?". Goodbye XUL. I used you mightily but didn't even know your name. :(


Raising my hand as another dev that hacked on a desktop app using XULRunner back in 2008-2010. Docs were kinda bad but we made it work. The startup was acquired so I guess it worked well enough!


I'm confused. This seems to just refer to XUL layout and not XUL as a whole, but this entire thread is discussing it like XUL as a whole is completely gone. Am I missing something?


IIRC, Tom Tom One was an XUL app


Does Zotero use XUL?



That's a bit different. This is about the XUL box model specifically.

What Zotero 7 will continue to support for plugins is full access to the Mozilla platform internals, unlike WebExtensions, but Zotero 7 is based on Firefox 102 ESR and will be updated to 115 ESR later, so platform changes like this do apply. The reason all plugins have to be updated for Zotero 7 is the massive architectural changes in Firefox over the last few years, most notably removal of XUL overlays. This change might require some minor additional updates, but it's probably not a big deal.

(Zotero dev)


> With XUL, something like width: 100px wouldn’t quite do what it says.

The same can be said of display: flex. It also breaks the CSS box model in that sense. For example, flex properties with seemingly obvious names like flex-basis and align-items(!) can change box dimensions completely.
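To illustrate (a hypothetical minimal snippet, not from the article): even a plain width on a flex item is only a suggestion, because of the default flex-shrink: 1:

```css
/* A 250px flex row containing three items that each declare 100px. */
.row  { display: flex; width: 250px; }
.item { width: 100px; } /* 300px requested in total... */

/* ...but the default `flex-shrink: 1` squeezes each item to about
   83px so all three fit in 250px. Declaring `flex-shrink: 0` (or a
   min-width) makes the 100px hold, at the cost of overflow. */
```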


Fond memories! My first job out of uni was building interfaces for embedded devices using XUL/JavaScript/XPCOM/C++. The technologies were fun to develop in at the time, and it felt special to be somewhat of a part of such a big open project.


I looked at XUL, and gosh, so it is gone now. In fact I bought a book, something like "Building Rich Internet Applications with XUL", which I never really read or used.


> This means that (modulo a few exceptions documented below) all the Firefox UI is using regular web technology to render (mostly CSS flexbox).

Is it just me or is using the word "modulo" here (when "apart from","other than for" or "barring" would do) just completely pretentious?


Not necessarily. In France for example it’s used as an expression even by people who don’t know the math meaning. I know I use it because I sometimes read it in general news articles. It’s not common but definitely used.

The author appears to be Spanish, maybe it’s the same.


Fair enough. A cursory glance at a dictionary corroborates your point - there is clearly a colloquial usage too. I guess I'd never heard its use except in the much more narrow mathematical sense, until now.


I wasn't aware that the UI layout relied on flexbox. In my opinion, relying solely on the flex model can lead to some flaws in UI layout. I wonder when Firefox plans to switch to CSS Grid instead of CSS Flex?


Are there any flaws in the Firefox UI which is caused by flex at the moment? If not, what's the problem?


Just to clarify, I have not experienced any bugs in Firefox, nor am I stating that there are any present. However, I do believe that when it comes to creating layouts, CSS Grid is a more effective tool than Flex. While Flex is great for inline-oriented designs, CSS Grid's grid-oriented approach makes it significantly more efficient for creating complex layouts.
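For what it's worth, the two-dimensional case is where Grid pays off. A hypothetical app-shell layout (all class names made up) takes one rule in Grid but would need nested containers in a flex-only approach:

```css
.app {
  display: grid;
  grid-template-areas:
    "toolbar toolbar"
    "sidebar content";
  grid-template-columns: 200px 1fr; /* fixed sidebar, fluid content */
  grid-template-rows: auto 1fr;     /* toolbar sized to its content */
  height: 100vh;
}
.toolbar { grid-area: toolbar; }
.sidebar { grid-area: sidebar; }
.content { grid-area: content; }
```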


XUL is dead! Long live WebComponents!


BTW, we had altered Firebug to list XUL stuff as well (activated when using Firebug on Firebug). It was everywhere. The <input type="text"> was actually a couple of things with padding, etc. inside.


So the browser renders all its chrome in web css now?


It always has been. The fact that it mostly wasn’t noticed for decades should give pause to some of the criticism of web tech as a way to present UI in native apps.


It's definitely not gone unnoticed. It's the entire reason why Gecko-based native browsers like Camino[0], K-Meleon[1], and the since-turned-WebKit Epiphany[2] existed alongside Firefox. The only reason those ceased to exist is because Mozilla elected to kill Gecko embedding (which in retrospect, also set the stage for Blink and WebKit's dominance with how important embedding would become).

[0]: https://en.wikipedia.org/wiki/Camino_(web_browser) [1]: https://en.wikipedia.org/wiki/K-Meleon [2]: https://en.wikipedia.org/wiki/GNOME_Web


Yeah I chose “mostly unnoticed” very purposefully. I used Camino (even when it was Chimera!) because I wanted the native UI it offered. At the time, the vast majority of Mac software was either Cocoa, or ported to Carbon from classic Mac OS, or some Java monstrosity. The tech crowd either didn’t know or didn’t care that Mozilla/Firefox used XUL/web tech to implement its UI, and my preference for the native interface was outlandish to most of my tech peers at the time.


500 megs for the browser subprocess is also part of the insanity.


I believe the Firefox UI is mostly standard web components and CSS these days.


"This means that (modulo a few exceptions documented below) all the Firefox UI is using regular web technology to render (mostly CSS flexbox)."


tl;dr:

1. You can do most of it with HTML + CSS these days, so better use that.

2. It's not actually gone, it's a process with a big step taken



