C is generally faster than hand-written assembly, so it doesn't make sense to write software in assembler.
On macOS, desktop applications are written in Objective-C, which is C with fast message passing, and it doesn't give up much speed. Swift is the modern alternative, and it doesn't trade away speed either.
On Linux, applications are traditionally written in C with Gtk or in C++ with Qt. Both options are very performant.
On Windows, the main language was C++ for a long time, and it remains a supported language. There's a movement toward .NET, so Windows is a bit of an outlier here. But .NET is generally a very performant language: it makes some tradeoffs for safety, yet it has enough features to stay fast, and its implementation is specifically tuned for desktop applications.
The only terrible platform, with a slow language, is Android, and it's well known for its lag.
There's very little desktop software written in Java or Python, and usually those are specialized applications whose users don't really care about the experience, but rather about functionality.
So the JavaScript invasion here is unexpected and unwanted. I won't use any JavaScript desktop application if I have a choice. I don't like this technology. JavaScript and V8 were made for the browser, with its advanced sandboxing capabilities. That's fine. But on the desktop that's just not needed, and there are no other advantages. The UI is terrible and doesn't conform to any standard. Performance is not good. Memory consumption is abysmal. Energy consumption is abysmal as well.
If I'm about to buy an application for macOS, I always carefully inspect its bundle and try to determine which technologies were used. Unless it's pure Objective-C/Swift, I usually won't buy it. I hope more users would do the same.
Yeah, but Python still doesn't seem to have a great cross-platform desktop GUI, as far as I can find. I'd prefer to code entirely in Python if I could, but I haven't liked a single GUI library I've tried for Python.
Although to be fair, I still haven't really tried PyQt, and I don't like the idea of having to buy a commercial license for it.
But none of them come close to the benefit of being able to bring the ton of experience from the Web UI world over to the desktop.
Eventually every GUI toolkit ends up with a custom MVC framework, a client/server architecture, some kind of DB for persistence, its own implementation of asynchronous event and communication models, and a declarative layer to create the UI without code. In the most advanced ones, that layer separates structure from layout.
Well, guess what: this is what the Web has natively been doing forever.
Since the web is now the most popular platform, with millions of libs and tutorials on it, people just reused that. It just makes sense.
The problem is not the concept. The problem is that we should have driven this effort with a standard to sanely close the gap between the desktop and the web, so that you don't have to spawn a freaking browser-engine OS for every one of your apps.
But no, the web is the only platform with a standard. And it flourished while all the big players created closed gardens with proprietary, shitty APIs. And this is the result.
Have you not seen Jurassic Park, for god's sake? Life finds a way.
It's completely false.
The web at the beginning was a huge mess of static pages and hacked CGI scripts, with no interactivity at all beyond submitting a form to trigger a full reload of the page.
Everything was absolutely synchronous, a lot of the time the database was accessible directly from the public interface, and some pages actually had the connection string right in the HTML for everyone to see.
I'm really not sure in which alternate reality you have seen web apps doing all that forever.
Yep, completely agree with everything you said. I'd love for there to be a standard on the desktop side, but until then, I'll keep using a hodge-podge of technologies depending on the project, I guess.
They should have adopted Qt; it makes it easy to write native C++ code that compiles on different platforms, so you get the performance and RAM usage of C++ but still most of the benefits that web coding offers, without so many drawbacks. (It's also quite nice to work with IMO.)
It's too bad it wasn't more universally adopted by any of the 3 major platforms (including Linux, where the all-C Gtk+ has become the standard for the most part). Instead, it seems to have found its greatest success in, ironically, small embedded devices. Devices like these simply cannot take the performance hit of something like Electron.
The last time I tried Qt it was a huge pain compared to WPF or other solutions, admittedly something like 5-6 years ago.
And it was proprietary and it needed a licence.
With all my good will, I find it quite difficult to believe that PyQt is now the silver bullet for writing all UIs.
For sure it's not for me given that I find python a pretty average language with the huge handicap of duck typing (and before someone starts, yes, I'm aware of the 'type annotations')
> The last time I tried Qt it was a huge pain compared to WPF or other solutions, admittedly something like 5-6 years ago.
Shrug, I found it much nicer than anything else I'd used, but I've never used WPF (which is single-platform in any case).
> And it was proprietary and it needed a licence.
Neither Qt nor PyQt is proprietary in the usual sense of the word (nor were they 5-6 years ago). If you're using a non-standard definition, it would probably be more productive to use a different word.
> For sure it's not for me given that I find python a pretty average language with the huge handicap of duck typing (and before someone starts, yes, I'm aware of the 'type annotations')
I'm a huge fan of type systems. I wish I could find a UI framework that's anywhere near as nice as PyQt for an ML-family language.
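For anyone upthread who hasn't actually tried it, a minimal PyQt5 window is only a handful of lines. This is just a sketch, assuming PyQt5 is installed; the widget names come from Qt itself, nothing project-specific:

    import sys
    from PyQt5.QtWidgets import QApplication, QLabel, QVBoxLayout, QWidget

    # One QApplication per process; it owns the event loop.
    app = QApplication(sys.argv)

    window = QWidget()
    window.setWindowTitle("Hello PyQt")
    layout = QVBoxLayout(window)
    layout.addWidget(QLabel("A plain Qt widget, driven from Python"))
    window.show()

    # exec_() runs the event loop until the window is closed.
    sys.exit(app.exec_())

As for the licence concern above: PyQt itself is GPL or commercial, but PySide offers an LGPL-licensed binding with a very similar API.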
I can't help but think there's a different flavor to this than with higher-level languages. This is adding a platform on top of another platform. The same objections existed years ago with Java when Swing was released. Java is a cross-platform high-level language but Swing is basically an entirely new platform.
It's this platform on top of a platform that is objectionable from a performance, memory, storage, and integration perspective.
We see the same thing with containers. Docker, flatpak, snap... It's just the right time for it.
Languages have evolved to change the way we handle constraints like memory, speed, readability, expressivity, etc.
We are arriving at the peak of what languages can bring to the table. Sure, we can improve things here and there, but the huge challenges now are integration, packaging, distribution, updates, communications, multi-tier architectures and all that.
So we now tweak platforms to help us with that.
But because we didn't see that coming, it's not done in any structured way. It's done exactly the way we've done everything since the beginning of computing: by stitching stuff together and then hitting it hard with a hammer until the job is done.
This is not new. IT is a joke of an engineering field. We hack everything, don't think about the future, and then end up with the status quo. It has always been like that.
I agree. Containers should be unnecessary -- all that they could provide could be done at the process level with an operating system designed to isolate computing resources appropriately. But operating systems were not historically designed for that so another (somewhat ridiculous) layer is added on top.
Actually, all containers do is use the isolation mechanisms already designed into the OS, the way LXC does on Linux, to form containers. Containers are not a platform on top of Linux; they are a wrapper around different isolation tools built into the kernel.
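To make that concrete, here is a rough sketch of the point: the same namespace machinery that container runtimes build on is exposed by util-linux's unshare, so a "container" can be conjured up with nothing but a stock kernel. This assumes a Linux box with unprivileged user namespaces enabled; the flags are standard util-linux ones, nothing Docker-specific:

    import subprocess

    # Inside a fresh user + PID + mount namespace, `ps -ef` sees only this
    # little process tree instead of the whole host -- the same isolation
    # trick Docker, LXC and friends are wrappers around.
    subprocess.run(
        ["unshare", "--user", "--map-root-user", "--pid", "--fork",
         "--mount-proc", "ps", "-ef"],
        check=True,
    )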
IT is a joke of an engineering field only when it's practiced by non-engineers.
If a pull request doesn't follow some principles agreed upon a priori, it doesn't get merged.
If there are people who like to play "IT cowboy", just hacking stuff together without any process whatsoever or unit tests that certify the behaviour of what they have written, jeopardising the entire team's efforts, that is not a failure of software engineering; it's a failure of that specific team.
And please bear in mind that I worked in such toxic environments, but I never thought for a moment that software engineering and software architecture are jokes.
The joke was the team/organisation I was in at the time.
>Computing for ever. We use C to avoid writing assembly. Use Java to avoid writing C. Use Python to avoid writing Java.
Yeah, but until Electron and the like, we seldom shipped desktop apps in anything other than C, C++, Delphi etc even after all those decades. Which are all as close to the metal as can be. And in fact C/C++ can be as fast as, or even faster than, hand-rolled assembly most of the time (with few exceptions), so the whole premise is moot.
The few Java desktop apps that were around, people used to hate as memory hogs.
But can you make a website with the same knowledge? Can you make it portable to other OSes? Can you reuse 20 years of knowledge, resources and libs? Can you hire 10 experts tomorrow to help you with it?
The quality of the tech is NOT the driver of success here. You are missing the point.
People used crashy, buggy, slow software for years. Photoshop and Office lost your data on a regular basis in the 2000s. The Windows BSOD was a common occurrence then. We didn't see a massive exodus to Mac products because of that. The only reason people started to go crazy for Apple was after the iPod came out. And even then, it was still a small part of the market.
You can see every day that people favor cheapness, easiness and convenience over quality. You would not have so much junk food otherwise.
> But can you make a website with the same knowledge?
One of my first commercial projects was a web-content management system written in Objective-C. Customers included Siemens and the German Bundestag.
Another couple of projects were written in WebObjects. If I wanted to, I could use Cappuccino, but I am not a big fan of web/client apps, so I don't.
> Can you make it portable to other OSes?
This product ran on: Solaris, AIX, NeXTStep, Linux, OS X. I think we also had a Windows port.
> Can you reuse 20 years of knowledge, resources and libs?
In the sense you meant it: yes. Except it's more like 30 years. However, programming skills are (or should be) transportable. With virtually no previous experience, I became lead/architect on a Java project, which succeeded beyond anyone's imagination.
> Can you hire 10 experts tomorrow to help you with it?
Is this a serious question?
>One of my first commercial projects was a web-content management system written in Objective-C
You certainly didn't use any of your Cocoa widgets for the UI there. It was HTML + CSS.
> This product ran on: Solaris, AIX, NeXTStep, Linux, OS X. I think we also had a Windows port.
Yeah, GNUstep for the GUI on Windows... This is what you think could be an argument for Electron users?
> In the sense you meant it: yes. Except it's more like 30 years.
Again, bad faith. The world has way, way more code, snippets, tutorials and docs for HTML + CSS + JS than for any tech based on Objective-C.
Programming knowledge is transferable, but knowledge of the ecosystem is not, and that is always the most time-consuming part.
> Is this a serious question?
Oh yes, it is. Because, you see, we are living in an era where it's hard to find any good programmer at all, for anything. They are all taken, and they are very expensive.
So basically, with a tech limited to one ecosystem, finding them will be even harder, and even more expensive.
The simple fact that you are pretending it's no big deal (while any company will tell you otherwise, so much so that the GAFAs spend millions just on their recruitment process) illustrates how much of a troll you are.
It most certainly is not. You just don't know what you're talking about and keep making up new stuff when confronted with actual facts that contradict your fervently held beliefs.
I'm curious how you get Smalltalk-like productivity in Objective-C. I thought Smalltalk's productivity comes from its live programming environment?
You bet? I have 3 programs open right now that use Python for their GUI: my RSS reader, my torrent downloader and Dropbox. And I have many more installed on my machine.
But the thing is, even when I write something for myself, I first write a command-line app, then a web service. Never a GUI, because it's such a pain.
Of course. Electron is just a layer around the compiled Chrome engine as well. If you want anything that displays a rapidly changing matrix of pixels, you eventually need low-level performance.
Of course, but in your case, the C/C++ libraries you're calling into aren't doing any heavy lifting; they're just making calls to an X server (or equivalent) or perhaps to a graphics card. There's no reason GTK (or the GUI portion of Qt) couldn't be implemented in Python; it's just a huge undertaking, and it was started in C (or C++, as the case may be).
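And in fact, driving that C layer from a higher-level language is exactly what the existing Python bindings already do; PyGObject is just a thin shim over the same GTK calls. A minimal sketch, assuming PyGObject and GTK 3 are installed:

    import gi
    gi.require_version("Gtk", "3.0")
    from gi.repository import Gtk

    # Every call here passes straight through the introspection bindings
    # into the C GTK library underneath -- Python is only steering.
    win = Gtk.Window(title="GTK from Python")
    win.connect("destroy", Gtk.main_quit)
    win.add(Gtk.Label(label="A thin wrapper over the C toolkit"))
    win.show_all()
    Gtk.main()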
>And in fact C/C++ can be as fast as, or even faster than, hand-rolled assembly most of the time (with few exceptions)
This is generally true, but to be fair the reason is that we design CPUs differently these days. Modern CPUs use instruction sets that are specifically designed to work well with compilers, and aren't meant to be programmed in hand-coded assembly except for a few critical bits deep within OS code. Older CPUs weren't like this.
It still might be possible to write hand-rolled assembly that beats modern compilers, but you probably need to have seriously super-human mental abilities to do it.
> the reason is that we design CPUs differently these days. Modern CPUs use instruction sets that are specifically designed to work well with compilers
You got the causality wrong. Assembly programmer-friendly CPUs died because CPUs which weren't as friendly were faster and cheaper; those same CPUs were instead more amenable as compiler targets.
Hey, I used to use it! :P I came from a BASIC background. VB6 was supposedly good for rapid prototyping of GUI apps (esp. the CRUD variety). I found it would boot up in 1 second, deploy a new project in 1 second, and load a new app in 1 second. It was also safer, so no constant blue screens over common functionality. It could also wrap foreign code written in less safe languages, while I could still write mine in an industrial BASIC. One could also export the GUI to code in a different language.
Became one of my favorite toys. I'd still use it for GUI prototyping if it was FOSS and kept getting extended. I found even lay people could learn it well enough to get stuff done. Long after, I learned what horrible things lay people did with it. Yet, they got work done and got paid without the IT budget and staff they would've preferred. (shrugs)
>C# has been the default way to write Windows apps since the early 2000s.
No, it really hasn't. It was just the way Microsoft proposed that businesses write their bloated internal enterprise apps -- the kind of thing they used to use VB for.
Those are not the same as desktop apps -- and no desktop apps, or very, very few, ever turned to C#. Not even MS's own apps, like Office, and surely nothing like Adobe's or countless others.
>It is no more "closer to the metal" than JavaScript.
Actually it very much is: it is statically typed, it has scalar types and contiguous memory allocation that allow for much better speeds (hence the effort to add some of those things to JavaScript with asm.js and the like), and it even has AOT compilation.
Besides, it's not JS itself that's the problem (though it took millions and top notch teams to make it fast): it's the web stack on top of it. C# just runs on a thin CLR VM layer -- and the graphics are native.
I mean, if you're going to say Windows Forms and WPF apps are not "desktop apps" then you're going to have to do a lot more than just declare that they aren't.
> Actually it very much is: it is statically typed, it has scalar types and contiguous memory allocation that allow for much better speeds (hence the effort to add some of those things to JavaScript with asm.js and the like), and it even has AOT compilation.
You're just listing ways that they are different. They both run in a virtual machine that abstracts away the actual machine. You know, the metal in the phrase "close to the metal."
>I mean, if you're going to say Windows Forms and WPF apps are not "desktop apps" then you're going to have to do a lot more than just declare that they aren't.
Windows Forms is a wrapper on top of the MS Win32 API. And WPF is also based on wrapped native widgets (with some extended with managed code).
In any case, C# apps are not much represented among Windows desktop apps, most of which are written in C++ or similar, and surely all the successful ones are. Can you name your successful C# desktop apps? (Not in-house enterprise apps and no developer tools, please. There, where the users have no choice, even Java does well.) I'll name the successful C++/Delphi/native/etc. ones and we can compare our lists.
>You're just listing ways that they are different. They both run in a virtual machine that abstracts away the actual machine. You know, the metal in the phrase "close to the metal."
A call to a native drawing lib that doesn't pass through 10 layers of abstractions and bizarro architectures is as good as a direct native call. Especially from something like C# that runs circles around JS performance.
But even so, few consider JS to be what makes e.g. Electron slow.
As far as I know, there are plenty of XNA games written in C# running on both PC and Xbox.
And games are pretty much the worst application you could use C# for because, you know, latency.
I don't see any real blocker to having complex C# apps on the desktop, apart maybe from the quite shitty ClickOnce and the continuous need for an upgraded .NET Framework to use the new features.
But now, for UWP apps, the default is C#; they can be installed directly from the store, and with Roslyn you basically only need to target .NET 4.5 to have all the features of the latest version of the language.
And this is a huge win that admittedly JavaScript already had because of transpiling.
If I had to write a commercially distributed desktop application nowadays, I would for sure use C# or F#, not JavaScript.
> Yeah, but until Electron and the like, we seldom shipped desktop apps in anything other than C, C++, Delphi etc even after all those decades.
So things aren't any different than before. We've just replaced non-C/C++ abstractions that were written by the platform-owner company with non-C/C++ abstractions that are written by open source projects.
This seems pretty much in line with the general industry trend towards the adoption of open-source software.
>So things aren't any different than before. We've just replaced non-C/C++ abstractions that were written by the platform-owner company with non-C/C++ abstractions that are written by open source projects.
By "large", you mean some of the UI using WPF, the plugin system supporting C#, and some of NuGet? Not that those are small projects, but considering what is inside Visual Studio, they are hardly "large portions" of Visual Studio.
And because the IDE support (refactoring, etc.), compile-time error checking and ease of use more than compensate for Java being a little bit verbose.
Yeah, and they used C because the compiler could optimize stuff the JVM couldn't, but now it can. And now Python gets type hints, so you can have the IDE tooling you have with Java, e.g. in PyCharm. It's the circle of life.
Not quite correct about type hints. They are only in Python 3.
Everyone who adopted Python 2 for a sizeable codebase is likely stuck there forever, with zero annotations and none of the new tools available, and those will never be backported.
But let's be fair, type-related tooling in Python is not close to what you have in Java yet. It's just that eventually, everything comes around. Java got faster. C++ got easier. Python got... toolier? Etc.
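For what it's worth, this is the kind of annotation that gives mypy or PyCharm something to check -- Python 3 only, as noted above, and the names here are made up purely for the example:

    from typing import Dict, Optional

    def find_user(users: Dict[str, int], name: str) -> Optional[int]:
        """Return the user's id, or None if the name is unknown."""
        return users.get(name)

    find_user({"alice": 1}, "alice")   # fine
    find_user({"alice": 1}, 42)        # a checker flags the int before this ever runs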
Python broke all backward compatibility and put all existing sizeable software in a miserable, deprecated state with the Python 3 break.
I don't recall C++ getting easier. The few tools and IDEs still fail at decent refactoring and code completion. The C++11 movement adds a few more or less useful things, piling on top of the vast amount of already existing complexity.
C++11 makes a LOT of things much, much easier. Yes, it does pile on top of existing complexity, because they're loath to eliminate any backwards compatibility, but the nice thing is that you don't have to use older features or ways of doing things. In fact, if you look at multiple serious C++ codebases, it'll almost look like they're different languages, as every project basically chooses a subset of C++ it accepts. Realtime embedded code doesn't look anything like desktop application code, for instance, but they're both technically C++.
And as far as I remember, even Google is only supporting Python 2.
Their Python-to-Go transpiler doesn't support Python 3, for example (unless something changed in the last few months and I missed it).
And because packaging everything in a jar is easier than pulling 1e3 dependencies for every deployment. Not to mention dependencies that also require a C/C++ compiler, Boost or other native libraries.
Yeah, and some use C because Java is too slow. The point is, there is nothing new here; the history of computing is repeating itself. It's just that now we have better toys, a bigger market, and the stakes are higher.
Rust doesn't bring much in that respect. You could use C++ for performant high-level abstractions for many years before Rust. Rust brings memory safety, and that's huge. But it's nothing like Java; it's much harder.
Actually, I think it's very similar to Java in terms of what is being offered and at what layer of thought.
It does have a steep learning curve, but it's worth it. The number of concurrency bugs alone that I could have avoided if I had been able to use Rust years ago is sad to think about. Java has great concurrency tools, but doesn't do anything to make sure that you're not shooting yourself in the foot.
Java also has JavaFX, which comes with an embedded Webkit browser. I can create my UI with React, or any other HTML/CSS/JS library, and make it interact with code written in Java, Scala, Groovy, Clojure, Kotlin, Ceylon, Frege, etc. very easily. I think this provides all the benefits of Electron, but is even more flexible and powerful.
With JavaFX, do you need to run N programs on N copies of Chrome, or can you use a single VM like all JVM apps do? Because that's the problem with Electron mentioned in the article, and that's exactly one place where JVM languages are better.
This is basically a terrible argument in and of itself; you would do better to flesh out why yourself instead of expecting everyone to conclude that you are self-evidently correct.
> code written in Java, Scala, Groovy, Clojure, Kotlin, Ceylon, Frege, etc
Your list of 7 JVM languages (both here and in your earlier comment on this submission) seems to be from most widely used to least. Yet in your HN comment from 2 days ago at https://news.ycombinator.com/item?id=14068664 you ordered that list differently, i.e. "Java, Scala, Clojure, Groovy, Kotlin, Ceylon, Frege, etc". Have you changed your mind about the relative adoption of Clojure and Apache Groovy in the last two days?
Not really. However, there is a recent survey that shows Groovy is the second most popular language on the JVM, behind Java. Myself, I use Scala, and I would like to learn Frege.
Groovy is a quite lovely dynamic language and it's actually the best solution for BDD using Spock.
Sadly, I still haven't found anything comparable for BDD in any of the other languages I use.
> Groovy is a quite lovely dynamic language and it's actually the best solution for BDD using Spock
There's something wrong when a testing framework hacks into the language parser to make statement labels carry special meanings the way function names do, and overloads the `|` operator in expressions so data will appear as a table in the source code. "Lovely" isn't the word for that sort of thing.