Hacker News
Web vs. Desktop Applications: The Neverending Debate (return42.blogspot.com)
17 points by tobiassvn on Aug 16, 2009 | hide | past | favorite | 28 comments



Previously I was quite heavily invested in the "The Desktop App Will Never Die" camp. Based on my experience over the last month of selling essentially the same application on the desktop and on the web... I think I owe PG an apology for doubting him.

http://news.ycombinator.com/item?id=503959

There is still a space for users of desktop apps. I know many of my users love the permanence/physicality/"realness" of it. (So much so, in fact, that a third of the ones who use it online pay extra to get a backup of the desktop application on a CD.)

I'm just having a hard time making the numbers work as a developer of them.

I'm in a somewhat unique position where I can split-test very similar people into the desktop or online versions of my software. I get more clicks on ads for the online version. It is easier to get people to sign up for the online version than it is to get them to download the desktop one. It is easier to contact them about the online version. It is even fractionally easier to get them to pay for it (!).

I'm somewhat obsessive-compulsive about tracking support emails. They're running 30:2 in favor of the desktop app since I launched the online version. Sales run 50-50... and that number is starting to heavily favor the online version as I see the higher ROI it gets and promote accordingly.
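For illustration only (none of this is the commenter's actual code), a deterministic split-test assignment like the one described might hash a stable visitor identifier and bucket each visitor into the desktop or online variant, so the same person always sees the same version:

```javascript
// Hypothetical sketch of deterministic A/B bucketing. The hash function
// and the 50/50 split are assumptions, purely for illustration.
function bucket(visitorId) {
  var hash = 0;
  for (var i = 0; i < visitorId.length; i++) {
    // Simple polynomial rolling hash, kept small to avoid overflow.
    hash = (hash * 31 + visitorId.charCodeAt(i)) % 1000;
  }
  // Even hashes see the desktop pitch, odd hashes the online one.
  return hash % 2 === 0 ? "desktop" : "online";
}

// The same visitor always lands in the same bucket:
// bucket("abc") → "desktop", on every call.
```

Because the assignment is a pure function of the visitor ID, conversion numbers for the two variants can be compared without per-visitor state on the server.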

I was a skeptic. Reality is making me a believer.


There will always be some desktop applications for which it doesn't make a ton of sense (either logically or financially) to port to the web.

For most other types of applications, we already have web equivalents (coyly termed "the lite version" in some cases).

Javascript may be the most awful language to write code in. But, if there are users (and a way to make money), you'll always find developers willing to write code in JS.


Javascript is not the most awful, it is the most misunderstood :)


Javascript is actually quite a good language: it has a dynamic object model and real closures. It has its deficiencies, of course (its scoping rules, for instance).

It's really bad if you prefer statically typed languages, though, because on the client side of the web you have no other choice for the moment. In the future we will probably see language implementations that compile to JS become more mature and widespread.
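As a quick sketch of both points made above - real closures, and the scoping deficiency - consider:

```javascript
// Real closures: the inner function captures `count` and keeps it alive.
function makeCounter() {
  var count = 0;
  return function () {
    count += 1;
    return count;
  };
}

var next = makeCounter();
// next() → 1, then 2: the state survives between calls.

// The classic scoping deficiency: `var` is function-scoped, not
// block-scoped, so every callback below closes over the *same* `i`.
function collect() {
  var fns = [];
  for (var i = 0; i < 3; i++) {
    fns.push(function () { return i; });
  }
  return fns.map(function (f) { return f(); }); // [3, 3, 3], not [0, 1, 2]
}
```

The second function is exactly the kind of surprise the comment alludes to: the language has genuine closures, but `var`'s scoping rules make them easy to misuse.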


GWT already offers static typing.

I recently added a runtime type checker (for method args/return values) to Objective-J, and intend to add optional static typing at some point.
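The Objective-J implementation isn't shown in the thread, but a minimal JavaScript analogue of such a runtime type checker (illustrative names, not the actual code) could wrap a function and verify argument and return types on every call:

```javascript
// Hypothetical sketch: `typed` wraps `fn` so that each argument's typeof
// must match argTypes, and the return value's typeof must match returnType.
function typed(argTypes, returnType, fn) {
  return function () {
    for (var i = 0; i < argTypes.length; i++) {
      if (typeof arguments[i] !== argTypes[i]) {
        throw new TypeError("argument " + i + " expected " + argTypes[i]);
      }
    }
    var result = fn.apply(this, arguments);
    if (typeof result !== returnType) {
      throw new TypeError("return value expected " + returnType);
    }
    return result;
  };
}

var add = typed(["number", "number"], "number", function (a, b) {
  return a + b;
});
// add(1, 2) → 3; add(1, "2") throws TypeError.
```

Checks like this catch type errors at call time rather than at compile time, which is the trade-off the commenter's optional static typing would later remove.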


It is very, very interesting to see someone in a position to essentially A/B test sales of the same app online and on the desktop. Thanks for the data.

I am a bit less surprised now that I've checked the site and noticed that you don't seem to have a subscription model, but rather a one-time fee. Am I missing something, and if not, are you worried about the lifetime cost of a customer being higher than the one-time sale revenue?

I don't usually see web apps sold for a fixed fee, so I was curious.


I hadn't read the original diatribe, but this article finally led me to it:

The reason most people want to program for the web is that they're not smart enough to do anything else. They don't understand compilers, concurrency, 3D or class inheritance. They haven't got a clue why I'd use an interface or an abstract class. They don't understand: virtual methods, pointers, references, garbage collection, finalizers, pass-by-reference vs. pass-by-value, virtual C++ destructors, or the differences between C# structs and classes. They also know nothing about process. Waterfall? Spiral? Agile? Forget it. They've never seen a requirements document, they've never written a design document, they've never drawn a UML diagram, and they haven't even heard of a sequence diagram.

Now I know I can dismiss this "Michael" fellow as an imbecile. Whatever point he may have, will no doubt be made more cogently some other day, if it's worthy of debate. I'll think about it then.


Well put. And the UML at the end is the cherry on top. But the striking thing about the post, which I had to go read [~], is how insular it is. He's saying obviously false things, not out of stupidity per se, but out of a false worldview. He's embedded in an environment that confirms this worldview back to him, so it doesn't seem false. (There's a dim awareness of reality in one comment, though: "where are all those awesome WPF applications I’ve been waiting for?")

Viewed this way, the rant is fascinating. It's the Microsoft equivalent of an IBM mainframe guy in the 1980s ranting against PCs. Of course Microsoft are doing things on the web today, but IBM was a leading PC vendor then too.

The post is the best thing I've seen shedding light on MS culture vis-à-vis the web. Culture is a hard thing to change. If the post is representative, MS are in big trouble. Perhaps they will reinvent themselves. They did it in the past, as we know. But they had quite different leadership then.

[~] http://michaelbraude.blogspot.com/2009/05/why-ill-never-be-w...


I had to learn about the development process, requirements documents, design documents and UML for a software engineering class I took this year, and the project in which we had to apply everything we learned was web-based and written mostly in javascript.


As a C# developer the irony for me was that just about everything he mentions is available in a web environment via Silverlight and ASP.NET. So it's odd that this diatribe would come from someone who works for Microsoft.


The author underestimates the value of APIs for providing cloud services, and overestimates the value of client-server architecture for driving the user interface.

If anything, the iPhone demonstrates the value of native software, safely run within a jail, delivered and billed immediately, connecting to broader network-based services to provide extensive functionality.

It's short-sighted to approach the future of web applications with such hubris. The web is currently a horrendous application platform, and there's clearly value in client-side state as well as rapid, safe deployment and 'cloud' state.

The future is very likely a merger of web, desktop, and api, but not necessarily using the JavaScript+HTML+CSS or Flash that web developers have come to expect. Perhaps we'll see application architectures like 280 North's Cappuccino take hold, or perhaps not.

Either way, I seriously doubt that today's web application architecture is the final evolution of the client-server model -- we've been engaged in this client/server evolutionary dance for nearly 50 years.


Agreed. The iPhone is a good demonstration of combining a powerful development environment with always on connectivity (one of the more powerful reasons for being on the web in the first place).

It's also a shining example of many of the things that are wrong with downloadable software, though some of that is this particular incarnation (the app store) more than the whole idea.

There's no argument to be made that all computation should be done on the server and just rendered on the client. That model is on the way out for web applications (not to be confused with web sites). I happen to think Cappuccino is going in a great direction ;-)


The difference should not exist in theory. Both are programs running inside a computer, with the difference that the web application gets a crippled version of the full runtime environment that the operating system could really provide, for one reason: compatibility.

Imagine something like POSIX but much more extensive, covering APIs for GUIs, database access and so forth (basically what the Java VM provides). The "Application Browser" could then just be a URL of the form app://foobar.com/email.app that runs as a native application. It could still be a server-side application where all your data is saved server-side, and of course the environment could be smart enough to automatically download and install a new version of the program when available (the local copy would really just be a cache).

The web is going to offer all this in the end, but currently it's ridiculously limited compared to a real environment. You can't do arbitrary networking, for instance: things like opening a socket and sending/receiving data using a non-HTTP protocol. In the end the web will provide all the things that are now trivially done on the desktop side, but my feeling is that this was the wrong path, since everything is already here: the technology, and decades of desktop applications capable of doing very smart things.

We are reinventing the wheel using the wrong foundation (HTML+CSS for GUIs, GET requests for every kind of networking, ...).
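To make the networking restriction concrete: a desktop app can write any protocol straight to a TCP socket, while a 2009-era browser app has to smuggle its payload through HTTP. A hypothetical sketch (not from the thread; names are illustrative) of tunneling an arbitrary message over a GET request:

```javascript
// Assumed example: the custom payload has to be smuggled into a URL,
// producing a full HTTP request line instead of raw protocol bytes.
function tunnelOverGet(host, message) {
  return "GET /tunnel?payload=" + encodeURIComponent(message) +
         " HTTP/1.1\r\nHost: " + host + "\r\n\r\n";
}

// A desktop app could instead write `message` directly to a TCP socket,
// e.g. in server-side JS: require('net').connect(port, host).write(message);
```

Every non-HTTP protocol the browser app wants to speak needs a wrapper like this (plus a server-side unwrapper), which is exactly the overhead the comment is objecting to.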


This is a stupid debate at this point.

For most applications the line has become darn near invisible. Think about it: in the abstract, what really is the difference between a desktop app running off a database and an Adobe AIR app running off a web service? A seamless install process on the web. A couple of extra APIs on the desktop, perhaps. But overall they're exactly the same.

Moreover, the only reason for any difference at all is that the platform manufacturers haven’t caught up yet. There’s no reason, for example, that an AIR app couldn’t do everything a desktop app could (access hardware devices, run in the background, etc…). Adobe just hasn’t put it all in yet.

So at this point desktop apps are just a specialized programming environment for those rare situations (utilities, games, hardware-centric programs) where web apps don't work. Otherwise there's not much difference. So developers should do what they've always done: evaluate their requirements and pick the tool that is best for the job.


It's not a war, and they will live side-by-side.

Desktop applications will still be necessary for the foreseeable future. Developers will want to do what the web is bad at - whether they need speed, heavy resource usage, high complexity, access to local resources, or background operation. I'm sure there are other examples of what the web is bad at. These problems all have no solutions in sight.

Web applications are better for most other situations, since they're cross-platform, instant-on, simple to deploy, available from many places, and allow for seamless integration of the social aspect, which is often compelling for end-users.


Disclaimer: the linked post was tl;dr. I only read the first quote and a paragraph past that.

The debates regarding web vs. desktop are all the same: the desktop has more power, concurrency, 3D, etc. The web has the advantage of working anywhere with a browser, persisting data in the cloud so any computer can pick up where you left off, etc. It's a waste of breath to argue these points.

What I think is more important is that I believe computers ARE headed toward a more web-oriented world. But to get there, some things do need to change. I think Google has the right idea with Google Gears, and it will be interesting to see how Chrome OS plays out.

But web developer or desktop developer, I wouldn't feel threatened either way, since the concepts of desktop development aren't too far off from web development. Both of course have their differences that must be learned, but if a software engineer understands the basic concepts of OOP, garbage collection (yes, it matters for the web too), memory usage, etc., then he/she will be fine in both camps.

Dismissing the web as a place where developers flock because they are revelling in their own mediocrity is a poor observation and a defensive argument by the person quoted in Tobias's article.


The web can never replace the power of desktop applications, and desktop applications can never beat the simplicity of web applications (both in their current forms). We are currently blurring the lines between desktop and web, but in the end the result will probably resemble desktop applications more. This is because current web applications run in the browser only because of the limitations of current web technologies. If we could easily make web applications run like native desktop applications (with no installation necessary), we probably would.

Chrome is making that leap to blur the lines by becoming a web-oriented OS. In the future we'll probably see more use of web technologies: applications that run remotely, coupled with easy (but more powerful) UI development on the client side that runs natively on the user's computer. So we'll have a mix of the best parts of desktop applications - control and power - and the best of web apps - easy UI development and interconnectedness.


The product that I really think blurs the line right now is Mozilla Prism (http://labs.mozilla.com/2007/10/prism/).

Sure, "we" know that a Prism "app" is really just a web app running inside of a stripped-down copy of Firefox, but the average person (who makes the market) doesn't know that. I've set up some non-technical friends with a copy of Firefox, as well as three or four Prism "apps" - Gmail, Google reader, NYT.com, etc. They don't know, nor do they care, that they're using web apps: they interact with them as desktop apps, which is what matters to them (despite the fact that I've explained to them what's going on).

The really interesting thing is that they distinguish between the Gmail they get by typing the URL into Firefox and the Gmail they get by launching the Prism "app". Despite the interfaces being 100% identical, they still think of the Firefox-based one as the "web version", in the same way that Outlook users see OWA as the "web version" of normal Outlook - it doesn't matter that they're the exact same thing.

As for web vs. desktop, I think - like most things - it's a case where different tools have different strong points. Some apps will naturally work better in a web-based environment, but some will stay desktop-based.

Seeing as I have as little idea about the future as everyone else does, I'll offer two predictions about the two sides coming together:

In the shorter-term, we will see more desktop apps accessing web services behind the scenes. The main application will still be desktop-based, but some resources (or more likely, improvements on native resources) will come from web services. The big example of this, currently, is Mathematica's cloud computing initiative. To the user, Mathematica is Mathematica - it's on a desktop client, and it's as responsive as ever. But you can configure this desktop client to (silently) spin up an EC2 instance with the Mathematica engine pre-loaded onto it, and route complex calculations to that more-powerful instance, as opposed to computing it locally. To the user, there's no visible difference between locally-computed answers and remote ones; but the desktop app has chosen which source to use based on the calculation.
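The routing idea described above can be sketched generically (hypothetical names and threshold; this is not Mathematica's actual mechanism): the client picks a local or remote compute path per task, and the caller never sees which one was used.

```javascript
// Hypothetical sketch: route each task to local or remote computation
// based on an estimated cost. All names and the threshold are illustrative.
function makeRouter(computeLocally, computeRemotely, costThreshold) {
  return function (task) {
    if (task.estimatedCost > costThreshold) {
      return computeRemotely(task); // e.g. POST the task to a cloud instance
    }
    return computeLocally(task);
  };
}

// Usage: both paths produce the same shape of answer, so the caller
// cannot tell (and need not care) where the work actually ran.
var route = makeRouter(
  function (t) { return { source: "local",  value: t.input * 2 }; },
  function (t) { return { source: "remote", value: t.input * 2 }; },
  100
);
// route({ input: 3, estimatedCost: 5 }).source   → "local"
// route({ input: 3, estimatedCost: 500 }).source → "remote"
```

The key design point is that the routing decision lives entirely in the client, which is what keeps the experience "desktop-like" while quietly leaning on the cloud.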

In the longer-term, the distinction between desktop apps and web apps will disappear. What makes something a web app? Currently, it's that it's written in Javascript that's downloaded and executed in a browser context. What about when browsers have more powerful languages built-in (like Google NaCl promises), and can access the hardware more directly? What about when your OS comes with a built-in ability to download and run these programs, without explicitly invoking a browser - your browser is conceptually like a VM for Javascript, so when this VM is integrated directly into the OS (like Java), what is it?

Let's say I've got a copy of Gmail that uses Gears for persistence, some NaCl to offload some of the processing, and is accessed via a Prism instance - is this a web app or a desktop app?


A web app is an application that requires at least a browser to run. Why? Because the web as we traditionally know it is accessed through a browser and any application that still requires a browser to run cannot be considered a desktop app.

As for Prism, it still doesn't actually make the app a desktop application, because it still contains the environment designed to run web apps and just simulates a desktop app. You're still using it to run apps designed for the browser.

The blurring of the lines will come as web apps no longer need a browser to run. So when the day comes that I can start up my OS, type gmail.com, and it runs the Gmail app, the distinction between web and desktop apps will truly have vanished. But until then, anything that still requires a browser, or browser technology, is still in the transition stage.


How do you define a browser?

If you strip away the visual chrome, a browser is simply a piece of software that interprets a certain set of languages and displays the result, not unlike the JVM or the CLR.

As for your example, I can do that now - when I type gmail.com into the Vista Run dialog, my browser opens up and runs the Gmail app. How is this different from typing in the name of a JAR file and having it run, aside from a different program being run to interpret the code? Also, what about the tight IE-Explorer integration in XP, where I could open a normal Explorer window, type in gmail.com in the location bar, and then Gmail would open in that window (and have the UI change to IE's chrome)?


Because Internet Explorer does not allow apps to access low-level functions. It still resembles its original purpose: serving static pages.

Web apps should break out of the prison that is the browser.


Here is a good/bad list I made of the attributes of desktop software:

Good:

-Drag and drop eg: Easier to attach a photo to an email

-OS integration eg: iPhoto => set as desktop, Excel => uses outlook contacts

-Access to hardware eg: GPU, usb, accelerometers, camera

-GUI responsiveness

-Fully customizable flow eg: Not forced into the "back" button paradigm.

-Control over version eg: If there is a bug, you can wait to upgrade.

-Control over data eg: I can't get my FB contacts

-No need to upload data to another app to edit it. eg: photos etc

-SPEED

-Works w/o connection

-Keyboard shortcuts

-Richness of GUI

-file assoc - mailto link etc. You determine what app uses what extension.

Bad:

-Installer eg: no "insta-play"

-Spyware

-Manual updates

-No HUGE data sets eg: youtube / wikipedia

-Uninstaller eg: doesn't uninstall potentially corrupt prefs

-expensive for developers eg: No knowledge of user's environment.

-Not cross-platform

-No Data sync across machines

Now, before some wise guy says that all n things can be solved w/ n different plugins, let me just say this list is for the general case. Since most people don't have those plugins it's silly to build a business around them. Unless of course you have some way of spreading them, which you usually don't.


It's about the sweet-spot. A professional photographer uses Photoshop in a manner that doesn't work well as a web app. However, a party goer takes snaps and posts them to their web space of choice. Same core concept, different level of use and thus different environment. The platform choice is based upon the target audience. There's plenty of room for both approaches.

A similar dichotomy exists in most spaces: casual word processing works well over the web, but I'd rather write a thesis with a desktop word processor. Ditto for IDEs, CAD, NLEs, spreadsheets, etc.


I recently worked on a web-based product that installed a server and a database on the user's machine. All data was stored locally for privacy and locality of CPU resources, but could be backed up in the cloud and shared with whomever the user chose. Given this kind of setup, the distinction between web and desktop applications completely disappears.

There are obvious scaling and security issues with this model in the general case (do you want 200 servers and 200 databases on your local machine?), but as a browser-managed facility it makes sense to me.


> First, it's not a challenging medium for me.

The guy should probably switch his intellectual efforts over to figuring out how to make his work useful rather than how to make it as complex as possible.


Where do things like all-Flash web apps fall? Are they web apps, since I use my browser to get to a web page that contains no more code than is needed to launch the Flash movie? But what if I chose to download (or otherwise receive) the .swf file and launch it using Flash Player? Someone could launch it this way without ever having Firefox installed.

Or consider Java Web Start applications - you click one link in a browser, which causes a helper to fire, some code to download, and an app to launch. What are they?


What are they?

The future.

The extent to which we constrain ourselves to HTML/JS nowadays is borderline ridiculous. AIR exists and it works. It lets you build user interfaces on a much more powerful platform, and these apps run equally well inside a browser tab or a desktop window. The problem with AIR (and Silverlight) is the vendor lock-in; nobody likes to rely on proprietary technology.

Free alternatives will emerge as we push the limits of HTML further.


My web browser is a desktop application.



