Today, Web Development Sucks (harry.me)
136 points by hbrundage on Feb 1, 2011 | 69 comments



Yes, we thought this in early 2005 and created NOLOH (http://www.noloh.com) as a solution. You write your app in a single tier, rather than having to worry about all the plumbing (cross-browser issues, AJAX, client-server interaction, Comet, etc.). You can even use your existing HTML, JS, and CSS if you like, and NOLOH then renders a version of your app targeted specifically at the end user, whether it's IE, Chrome, Mac, Windows, etc., or a search engine robot, in a "lightweight" (only the correct and highly optimized code is loaded for a specific user) and "on-demand" (only the resources that are absolutely necessary at a given point are loaded) manner, thus allowing rich web apps like Gmail to load instantly and as needed, rather than as a traditional web fat client.

Every few months someone will write a post like this, and I wince. We've written several extensive articles for php|architect magazine and have presented NOLOH at several major web development conferences around the world. The fact of the matter is that tools like NOLOH exist (there are others), and they can be used now. Today, web development doesn't need to suck.

If you're interested in the specifics of the above-mentioned "lightweight" and "on-demand", they can be found in the article "Lightweight, On-demand, and Beyond" in http://www.phparch.com/magazine/2010-2/november/.

[edit] Link to free December issue of php|architect article "NOLOH's Notables" so that you can more easily see what I mean without the November issue paywall. http://beta.phparch.com/wp-content/uploads/2010/12/PHPA-DEC-...

Disclaimer: I'm a co-founder of NOLOH


Have you considered that you could potentially make a lot more by open sourcing NOLOH and building a support infrastructure around it? It seems like the closed source aspect of it might be holding you back. If NOLOH truly is the Rails of the future, it might be wise to unleash it.

I've always thought this was part of what kept REBOL from catching on. Nice little language, but its closed nature hampered its adoption.


We have. The problem, of course, stems from the more established brands picking apart our tech and incorporating it into their own, thus making our tech obsolete. There's also the issue regarding our existing customers who paid for Professional and Enterprise licenses. We currently offer free licenses to open source projects, but we do understand that there are those who are turned off by any proprietary tech, even if they'll never actually go under the covers.

We'll likely revisit this issue later in the year after we make a series of major product announcements. Those announcements may make it unnecessary for the tech to remain closed. Until then, everything we do other than the core is open source and available on GitHub, including numerous modules. I'll likely blog about this in the next month or so.


Your site doesn't work with javascript disabled.


It does, but it's not ideal. Remember, it's important to separate noloh.com from NOLOH the tool's capabilities. The decisions we made regarding noloh.com in no way reflect the capabilities of NOLOH. In a NOLOH application or website, you decide to what extent your content will display without JavaScript (granted, we should enable more options), but without JavaScript the site and content still load and those users can still peruse your content, just not perfectly. However, if you pretend to be a search engine, you'll see it gets a completely different version of the website, with generated links, different from the JavaScript-disabled version.

It boils down to each type of end user getting a different version: with JavaScript is ideal; without JavaScript works but is not ideal, which is something we continue to work on; and search engine robots get something completely different from the others, conforming to standards, etc. See http://dev.noloh.com/#/blog/2010/08/23/strict-output-and-oth....


This whole article complains about something simple. Web development sucks, but not because of form validations; it sucks because of IE. But that's another story.

> Services must be adapted to spit out JSON data for interpretation and rendering client side, or have their view code refactored to be accesible by fragment for use in AJAX calls.

It must be real hard to call your JSON encode function on your data set, that would make it 1 line longer. :)

> A radical departure from the jQuery mindset of DOM querying and manipulation, and use a UI kit instead. We aren’t in Kansas any more folks, its time to go where every other platform has gone and use the language to its fullest.

People wouldn't query the DOM if that wasn't the most effective way of getting stuff done. jQuery has a UI kit: it's called jQuery UI. And how does using jQuery equal not using JavaScript to the fullest?

> The DOM should become an implementation detail which is touched as little as possible, and developers should work with virtual, extendable view classes as they do in Cocoa,QtGui, or Swing.

The author said "using JavaScript to the fullest" and now the author tells us to create "extendable view classes" in JavaScript. JavaScript is a prototypical language, you can't use it to the "fullest" if you force OOP on it.

> If we want to build desktop class applications we need to adopt the similar and proven paradigms from the desktop world. Sproutcore, Cappucino, Uki, and Qooxdoo have realized this and applied these successfully.

"proven paradigms" is a ridiculous term by itself in software development, putting it in web development context just makes it dumber. If desktop apps are so good, then how come web apps are still around? There is a reason why people still prefer jQuery over many of those.

Overall I feel this article is highly biased and the author doesn't really understand web development. The author's complaints are invalid because the stuff that he is missing is already around.


Clearly it's highly biased because it's not a newspaper article; it's a post on what I think.

> People wouldn't query the DOM if that wasn't the most effective way of getting stuff done. jQuery has a UI kit: it's called jQuery UI. And how does using jQuery equal not using JavaScript to the fullest?

Have you ever used Gmail? Or MobileMe? Or any of the world-class web applications built without jQuery? At no point did I say jQuery was invalid, and jQuery UI is a prime example of where I want the web to go. With it, you use jQuery to find the elements you want to work with, and then you use an object-based hierarchy to enhance them and work entirely in JavaScript, above the DOM. See http://jqueryui.com/demos/slider/#slider-vertical, a prime example. The code selects the slider, and then uses .slider() to perform all the meat of the problem. Selecting the element is a mere convenience; the cool stuff is done by the UI kit.
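
To illustrate, roughly what that demo's code looks like (a sketch, assuming jQuery and jQuery UI are loaded; "#slider-vertical" and "#amount" are just example IDs):

    $("#slider-vertical").slider({
        orientation: "vertical",
        range: "min",
        min: 0,
        max: 100,
        value: 60,
        slide: function (event, ui) {
            // all the interesting work happens in the UI kit; we just react to it
            $("#amount").val(ui.value);
        }
    });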

> The author said "using JavaScript to the fullest" and now the author tells us to create "extendable view classes" in JavaScript. JavaScript is a prototypical language, you can't use it to the "fullest" if you force OOP on it.

Use whatever programming paradigm you like. HN wouldn't accept a comment listing all the successful JS frameworks that use class based inheritance because it would be too long.

> If desktop apps are so good, then how come web apps are still around?

Your understanding of web development may be amiss; how the tables have turned! Web apps are around because of the delivery mechanism: the web. Not because they offer astounding experiences compared to a hypothetical desktop counterpart, but because cloud storage and scaling, always-on availability, and browser ubiquity can be leveraged by users for great gains. Web application developers constantly struggle to provide the "native" experience while leveraging all the benefits of the web, but it's hard, hence the post.

I do however understand your desire to keep using jQuery. As mentioned below, it's an extraordinary tool that it seems you and I both use religiously. However, I have found that it's hard to scale up a large complex client side app using only jQuery, and I and seemingly many others feel the need for more structured frameworks to leverage in that domain. Do you disagree?

edited to remove unwarranted hate


> HN wouldn't accept a comment listing all the successful JS frameworks that use class based inheritance because it would be too long.

That certainly doesn't mean that class-based ones are the most successful.

> Your understanding of web development may be amiss; how the tables have turned! Web apps are around because of the delivery mechanism: the web. Not because they offer astounding experiences compared to a hypothetical desktop counterpart, but because cloud storage and scaling, always-on availability, and browser ubiquity can be leveraged by users for great gains. Web application developers constantly struggle to provide the "native" experience while leveraging all the benefits of the web, but it's hard, hence the post.

The browser would be long dead if it wasn't for JavaScript.

> an extraordinary tool

Agreed.

> you and I both use religiously

There is no religion on my part, it simply gets the job done, and allows me to focus on architecting my application instead of DOM bugs.

> However, I have found that it's hard to scale up a large complex client side app using only jQuery, and I and seemingly many others feel the need for more structured frameworks to leverage in that domain. Do you disagree?

This is not a yes or no question; it depends on the problem. As I said in another comment, it's really easy to fix your dual validation problem by writing a small micro-framework, if there isn't one out there yet. It has become really easy to bridge the server and the client. You could abstract away any problem, but you shouldn't get too carried away with hierarchies; JavaScript is a semi-interpreted language and, as I said, it's more functional and prototypal than object (class) oriented. I agree that web development is hard, but as I said earlier, it's hard because of archaic browsers.


> People wouldn't query the DOM if that wasn't the most effective way of getting stuff done. jQuery has a UI kit: it's called jQuery UI. And how does using jQuery equal not using JavaScript to the fullest?

All UI toolkits leverage the DOM; that's not the point. The point is whether the DOM is at the appropriate level of abstraction for writing business app UI logic. I wrote at the level of the DOM for years, and when I switched to ExtJS it felt like a liberation. I spend way more time on business logic and experimenting with different UI designs, and far less time fighting with browsers.

> JavaScript is a prototypical language, you can't use it to the "fullest" if you force OOP on it.

OOP and prototypal inheritance don't conflict. You don't have to use classical inheritance to have a clean OO design.


I think there are only a handful of people in the world who truly understand what building something on the level of complexity of a 280 Slides or Gmail in jQuery really involves. And I think even those people would agree that they'd be better off with a more structured framework, even if that was something built on top of / in conjunction with jQuery. In some ways I think that's where Backbone is trying to fit itself into the ecosystem. I think one of the fundamental realizations of building something complex that's fast and maintainable is that using the DOM as your source of application state is usually not the right way to do it.


The way I see it, there's a big difference between making fancy websites and making fancy web applications. When making a fancy website, presenting information to a user and maybe collecting a bit of data along the way, you're unlikely to collide with the frustrations outlined in the article. jQuery is more than adequate for the majority of things.

Making a desktop-style web application, however (such as his examples), is a whole different kettle of fish. Once you're dealing with state that exists outside of the DOM, you're going to be struggling to keep up without some variety of model/view concept to rely on.

They're really two separate domains. One was born in the world of desktop applications, the other was born in static HTML.


If you have a huge dataset with complex validation rules, implement a server-side validation engine, do the same on the client side, and expose the rules to both parties; it's not hard at all.
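
As a rough sketch of what "expose the rules to both parties" can look like (all names here are made up): the rules live in plain data, and a small engine interprets them on whichever side it happens to run.

    // rules as data, shareable between server and client
    var userRules = {
        email: { required: true,  pattern: /^[^@\s]+@[^@\s]+$/ },
        age:   { required: false, min: 13 }
    };

    function validate(data, rules) {
        var errors = {};
        for (var field in rules) {
            var rule = rules[field], value = data[field];
            if (rule.required && (value === undefined || value === "")) {
                errors[field] = "required";
            } else if (rule.pattern && value && !rule.pattern.test(value)) {
                errors[field] = "malformed";
            } else if (rule.min !== undefined && value !== undefined && value < rule.min) {
                errors[field] = "too small";
            }
        }
        return errors; // empty object means the data passed
    }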

jQuery is not only about the DOM, it's a tool, it doesn't force you to write code like this:

$('#foo').hide();$("#foo").css('color', 'white');

You can write very clean structured code in JavaScript using jQuery.
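
For instance, a sketch of one common pattern (the names and URLs are made up): group a widget's behaviour in one object instead of scattering selectors around.

    var CommentBox = {
        init: function ($el) {
            this.$el = $el;
            this.$el.find("form").submit($.proxy(this.onSubmit, this));
        },
        onSubmit: function (e) {
            e.preventDefault();
            // post the form and let a separate handler deal with the result
            $.post("/comments", this.$el.find("form").serialize(),
                   $.proxy(this.onSaved, this));
        },
        onSaved: function (html) {
            this.$el.find(".list").append(html);
        }
    };

    CommentBox.init($("#comments"));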


* It must be real hard to call your JSON encode function on your data set, that would make it 1 line longer. :) *

Unless you don't want to transfer the entire dataset between the client and server every time.


That problem could occur in other scenarios, and it isn't particularly tied to web development.


That doesn't mean it's not a problem worth trying to solve...


I agree with a lot of your points, except where it comes to JSON encoding. In many of my apps I use an ORM, NHibernate to be specific. I've found that this makes a lot of operations on the server side much simpler as there is less code for me to write and debug.

The problem comes, though, when I'm ready to encode to JSON to send to the client. You're correct that it is just one additional line to encode to JSON, but NHibernate creates objects that have relationships to other objects, and serializing those relationships can be very tricky. I have not found a great way to do this and often end up writing simplified versions of the server-side classes and code to map from a server-side class to a client-side class.

Now you could argue that my framework or language sucks, but I think this goes to the point of the article. I feel the code I am writing on the server side is pretty solid and I am happy with my productivity. But as soon as I introduce any complex behavior on the client side, I end up with a lot of duplication and the entire code base gets much harder to manage.


I feel your pain, but it's not a new problem. I spent years building server-side apps with Hibernate, and before there was JSON, there was XML, or transfer objects, or even Hibernate entities that were disconnected from the backend (and would throw arbitrary exceptions when calling across un-fetched relationships). There's always a need to sort out how to represent these models across tiers. In general, I favor only serializing relationships to things that are completely dependent (i.e. dependent children whose lifecycles are married to the parent). I try to never serialize relationships to independent entities; let those get fetched by id if the client needs them.


I'm sorry to hear that; in most places it's just one line. Let's hope the Hibernate people fix this.


You might need to enlighten yourself on exactly which browser first brought us ajax. And are you too young to remember just how bad Netscape Navigator was?

IE was brilliant when it came out. What sucked was MS's strategy of making it free to kill off Netscape and then never updating it, to eke a few more years out of the desktop app infrastructure.


> You might need to enlighten yourself on exactly which browser first brought us ajax.

AJAX is older than XMLHttpRequest; you can effectively do AJAX with [i]frames, and this method is not even obsolete because, to my knowledge, that is the only way you can do a file upload that works even in ancient browsers.
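
A rough sketch of that [i]frame trick for uploads (element IDs made up; very old IE would need attachEvent instead of onload):

    // point an ordinary multipart form at a hidden iframe, so the page never reloads
    var iframe = document.createElement("iframe");
    iframe.name = "upload-target";
    iframe.style.display = "none";
    document.body.appendChild(iframe);

    var form = document.getElementById("upload-form");
    form.target = "upload-target";

    iframe.onload = function () {
        // the server's response has landed in the iframe: this is the "callback"
        alert("upload finished");
    };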

> IE was brilliant when it came out.

That still makes it a curse today.


furthermore, isn't blaming IE close to blaming the customer?

The customer has chosen (even if the decision was made for them), and that is something that we as developers have to live with.


Not really. As the other poster said, IE was intentionally abandoned after Microsoft "won" the browser war to drag out the desktop market a few more years. Honestly, until the last few years, web development technology stagnated HEAVILY because of Microsoft's abandonment of IE. They were able to vastly slow the forward movement of web technology for almost a decade.


> furthermore, isn't blaming IE close to blaming the customer?

Not even near. I'm blaming Microsoft for this one.

> The customer has chosen (even if the decision was made for them)

This is self contradictory.

> and that is something that we as developers have to live with

Sorry, but I believe in innovation.


> "proven paradigms" is a ridiculous term by itself in software development, putting it in web development context just makes it dumber. If desktop apps are so good, then how come web apps are still around? There is a reason why people still prefer jQuery over many of those.

I'm not sure usage counts are an indicator of quality. I bet the majority of JavaScript developers use jQuery because it's what everyone else is using, similar to why they're using PHP for writing their apps.

I'm not saying that jQuery is bad, just that usage counts are often a bad indicator.


I think it's popular because it's simple, small, efficient, and documented.

For hugely complex web applications you would probably desire more, but the fact of the matter is that the browser world is not ready for huge stuff: we still have IE, and we recently got iOS, Android, and many others. Progressive enhancement doesn't really allow you to do those complex client-side apps.


Look behind you, we're already building them. Not everything needs to work in every old browser.


There are two frameworks that come close to this; both of them are rarely mentioned, and I am not really sure why, other than the fact that they aren't "sexy".

First is SmartClient:

http://www.smartclient.com/

SmartClient has the best databinding support I've seen in a JS lib. Binding is automatic between server and client, between controls, and validation code can be written once. It also has the richest UI control library I've seen, full of controls that actually work and aren't just a shiny layer over simple code.

Second up is OpenLaszlo:

http://www.openlaszlo.org/

OpenLaszlo provides a much more raw layer for creating UX on the web, similar to the OP's complaints about not having a UIKit-like API. Additionally, it provides a declarative language for laying out controls, and also provides expression binding, so you can say "the width of this element is always 2x the height of this other one" and the binding and event handlers are created automatically. Its declarative language is XML, so there are some nice homoiconic properties you get by making it possible to return XML from a web service to generate UI elements. As a bonus, it can generate Flash or DHTML.

Edit: Of course, both of these frameworks have their downsides. They're horribly ugly to look at (out of the box). The declarative language for SmartClient is really ugly JavaScript, and the language for Laszlo is really ugly XML.

However, they've solved the hard problems and left the "easy" ones. They're both open source. If someone were to come along and clean up some of the syntax and add some real polish to either of these, I think they'd really be remarkable technologies.


The problem with OpenLaszlo is that while it's apparently actively developed, I rarely hear about anybody using it. It seems to have had its heyday years ago and is now used by only a very few projects.


My entire point here is that these frameworks are not sexy or popular but have very strong technical foundations and it would be a noble goal to fork either of them to polish them into something better.


Pandora is probably the best-known app written with OpenLaszlo (and I'm not sure how well-known this fact is).

When I tried OpenLaszlo a few years ago it was very nice to work with - definitely better than plain Flash. Then again, I actually don't mind XML.


I haven't used OpenLaszlo, but SmartClient is slow on the client. Ditto a lot of the current set of heavy "desktop app" client-side frameworks. As soon as you get out of developer-class machines things get brutally unusable.


I've found it to be reasonably speedy once you deploy things correctly (though I haven't used it too much). That said, they've certainly got the "make it work" part down; the "make it fast" part still has a way to go.

Again, the "make it work" part always strikes me as the challenge, and to me these two projects are several generations ahead of the capabilities of things like jQuery for things like databinding and custom drawing capabilities.


Thank you for the references. Posted to keep this thread in my saved queue.


> Write once, run on the continuum between the server and browser, and forget that theres actually a difference between the two. Validations can be shared and server dependent ones can be run there...

This might be an interesting goal to work towards, but I'm not convinced that one actually wants to achieve it. I'm skeptical that abstracting away the boundary between client and server is a good idea. Unless you're a DRM true believer, there will always be an essential difference: The server is (more-or-less) guaranteed to be running the code you wrote, and the client is not. In the end, unless you are comfortable with allowing an AI to dynamically adjust your application's attack surface for you, you'll always want visibility and control of what gets done where.

> SEO works fine because the page can be rendered entirely server side.

Are we missing the point that web-based applications and web pages have totally different semantics? The major difficulty with getting Google to index my single-page application is not the need to run multiple rendering engines. It's that Google indexes web pages, and my application is probably not built out of web pages -- not without a great deal of creative thought, anyway. Try to imagine an iOS application that could be fully rendered for Google. How would you do that? Does each possible window and window state get a URL? How should Google index the little popups that appear when the user executes a three-fingered leftward swipe with a twist?

The reason why you're writing your views and rendering twice is probably that you need to design them twice. You can either make your users view your app as Google does, as a series of HTML pages at distinct sensible URLs with a minimal amount of Javascript sprinkled on top (which was good enough for 1998, and even 2008, but perhaps not good enough to compete with iOS long-term), or you can design a glorious GUI experience for your users that is largely opaque to Google, or you can do the work twice: Figure out one view of your data that appeals to humans and another that appeals to indexing bots. And that job probably can't be done for you by some magic framework. Figuring out sensible views of your data is design work, for humans.

> A departure from the routing paradigm found in Rails, Sinatra, Sammy, and Backbone. The traditional one URL maps to one controller action routing table no longer applies.

We didn't converge on this routing paradigm arbitrarily. Among other considerations, it is very strongly influenced by Google, a company that literally pays you money if you design a URL scheme that can be usefully interpreted by Googlebots.


The security concern is essentially a red herring, for reasons you've already identified. The code running on your server is guaranteed to be yours. That fact doesn't change. Equally unchanged is that you can expect anything from the client. That doesn't mean you can't still use the same code on the client and server (or the same code source if its some kind of code generator) to ease development pain and get an optimal user interface.

The only real concern is structuring your app in a way where you know things like private keys aren't accidentally world readable, which gets more complicated (from a discipline perspective) in a world where you are using a single language and sharing code across the client and server.


I did not mean to suggest that one cannot, or should not, use the same code on client and server. It's a good idea. What I suggest is that -- in your words -- it requires "discipline". Such discipline is a conscious process, one which requires the programmer to be fully aware of where the client/server dividing line is. When you abstract away that dividing line you make the discipline harder.


Great points.

The attack surface is something I didn't consider, and presents a tough challenge. If the framework were to implement the transport layer in a predictable way, I think it still might be a net win. It could build in all sorts of automatic good protection against XSS and CSRF and have even better control than today's frameworks since it can validate on both sides and know what to expect. The ever changing attack surface is a problem without a doubt, but the usual vectors can be better protected against, and the visibility and control issues can be mitigated with inline directives.

With regards to SEO, I disagree. There are two issues at heart here. Firstly, some web applications are single-user apps with no publicly indexable data (e.g. Mockingbird, Basecamp), and they optimize landing pages to direct users to use the application and explain why it is worthwhile. That's the data they want in Google, not the data from within the application. For these types of apps the SEO issue isn't that big. The issue is big with apps like Hunch, where as you say there are many states and non-page-like semantics. Take a moment to examine the data that these kinds of apps want indexed and searchable. It's usually central to the app, the meat of the whole thing, and in the case of Hunch, available as a discrete page because it makes sense. This leads me to believe that you can, without too much difficulty, come up with a URL scheme for representing it that either does or doesn't have an anchor in it. That's the central idea: the routing table is the same on both sides of the wire. The first page they visit can be rendered server side and all ensuing pages can be rendered client side using fragments after the "#", and the Googlebot can index the pages as they are all renderable server side. This also ties into your third point, that the routing paradigm used by Google and everyone now is the only way to go. I really don't know how to solve the multiple states vs url segments problem while remaining indexable, but I believe it can be done. Do you disagree that the paradigm is no longer as useful?
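
As a sketch of that idea (tweetPath and renderTweet are hypothetical names): the same path answers a bot or a first visit with full HTML on the server, while later navigation just swaps the fragment and renders client side from JSON.

    // server side: GET /tweets/42 returns a fully rendered page (indexable)
    // client side: navigation only changes the fragment, e.g. #/tweets/42
    function tweetPath(id) { return "/tweets/" + id; }

    window.onhashchange = function () {
        var match = location.hash.match(/^#\/tweets\/(\d+)$/);
        if (match) {
            // same route, JSON representation of the same resource
            $.getJSON(tweetPath(match[1]) + ".json", function (tweet) {
                renderTweet(tweet); // hypothetical client-side view
            });
        }
    };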


The attack surface mostly has to do with input validation. Good security presupposes that you validate all input and escape all output. That means that you have to be aware of the semantics of all server I/O, which is difficult to automate effectively.

Regarding SEO, the point is irrelevant to most SPAs. If your data is private, whether Google can index it is not a relevant topic.

I suspect the validation probably should be developed twice. We build our server-side framework to be client-agnostic, published as web services. It is reused across multiple clients: a full desktop web front-end, a mobile web front-end, and a set of third-party interfaces. The use cases and needs of every front-end are very different, so I have a difficult time imagining a way that the validation logic could automatically be integrated into the different front-ends.


New Twitter is really the example to look at here (and they aren't the first). New Twitter doesn't have unique pages anymore for tweets, and everything is happening in a single page. But each of those tweets also has a real HTML version generated separately for SEO. That is a perfectly valid way to do it, especially when you think of New Twitter as a content creation app, and each individual tweet as its own random piece of content.

Granted, the simplicity of Twitter's content makes that choice easier than it might be for a lot of people, and I agree that hopefully we'll converge on tools and frameworks that will make this more automatic. But in a world where your app is largely running on the client, and data exchange is done largely via some kind of simple REST AJAX API, writing an additional (likely quite simple) HTML template for the same data doesn't seem like an impossible challenge for most web apps.
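
A sketch of how small that extra template can be (the names are made up): the permalink page for bots is just the same tweet data pushed through a trivial server-side render.

    function escapeHtml(s) {
        return String(s).replace(/&/g, "&amp;").replace(/</g, "&lt;").replace(/>/g, "&gt;");
    }

    // the rich client keeps using the JSON API; bots and permalink visitors get this
    function tweetToHtml(tweet) {
        return "<html><head><title>" + escapeHtml(tweet.user) + "</title></head>" +
               "<body><p>" + escapeHtml(tweet.text) + "</p></body></html>";
    }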


I'm not sure New Twitter is a good model in its implementation, however. I don't think I'm the only one who's noticed that the new interface is a client-side pig, taking all the CPU time it can grab.


We've taken the view that the server side code and the client side UI are two different applications and should be developed as such.

We write all the server side stuff as "REST-like" web services and then use whatever makes sense for the UI, whether that is javascript, html emitted from the server, action-script or native binaries.

Separation of concerns.


The concerns aren't separate, that's the whole point! The validation and view logic is shared between "both" applications, as you put it, so we either have to duplicate code or try to put it in only one place. That doesn't work, or requires monumental effort; hence the whole post.


In my experience, validation logic is either trivially represented by a struct (could be a JSON structure, which is easily shared) or requires some form of I/O. In the latter case, it is usually easier to just handle this on the server side. In the former case, I agree that a simple validation library has yet to emerge as the One True Way to Validate.


They are separate. Yes, the UI and the service both validate the data, but they do it for different reasons. One is concerned with the user experience, and the other is concerned with data validity. There may be some functional duplication there, but they are separate concerns.

The service validates to make sure that the data it is dealing with is safe and isn't going to corrupt something.

The UI validates input to make sure that the service won't reject it, so that the user doesn't have to deal with the inconvenience of making a round trip to the server, or having to remember what a valid value for a field is.

If you're not feeling up to making things easy for your users you can always throw all the data at the service and wait for it to tell you why the data is invalid. You don't HAVE to validate the data twice.


As a user interface shishya, I agree completely.

As a developer, I still have to write code that validates the data twice and possibly deal with validation failures in two different ways. Further, I have to make sure that both checks are using the same criteria for validation and keep them in sync if requirements change. This seems less than ideal.


Out of curiosity, if your validation methods (written in the server-side language) don't have environment specific side effects, couldn't you compile them to Javascript for use by client-side code (using LLVM or something)? Or does that not work very well in practice?


You can still handle your validations server-side in a single-page application. The client JavaScript code sends a JSON packet to the server containing whatever data it needs to process. The server responds with a JSON packet which looks like

    {"status": "ok", "data": ...}
to signify success, or

    {"status": "validation_failed", "details": {"email": "malformed email"}}
to signify validation failure. Then the client-side code updates the UI appropriately. The client JS code does not bother with validation at all. If this seems wasteful in terms of hitting the server, remember that you have to talk to the server for this task anyway.
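
On the client that might look roughly like this (jQuery assumed; the URL and element IDs are made up):

    $.post("/signup", $("#signup-form").serialize(), function (response) {
        if (response.status === "ok") {
            showDashboard(response.data); // hypothetical success handler
        } else if (response.status === "validation_failed") {
            $.each(response.details, function (field, message) {
                $("#" + field + "-error").text(message).show();
            });
        }
    }, "json");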


A user makes a mistake typing their email, fills in the rest of the form, and presses submit. The form takes 3s to send, throws a validation error, and highlights the email field. The user corrects it and sends it again (hopefully you're not punishing your users by clearing the form).

Or, you could present client-side validation which alerts user on focus out.

Other uses for client-side validation:

+ Multiple emails (Gonna verify them all in many calls?)

+ Prerequisite for further steps

Of course, have server-side validation. Client-side validation is convenience for the user.
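
A sketch of that focus-out convenience check (selectors made up; the server still does the authoritative validation on submit):

    $("#email").blur(function () {
        var looksValid = /^[^@\s]+@[^@\s]+$/.test($(this).val()); // deliberately loose check
        $("#email-error").toggle(!looksValid); // show the hint only when it looks wrong
    });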


I use this very method for dealing with validation (though mine returns an HTTP 400 on validation error). Trivial stuff, like a field being required or not, I check on the client; everything else is checked on the server. I've written a small library to plug this into jQuery. It's called quaid, and the validation module docs/examples are: http://benogle.com/quaid/validation


This could be extended to say "Today, Development Sucks." Today, not only do you need a web-accessible application to stay competitive, you best develop native iPhone and Android apps. Even better, whip together a native Mac app.

You think both client-side and server-side validation is bad? Try developing 4 different client-side applications on different languages and frameworks.

Evernote came out and said it attributed part of its success to developing native apps for Android, iPhone, and now Mac. Adobe AIR and HTML provide an inferior user experience on the respective devices.

This is also the reason I feel 37Signals is falling into obsolescence. They just launched a mobile "site" for Basecamp. Not an app, but an HTML, mobile-optimized site. Their blog admits to simply wanting to focus on "what they are good at," which is the first foot in the obsolescence coffin. They hired an iOS developer for their Highrise iPhone app, but said they felt the talent should be in-house for future projects. I agree, but their decision, again, was to keep doing the same old thing. Not exactly an innovative, hacker mindset in my opinion.


> Try developing 4 different client-side applications on different languages and frameworks.

People aren't going to keep doing that. As soon as there's a web app store that allows them to achieve a better cost/revenue ratio, they'll move away from native apps. Sure, web apps have inferior user experiences, but the difference won't be big enough to keep them on the native platforms, just as it didn't keep them on the native desktop.


Strangely, he didn't mention server-side JavaScript, which would be the obvious way to share code between client and server.
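
For example, a minimal sketch of one file usable from both Node and the browser (pre-module-loader style; the names are made up):

    (function (exports) {
        exports.isValidEmail = function (s) {
            return /^[^@\s]+@[^@\s]+$/.test(s);
        };
    })(typeof exports !== "undefined" ? exports : (window.shared = {}));

    // server: var shared = require("./shared"); shared.isValidEmail(email);
    // browser: <script src="/shared.js"></script> then shared.isValidEmail(input.value);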


This is how SproutCore validation logic is used on the server, BTW.

It's easy to use SproutCore's (very) powerful model layer on both the client and the server.


Just a small note, but despite the common belief, Gmail isn't written with GWT. Serious web app development seems to result in creating a framework of your own, as your examples show, so I'd agree some useful frameworks would be good, but we have not yet worked out what they should do.


What is Gmail written in?



Any idea why that's being used as opposed to GWT?


Gmail predates GWT by several years. I'm also not entirely sure Gmail is written in Java to begin with.


Javascript (who would imagine!)


I meant, do they use a specific framework? My understanding was that GWT was created out of the work they were doing with Gmail, which is why all of the samples for GWT look like Gmail.



> Google built the GWT so they didn’t have to write code twice, but I don’t want to be stuck in the Java world or be forced to learn the whole GWT and make any open source buddies of mine learn it too.

Groovy/Grails works well with GWT. Idiomatic Groovy looks more like Python than Java. Another possibility is Vaadin, which is built on GWT.


I don't have a lot of experience with vanilla GWT, but I have been using Vaadin a lot lately, and it is very frustrating to look at the (bloated) HTML that Vaadin generates and not be able to do anything about it (without designing new widgetsets).

Also, I have found that with Vaadin the client-side rendering times can be a problem (though my whole team is new to Vaadin, so there's a good chance that that is our fault).

I definitely see the appeal of Vaadin, but I think that ultimately it's not the paradigm you want for web application development. I think that in many cases you really need to know exactly what will run on the client side and what will run on the server.

Back when I was doing lots of Silverlight development there was a very clear division between the client-side and server-side code. However, because they were both implemented in the same language, I was able to factor out shared components (models mostly) into separate projects that I could individually compile for both sides (for smaller chunks of functionality, such as some socket protocol validation code, I just linked the files into multiple projects). This allowed for good reuse as well as good separation. I think that this is ultimately what you want: the ability to share some code between both sides, not to blur the line completely.


Agree with you on most points.

However, on some points I'm not as convinced. Take the DOM abstraction: it works and is implemented by several SPA frameworks. You can design good-looking, "desktop class" applications pretty fast. But problems arise quickly when your graphic designers send you those PSD mockups, full of great-looking artwork waiting to come to life.

With a normal DOM approach you've always been able to solve this. By hacking some HTML, tuning CSS, and a lot of swearing, you pull through. But with SPA frameworks that favor "components" over low-level, raw DOM elements, things usually aren't as straightforward. Very often you need to start picking apart the provided "ready-made components" to get any work done. It ends up being very counterproductive and usually takes way longer to do.

It has happened to me numerous times before with both GWT and Adobe Flex. Nice and shiny, provided you don't try changing the layouts too much. Surely, I'd be one happy camper if this weren't the case; web development needs to move forward. And I hope the goals proclaimed by both Cappuccino and Sproutcore will work in practice some day.

Regarding your other point about routing, departing from the route paradigm will only be true if what you're designing is not a document-centric application. In my world, an SPA can be either document-centric or desktop-like (containing a lot of UI state, as you mention). I think the answer to that is the boring "it depends".

Worth mentioning, many of these issues are stuff we've been trying to resolve with the Planet Framework (http://www.planetframework.com), essentially bridging the gap between client and server.


Great points. The design issue is a big one, and I think the main reason is that PSDs can be translated to HTML and CSS easily, but extracting a theme for a set of widgets is much harder. The widgets give you rich interaction and an easy way to set up complex layouts, but as you said can be very rigid and not allow easy modification of their behaviour. And I agree, I end up longing for the simple world of HTML and CSS, where I know exactly how to change something, instead of fighting the framework. I think that's something that will be remedied if a framework like Planet or my hypothetical one reaches ubiquity, however. If developers learn the ways the framework works, just like how they learned the ways HTML, CSS, or Rails work, they'll be OK with extending the widgets to do what they want.

Another good point about routing, and this I'm admittedly fuzzy on. Above, mechanical_fish mentioned some serious SEO issues that arise from moving away from the standard paradigms, so I'm not quite sure how it should be solved.

All in all, Planet looks neat, thanks for sharing!


I'm with him on most of this, but I just can't agree that frameworks like Sproutcore or Uki are good fits for all or even most of the web apps out there. We've spent years showing off the beautiful things you can do with HTML & CSS, and most web users have come to expect that.

Sure, the desktop-in-browser approach works in some places, but ignoring standard elements and replacing them with non-semantic, inline-styled <div> elements, and script handlers (e.g. inspect elements on http://ukijs.org/examples/core-examples/controls/) strikes me as cavalier and reminiscent of how SOAP treated HTTP.

The DOM is not something to be coerced and abstracted upon: it is the presentation structure for the largest aggregation of human knowledge ever. It deserves some respect! :-)


SproutCore loves the DOM. You're thinking of Cappuccino.


For validations, at least, there are Rails plugins for checking model validations in the browser (by augmenting the form helpers to generate Javascript which does the checks). For example, https://github.com/dnclabs/client_side_validations

This won't work to forward more complicated logic, though. For those who are really feeling the pain, the best cure might be writing the back end using server-side JS, so you can run that code in either environment.


I'm surprised there was no mention of WebSharper in here; it's almost exactly what the author is asking for. And it's implemented in a functional programming language too...



