
I don't get it. They're requiring authors to provide HTML content that lives behind a URL. How is this different from just requiring that the application gracefully degrade to a plain HTML mode that's usable without JavaScript, and crawling that?



It's similar in effort required, but this more clearly tells Google to display a site's #!-fragment pages as the target URLs (and send searchers directly to them), rather than its degraded/simple pages.
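
Roughly, as I read the proposal: the crawler rewrites the #! part into an _escaped_fragment_ query parameter and fetches that URL for an HTML snapshot, while the pretty #! URL is what gets indexed and shown to searchers. A quick sketch of the mapping (only the parameter name is from Google's draft; the rest is illustration):

    // Sketch of the crawler-side URL mapping; _escaped_fragment_ is from
    // Google's proposal, everything else here is just for illustration.
    function crawlerUrlFor(ajaxUrl: string): string {
      const [base, fragment = ""] = ajaxUrl.split("#!");
      const sep = base.includes("?") ? "&" : "?";
      return base + sep + "_escaped_fragment_=" + encodeURIComponent(fragment);
    }

    // What searchers see and land on:  http://www.noloh.com/#!/features/
    // What Googlebot fetches for the HTML snapshot:
    console.log(crawlerUrlFor("http://www.noloh.com/#!/features/"));
    // -> http://www.noloh.com/?_escaped_fragment_=%2Ffeatures%2F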

(Of course, the site could accept visits to degraded pages, detect AJAX-capable browsers, then redirect the users to the preferred AJAXy URLs... but perhaps they don't even want the non-AJAX URLs to appear in normal use.)


It shouldn't require a redirect to "grow" the context around the fragment, though -- which is kind of the point of progressive enhancement.
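
For example, a minimal client-side sketch of the no-redirect approach (hypothetical markup and IDs, nothing to do with any real site's code): the degraded page is already a full HTML page, and script just upgrades it in place.

    // Hypothetical progressive-enhancement sketch (made-up markup and IDs):
    // the degraded page (e.g. /?features/) is already a full HTML page;
    // script only upgrades internal links and loads new pages in place,
    // so no redirect is ever needed.
    document.addEventListener("DOMContentLoaded", () => {
      const content = document.getElementById("content");
      document.querySelectorAll<HTMLAnchorElement>("a[data-page]").forEach(a => {
        const page = a.dataset.page;          // e.g. "faqs"
        if (!page || !content) return;
        a.href = "/#/" + page + "/";          // the AJAXy URL, for capable browsers
        a.addEventListener("click", ev => {
          ev.preventDefault();
          location.hash = "/" + page + "/";   // URL "grows" in place, no reload
          fetch("/?" + page + "/")            // reuse the degraded page as the data source
            .then(r => r.text())
            .then(html => { content.innerHTML = html; });
        });
      });
    });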


The 'Noloh' site touted elsewhere in this thread shows a problem that the Google convention handles better than 'degrade to simple pages with distinct URLs'.

Consider one of their AJAXy-#fragment pages:

http://www.noloh.com/#/features/

Search for a phrase on that page: ["NOLOH generates only the absolutely necessary concise"]

Google finds and sends you to their 'simplified' version of the same page:

http://www.noloh.com/?features/

...which upon visit "grows" itself with a fragment to...

http://www.noloh.com/?features/#/features/

Ugh! Does the site really want people on that URL, possibly bookmarking and sharing it? Probably not; they could be using a redirect on first Google-visit.
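
A handful of script lines on the degraded pages would do it; a purely hypothetical sketch, not what the site actually runs:

    // Hypothetical fix-up for the degraded pages: if the visitor's browser is
    // running script at all, swap ?features/ for #/features/ so the ugly
    // combined URL never gets bookmarked or shared.
    const m = location.search.match(/^\?([^#&=]+)\/$/);
    if (m) {
      // replace() keeps the degraded URL out of the history stack
      location.replace(location.origin + "/#/" + m[1] + "/");
    }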

Try clicking to another page from the double-feature page, like FAQs. You wind up at:

http://www.noloh.com/?features/#/features/&faqs%2F=

Ugh, it just keeps getting worse.

Using the Google #!fragment convention, the initial URL appearing in the index could be the simple/direct:

http://www.noloh.com/#/features/

Some sites will want that. One canonical #fragment-filled URL collects all the inbound traffic/linkjuice.


Fair enough, but you have to remember that we needed this to work on all servers and browsers since 2007; this Google implementation was just released. Certainly we could have done a URL rewrite for our specific server, but we try our best to showcase how NOLOH operates without the need for any tweaks, since many of our users are on shared servers without any access to rewrites.

Oddly, you make no mention that we effectively solved this issue automatically for our users, and that their full websites have been searchable by Google. You were able to do a search for our content, and guess what: we didn't need to do ANYTHING from a site-development standpoint for that to work. Sure, without a rewrite it can be ugly, but the content was fully searchable.

Frankly, have you seen some of the URLs that major websites such as amazon.com or others generate? Criticizing us for showing how the URL would look without rewrites is really nitpicking our site. However, we do want to thank you for pointing out a minor issue: we should not have the &faqs%2F you saw above when coming from a search engine and then navigating; it should be http://www.noloh.com/?features/#/faqs.

Rest assured, we'll be implementing the Google-style approach in NOLOH, and best of all, NOLOH developers need not change anything. Their apps have been searchable since 2007, and will continue to be searchable with newer and better methods for the foreseeable future.


That's the bad behaviour of a specific implementation (it "grows" more than just the context of the fragment), and the resulting URL cruft is entirely unnecessary. No redirect, at least at the browser end, should be needed to serve the fragment in context.



