URLs are for People, not Computers (not-implemented.com)
210 points by Kop on April 5, 2013 | 152 comments



The example about Amazon is inaccurate.

Here is an Amazon URL:

http://www.amazon.com/Bioshock-Infinite-Premium-Edition-Xbox...

Is it completely clean? Nope. It contains a lot of information that feeds into the backend, but the core URL is this:

http://www.amazon.com/Bioshock-Infinite-Premium-Edition-Xbox...

This URL will take you to the correct page, every time, and it doesn't take a genius to figure this out. It also doesn't take a genius to figure out what this page is about before you even paste the link into your browser. By putting the human-relevant portion of the URL as far forward as possible it's able to accomplish both priorities: giving the machine as much information as possible, and giving the human as much information as possible.

The trick here is that "Bioshock-Infinite-Premium-Edition-Xbox-360" is entirely superfluous. It's there purely for SEO and human-readability purposes. This URL works just fine and leads to the same place:

http://www.amazon.com/dp/B009PJ9L3Y/

Amazon isn't blind to these issues. So sure, you can take this very last URL and try to make a point about obfuscated URLs, but that's not what's actually in use at Amazon. It seems odd to pick them as an example when they're not even a violator.

[edit] It looks like HN truncates long URLs for display, which only goes further to prove the point.


Although slightly unrelated, I'd just like to add something else to commend Amazon when it comes to link handling. I'm extremely impressed at Amazon's ability to handle very old links to products. Here's a link to Hitchhiker's Guide to the Galaxy that I used on a webpage I built 13 years ago and it still works: http://www.amazon.com/exec/obidos/ASIN/0517149257/o/qid=9295...

Note: it is a particularly ugly link

  http://www.amazon.com/exec/obidos/ASIN/0517149257/o/qid=929505204/sr=2-1/002-9367729-7762218


I suppose maintaining really old URLs is more of a priority when you can measure that they directly translate into sales. :)


The Amazon case is an interesting one because, despite the appeal of the OP's argument, one can hardly deny the success of Amazon's product listings in spite of their ugly URLs.

However, this raises an important consequence of clean URL design: when you're offering things that may be classified in several categories, it requires good design on the backend/framework to make sure your URL taxonomy isn't overly constricting. For example, example.com/toys/Nintendo-wii or example.com/consoles/Nintendo-wii?

Either one is legit, but creating and keeping a consistent taxonomy is difficult enough on its own without simultaneously worrying about what the URL looks like.


> "example.com/toys/Nintendo-wii or example.com/consoles/Nintendo-wii?"

Why not both? The URL is just a URL - it does not need to reflect your underlying data model. There would probably be a canonical URL for use when the category context isn't available (say, "consoles"), but why not have multiple URLs lead to the same information?
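A minimal sketch of that idea (Flask is my choice here purely for illustration; the routes and product data are made up, not how any particular site actually does it):

  from flask import Flask, abort

  app = Flask(__name__)

  # Toy catalogue; in reality this would be a database lookup.
  PRODUCTS = {"nintendo-wii": {"name": "Nintendo Wii", "canonical_category": "consoles"}}

  # Two category aliases map to the same view...
  @app.route("/toys/<slug>")
  @app.route("/consoles/<slug>")
  def product(slug):
      item = PRODUCTS.get(slug)
      if item is None:
          abort(404)
      # ...and a <link rel="canonical"> tells crawlers which one to index.
      canonical = "/%s/%s" % (item["canonical_category"], slug)
      return '<link rel="canonical" href="%s"><h1>%s</h1>' % (canonical, item["name"])

Both /toys/nintendo-wii and /consoles/nintendo-wii serve the same page, and the canonical tag keeps search engines from treating them as duplicate content.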

> "despite the appeal of the OP's argument, one can hardly deny the success of Amazon's product listings in spite of their ugly URLs"

But they're really not ugly. In fact, given the complexity of the system they represent, they are remarkably human-friendly.

In an ideal world all ideas, all businesses, and all use cases can be fulfilled by simple URLs like "example.com/shockingly-unique-identifier", but we don't live in that world. Amazon has constructed human and machine-relevant URLs. The author's argument can be applied to many sites, but I don't think Amazon is one of them.


> it does not need to reflect your underlying data model.

Absolutely! In REST parlance, resources (which are what URIs point at) do not map 1-1 with entities (your internal representations of business objects). If they do, you're quite possibly exposing too many internal details and making your application brittle.

> More precisely, a resource R is a temporally varying membership function MR(t), which for time t maps to a set of entities, or values, which are equivalent.

http://www.ics.uci.edu/~fielding/pubs/dissertation/rest_arch...

This is actually one of the more readable parts of Fielding's dissertation; read the whole section. The examples are great.


I don't disagree at all, I'm just pointing out that it requires an extra layer of logistics and maintenance that may outweigh the benefits of the beautiful URL. And with the rise of URL shorteners and the proliferation of URL sharing/discovery via social media, I'd argue that the beauty effect of URLs is diminished even further.


Just a couple of months ago, I pushed out a major update to classiccars.com (it's still pretty messy) that cleaned up/reduced the URL structure. The old system had something like 50 routing rules to support a wide variety of "friendly" URLs for searches and listing display. I reduced that to /listings/find/YEAR(s)/MAKE/MODEL?opts, where all search params other than year/make/model are not part of the route but querystring params. It didn't make sense to support all those routes (not to mention that the old results paged via postback). (Hint: you can set the page size up to 100 via a ps=100 querystring param; there are a few others not in the UI yet.) The UI is very similar to how it already was.

The new routes make more sense, and are imho more friendly. The same goes for /listings/view/###/STUB; though I put the number/ID before the stub, it looks a lot better than it did. Also, I put permanent redirects in place from any old references to the canonical URL. It took a bit of work, and now some more updates are going in to make the title/description/h1s more friendly. It's more maintainable now, and some very old URLs are still supported.

Currently working on some other modernization bits, which means a lot of the cruft can finally get cleaned out (if a section at a time, slowly). Having a friendly/consistent URL structure is important imho.


You can't have multiple URLs leading to the same information because of SEO. Well, you can (with canonical meta and such), but it's not an ideal practice. It is often better to cater to robots rather than to humans. A sad truth, highlighting Google's failures.


This is why I fundamentally disagree with the 'friendly' URLs approach. URLs are meant to be unique and permanent resources. Injecting your information architecture or page title etc into this is a horribly leaky abstraction - and those things are meant to be able to change. Mixing them up either adds brittleness to them or means that your URLs are going to require constant grooming to keep in sync.


SEO friendly: yes, human friendly: no

Sure, the URL

http://www.amazon.com/Bioshock-Infinite-Premium-Edition-Xbox...

leads to the Bioshock article, but so does

http://www.amazon.com/flux-compensator/dp/B009PJ9L3Y/

or

http://www.amazon.com/hackers-and-painters/dp/B009PJ9L3Y/

In my opinion the only reason for the slug is SEO, everything else is a side effect which I personally find borderline dangerous.

BTW stackoverflow.com URLs work the same way.
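The pattern is easy to reproduce; here's a rough sketch (Flask again, with a made-up lookup table) of "the ID is authoritative, the slug is decorative" routing. Stack Overflow 301s to the canonical slug; Amazon instead just serves the page and leans on rel=canonical:

  from flask import Flask, abort, redirect

  app = Flask(__name__)

  # Stand-in for the real product catalogue.
  SLUGS = {"B009PJ9L3Y": "bioshock-infinite-premium-edition-xbox-360"}

  @app.route("/dp/<asin>")
  @app.route("/<slug>/dp/<asin>")
  def product(asin, slug=None):
      canonical_slug = SLUGS.get(asin)
      if canonical_slug is None:
          abort(404)
      if slug != canonical_slug:
          # Any slug (or none, or "flux-compensator") resolves; 301 to the
          # canonical form so only one URL per product gets indexed.
          return redirect("/%s/dp/%s" % (canonical_slug, asin), code=301)
      return "Product page for %s" % asin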


Ok, it's not tamper-proof, but if I'm pretty sure that my friend who mailed me an amazon.com link is not attempting a cheap phishing attack, I can recognize the product directly from the email body. I would call this human friendly, IMHO.



You do realise that they are using rel=canonical here?

But OMG, is Amazon's HTML fracking messy or what? Huge amounts of whitespace, which increases the page size.


Probably leftovers from their templating engine?

I wonder if this really matters with gzip. Or at least whether the traffic overhead beats the cost of implementing and using HTML compression.

I bet they have it measured.


They own so much bandwidth that it's probably cheaper for them to just send uncompressed HTML than to waste time and power recompressing it when things change. Being so large, they're an exception to the general rule.


The more pages you send to users, the more costly a single byte on a page is. Conversely, the more costly a single byte is, the more money you can save by compressing or removing said byte. Owning lots of bandwidth is often highly correlated with sending lots of pages. I think you may find that it's more cost-efficient for them to "waste" time and power recompressing than you think.

Scale challenges a lot of your preconceptions when you actually get there.


Mason emits a lot of unnecessary whitespace, but gzip helps a lot.


I'm especially annoyed by http://outlook.com. When I go there, I get redirected to a garbage URL on http://login.live.com, and then automatically to another garbage URL on http://bay156.mail.live.com where I can see my inbox. Yuck.


Logging out on outlook.com is even worse. You get redirected to msn.com with its sleazy celebrity "news" and other dubious items. It's like you're in a renovated theater thinking this new interior design is pretty good and then after the show is over the exit leads to a back alley full of trash.


Welcome to Broadway!


I just did it and got blu172.mail.live.com. Load balancing is exposed to the user... nice.


They're actually letting you know what data center you're accessing as well. BLU / BL1/2 are "Blue Ridge", in Virginia (a few different towns). SN = San Antonio, TUK/TK = Tukwila WA, AMS = Amsterdam, DB2/3 = Dublin, etc. Sometimes there are multiple DCs in the same city, sometimes the number just means a different cage in the same DC, and between products you'll see different naming conventions even for the data centers.

It's fun what you can find out given those hostnames. It shows how a unified offering like Office 365 is actually just a few totally separate products, usually hosted in different data centers, that all happen to plug into the same user provisioning system.


I don't know about Outlook.com but if you use Office Live, you need to know that server name to configure Outlook.

We've recently had to configure ~200 users and Microsoft had us login to each user's account to find out on what server they were on.


Are you serious? That's crazy!


I am. I know it's not such a big deal, but I would have liked to just have a list of my users and their configuration; I imagine it wouldn't have been that hard to get, and we would have been much happier doing the migration.


Is that sarcasm? I think that's very common practice. I've seen google, facebook, and others use similar schemes.


Well yeah, it was sarcasm (for which I apologize), it's not good to expose this to your users. When does Google do this, by the way?


One example is this server, which my browser just requested from while on YouTube: http://r5---sn-vgqelnek.c.youtube.com/


URLs used for asynchronous requests seem much less important. The original comment's point was about outlook.com redirecting the actual url of the page (the one in the address bar) to a data center specific version.


The seminal writing on this is by none other than Tim Berners-Lee:

"Cool URIs don't change"

http://www.w3.org/Provider/Style/URI.html


These are all good ideas, but part of me wonders whether, because they are all being broken by Microsoft, Google, and Amazon (as demonstrated in the article and in the comments here), their importance is overstated.

URLs are fundamentally for web browsers to translate into a domain name lookup and an HTTP request. They are for computers - the fact that we've managed to convince humans that they should care about them, that they should be decipherable by humans, is, IMHO, a failing of the web as is.

On a side note, I note that films are starting to use Facebook URLs by doing [FB Logo]/trancethemovie, which the user is intended to translate into https://www.facebook.com/trancethemovie


This is an often-heard argument: "Amazon (or other large company) are doing it and it works just fine for them."

But you are not Amazon. Your listings may be competing against Amazon's listings, without the brand recognition, trust and backlinks they have built up.

So you have to do everything better, like building friendly urls, just to have a chance of getting clicks that Amazon can take for granted.

(Amazon does a pretty good job at urls, as others have pointed out, but most larger, established companies are still pretty poor at this, leaving the door open for upstarts to do it better).


Exactly, and people also forget that Amazon, eBay, et al had horrible urls when they first launched, because just having the platforms at all was a big step forward. Old eBay urls have cgi-bin in them.


According to that logic we might as well do away with DNS entirely, and just use numeric IPs in URLs.


I thought that literally the entire point of URLs was that humans can't remember and use IPs naturally, so we created URLs and DNS to let humans interface with the machine IP language.

If URLs are actually intended for computers, then I'd say we've failed rather badly.

The whole point was to interface with people.

If people don't matter and it's for machines, why use URLs? Just type IPs. Skip DNS altogether...


> I thought that literally the entire point of URLs was that humans can't remember and use IPs naturally, so we created URLs and DNS to let humans interface with the machine IP language.

That'd be the base URI, but not the full one. If you used IPs, it would still be http://123.456.789.123/search?q=foo


Google is the new DNS. Only a small subset of people care about URLs beyond the domain name (this subset has a large overlap with HN readers). Unless you've nothing else to improve, there are usually more effective ways to improve your websites that you can spend that time on.


Well, nowadays there aren't enough IPv4 addresses for all the websites out there; vhosts have become very popular for hosting. I'd imagine that if we didn't have DNS, IPv6 would have been pushed through much, much faster.


> Just type IPs

How exactly would you get at a particular resource at that IP? That's what we're all talking about.


The more accurate claim would be: "DNS is for computers, not people", because that is actually true.

URLs are for both, and so you see hints of both concerns represented. Once your routing passes a certain level of complexity, there is no way to make URLs both functional and human-friendly.

The only thing that users should really be concerned with WRT URLs is the DNS portion; pretty URLs are just that -- pretty -- and a rose by any other name... Ultimately the user should either have trust in your FQDN or not, at which point the actual URL is inconsequential.

EDIT: additionally, a URL is not a UI element, and the user should never even need to see or know about any particular URL (much less its scheme), only that interacting with an anchor tag named "profile" takes them to the profile page, for example. It's up to developers to translate URLs into human-friendly counterparts.


>Edward Cutrell and Zhiwei Guan from Microsoft Research have conducted an eyetracking study of search engine use (warning: PDF) that found that people spend 24% of their gaze time looking at the URLs in the search results.

>We found that searchers are particularly interested in the URL when they are assessing the credibility of a destination. If the URL looks like garbage, people are less likely to click on that search hit. On the other hand, if the URL looks like the page will address the user’s question, they are more likely to click.

I wish someone at MS would follow up on that and fix the whole bay0X.cdn URL jumping every time I connect to outlook/hotmail.com


It would also lower the amount of time I spend allowing things in NoScript.


Though, to be fair, their inbox isn't necessarily something that they have to gaze through search results to find. So that doesn't really apply there.


URLs having any meaning at all strikes me as bias. If my parents visited a SSL site (say, a bank) and the address bar simply displayed the company name and nothing more, they would not miss URLs at all.

This is also why your parents and grandparents can just type random text into an address bar to execute a search instead of having to go to google.com or type in something cryptic like google.com?q=thing%20i%20want.


That's confusing two different concepts. URLs are the equivalent of a full postal address, whereas searching is the equivalent of asking a stranger how to get somewhere. Why would you want to always ask and trust a search engine when you already know how to get somewhere? DNS spoofing aside, I know I can trust anything under, for example, bbc.co.uk, and I know where /radio, /news etc. take me. Guess what, so do my seventy-year-old parents!


That's a sample size of one person's experience. I, on the other hand, am constantly meeting people whose only mode of navigation on the web involves typing the word "facebook", or "gmail", or, yes, even "google" into the address bar, and then clicking on the relevant Google search result. It drives me crazy, and despite my constant wailings using the same exact argument you make above ("Why would you want to always ask and trust a search engine when you already know how to get somewhere?") ... they simply shrug their shoulders and keep on keepin' on.


Yes, I can't argue with that, though I am meeting an increasing number of non-techies who are interested in different (better) ways of using the web and IT in general.


Most people (non-techies) I know will search "facebook" to get to facebook.com. The same with YouTube/Twitter/etc.


My go-to example of this is when a ReadWriteWeb article briefly became the top Google result for 'Facebook login': http://readwrite.com/2010/02/11/how_google_failed_internet_m...


Some people I know will search for "google" using the browser's search box to get to google, then type in "yahoo" into google to find yahoo's website, then click the "mail" icon to get at their email /o\


Hehe, I have seen this done. Crazy. :/


This is a good and oft-neglected part of UI and API design to keep in mind. I especially like the discussion of the implications of hierarchical, semantic URLs in improving user trust and likelihood of clicking.

Much like with database design, it's easy for programmers to take over the task of URL design and make it easy to use from the write-first, read-never programmer perspective. User considerations come later if at all. I like the reminder to pay attention to these factors. We should all be reminded to question our first impulses; are we making something good for us or good for the user?


I disagree with many points of this article, and actually feel the reverse is true:

URLs are for computers, not people.

To me, a URL is an address to a web site, not the title (or description).

If I want to find somebody's address on a map, I don't go to "Bobby's House". I go to "123 Main Street, New York City, NY". If I search for Bobby's house, I'm not given "Bobby's House" on a map, I'm given a surrogate street address.

If humans are expecting the URL to look pretty and descriptive, then the issue here is that we've conditioned this expectation and we should instead condition users to expect succinct, surrogate URLs that only serve the purpose of identifying the article you're trying to reach.

Additionally, I think the fact that search engines weight the URL so heavily in their ranking is terrible and counter-intuitive to the purpose of a URL. This is what <title></title> is for, along with other <meta> tags.

The URL should not determine a page's rank in search results, at all. At the very most, it may make sense to factor the root domain into SEO, but that's where it should end. This isn't the 1990s, when much of the web was static HTML pages that could be given whatever meaningful file names. In today's world, where the web is dynamic and mostly made up of user-driven content, URLs are designed to route the user based on one or many identifiers, which are often surrogate identifiers, and not natural or meaningful identifiers.

Edit: I do agree with the point about useless garbage in the URL (like the Google search examples) that are there only in the interests of the site and tracking/analytics. I think URLs should only serve to get the user where they need to go, and contain exactly enough data to get them there.


But people often need to parse URLs before they provide them to their computers (via click or keyboard), and I think that's the point.

The issue is that URLs are often the only piece of information users receive, and it's why we've conditioned users to expect meaningful URLs.

From our standpoint, it's not too hard to make URLs more meaningful, even with user-generated content. Plenty of sites incorporate the title of submitted content into the URLs, and it's even easier when creating content for oneself.

Would you prefer a link to http://www.example.com/about or a link to http://www.example.com/?id=123? Which are you able to understand before clicking it? Which are you more likely to click?


> But people often need to parse URLs before they provide them to their computers (via click or keyboard)

Users don't provide URLs unless they type them in the address bar. And if they're doing that, it's likely just the domain name, and very rarely anything more than that.

If they're clicking a URL, then what the URL looks like is meaningless, and often does not even need to be shown to the user. Again, the only reason they care about what a URL looks like is because we've conditioned them to care. It has no bearing on their ability to access a page.

> The issue is that URLs are often the only piece of information users receive

Why is this? Why do they ever even see the URL?

If it's visible to the user on a page, then replace the URL in the <a></a> tags with a meaningful description of the URL. Showing a user a URL is bad UX (unless your audience is power users, but that's not what this article is talking about).

> From our standpoint, it's not too hard to make URLs more meaningful, even with user-generated content.

I would argue this is not true. Take Amazon for example. Converting "Panasonic-KX-TG7743S-Bluetooth-Cordless-Answering" to a product in the database is a nightmare I don't want to deal with. Computers work better with surrogate identifiers that are short, and they're easier to generate.

But then add user-generated content. Who is responsible for generating the meaningful unique identifier for the content? "Sorry, that content ID is taken. Try another one." :)

> Would you prefer a link to http://www.example.com/about or a link to http://www.example.com/?id=123?

An "about" page is a completely different situation, and these static pages are often limited in number on dynamic sites like the ones this article is referring to. Of course they can, and should, have a meaning (there's no reason to give them a surrogate identifier).

But if a computer is responsible for creating the content, and the address to access that content, then why go through the trouble of giving the URL a meaning?

> Which are you able to understand before clicking it? Which are you more likely to click?

Why do I care what the link is? When I see it on a page, I should see "About", not "http://www.example.com/?id=123".


Here's a recent tweet from a friend:

> My thoughts on what I believe is a bad article on HN: “URLs are for people, not computers”. https://news.ycombinator.com/item?id=5499730 http://www.not-implemented.com/urls-are-for-people-not-compu...

URLs are a way of life. Of course the ideal solution is a semantic hyperlink, but people don't share hyperlinks; they share URLs, and they don't always take the time to type out a nice description like the above tweet contains.

> I would argue this is not true. Take Amazon for example. Converting "Panasonic-KX-TG7743S-Bluetooth-Cordless-Answering" to a product in the database is a nightmare I don't want to deal with. Computers work better with surrogate identifiers that are short, and they're easier to generate.

I don't think computers care one way or the other, and it's not hard to generate a key like that at all. In your example, you'd simply have brand + model + short description. In most other cases, you'd simply take the title of content that was submitted. It's a very common solution to this problem [1]. The idea is that a short description is part of the item in the database. It's not some sort of complicated lookup that acts like a keyword search.

[1] https://docs.djangoproject.com/en/dev/ref/models/fields/#slu...
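The slug is just a cheap transformation of the title done once at save time, not a lookup key. Roughly, in plain Python (a sketch of what helpers like the one linked above do):

  import re
  import unicodedata

  def slugify(value):
      # Fold accents (e.g. e-acute -> e), drop anything that isn't a word
      # character, space or dash, then collapse separators into single dashes.
      value = unicodedata.normalize("NFKD", value).encode("ascii", "ignore").decode("ascii")
      value = re.sub(r"[^\w\s-]", "", value).strip().lower()
      return re.sub(r"[-\s]+", "-", value)

  slugify("Panasonic KX-TG7743S Bluetooth Cordless Answering")
  # -> 'panasonic-kx-tg7743s-bluetooth-cordless-answering'

The slug gets stored next to the surrogate ID, so serving the page is still a lookup by ID; nothing ever has to parse the slug back into a product.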


I agree that URLs should be for machines, and that SEO in url paths is stupid, but I disagree that urls should be hidden from users. If you're talking about browser UI, it could show the full domain, and hide the protocol and url path unless the url is clicked on, but the full url needs to be readily accessible.

> Why is this? Why do they ever even see the URL?

Because the url often transmits more information than what's in the page title. For better or for worse.

Because the web is about inter-site linking. You can't link if you can't see the full url to a page. You seem to think that only automated software or "power users" would ever want to create links.

> Why do I care what the link is?

You've never wanted to link to anyone else's pages? It's not just power users who do this. Casual internet users share full url links all the time. Requiring more than one click to get at the full URL is burdensome and would change linking on the web.

The web is not the only place urls are used, and http, https, spdy are not the only url protocols in existence. In some cases urls are the only information you have about a resource, and it takes quite a while to get additional information. BTIH, for instance, can take a while to get the torrent depending on the initial connection status to the p2p cloud. In other cases, there may never be any additional information about a resource besides the content of the url. You can't hide urls like those. The URL bar needs to stay. It doesn't have to show full url paths all the time, but it needs to be ready to in a click.

Search engines already understand this, but they go as far wrong in the other direction with SEO signals based on url paths, which encourages pathological descriptive url paths rather than encouraging url paths that are merely adequately identifying.


URLs are important to search engines because they should express the primary purpose of the page. Unlike meta keywords, it is very difficult to keyword-stuff a URL, because search engines can detect duplicate content. So a person needs to select the URL for a page that best describes that page, which increases the search signal quality of a URL.

URLs are important to humans for the same reason we save documents with meaningful names instead of random gibberish.

URLs are the way people use sites, and that's just the way it is.


123 Main St (comma) New York (comma) New York is a human paradigm.

You left out the ZIP+4 code, geo coordinates, the user's spoken language, a travel-type preference setting, an internal ID, and a unique user ID tracker.


I blame the tools...

Most tools and frameworks are designed from the ground up to be document-focused. Some even going as far as to purposely simulate a document when none exists (e.g. Tomcat).

Let's take PHP, ASP.net, and Java. They make up the majority of the internet right now. With RoR and MS MVC being outliers.

It is VERY hard to develop applications in them without a document focus because they use documents to direct functionality (e.g. logout.php and login.php might have different underlying functionality).

Now, yes, web servers do support request redirection, so you can redirect from /logout to /logout.php, but such "magic" is time-consuming because there is a disconnect between the underlying framework, which "understands" pages, and the dumb web server, which just does what it is told to do.

Even if you just automate it so you strip out the extension (e.g. strip ".php") you still wind up /thinking/ about things from a document perspective rather than a functionality perspective (e.g. "this functionality is on THIS page, this functionality is on THAT page").

We just need more modern frameworks where from the ground up the thing is based on a hierarchy rather than documents/files/etc. This should all be dictated by the framework, not the server's filesystem.


These sound like complaints that would have been valid 10 years ago, but not any more. Virtually every framework has the concept of "routes" that map URLs to appropriate logic, they're not document based (whatever that means)


Server's filesystem? Are you serious? Welcome to 2013, all the problems you mention are long gone, and are only brought back from time to time by people like you who stopped learning a decade ago. You should check out symfony.com... Or any other framework for that matter! For Christ's sake.


This idea seems like a pretty normal way of seeing the world among PHP developers.

Please allow me to vent/rant -- I'm finishing up a project where your comment resonates loudly with my frustrations.

I'm a journeyman freelance programmer trying to move from doing WordPress marketing sites to more interesting programming/development problems, and I just got out of a 2-month Magento project (Magento is an MVC-patterned store software that is based on the Zend PHP framework): extending it to allow users to sign in and create their own product listings.

This was my first time building MVC/OOP modules in the context of a framework, and it took a little bit to connect the OOP principles that I've studied to the actual implementation within the larger framework... but the last month has been like "WOW... I can finally see in a pragmatic way why OOP and MVC are both such powerful tools!"

Not to mention that, since we were working with extending the system, studying that stuff seemed like an essential learning process if we were going to create the system, even if I was only tasked with the "front-end" side of templates and visualizing the things from the database.

However, the senior guy in charge of creating the functionality to manage products for users thought I was from Mars -- his application, which was wholly separate from the Magento install and used a separate database, was patterned around directories, with a set of configuration and include files shoved into the head.

He intended me to integrate his functions by shoving his (single, large) functions file into the head of (each of) my template files, and then calling functions to return arrays of information that I could then push into markup.

Apparently, this is "the right and normal" way to do this stuff in the PHP world, and my process of writing modules, extending existing objects, creating routes to view the data through the framework's template system was all stupidly overcomplex.

I learned a lot out of doing that, and the module that I'm building for another client (aMember/Zend) is both smooth and fun because all of the database interactions and template stuff is already in place and just works... and I can focus on the UI Javascript stuff...

but I feel really bad for the project owner of the Magento project, as I feel the codebase is... well, mostly a bunch of functions in a single file and a bunch of single-form .php files that make SOAP calls and deprecated mysqli_ calls...

Now, I dunno-- I'm definitely the less experienced programmer, but if this is the norm in PHP programming (surely it can't be), I will be spending my summer trying to build enough projects in other languages that I can transition out of doing PHP.

Sorry for the rant, but it was cathartic for me :D


I feel you, but it's just a matter of finding the right circles. I would never apply for a job just because it says 'php', you have to dig deeper. At this point in my career, I would never accept a freelance gig like the one you described, because I know how that goes.

You don't need to try other languages; do it only if you want to. If you wanna save time, just take my word: Symfony2 has nothing to envy in Django or Rails (I have used them and also dug into their architecture). In fact, it's more modern and better designed.

I'll leave these links here, you might find them useful:

- http://www.phptherightway.com/ (general advice)

- http://getcomposer.org/ (composer, the dependency manager)

- https://packagist.org/ (the main composer repository)

- https://github.com/php-fig/fig-standards/tree/master/accepte... (accepted conventions)

- http://www.slideshare.net/fabpot/dependency-injection-with-p... (if you liked MVC, DI will blow your mind)

- http://fabien.potencier.org/article/50/create-your-own-frame... (an amazing tutorial on how to create a framework, it will let you understand it inside out)

- http://symfony.com/download (what should become your next framework)

- http://silex.sensiolabs.org/ (minimalistic version of Symfony2, it's called Silex, and might be useful for small websites)

If you still want to try something else, try something that has nothing to do with all this, like Clojure. For example, Ruby/Rails is the same as PHP/Symfony2; you won't learn much. I jumped into Rails and just started coding, because I already knew web development, and I wasn't impressed; the differences were mostly syntactic sugar, monolithic vs decoupled, and issues like the lack of interfaces and type hinting. But Clojure? Mind blown. I'm gonna spend my next summer learning more Clojure, because I already know it's my perfect language, but it's hard and full of new concepts. Meanwhile, php/Symfony2/nginx is my perfect stack for web development.


Thanks for the detailed reply and list of sources... while I hadn't really seen PHP-FIG, several of those links show up as read in my browser. Lots of good concepts, and I will definitely take a read through them over the next week.

I agree that the answer is being more selective about who I work with -- logically, I can probably write terrible code in about any language. For this last contract, it was more a case of one of my main clients that I do other projects with hiring a firm.

I didn't apply for it so much as say "oh, the project is Magento? I'd really like to work with that", and then got on as part of the program because I was pretty familiar with the overall project goals, I'm good with javascript/css/html, and I was hoping to gain a little experience in how Magento templates operate... but I did learn a lot, and I have a lot better idea about what kinds of approaches I am willing to work with.

Symfony2 looks like an entirely reasonable choice for a framework. I spent some time this year working on adding functionality to a cakePHP project, but that felt a little lightweight compared to other things I've been working with... though that is just a general feeling.

I had been trying to get Zend2 and Doctrine working, but I had to run it in a VM-- it wasn't playing well with my normal OSX environment... I eventually had it running and was building a really basic project with it when I got busy last fall.

So, anyhow -- Symfony2 is now on my list of "build a really basic project in this framework": thanks :D.


Isn't this what routing does? The problem you're describing seems to be primarily a PHP one, particularly with the lack of a dominant framework in the PHP community. (Possibly also a .Net one, I've avoided working with ASP like the plague in my career).


I don't think that's the case for PHP, at least not in the last 5-6 years or so... All the main frameworks (Symfony, Zend, etc.) will force you to use the routing. It's not even possible to call PHP files directly without some work. Yes, I'm aware PHP allows you to create simple .php files, but that's not how development is done these days.


Admittedly, I've shied away from PHP projects over the past 4-5 years in favor of Python (Django) or server-side JS.

I do still think the biggest hurdle PHP faces, at least when new devs come to it, is the lack of any one clear "best" framework. As you even mentioned, there are "main frameworks", but none of them is the clear "best" choice when you are approaching the language. In fact many people start building with PHP without a framework. Almost no one would start using Ruby to build a web app without choosing Rails. Similarly, no one would choose straight Python over using Django (waiting to get flamed by the Flask community here ;) ). With PHP, a lot of people choose it to build a "simple" web app, and end up just hacking together a few .php files. That was even more true 5+ years ago, and now there are a lot of legacy applications out there, that have grown quite large, still built on that principle.


"Java" - what specifically are you referring to? Spring MVC, as one example, has advanced routing capability and is in no way "forced document centric."


Yeah, I don't think you've used RoR.


I mentioned RoR and MS's MVC as two examples that do it "right." Was that unclear?


Yes, it was unclear, actually.

> Let's take PHP, ASP.net, and Java. They make up the majority of the internet right now. With RoR and MS MVC being outliers

This sounds very much like you're calling RoR and MS MVC outliers with regard to the fraction of the internet they occupy. You're expecting me to know that your use of "outliers" was with respect to a quality not even mentioned in that particular paragraph, rather than with respect to the sentence right before it? Not to mention the fact that "With RoR and MS MVC being outliers." is a sentence fragment, so the most obvious correction to your grammar would be to put it together with the preceding sentence.

So, yes, you don't write very clearly, so I misunderstood you.


I thought it was clear enough. P2: Most tools are document-focused. P3: Some examples of these tools are XYZ. A & B are outliers. P4+: It is very hard to develop applications in "them" (the 'most tools' which are the topic of the comment) without a document focus, and here's some detail about that.

It's not winning any writing awards, but it's not at all unintelligible. The organization of a piece of writing doesn't stop at the paragraph level, so to assume that the paragraphs are unrelated is pretty odd.


While I do appreciate clean URLs, the reason behind all those random obfuscated query parameters in Google's url is not a mystery. They are hidden indicators that are only available at query time and/or experimentation flags and things like that to improve the results. URLs only need to be readable for the portion that the user inputted or is consciously aware of, the rest is for computers.


Google search result URLs used to be simpler, but they made a decision not to care how they look years ago and they keep stuffing more and more data into them. I assume it helps their tracking and maybe optimization.

Even more offensive are the result URLs on the result page. Here's the URL for a logged out search for "hacker news". https://www.google.com/url?sa=t&rct=j&q=&esrc=s&...


Ha, the Hacker News page won't even display the URL, it's so ugly. Let's examine it in its full glory

  https://www.google.com/url?
    sa=t&
    rct=j&
    q=&
    esrc=s&
    source=web&
    cd=1&
    cad=rja&
    ved=0CDMQFjAA&
    url=https%3A%2F%2Fnews.ycombinator.com%2F&
    ei=k99eUcfzOsePiAKjgIG4DA&
    usg=AFQjCNGxnV8qCnv_rujodDj6o2ZhqU8Nxg&
    bvm=bv.44770516,d.cGE

Note that none of this is necessary for clickthrough tracking; it's easy to do that separately from the href.


You get that long and obscure URL while visiting Google logged out, with no cookies. So they all must be the defaults.

If they are the defaults, why put them in the URL?


> So they all must be the defaults... If they are the defaults, why put them in the URL?

You've probably answered your own question there.


I found this recently with a 6-field form. The browser sends empty form field strings.

Eg: http://www.bing.com/search?q=test&qs=n&form=QBLH&...

Here, sk is "".

You can strip out empty fields with a bit of PHP (creating a new URL without the empty strings), which seems to work okay, but it's probably best not to risk the wrong effect: the end program seeing the stripped fields wrongly (as NULL and not "").
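The same idea sketched in Python rather than PHP (the URL below is made up, and the caveat above still applies: the receiving program now sees the field as absent rather than empty):

  from urllib.parse import parse_qsl, urlencode, urlsplit, urlunsplit

  def strip_empty_params(url):
      # Drop query parameters whose value is the empty string.
      parts = urlsplit(url)
      kept = [(k, v) for k, v in parse_qsl(parts.query, keep_blank_values=True) if v != ""]
      return urlunsplit(parts._replace(query=urlencode(kept)))

  strip_empty_params("http://www.example.com/search?q=test&qs=n&form=QBLH&sk=")
  # -> 'http://www.example.com/search?q=test&qs=n&form=QBLH'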


Could they not set those parameters in a POST request so they don't show up in the URL?


How possible is it to perform a POST within a GET request?


I presume robinh means: why not set all those additional params when the search form is POSTed, rather than in the initial GET?

EDIT: Ok the follow up comment doesn't indicate that... but it would be just as valid to do it that way if they are indeed search modifiers.


That was what I was trying to ask: can you do both, or would that be absurdly impractical/impossible? Alternatively, can't they do only a POST and somehow generate URLs dynamically based on the search query?


Gratuitous use of POST breaks the back button.


It's perfectly possible to POST to a URL with GET query parameters.

For Google, though, I'm not sure you'd want POST requests at all, unless it's to set cookies and do a redirect on POST.


While I agree that URLs should be considered intrinsic to good UI, social media is also undermining the value/importance of semantic URLs.

Why put extra work into making your RESTful URL structure more semantic, in other words, if Twitter is just going to shorten them to the point that they are no longer fully readable, or Facebook is just going to hide them behind a preview view?


Twitter does show the full URL in the tooltip.


I agree. Except hierarchical is problematic. The world is not hierarchical. Or rather, it is composed of innumerable hierarchies, some disjoint, some overlapping, some redundant, some varying with time, and which one to apply and what the levels are is a huge bikeshed / distraction.

Chair example:

furniture/chairs/desk/chair

inventory/current/reorder/chair

customer/me/bought/chair

customer/me/wishlist/wedding/chair

products/used/modern/office/chair

products/wood/four legs/padded/black/chair

ad nauseam.


On a side note, I have made a movie web app where you can just enter a movie name into the URL to get its rating & trailer, like www.instamovi.com/#<ANY_MOVIE_NAME_HERE>. It works for keywords as long as they are spelled correctly... like http://instamovi.com/#bourne


> A study conducted by Microsoft found URLs play a vital role in assessing the security and credibility of a website

Why then do most Microsoft sites not follow this finding? Also, a lot of their products break it as well (I'm looking at you SharePoint and CRM).


MSR does research and prototyping. It's up to the product teams to implement important findings and they still might have other priorities first.

Also, »Microsoft« isn't one big monolithic entity, and it's not uncommon for individual parts of it to do things in quite different ways.


The only URLs I care about are the main domain URLs. And I don't even type them. I just use Google to reach the main site. It is faster than typing a full URL. Even more so on mobile devices.

Or for commonly accessed sites I just type a few letters on my browser address bar. reddit.com is actually re+enter. news.ycombinator.com is actually ne+enter to me. After I reach the main site I usually click around or use the site's search bar.

So, I would say that good URL names are a secondary optimisation.

I would prefer to focus on this priority: 1) A good unique domain name; 2) Good SEO; 3) Good site information architecture; 4) Good internal site search.


His examples aren't helping his case. If the most successful store and the most popular search engine don't use pretty URLs why should anyone else care?


Being big doesn't necessarily mean being right, and big companies sometimes have good reasons not to follow good practices.

Case in point: it took those same companies a while to switch from table layouts to CSS-based layouts [1][2].

[1] http://webmasters.stackexchange.com/questions/20408/if-css-i... [2] https://forums.digitalpoint.com/threads/why-does-google-use-...


Well, obviously URLs are for people because raw IP addresses are unsuitable, but that doesn't mean that textual URLs as they exist today are our best option. Even well-designed URLs are too complicated. "https://news.ycombinator.com/item?id=5498198" is mostly devoid of meaning even to me. I can tell that that URL is referencing a discussion on Hacker News, but "Hacker News" or the title of the article are not present in the URL.

Hierarchical URLs betray the underlying model of the internet as a series of interrelated documents. People don't care about understanding the layout of files on a web server; they just want to open Facebook, or their email, or perform a search. Nobody types "http://www.facebook.com" into their browser. They either click a bookmark or type "facebook" into the search or URL bar. What happens next is up to the browser.

The best solution would conform to the already existing mental model that people have. They don't think of a website as a bunch of documents on a web server (despite the shared vocabulary with printed media - words like "page" and "bookmark"). Their mental model is probably something like buildings on a city block. You can pick one to go into, and when you're inside you can do things and learn things that are unique to that building. Rooms are connected by hallways and doors. There are windows where you can see outside or into other buildings. You can bring things with you into the building and take things out when you leave. To get back to a room in a building that you've been in previously, you can either go back to the front door and follow the path you took originally to get to the room, or you can "bookmark the page", which is like a shortcut directly to that room.


I think I read somewhere that a good number of Flickr's users just hack this URL: http://www.flickr.com/photos/tags/<tag_name>

Like somebody else said, I blame the tools, the requirements (but we gotta track the referring url of the referring url!!!), and the programmers.


Kind of a tangent to this, but I'm always really amused by generated clean-looking urls that cut out short words. It's very common to have the word "not" or "no" drop out and produce a headline with completely inverted meaning.

localnewspaper.example.com/1934342342/mayor-dropping-race-after-scandal -> Mayor Not Dropping Out Of Race After Scandal


This is pretty basic, but often ignored advice for information architecture.

The article linked to [1] was also pretty interesting. The line that caught my eye was "The URL will continue to be part of the Web user interface for several more years...". Keep in mind that was published in 1999, therefore Nielsen seems to be implying that he believed the URL will eventually become less relevant as a part of the UI. I don't see this happening anytime soon on the "web", but it's certainly true on "mobile".

Modern Web Application frameworks such as WordPress, Ruby on Rails and many more are forcing good practices on the web moving forward. Most startups today are following all the practices detailed in this article as a result of the frameworks imposed on them.

[1] http://www.nngroup.com/articles/url-as-ui/


Good URLs are:

- Short over long. Consider removing useless words from the url like news.ycombinator.com/tips-for-designing-good-urls

- Concise. To the point, describe the page content from the url

- Use lowercase. Generally the best idea, for sharing links and technical issues (Apache is case-sensitive sometimes)

- Consistent. Stay consistent; make a style guide for URLs if necessary

- Conscious of trailing slashes. Stick with trailing slashes or no trailing slashes. Redirect to preferred form.

- Logical. Follow a logical structure, that follows the structure of the site. A good URL might read like a breadcrumb: site.com/category/product-name, this works for silo'ing your content. Other sites (such as news sites or without a category) might benefit more from the shortest url possible.

- Using dashes for spaces. No underscores, + or %20 spaces.

- Not using special chars. Consider replacing é with e and removing any non-alphabet non-number character like: ' " (

- Canonical. There should be only 1 unique URL in a search engine's index for a given page's content. Use canonicals, 301s, or smart use of URLs to make sure this is the case.

- Degradable. What happens if a user visits example.com/category/product-name/ and then removes the /product-name/ part? The URL-structure should allow for this and example.com/category/ should return content (preferably the category description)

- Timeless. If you have an event and you set the date inside the URL, then after this date has passed, this URL gets less valuable. Either 301 these aged URLs to the current event URL, or make it so your URLs can be "re-used" for future events. Cool URLs don't change.

- Optimized for search. Use a keyword tool, to find out what users might be searching for and use the relevant keywords inside your URL. Keyword in URL is a (minute) ranking factor. Bolded keywords in URLs help discoverability.

- Not using excessive dynamic variables. These will confuse your users and search engines.

- Flat over deep. Hiding content away in many subdirectories can hamper readability and search engine crawling. Avoid example.com/cat/subcat/subsubcat/widgets/green/second-hand/widget-deluxe/reviews

- Extension agnostic. A URL ending in .php, .py, .xml, .htm, etc. can be changed to another extension in the future, requiring an update or causing inconsistency in the URLs.

- Not spammy. Good URLs don't repeat (slight variations of) keywords or (ab)use extensions like .font or .shoe

- Not disclosing technology. There is little reason to add cgi-bin to your URLs (unless you want to confuse your competition). An extension to really avoid is the .exe extension (mapserv.exe?doc=15)

- Non-traversable. When using document IDs, all URLs can easily be scanned/traversed in a loop or manually, including URLs not yet ready or never meant for publication.

- Secure. Not susceptible to injection, XSS etc.

I'd say URLs are both for humans and machines.
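As a small illustration of the lowercase/dashes/trailing-slash/canonical bullets above, this is the kind of normalizer I have in mind (a sketch, not production code; the "no trailing slash" policy is just one example of picking a single preferred form):

  import re

  def preferred_path(path):
      # Lowercase, underscores -> dashes, collapse duplicate slashes,
      # and strip the trailing slash everywhere except the root.
      path = re.sub(r"/{2,}", "/", path.lower().replace("_", "-"))
      if len(path) > 1 and path.endswith("/"):
          path = path.rstrip("/")
      return path

  # On each request: if the requested path != preferred_path(requested path),
  # answer with a 301 to the preferred form, so only one URL per page
  # ends up in a search engine's index.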


>Not using special chars. Consider replacing é with e and removing any non-alphabet non-number character like: ' " (

I disagree with this. Maybe this works in English where some spell naïve with two dots to appear fancy, but in some languages letters with diacritical marks are completely different.

Now, with IDNs, people should be moving towards more internationalization, not less. Sorry for the bitterness, but the English alphabet is like IE6.


I'm not a native English speaker, but I still design URLs to only contain a-z, 0-9, dot, dash, ?, ; (preferred over &), =, %, and /. No spaces, no umlauts, just the plain Latin letters.

Why? It's prettier (imho), guaranteed to work everywhere and most people don't expect é, ü, å or ł to appear in urls.


I guess that's fair if you want to play it safe, but I don't agree that we should do something wrong because people are used to it that way. In Romanian “fata” can have 6 different meanings (some related) depending on how you put the diacritics: fata, fată, fața, față, făta, fâță. If you want your URLs to be meaningful and you care about people who don't use only English letters you should use all types of letters.


> - Timeless. If you have an event and you set the date inside the URL, then after this date has passed, this URL gets less valuable. Either 301 these aged URLs to the current event URL, or make it so your URLs can be "re-used" for future events. Cool URLs don't change.

Where does the user go to find the 2008 event, if his post from 2008 has a URL that now points to the 2013 event?


If relevant, in the archive.

A link to https://us.pycon.org from a post dating back to 2008 now redirects to https://us.pycon.org/2013. But that is on a domain basis. Hypothetical: python.org/pycon could always show the most current event, with a link to an archive, if you'd want to read the announcement page of the 2008 event, for whatever reason.

If you use python.org/pycon-2012/ and it gets 1000s of links, then you lose all that if you create a new URL python.org/pycon-2013/.

This does depend on the event, of course, and how badly its relevance is reduced after its date has passed.

Compare: /search-engine-ranking-factors-2010 /search-engine-ranking-factors-2011 /search-engine-ranking-factors-2012

or a catchall:

/search-engine-ranking-factors

Showing the most current one. A URL like the above will collect links from all the years, instead of spreading them out over multiple years.

P.S.: If you register pycon2013.org then domain squatters might register pycon2014.org pycon2015.org etc.


I agree with nearly all of these, and I even agree with this point:

> Conscious of trailing slashes... Redirect to preferred form.

The interleaved explanation, however, is problematic:

> Stick with trailing slashes or no trailing slashes.

A trailing slash indicates a resource that is in some sense a container. (Historically one would have called this a directory.) The browser knows that '.' refers to the current resource with a trailing slash, while it refers to the parent resource path when the current resource doesn't have a trailing slash. You want to leave off the slash when you know a particular resource is a leaf node.
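You can see that resolution behaviour directly, e.g. with Python's urljoin (the URLs here are hypothetical):

  from urllib.parse import urljoin

  # With a trailing slash, "." stays inside the container:
  urljoin("http://example.com/category/product/", ".")
  # -> 'http://example.com/category/product/'

  # Without it, "." resolves to the parent:
  urljoin("http://example.com/category/product", ".")
  # -> 'http://example.com/category/'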

I actually wouldn't have a problem with advice to "stick with trailing slashes", but I don't think "no trailing slash" would be a sustainable policy. It would be too difficult to keep URLs straight between the server and the browser side.


I agree, those are great points. It's easy to forget these details, but when you're running a website, changes in URL structure can cause a lot of problems, both in your site functionality and in your flow of new visitors.

I got my start with internet business by "SEO." What did I do? I made sure all of my URLs were very simple and shallow. Google liked this and I got a lot of users for free when my competitors had better sites.


I disagree with non-traversable. I think that it's more important to implement proper security... I do 404/410 for documents that are removed or don't exist yet. Some will 404 unless you're logged in as an admin or the owner, which isn't technically correct, but imho effective.


Thanks to this post, I decided enough is enough and started on a user script that redirects different URLs to their "pretty" version. It currently supports Google Web Search (Google Instant Search is not yet supported), although I'll be adding much more when I get home and in the next few days. I've named the script "Prettify-URL" and it is available here:

GitHub: https://github.com/danielnr/prettify-url

UserScripts: http://userscripts.org/scripts/show/164318

Note that this is absolutely not meant to be an end-all solution to the problem, but instead a ray of sun in a thunderstorm of ugly URLs. The core responsibility still lies on the developer, this just tries to make things a bit more bearable.


My own notes on URL as user interface include a number of ways you can improve your URLs and allow users to guess them: http://alanhogan.com/url-as-ui

I also list a number of positive and negative examples from the wild.


I disagree with the "https://news.ycombinator.com/item?id=5489039 versus https://news.ycombinator.com/5489039/if-the-earth-were-100-p..." example - it's in contradiction to the author's earlier point that URLs should be "hackable". With the former style, I know that if I want to see other Hacker News articles, I can just change the number (granted, it's not the most efficient way of browsing HN) - with the latter, I can't modify the URL without knowing the title of the article I'm looking for.


Unless the title is superfluous a la amazon.


Missing element:

This is fantastic for SEO: Main-Category/Sub-Category/Specific-Item is practically screaming "look at my site hierarchy and look how much data I have" about furniture -> chairs -> chair manufacturer -> chair model.


I'd like to see a mechanism to pass data to a page in the URL once, but have it discarded such that the URL the user sees and the URL used upon refresh lacks it. This would be nice for all sorts of things:

* That pesky analytics stuff (rel= on YouTube for instance) which you don't want to re-send on refresh or when someone passes on the URL to someone else (because now your data is inaccurate)

* Error messages specified by URL parameters (we only want to show them once)

* URL parameters containing secrets allowing someone to access a page (we don't want to accidentally pass them on)

etc.


Something like a POST request?


Yes, but it would become a GET request were you to refresh or go back.


You can use pushState to do this in modern browsers.


I don't really want to add it to the user's history, though. Breaking the back button is bad.


In that case you use `history.replaceState()`, or just plain old `window.location.replace('url')`


Well, the latter would cause a reload, but the former sounds good. I'll try it, thanks!
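
For what it's worth, a minimal sketch of the replaceState approach, assuming a modern browser and a one-time parameter named "flash" (both the parameter name and the showFlash helper are made up):

  var params = new URLSearchParams(window.location.search);
  if (params.has('flash')) {
    showFlash(params.get('flash'));  // hypothetical: display the one-time message
    params.delete('flash');
    var query = params.toString();
    // rewrite the address bar without adding a history entry, so a refresh,
    // the back button, or a copied link never carries the parameter again
    history.replaceState(null, document.title,
      window.location.pathname + (query ? '?' + query : '') + window.location.hash);
  }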


?utm_source=twitter&utm_medium=feed&utm_campaign=Feed%3A+fastcompany%2Fheadlines+%28Fast+Company%29#1

This is the worst offender, in my opinion. It makes any link really ugly to share.
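
A small sketch of stripping these before sharing a link, in the spirit of the userscript mentioned above (not its actual code; assumes a modern browser):

  function stripTracking(href) {
    var url = new URL(href);
    // collect the keys first so we don't delete while iterating
    Array.from(url.searchParams.keys())
      .filter(function (key) { return key.indexOf('utm_') === 0; })
      .forEach(function (key) { url.searchParams.delete(key); });
    return url.href;
  }

  // e.g. stripTracking('http://example.com/story?utm_source=twitter&utm_medium=feed&id=42')
  // => "http://example.com/story?id=42"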


"If the URL looks like garbage people won’t click it"

I'm not so sure of this. URLs that are over-optimised seem link-baity to me, and I'm more inclined not to click them.


Completely agree... this includes domain names.

At some point, searching for terms like "best hdtv 2013" would return a page full of sites with perfect URLs like http://www.best-hdtv-2013.com/, and all of them would be extremely spammy and lacking in actual content. It seems less prevalent these days.


I agree wholeheartedly.

"http://example.org/furniture/desk-chairs/herman-millers-mirr... makes me think crappy search landing page website that is probably malware infested I don't want to click, ever

However, I believe that is mostly due to the domain. If it were amazon.com I'd click it; likewise, if it were http://www.example.com/gp/product/B0002K11BK/ref=sr_1_5?ie=U... I would also think I'm going to get spammed :)


And the other thing is that we aren't really clicking on URLs - a URL is just a string; you cannot click on it. We are clicking on anchor text, and using ugly URLs as anchor text can be the problem. If the anchor text is a descriptive title, then most (non-techie) users don't even see the URL before they click, because they don't know it's displayed at the bottom of the browser when you hover over a link.


...cannot believe that nobody else has pointed this out


Ha - I wrote about this at length years ago, even photoshopped little examples: http://uxmag.com/articles/making-the-url-bar-useful-again

I think it should be done, but I think it would have to be header data explaining the "layout" of the URL, not a standard URL scheme.


What this boils down to is that there is space on the Web for both human-consumable and machine-consumable URLs.

If a URL becomes popular for human usage, it is a safe bet to keep it as it is; that doesn't mean you cannot have all sorts of gobbledygook URLs which also get you to that same resource.

There is no need to have this be an either-or proposition.


Just to review: examples of sites doing it wrong include Google and Amazon, two of the most successful websites ever. Doesn't seem to have hindered their growth much.

I like a clean semantic URL, but if I'm being honest with myself, I know that is just my opinion. I don't know of any real-life correlation between URLs and business outcomes.


Maybe there is, maybe there isn't, but this argument is a fallacy. To say "they are successful, who are we to suggest improvements?" is an appeal to authority. Just because there is success does not mean there is not room for improvement.


> To say that "they are successful, who are we to suggest improvements" is an appeal to authority.

The argument being made is that clean URLs are important to the success of websites. That the most successful sites on the internet are sites that don't use clean URLs is counterevidence to that claim. That's not an appeal to authority.


There are countless examples of web sites that have found success through better URLs leading to better rankings. I can say with absolute confidence that better URLs can lead to better business outcomes.


URLs are for Browsers. Go to any Asian country, where most, if not all, of the population searches for the websites and content they need.

The goal in designing good URLs is a scheme that allows the site to grow without "abandoning" URLs.


I have to say it depends. Some are for people and some are for machines.


I can't help but think of Kyle Neath's blog post about this in 2010: http://warpspire.com/posts/url-design/


The OP lost me at the Google example. The search URL is one URL that has no need to be clean. Google wants you to get used to using the omnisearch box because it can provide such niceties as auto-suggest, instant results, etc. Plus, the Google query interface is no longer just a text bar but is also voice-activated... it works against Google's UI/UX intent for you to get used to hacking things in the address bar.

And yes, for hacker types this intention of Google's seems overbearing... but for the other 99.9% of the population, Google is likely more interested in making search uniformly accessible than in making clean URLs.


I agree that Google is aiming at most people, and not the HN crowd, and that they're doing a good job.

But I remember the days when you could craft a Google search URL by hand and tweak the results. It was part of an advanced user's toolkit. All of that has been taken away. Searching is now opaque.


The URLs Google search presents don't invite hand-editing, but it still works just fine.

For example:

https://www.google.com/search?q=pillow&start=100

But you may have tweaks in mind that no longer work, and so forth.


This is the point right here. Google intends these URLs for machine consumption; the fact that you can simulate the computer's work by hand doesn't change that. It also doesn't prevent them from creating another, alternative URL scheme which is meant to be entered by humans explicitly.


Well why would they do that? How many people do you think it'd benefit?


At the risk of inadvertently dragging us into a discussion about the "bubble" that Google and other social services lock us into...Here are two points that I think are worthwhile:

1. Search has always been opaque. We've never known the complete details of PageRank, and we know even less about the hundreds of other flags and signals used by Google search to parse a query as vague as the famous "mike siwek lawyer mi" into something useful.

2. It is largely a good thing that we don't need to hack the search parameters anymore... because, in one sense, it means that search has gotten amazingly accurate. It's so good that I hardly ever go to the second page of results... instead, if I don't find what I want in the first 10, I just slightly alter my text query and Google will eventually get what I need (or at least what I think I need, but that's a philosophical question). I think that is a better UX even for hackers, as you can refine using natural language rather than tinkering with vague params.


Google can't control where the user wants to pass on or display the URL.


Quick question: are there other alternatives to URLs? I'm not very informed on the subject.


There are people who say that clean URLs are SEO crap. What do I say to those?


URLs are for computers too!


If URLs were really for people we wouldn't see people sent to prison for manipulation of URLs.


In essence, Andrew Auernheimer – or Weev, as much of the Internet knows him – was found guilty of incrementing a number on a url – doing basic arithmetic – and has been ceremoniously chucked behind bars for the next 41 months of his life – as a result of speaking up to point out a security problem.

https://asherwolf.net/the-tragedy-of-jailing-weev-the-intern...

(waits for further downvotes)


You're absolutely right. It's frightening that there is now case-law precedent that makes altering a URL felonious.


While I understand the need and implement this on every site, I'll be the first to say I really don't care about this and I think it's stupid. Just my opinion.


Have you ever worked on search?


No, I live under a rock.


That makes two of us then. (:



