What we need is an open standard for URL shorteners, where the shortener publishes the time-to-live of the URL and other metadata (I don't know exactly what, but I'm sure the network folks can come up with a sensible list). That way shorteners are transparent, and systems like archive.org and search engines can simply remove the layer of indirection on their end. Of course, shorteners may not want that to happen, but there needs to be a way to gracefully transition to the direct link in long-term archives. Perhaps the system could even negotiate that time before removing the middleman.
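A minimal sketch of how an archiver might consume such a standard, assuming a hypothetical "X-Shortlink-Expires" header that a cooperating shortener publishes alongside its redirect. Neither the header name nor any shortener's support for it exists today; this is only an illustration of the idea.

```python
# Sketch: peek at a short URL without following the redirect, and read the
# destination plus a hypothetical published TTL. Standard library only.
import http.client
from urllib.parse import urlsplit

def inspect_short_url(short_url):
    """HEAD the short URL and report the redirect target and the
    (hypothetical) time-to-live the shortener publishes for it."""
    parts = urlsplit(short_url)
    conn_cls = (http.client.HTTPSConnection
                if parts.scheme == "https" else http.client.HTTPConnection)
    conn = conn_cls(parts.netloc)
    conn.request("HEAD", parts.path or "/")
    resp = conn.getresponse()
    target = resp.getheader("Location")          # the real destination
    ttl = resp.getheader("X-Shortlink-Expires")  # hypothetical TTL metadata
    conn.close()
    return target, ttl

if __name__ == "__main__":
    # Placeholder short link; a real archiver would store `target` directly
    # once the published TTL has safely passed.
    print(inspect_short_url("https://example.org/abc123"))
```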
Of course, services like Twitter could allow <a href> tags (I don't know if they do; I don't use Twitter), which would go a long way toward letting users save space while posting links.
This is really interesting. Anyone using a URL shortener is essentially setting up a parallel DNS infrastructure (as the article says). And there is nothing preventing anyone from doing exactly that right now: you could run a DNS server that re-maps any name to any IP you like. The new root "just" has to get people to point at your servers instead of the ones their ISP or organization hands out.
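To make the "parallel DNS" point concrete, here is a toy sketch using only the Python standard library: a remapping table consulted before the ordinary resolver, much as an alternate root would answer before (or instead of) your ISP's servers. A real alternate root would of course speak the DNS wire protocol, not consult a Python dict; the names and addresses below are placeholders.

```python
import socket

# Our "parallel root": any name mapped to any address we choose.
REMAP = {
    "example.com": "203.0.113.7",
}

def resolve(name):
    """Answer from our remapping table if we claim authority over the name,
    otherwise fall back to the ordinary system resolver."""
    if name in REMAP:
        return REMAP[name]
    return socket.gethostbyname(name)

print(resolve("example.com"))  # answered from our parallel table
print(resolve("python.org"))   # falls through to the real DNS
```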
This is usually frowned upon by the W3C TAG (as I know from experience on the XRI TC), because XRIs use an alternate resolution process. However, I've created shortxri.net, which shortens URLs to an XRI that can then be used as a relative URL. One can tack an XRI onto any domain that supports XRI resolution, and the XRDS (which the XRI ultimately points to) can be pushed to other domains as well. So the XRI @si*3*48u can be attached to http://shortxri.net/@si*3*48u, http://xri.net/@si*3*48u, or the even shorter http://xri.be/@si*3*48u, and all resolve to the same place.
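A rough sketch of the "same XRI, many front doors" idea: the one XRI from the comment is appended to several proxy-resolver hosts, each of which should hand back the same XRDS document. The Accept header and the assumption that these hosts run proxy resolvers that honour it are mine, not something the services document; treat this as illustrative only.

```python
import urllib.request

XRI = "@si*3*48u"
PROXIES = [
    "http://shortxri.net/",
    "http://xri.net/",
    "http://xri.be/",
]

def fetch_xrds(proxy, xri):
    """Ask a proxy resolver for the XRDS document behind an XRI."""
    req = urllib.request.Request(
        proxy + xri,
        headers={"Accept": "application/xrds+xml"},  # assumed content negotiation
    )
    with urllib.request.urlopen(req) as resp:
        return resp.read()

for proxy in PROXIES:
    xrds = fetch_xrds(proxy, XRI)
    # If the proxies behave as described, all three describe the same endpoint.
    print(proxy, len(xrds), "bytes")
```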
XRIs don't even have to be just URLs... and logging in with OpenID gives you even more options.