Anyway, pointless initiative. Can't help but think this feels like a project you create in the hope of getting attention from people writing blogs about you trying to start something new, not in the hope of it actually taking off. I mean, robots.txt makes sense because robots know to look for it. It obviously makes 100x more sense to have a hyperlink to an "about us" or "who's behind this site?" page with a URL of their choice, if it's designed for people to read. Oh, and many, many sites already do that.
I think this is an attempt to create a soopersekrit easter egg standard so that the guys who write a site can pat each other on the back without the customer needing to know about it.
What is it? An initiative to let people know who created a website.
A TXT file that contains information about the different people who contributed to building the site.
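For reference, the file is just plain text at the site root, organized into labeled sections roughly like the examples on humanstxt.org (the names and details below are made up):

```
/* TEAM */
Developer: Jane Example
Twitter: @janeexample
Location: Madrid, Spain

/* THANKS */
Name: Joe Placeholder

/* SITE */
Standards: HTML5, CSS3
```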
Robots.txt specifies a set of URLs that automated crawlers are not supposed to access. I'm not sure how this is "like robots.txt, but for humans".
My guess is that the theory behind this is: robots.txt is a file that robots look for to get certain kinds of information about a site, so humans.txt is a file that people look for to get certain kinds of information about a site.
I'm kind of surprised at all the grumpy comments -- this idea made me smile.
I like the idea of a team leaving their collective signature in a hidden-away corner of a website. The semi-secrecy (after all, how many people are ever going to read a website's humans.txt?) and the feeling of ownership this gives is really neat.
The idea of hard coding a URI, like is done for robots.txt is a bad idea. Let's not continue to make the same mistakes over and over again.
-- http://bitworking.org/news/No_Fishing
Fair comment here[1] about the specific case of robots.txt, but let's not do this kind of thing gratuitously.
Maybe what we need instead is the X-BOFH header[2]:
X-BOFH: http://www.xxxxx.de/bofh/xxxxxx.html
The actual URL it points to has been obscured to protect the guilty, and a local mirror[3] provided in its stead.
How many people does it take to put up a website promoting a txt-file addition to webroot? 5, apparently. Next up they'll take millions in VC, integrate with Facebook and write blog posts on it all. </sarcasm>
Somehow a balance should be kept between technology involved and ego show-off:
Static webpage: barely qualifies for even any credit.
Contains sign-up form: a minimal "About" page.
Asks for personal info: names and credible external links.
Charges you money: add photos of yourselves and references.
"like robots.txt, but for humans"
This is indeed nothing but a marketing/PR gimmick. The only connotation I could think of was that, just as robots.txt has something to do with robots, this has something to do with humans.
Utility:
I won't go so far as to say it is completely useless. While many sites have an about or colophon section, they end up naming only the people who are most active or at the top of the ladder. What about Facebook, with so many employees? That is where I see this actually being used.
As I tweeted, it is more like the Credits section for the web.
It sounds like the "bickering" at Netscape over their about:authors page was simply the result of having a culture where people would bicker about things like the details of an about:authors page.
As many have pointed out, this is not really analogous to robots.txt.
The first thing I imagined was a text file that said stuff like, "Don't read anything in the Features section. Also, don't try to use the admin login panel!"
But really, what this idea should be called is developers.txt.
I thought it was a silly idea, until someone else here talked about it providing a way of verifying work. Anyone can post a website with screen captures of other sites, and claim they worked on them. Even if the site does list a developer firm in the footer, as long as I worked there, I can claim to have worked on that site.
A developers.txt file like their example would make for a verifiable list of the people who actually worked on the site, in a way that does not add clutter to the actual content.
I know clients we work with probably don't want us putting all our names on their About page, but would have no problems with a developers.txt file that normal users are never going to see.
I see this being useful for future developers – the number of sites I've taken over with no idea who made them or how to get server access etc. Having an easy way to find out who to contact would make things much better than having to wget everything each time.
One major flaw of this approach is that whoever currently controls the site can edit out the people who worked on it before them. The only fool-proof way to do it is to use a service like http://creatorfinder.com, which stores a history of creators and also lets a creator display their verified portfolio.
I'd rather not speculate as to the usefulness of this, but I do know one thing: the English here is off and needs to be cleaned up (I realise the authors are Spanish).
Agreed. After some time there would be a database and a ranking with web site authors, skills, languages, etc. Interested people would look into it to find people that match their type of site design...
Didn't something like this used to be achieved with meta tags anyway? I remember when it was common to have meta tags with author names, editors (I have often seen the "generator" meta tag set to the author's WYSIWYG editor), and contact information.
I like the idea of giving the developers/designers/copywriters/whoever some credit; however, I doubt this is the way it should be done. Most web designers/developers/agencies already place a link in the website's footer, so it's not that hard to know who was involved.
I love the irony in reading "We Are People, Not Machines." just as the font-face CSS loaded and converted the entire page from a human-readable variable-width font to a machine-like fixed-width font.
I don't understand the key-value format used if this is for humans. It would have been more appropriate to use a sentence like "The Chef is..." if this is meant for a human being. Or is the file meant to be read by robots so they treat humans better?
When doing contract work, you don't normally get credit for a site on the about page. Sometimes you may get permission to add a footer link. I think this is great for those situations where you don't get either. My hope would be that most companies would not mind a simple text file in the root which isn't linked anywhere but is well-known.
My portfolio of sites becomes verifiable. Yes, I built these sites, and here are the humans.txt files to prove it.
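A minimal sketch of what that verification could look like, assuming a site publishes a key-value humans.txt like the humanstxt.org examples (the names and file body here are hypothetical):

```python
def parse_humans_txt(text):
    """Parse simple 'Key: Value' lines from a humans.txt body into a dict of lists."""
    entries = {}
    for line in text.splitlines():
        line = line.strip()
        # Skip blank lines and /* SECTION */ headers
        if not line or line.startswith("/*"):
            continue
        if ":" in line:
            key, _, value = line.partition(":")
            entries.setdefault(key.strip().lower(), []).append(value.strip())
    return entries

def site_credits_person(humans_txt_body, name):
    """True if the given name appears under a Developer or Designer credit."""
    entries = parse_humans_txt(humans_txt_body)
    return name in entries.get("developer", []) + entries.get("designer", [])

# Made-up file body; in practice you would fetch https://example.com/humans.txt
sample = """/* TEAM */
Developer: Jane Example
Designer: Joe Placeholder
Site: example.com
"""
print(site_credits_person(sample, "Jane Example"))  # True
```

The point being: because the file sits on the client's own domain, a claim in a portfolio can be checked against it directly.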
I've been at plenty of companies and built many sites, and can't really prove I had a hand in building any of them.
You could just use a meta tag or link tag with rel="author". Both are hidden from users and, unlike this proposal, are also de facto standards for including author information in web pages.
All the comments and suggestions here are good. One more: why not just use meta tags for this same purpose? Then you don't even need to create another file.
There's already an 'author' meta name attribute for instance, and you can define your own on-the-fly too.
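For instance, a page head might carry the credit like this (the name and URL are placeholders; the rel="author" link can point at any page the author chooses):

```html
<head>
  <meta name="author" content="Jane Example">
  <link rel="author" href="https://example.com/about-jane">
</head>
```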