The consequences of robots.txt misuse can also be disastrous for a regular site. For example, I've seen cases where multiple 'page indexed but blocked by robots.txt' warnings led to sites being severely down-ranked.
My assumption is that search engines don't want to list too many pages that everyone else can read but they cannot.
Google used to have a /killer-robots.txt which forbade the T-1000 and T-800 from accessing Larry Page and Sergey Brin, but they took it down at some point.
> The Sitemap protocol enables you to provide details about your pages to search engines, […] in addition to the XML protocol, we support RSS feeds and text files, which provide more limited information.
> You can provide an RSS (Real Simple Syndication) 2.0 or Atom 0.3 or 1.0 feed. Generally, you would use this format only if your site already has a syndication feed.
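For context, a minimal sitemap in the XML format the quote refers to looks roughly like this (the URL and date are placeholders):

    <?xml version="1.0" encoding="UTF-8"?>
    <urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
      <url>
        <loc>https://example.com/</loc>
        <lastmod>2024-01-01</lastmod>
      </url>
    </urlset>

The text-file variant is even simpler: one absolute URL per line, nothing else.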
Think of robots.txt as less of a "no trespassing" sign and more of a "you can visit, but here are the rules to follow if you don't want to get shot" sign.
If you do not respect the sign I shall be very cross with you. Very cross indeed. Perhaps I shall have to glare at you, yes, very hard. I think I shall glare at you. Perhaps if you are truly irritating I shall be forced to remove you from the premises for a bit.
There's a lot of talk of deregulation in the air; maybe we'll see Gibson-esque Black Ice, where rude crawlers provoke an automated DoS. A new Wild West.
They may or may not, though respecting robots.txt is a nice way of not having your IP range end up on blacklists. With Cloudflare in particular, that can be a bit of a pain.
They're pretty nice to deal with if you're upfront about what you are doing and clearly identify your bot, as well as register it with their bot detection. There's a form floating around somewhere for that.
FWIW, that’s why I’m working on a platform[1] to help devs deploy polite crawlers and scrapers out of the box that respect robots.txt (and 429s, Retry-After response headers, etc). It also happens to be entirely built on Cloudflare.
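For what it's worth, the 429/Retry-After handling mentioned above fits in a few lines of Python. This is just a sketch; the bot name and retry count are made up, and Retry-After is assumed to be given in seconds (it can also be an HTTP date):

    import time
    import requests

    def polite_get(url, user_agent="ExampleBot/1.0", max_retries=3):
        for _ in range(max_retries):
            resp = requests.get(url, headers={"User-Agent": user_agent})
            if resp.status_code != 429:
                return resp
            # Back off for as long as the server asks; default to 30 seconds.
            delay = int(resp.headers.get("Retry-After", 30))
            time.sleep(delay)
        return None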
What's the purpose of "User-Agent: DemandbaseWebsitePreview/0.1"? I couldn't find anything about that agent, but I assume it's somehow related to demandbase.com?
But why are it and Twitter the only whitelisted entries? Google and Bing being missing is a bit surprising, but I assume they're whitelisted through a different mechanism (like a Google webmaster account)?
It is one of the services they use. As per the cookie policy page [1]:
> DemandBase - Enables us to identify companies who intend to purchase our products and solutions and deliver more relevant messages and offers to our Website visitors.
I thought about doing something like that, but then I realised: what if someone linked to the trap URL from another site and a crawler followed that link to the trap?
You might end up penalising Googlebot or Bingbot.
If anyone knew what that trap URL did, and felt malicious, this could happen.
A crawler could easily avoid that by fetching the target domain's robots.txt before fetching the link target. However a website could also embed the honeypot link in an <img> tag and get the user banned when their browser attempts to load the image.
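A minimal sketch of that "check robots.txt first" approach, using only the Python standard library (the user agent string is a placeholder):

    from urllib.parse import urlparse
    from urllib.robotparser import RobotFileParser

    USER_AGENT = "ExampleBot/1.0"  # placeholder, not a real crawler

    def allowed_to_fetch(url: str) -> bool:
        """Return True if the target site's robots.txt permits fetching this URL."""
        parts = urlparse(url)
        robots_url = f"{parts.scheme}://{parts.netloc}/robots.txt"
        parser = RobotFileParser()
        parser.set_url(robots_url)
        parser.read()  # fetches and parses robots.txt
        return parser.can_fetch(USER_AGENT, url)

    # Only follow an off-site link if the destination's robots.txt allows it;
    # otherwise skip it so you don't trip a honeypot.
    if allowed_to_fetch("https://example.com/some/linked/page"):
        ...

Of course, as noted, this does nothing for the <img> case, since it's the visitor's browser making the request, not the crawler.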
How do you discern a crawler from a human? Is it as simple as the fact that they might cover something like 80%+ of the site in one visit fairly quickly?
https://www.checkbot.io/robots.txt
I should probably add this SEO tip too because the purpose of robots.txt is confusing: If you want to remove/deindex a page from Google search, you counterintuitively need to allow the page to be crawled in the robots.txt file, and then add a noindex response header or noindex meta tag to the page. This way the crawler gets to see the noindex instruction. Robots.txt controls which pages can be crawled, not which pages can be indexed.
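A minimal sketch of that setup (the page path and values are placeholders): the page stays crawlable in robots.txt, and the noindex signal travels with the page itself, either as a meta tag or a response header.

    # robots.txt -- leave the page crawlable so the crawler can see the noindex
    User-agent: *
    Disallow:

    <!-- on the page you want removed from the index -->
    <meta name="robots" content="noindex">

    # ...or, equivalently, as an HTTP response header
    X-Robots-Tag: noindex

If you block the page in robots.txt instead, the crawler never fetches it, never sees the noindex, and any existing index entry can linger.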