Applebot was able to get away with doing exactly this, but I imagine that's because it's Apple, and websites knew Apple was about to send them enough traffic via Apple News to make it worth their while. I don't know if other search engine operators have tried this, but I'd imagine they'd get caught by rate limiters tuned for non-Google IPs and then get blocked.
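To make the rate-limiter point concrete, here's a rough sketch (not any particular site's setup) of how an operator might separate real Googlebot traffic from impostors, using the reverse-then-forward DNS check that Google documents. Anything claiming to be Googlebot that fails the check falls into the ordinary, much stricter per-IP limits:

```python
import socket

def is_real_googlebot(ip: str) -> bool:
    """Reverse-resolve the IP, check the hostname is under googlebot.com or
    google.com, then forward-resolve it back to the same IP. Error handling
    is deliberately minimal in this sketch."""
    try:
        host, _, _ = socket.gethostbyaddr(ip)  # e.g. crawl-66-249-66-1.googlebot.com
        if not host.endswith((".googlebot.com", ".google.com")):
            return False
        return socket.gethostbyname(host) == ip  # forward-confirm to defeat spoofed rDNS
    except OSError:
        return False
```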
Still, you keep saying all that as if most websites even notice that they're being crawled, and as if their operators know exactly when and by whom they're being crawled. Like the admin gets a notification every time a crawler comes by, with precise details about it. I don't think it's nearly as serious as you're trying to make it look.
I've been part of a team that operated a large website, and I've been paged over issues caused by someone crawling it too aggressively. Many people in the web operations field have had the same experience. Generally speaking, the larger the website, the more sensitive its operators are about who is crawling it and why.
To add another data point: I've had one of my websites brought down by Yandex bots before. There are also dozens of no-name bots (often SEO tools like ahrefs, semrush, etc.) that can sometimes cause trouble.
For me the problem was a combination of having lots of pages and a high cost per request (due to the type of website it was).
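For what it's worth, a lot of operators in that situation fall back on something like the sketch below: a crude User-Agent filter that turns away the noisiest crawlers outright. The bot names are just common examples and the flat 403 is illustrative, not a recommendation:

```python
# Minimal WSGI middleware sketch for refusing a handful of crawler
# User-Agents that commonly cause load problems.
BLOCKED_UA_FRAGMENTS = ("ahrefsbot", "semrushbot", "mj12bot", "dotbot")

class BotFilter:
    def __init__(self, app):
        self.app = app

    def __call__(self, environ, start_response):
        ua = environ.get("HTTP_USER_AGENT", "").lower()
        if any(fragment in ua for fragment in BLOCKED_UA_FRAGMENTS):
            start_response("403 Forbidden", [("Content-Type", "text/plain")])
            return [b"Crawling not permitted for this agent.\n"]
        return self.app(environ, start_response)
```

Well-behaved bots also respect robots.txt, but the ones that get you paged at 3 a.m. often don't, which is why a filter like this ends up in the request path at all.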
For other websites, it's not necessarily about the volume of traffic from bots, but about the risk of web scrapers getting their proprietary data. They're fine with Google scraping their info because that's where their traffic comes from. They're not okay with some random bot scraping them, because it could be taking their content and republishing it, scraping user profile data, or using it for some nefarious/competitive purpose.
> the risk of web scrapers getting their proprietary data
That's some weird logic, to me at least. That data is literally given away to everyone but some people or organizations can't have it? If you want to control access to it, maybe at least require people to register before they can see it? Is it even proprietary if it's public with no access control whatsoever?
This for-profit internet is just really such a parallel universe to me.
> This for-profit internet is just really such a parallel universe to me.
I know I have been a contrary commenter in this thread, but I hear you on this. What a monster we have built, and what always gets me is how trivial everything is. So much capital is flowing through these ephemeral software systems that, if gone tomorrow, would be ultimately inconsequential to mankind.
I mean, it's ridiculous to think about, but there's this giant, many-billion-dollar online marketing industry that I essentially don't exist for. If it were gone tomorrow, I really wouldn't notice, but it'd be the end of the world for some.
> and what always gets me is how trivial everything is
Whenever I read about corporations and how they work, I inevitably ask myself: where the hell does enough work to keep this many people busy even come from? Everything is ridiculously overengineered to meet imaginary deadlines.
> That data is literally given away to everyone but some people or organizations can't have it?
It's often a question of quantity. LinkedIn probably doesn't care about you scraping a few profiles, but if you're harvesting every bit of their publicly available data, then they get a little scared that you're building something that's going to compete with them.
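Purely to illustrate the quantity point: on big sites the tripwire is usually less about who you say you are and more about how much of the catalogue a single client touches in a given window. A toy version, with made-up threshold and key names, might look like this:

```python
import time
from collections import defaultdict, deque

WINDOW_SECONDS = 3600
MAX_PROFILE_VIEWS = 500  # invented threshold; real systems tune this per endpoint

_recent_views = defaultdict(deque)  # client key (IP, account, etc.) -> fetch timestamps

def looks_like_bulk_harvesting(client_key: str) -> bool:
    """Sliding-window counter: flag a client that pulls far more profile
    pages per hour than any human plausibly would."""
    now = time.time()
    window = _recent_views[client_key]
    window.append(now)
    while window and now - window[0] > WINDOW_SECONDS:
        window.popleft()
    return len(window) > MAX_PROFILE_VIEWS
```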
Same with Instagram or Facebook, for example, though in their case it's probably more of a user-privacy issue - at least that's what they say.
It's not really weird logic to me - seems to make sense.
> If you want to control access to it, maybe at least require people to register
Most of the time they can't do this, because they need the Google traffic. LinkedIn wants Bob Smith's profile to show up in the SERP when you search for "Bob Smith", because that helps them get signups. Google won't list the page if the content is gated behind a sign-in/register wall.
There are syndicated blacklists that get fed into automatic traffic filters. Not to mention that a surprising amount of the web is fronted by Cloudflare and other CDNs, which makes that kind of traffic detection and blocking more effective and widespread than you might expect.
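As a rough illustration of what feeding a syndicated blacklist into a traffic filter can look like (the file name and one-CIDR-per-line format are invented for the example; CDNs like Cloudflare do the equivalent at their edge):

```python
import ipaddress

def load_blocklist(path):
    """Read one CIDR range per line from a shared blocklist feed,
    skipping blanks and comments."""
    networks = []
    with open(path) as fh:
        for line in fh:
            line = line.strip()
            if line and not line.startswith("#"):
                networks.append(ipaddress.ip_network(line, strict=False))
    return networks

def is_blocked(ip, networks):
    """True if the source IP falls inside any blocklisted range."""
    addr = ipaddress.ip_address(ip)
    return any(addr in net for net in networks)

# e.g. is_blocked("203.0.113.7", load_blocklist("shared-blocklist.txt"))
```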