It's hilarious to think there exist people who believe Googlebot doesn't get special treatment from website operators. Here's an experiment you can do in a jiffy: write a script that crawls any major website and count how many URL fetches it takes before your IP gets blocked.
Googlebot has a range of IP addresses that it publicly announces so websites can whitelist them.
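For anyone who wants to wire that into an allowlist, here's a minimal sketch. It assumes the layout of the googlebot.json file Google documents (a "prefixes" array with "ipv4Prefix"/"ipv6Prefix" keys); verify against the current docs before relying on it.

    import ipaddress
    import json
    import urllib.request

    # Google's published Googlebot IP ranges (documented endpoint;
    # field names below are assumed from the published file format).
    RANGES_URL = "https://developers.google.com/search/apis/ipranges/googlebot.json"

    def load_googlebot_networks():
        """Fetch and parse the published Googlebot CIDR ranges."""
        with urllib.request.urlopen(RANGES_URL) as resp:
            data = json.load(resp)
        return [
            ipaddress.ip_network(p.get("ipv4Prefix") or p.get("ipv6Prefix"))
            for p in data["prefixes"]
        ]

    def is_googlebot_ip(ip, networks):
        """Check whether `ip` falls inside any published Googlebot range."""
        addr = ipaddress.ip_address(ip)
        return any(addr in net for net in networks)

In practice you'd cache the fetched ranges rather than hitting the URL per request, since the list changes rarely.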
From that same page, they recommend a reverse DNS lookup (followed by a forward DNS lookup on the returned hostname) to validate that it really is Googlebot. So impersonating Googlebot gets you nowhere (unless you can attack the DNS resolution of the site you're scraping, I guess).
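That two-step check fits in a few lines of standard-library Python. A rough sketch, assuming the googlebot.com/google.com PTR suffixes from Google's verification docs:

    import socket

    def verify_googlebot(ip):
        """Return True if `ip` passes the reverse-then-forward DNS check."""
        try:
            # Reverse lookup: a genuine Googlebot PTR record ends in
            # googlebot.com or google.com.
            host, _, _ = socket.gethostbyaddr(ip)
        except socket.herror:
            return False
        if not host.endswith((".googlebot.com", ".google.com")):
            return False
        # Forward lookup: the hostname must resolve back to the same IP.
        # Without this step, anyone who controls reverse DNS for their own
        # IP block could publish a PTR record claiming to be Googlebot.
        try:
            forward_ips = {info[4][0] for info in socket.getaddrinfo(host, None)}
        except socket.gaierror:
            return False
        return ip in forward_ips

The forward lookup is the part people skip, and it's the part that actually closes the spoofing hole: PTR records are controlled by whoever owns the IP block, but the forward zone for googlebot.com is controlled by Google.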
I don't whitelist googlebot, but I don't block them either because their crawler is fairly slow and unobtrusive. Other crawlers seem determined to download the entire site in 60 seconds, and then download it again, and again, until they get banned.