@johnmu Now that Google is crawling more from different locations, which is great as it can additionally reduce abuse: is there still a fallback default location that Google crawls a site from? For example, before, if a site was not available and blocked for US users, it probably wouldn't be in the index. What if Googlebot comes to crawl a website from France and the site is blocked there due to compliance? And what about crawls of a single domain (a multilingual site using a sub-folder structure)? Thx! C
@conbug From a practical POV, I'd look at your server logs. My guess is almost all of the requests will continue to come from the same locations. I'd see the different locations more as a way of dealing with broken websites on the messy web, and less as something to aim for.
Maybe that will change over time, maybe the web will be more geographically splintered? It's hard to judge how the web will evolve :-)
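The log-checking advice above can be sketched in a few lines. This is a hypothetical example, not anything from the thread: it parses Combined Log Format lines and counts requests per source IP where the user-agent claims to be Googlebot (the sample log text and the `googlebot_ip_counts` helper are invented for illustration). Note that user-agent strings can be spoofed; for real verification, Google documents confirming crawler IPs via reverse DNS to `googlebot.com`/`google.com` or its published IP ranges.

```python
import re
from collections import Counter

# Hypothetical sample access-log lines (Apache Combined Log Format).
# 66.249.66.x is within Google's published crawler IP space; 203.0.113.7 is not.
SAMPLE_LOG = """\
66.249.66.1 - - [10/May/2024:10:00:00 +0000] "GET / HTTP/1.1" 200 512 "-" "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"
66.249.66.2 - - [10/May/2024:10:00:05 +0000] "GET /fr/ HTTP/1.1" 200 734 "-" "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"
203.0.113.7 - - [10/May/2024:10:00:09 +0000] "GET / HTTP/1.1" 200 512 "-" "Mozilla/5.0"
"""

# Capture the client IP (group 1) and the user-agent string (group 2).
LINE_RE = re.compile(r'^(\S+) \S+ \S+ \[[^\]]+\] "[^"]*" \d+ \d+ "[^"]*" "([^"]*)"')

def googlebot_ip_counts(log_text: str) -> Counter:
    """Count requests per source IP for lines whose user-agent mentions Googlebot."""
    counts: Counter = Counter()
    for line in log_text.splitlines():
        m = LINE_RE.match(line)
        if m and "Googlebot" in m.group(2):
            counts[m.group(1)] += 1
    return counts

print(googlebot_ip_counts(SAMPLE_LOG))
```

Grouping the resulting IPs by network block (or running them through a geo-IP lookup) would show whether crawl requests really do keep coming from the same locations, as suggested above.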
@johnmu Thank you!