![Blocked Robot.jpeg](https://ewebgod.com/wp-content/uploads/2023/12/blocked-robot-958x575.jpeg)
Ever wondered what would happen if you prevented Google from crawling your website for a few weeks? Technical SEO expert Kristina Azarenko has published the results of such an experiment.
Six surprising things happened. What happened when Googlebot couldn't crawl Azarenko's site from Oct. 5 to Nov. 7:
- The favicon was removed from Google Search results.
- Video search results took a big hit and still haven't recovered post-experiment.
- Positions remained relatively stable, though they were slightly more volatile in Canada.
- Traffic saw only a slight decrease.
- An increase in reported indexed pages in Google Search Console. Why? Pages with noindex meta robots tags ended up being indexed because Google couldn't crawl the site to see those tags.
- Multiple alerts in GSC (e.g., "Indexed, though blocked by robots.txt," "Blocked by robots.txt").
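The noindex finding above hinges on how robots.txt works: it blocks crawling, not indexing. A minimal sketch of that behavior using Python's standard-library parser (the domain and rules here are hypothetical, not Azarenko's actual file):

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt blocking Googlebot from the whole site.
# This stops crawling, but Google can still index blocked URLs from
# external links, and it never sees any noindex meta tags on those pages.
robots_txt = """\
User-agent: Googlebot
Disallow: /
"""

parser = RobotFileParser()
parser.parse(robots_txt.splitlines())

# Googlebot is blocked; a bot with no matching group is allowed by default.
print(parser.can_fetch("Googlebot", "https://example.com/some-page"))  # False
print(parser.can_fetch("Bingbot", "https://example.com/some-page"))    # True
```

This is why the GSC warning reads "Indexed, though blocked by robots.txt": to keep a page out of the index, Google must be allowed to crawl it and see the noindex tag.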
Why we care. Testing is a vital element of SEO. Any change (intentional or unintentional) can impact your rankings, traffic and bottom line, so it's good to understand how Google might react. Also, most companies can't afford to run this kind of experiment, so this is good information to know.
The experiment. You can read all about it in Unexpected Results of My Google Crawling Experiment.
Another similar experiment. Patrick Stox of Ahrefs has also shared the results of blocking two high-ranking pages with robots.txt for five months. The impact on rankings was minimal, but the pages lost all their featured snippets.
New on Search Engine Land