Increasing the volume of visits is one of the basic objectives of most websites. In most cases, the bulk of those visits comes from the main internet search engines, such as Google and Bing.
For the pages of a site to appear in the result pages of these search engines, they must first have been added to their indexes.
To index a site, search engines run specialized applications commonly referred to as “bots”, “crawlers” or “spiders”. These bots navigate websites, reading the content of the pages they find and following the links between them.
In principle, being actively crawled by different search engines is a good signal for a site. But the number of requests made by crawlers can become excessive, degrading the performance of the server and making the site appear slow or unresponsive to users. In this post we will review different ways to limit the crawl rate of the main internet search engines.
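As a preview of one of those options, here is a minimal robots.txt sketch using the non-standard Crawl-delay directive. Note that Bing honors Crawl-delay, while Googlebot ignores it (Google's crawl rate is managed through Search Console instead); the delay value below is just an illustrative choice.

```
# robots.txt placed at the site root (e.g. https://example.com/robots.txt)

# Ask Bing's crawler to wait at least 10 seconds between requests.
# Crawl-delay is non-standard and ignored by Googlebot.
User-agent: bingbot
Crawl-delay: 10

# All other crawlers: no rate hint, full access.
User-agent: *
Disallow:
```

Because support for Crawl-delay varies by crawler, it is usually combined with the engine-specific settings discussed later in the post.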