Post by amirmukaddas on Mar 11, 2024 5:07:52 GMT
The article you are about to read is a test like the ones I often run, in this case on a deliberately far-fetched topic. I simply wanted to see whether I could get a featured snippet on Google for a query referring to a status code that, at least in SEO terms, does not exist. And yes, I succeeded. So keep your eyes peeled, because with all due respect to the screenshot below, what follows is a work of fiction.

[screenshot: status code 610]

The article arises from the observation of a phenomenon visible in the crawl statistics as shown in Search Console. In particular, while studying a website with millions of visits per month, I realized that Google's crawlers shift between various types of resources depending on how content producers on the one hand and users on the other behave. Provided the two groups do not coincide, that is, as long as we are not talking about a website built on user-generated content, the crawling resources will go much more frequently toward refreshing already known resources than toward the discovery of new pages, while in terms of file type we will see a predominance of HTML resources in status 200, followed gradually by JS/CSS dependencies, images, and JSON.
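If you want to check the same breakdown on your own site, a quick way is to tally Googlebot requests straight from the access logs. Below is a minimal Python sketch; the combined log format and the file-extension heuristic are assumptions on my part, so adapt both to your server's configuration.

```python
import re
from collections import Counter

# Matches a combined-format access log line (assumed; adjust to your
# server): host, identity, user, timestamp, request, status, size,
# referer, user agent.
LOG_LINE = re.compile(
    r'\S+ \S+ \S+ \[[^\]]+\] "(?P<method>\S+) (?P<path>\S+) [^"]*" '
    r'(?P<status>\d{3}) \S+ "[^"]*" "(?P<agent>[^"]*)"'
)

def crawl_breakdown(log_path):
    """Tally Googlebot requests by (file extension, status code)."""
    counts = Counter()
    with open(log_path) as f:
        for line in f:
            m = LOG_LINE.match(line)
            if not m or "Googlebot" not in m.group("agent"):
                continue
            # Strip the query string; extension-less paths are counted
            # as HTML, which is a rough but serviceable heuristic.
            path = m.group("path").split("?", 1)[0]
            ext = path.rsplit(".", 1)[-1].lower() if "." in path else "html"
            counts[(ext, m.group("status"))] += 1
    return counts

for (ext, status), n in crawl_breakdown("access.log").most_common(10):
    print(f"{ext:>6} {status}: {n}")
```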
Now, I happened to notice that on some vertical corporate websites, including blogs and e-commerce sites, there were moments when, for a very short time, the purpose of the requests shifted: from refreshing pages already discovered and crawled to the discovery of new pages, even though none had actually been published. This was strange. If an e-commerce site publishes new products only a couple of times a year, we would expect more frequent crawling of existing pages and less frequent crawling of new ones, yet at certain times the opposite happened. And why does this happen? It happens when feeds come into play, or rather when we slow down (more or less voluntarily) access to them by sending them to status code 610 and, at the same time, submit a new sitemap in Search Console, making sure that Google takes note of everything. When the feeds become accessible again after the sitemap submission, a "shock" occurs that momentarily unbalances the crawl, producing a deep crawl and forcing the bot to re-read the entire website in depth.
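Purely to illustrate the mechanics described above (and remember, the technique itself is the fictional part of this post), here is a minimal Flask sketch of the feed throttling: while the toggle is on, feed URLs answer with the non-standard status 610, and during that window you would submit the new sitemap in Search Console. The routes and the toggle are hypothetical names of mine.

```python
from flask import Flask, Response

app = Flask(__name__)

# Hypothetical toggle: while True, feed URLs are answered with the
# non-standard status 610 described in the post; everything else is
# served normally. Flip it back to False once the new sitemap has
# been submitted in Search Console.
THROTTLE_FEEDS = True

@app.route("/feed")
@app.route("/feed.xml")
def feed():
    if THROTTLE_FEEDS:
        # 610 is outside the HTTP standard, but nothing stops a server
        # from emitting it; tools that use this code will read it as a
        # connection timeout.
        return Response("Feed temporarily throttled", status=610)
    return Response("<rss>...</rss>", mimetype="application/rss+xml")

if __name__ == "__main__":
    app.run()
```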
Status 610 indicates an HTTP connection timeout, but only for some cloud services that monitor the status of your website by sending HTTP requests. If no response is received within 5 seconds of sending a request, these services determine that the connection has timed out. In that case some of them return a status code of 610, whereas usually the status would be 503 or, more often, 504, depending on the case. Initially I believed this practice could bring benefits when the search engine "falls asleep", which often happens on recent websites with few pages that struggle to enter the index. However, I realized that in this situation the most recent pages improve their rankings significantly, gaining visibility for keywords with substantial search volumes. Indeed, statistical visibility grows so much as to produce an increase in authority score of up to 4 points, which normally requires a significant increase in the number of backlinks from authoritative websites.
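To make that definition concrete, here is a minimal Python sketch of such a monitoring probe: the 5-second timeout comes from the description above, and mapping a timeout to the synthetic code 610 mirrors the behavior attributed to those services (which would otherwise report whatever real status the server returns, e.g. 503 or 504).

```python
import requests

def check_site(url, timeout_seconds=5):
    """Probe a URL the way the monitoring services described above do:
    if nothing comes back within the timeout, report the synthetic
    status 610; otherwise report the real HTTP status code."""
    try:
        response = requests.get(url, timeout=timeout_seconds)
        return response.status_code
    except requests.exceptions.Timeout:
        return 610  # synthetic code: the connection timed out

print(check_site("https://example.com/feed"))
```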