|When you submit a URL in this way Googlebot will crawl the URL, usually within a day. |
I love this. The "Fetch as Googlebot" feature doesn't use the real Googlebot, just an understudy who has learned all its lines. So they have to crawl the page all over again for it to count.
Today's interesting lesson: When you (that is, you, not me) make a post in Windows-Latin-1 using curly quotes and em dashes, my browser decides it is in "Japanese (Shift JIS)" and turns those six non-Latin-1 characters into Kanji.
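For anyone curious about the mechanics: curly quotes and em dashes sit at high byte values in Windows-1252, and Shift JIS treats each of those bytes as the first half of a two-byte kanji, so it swallows the next character too. A rough sketch of the round trip in Python, purely for illustration:

    # Windows-1252 puts curly quotes and the em dash at 0x91-0x94 and 0x97.
    # Shift JIS reads each of those bytes as the lead byte of a two-byte
    # sequence, grabs the following character as well, and either produces
    # a kanji or, if the pair is invalid, garbage.
    text = '\u201cCurly quotes\u201d and an em dash\u2014oops'

    raw = text.encode('cp1252')                           # how the post was sent
    mangled = raw.decode('shift_jis', errors='replace')   # how the browser read it

    print(mangled)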
|if the fetch is successful |
So, what would define a successful fetch vs. an unsuccessful fetch?
I started this a while back. It's a very useful GWT feature, but be aware that you're limited in how many URLs you can fetch per week, so prioritize and do the most important URLs first.
In what circumstances would this be useful? Googlebot gets round to most pages reasonably quickly, doesn't it?
Is this 'better' than a sitemap ping? It seems like more work submitting URLs one at a time (Fetch as Googlebot) rather than making a single submission of a sitemap file with multiple URLs.
What am I missing?
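For reference, a sitemap ping is just a single GET request to Google's ping endpoint with your sitemap URL attached; it tells Google the sitemap has changed, but it doesn't request a fresh crawl of any specific page the way Fetch as Googlebot does. A rough sketch, with a placeholder sitemap URL:

    # Minimal sitemap ping sketch. The example.com sitemap URL is a
    # placeholder. The ping only says "re-read my sitemap"; it is not a
    # per-URL submission like Fetch as Googlebot.
    import urllib.parse
    import urllib.request

    sitemap = 'http://www.example.com/sitemap.xml'
    ping = 'http://www.google.com/ping?sitemap=' + urllib.parse.quote(sitemap, safe='')

    with urllib.request.urlopen(ping) as resp:
        print(resp.status)   # 200 just means the ping was received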
Is this now a way for G to sniff out original material versus scraped and/or duplicated content? Perhaps even a way for G et al. to suppress duplicated, re-purposed, revised, rewritten versions of the same material?
So, maybe feed the bot before feeding the RSS feed?
>In what circumstances would this be useful?
After the Panda update decimated my sites, I started wondering if my most important pages were even visible any more to Googlebot, so to put those questions to rest, I used this tool. When each page came back with "Success", I could at least rest assured that I had not fallen into the Googlevoid. For that reason alone it was time well spent one night.
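If you want the same reassurance without burning your GWT fetch quota, a rough at-home version is to check robots.txt and fetch the page yourself with a Googlebot user-agent string. That only proves nothing on your end is blocking the crawler, not that Googlebot actually visits. A sketch, with a placeholder URL:

    # Rough sanity check only: confirms robots.txt and the server itself
    # would let a Googlebot-identified request through. It does NOT prove
    # Googlebot is crawling the page; GWT or your raw logs tell you that.
    import urllib.request
    import urllib.robotparser

    page = 'http://www.example.com/important-page.html'   # placeholder
    ua = 'Googlebot/2.1 (+http://www.google.com/bot.html)'

    robots = urllib.robotparser.RobotFileParser('http://www.example.com/robots.txt')
    robots.read()
    print('robots.txt allows it:', robots.can_fetch(ua, page))

    req = urllib.request.Request(page, headers={'User-Agent': ua})
    with urllib.request.urlopen(req) as resp:
        print('HTTP status:', resp.status)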
Be careful with this. It's of no danger to most websites and webmasters, but when you announce to Google that "I DEFINITIVELY OWN THIS SITE" you're signing up for the unknown. Google may or may not place guides and restrictions based on you and/or your history, the contents of which are unknown to you. It's a game of roulette, though most likely safe for the majority of webmasters.
If, however, you've been penalized on another site and/or are working through ranking issues of some kind, you may be 'infecting' your new site right out of the gate. Make sure your portfolio is in clear-sailing mode before you use this on a new site, imo.
|I started wondering if my most important pages were even visible any more to Googlebot |
Wouldn't a sitemap do the same thing? One category of Crawl Errors is "On Sitemap", presumably meaning the sitemap is the only way they know of a page's existence. If you've got a trick for convincing g### that a given page doesn't exist, once they've decided it does, I think everyone would like to hear it.
>Wouldn't a sitemap do the same thing?
Yes, as would my raw logs. But I was in panic mode and thus grasping at straws (none of which are apparently attached to anything). I ran the tool until it came back with the message that I had maxed out, then did it again a week later, so not a lot of time was lost, and with the exception of Sgt's post above, I had not heard of a downside...
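On the raw-logs point: a quick tally of which URLs Googlebot actually requested is only a few lines against a common/combined-format access log. The log path and format here are assumptions about your server, and the user-agent string can be spoofed, so verify the IP by reverse DNS if it matters:

    # Count Googlebot requests per URL from a common/combined-format log.
    # The path is a placeholder; adjust to your server. UA strings can be
    # spoofed, so treat this as a rough picture, not proof.
    import collections
    import re

    hits = collections.Counter()
    request_re = re.compile(r'"(?:GET|HEAD) (\S+) HTTP/[\d.]+"')

    with open('/var/log/apache2/access.log') as log:      # placeholder path
        for line in log:
            if 'Googlebot' not in line:
                continue
            m = request_re.search(line)
            if m:
                hits[m.group(1)] += 1

    for url, count in hits.most_common(20):
        print(count, url)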
Not understanding how this is in any way helpful to anyone.
Sounds like the old Google site submission tool, which was antiquated even in its day, years back...
Is it just Google heroin to give addicts more to obsess about?