Here are a couple of items I just noticed tonight.
What Googlebot sees
These statistics show you how the Googlebot sees your site
If your site publishes feeds of its content, this page will display the number of users who have subscribed to these feeds using Google products such as iGoogle, Google Reader, or Orkut.
With all the movement in the SERPs it is now obvious that there is something stirring at the plex.
Looks kinda nice.
I like blue.
GWT is cool, no matter what anyone says.
A great tool for seeing what was happening on your site two to four weeks ago. No, I mean it, I'm kinda thankful to Google.
Not that it has made any of the sites rank better, but at least I can learn a lot about *marketing* mistakes... typos, bad wording choices, ... other markets shooting for the same phrases...
It's a great SEM tool.
"If your site publishes feeds of its content, this page will display the number of users who have subscribed to these feeds using Google products such as iGoogle, Google Reader, or Orkut."
Google shows more than 200 URLs as 404 Not Found for one of my domains.
When I look into my access_log for that day, there was not a single 404 error. Seems like they have a new problem to fix. The affected domain has already lost its index.html because of these bogus Google 404 reports.
Additionally, they mix up not-found URLs across different domains. What a mess!
Anyone else seeing this? Why do they release a buggy tool?
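For anyone who wants to cross-check GWT's numbers against their own logs, here is a rough sketch (the log path and combined log format are assumptions, adjust to your server):

<?php
// Count 404 responses in the server's own access log to cross-check
// against what GWT reports. Path and log format are assumptions.
$count = 0;
foreach (file('/var/log/apache2/access_log') as $line) {
    // In common/combined log format the status code follows the quoted request
    if (preg_match('/" (\d{3}) /', $line, $m) && $m[1] === '404') {
        $count++;
    }
}
echo "404 responses in log: $count\n";
?>

If that number is anywhere near zero while GWT reports hundreds, the problem is on their end.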
No, it is under crawling statistics.
> Seems like they have a new problem to fix.
In a way, yes, but this is not new. The "Problem statistics", particularly the 404s, seem to oscillate around certain values for no obvious reason. I'd suspect this has to do with broken links on crappy external websites, but I am not sure.
In this respect I observed another thing, which I never reported to Google: in places I use very long URLs. The anchor text of those long problematic URLs is abbreviated with dots, because otherwise it would not fit the one-line layout. But in some cases, clicking on that link sends my browser to a URL with those abbreviation dots in the middle, and that URL of course does not exist. It could also be that Google has trouble telling the abbreviated anchor text apart from the true URL, so somewhere in their databases the two get mixed up. That seems quite likely to me, but I cannot imagine no one at the plex ever noticed this before, though finding the cause would be another matter ;)
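A quick way to see whether anything is actually requesting those dot-abbreviated URLs is to scan the access log for them. A rough sketch (log path and format are just examples, not anyone's actual setup):

<?php
// Scan the access log for request paths containing a literal "..." --
// a sign that something followed the abbreviated display text instead
// of the real URL. Path and log format are assumptions.
$log = fopen('/var/log/apache2/access_log', 'r');
while (($line = fgets($log)) !== false) {
    // Match the request path inside the quoted "GET /path HTTP/1.x" field
    if (preg_match('/"[A-Z]+ (\S*\.\.\.\S*) HTTP/', $line, $m)) {
        echo "Truncated-looking request: {$m[1]}\n";
    }
}
fclose($log);
?>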
BTW: I also noticed quite a number of Googlebot requests in my logfiles for directories instead of full .html URLs, and some of these were also reported in Webmaster Central. Many of my directories did NOT contain an index file, because I thought my internal link structure didn't need any. But you never know what scraper sites do when linking to you, so a few weeks ago I wrote a little PHP file which automatically creates such index files (with a noindex but follow meta tag) to help Googlebot crawl my site properly. There was a WebmasterWorld thread where GoogleGuy or some other insider complained about the rotten syntax and link schemes of half the web, which make it almost impossible to write a 100% accurate crawler. Adding those index files has reduced those error reports considerably.
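For what it's worth, the idea was roughly this (a minimal sketch, not my actual script; the document root and stub markup are just placeholders):

<?php
// Walk the document root and drop a stub index.html (noindex, follow)
// into any directory that lacks one, so Googlebot gets a real page
// instead of a 404 when it requests a bare directory URL.
// $docRoot and the stub markup are assumptions.
$docRoot = '/var/www/htdocs';
$stub = "<html><head>\n"
      . "<meta name=\"robots\" content=\"noindex, follow\">\n"
      . "<title>Index</title></head><body>\n"
      . "<!-- links to this directory's pages could be generated here -->\n"
      . "</body></html>\n";

$it = new RecursiveIteratorIterator(
    new RecursiveDirectoryIterator($docRoot, FilesystemIterator::SKIP_DOTS),
    RecursiveIteratorIterator::SELF_FIRST
);
foreach ($it as $item) {
    if ($item->isDir() && !file_exists($item->getPathname() . '/index.html')) {
        file_put_contents($item->getPathname() . '/index.html', $stub);
    }
}
?>

The noindex keeps these stubs out of the index while the follow still lets crawlers reach the real pages behind them.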
> Why do they release a buggy tool?
Google has plenty of other free tools/services to try besides this one. I'm glad they add features from time to time, and I'd rather have a slightly buggy tool than nothing.
PS: I haven't seen a free tool which isn't buggy :-) Maybe I use too few. As the proverb goes: if a program has no bugs, it is useless.