Forum Moderators: open
I have noted more than the usual shifting of pages in and out of the SERPs for the past week or so, and of course we have seen threads about updated backlinks (FWIW).
Anyone else seeing changes?
WBF
The rolling PR calculation algorithms have not caught up with the index. Over the next several days and weeks, the PR gains, anchor text, and other off-page factors from the new index will be factored in and will contribute to changes in the results.
What am I on? A Starbucks crappachino.
As of this moment absolutely nothing has changed, except Google is showing signs of more broken-ness.
Sure it does. It means Google continues to be the repository of pseudo-directory crap put up for the purpose of laundering PR.
It takes waders to slosh through the sewage that passes for results and ranks only because of keyword-slammed page titles and anchor text from interlinked pages on the same site.
except Google is showing signs of more broken-ness
kirby,
It takes waders to slosh through the sewage that passes for results and ranks only because of keyword-slammed page titles and anchor text from interlinked pages on the same site.
Couldn't agree more. MSN is more relevant for the search terms that I review. ;)
Vimes
[edited by: Vimes at 6:37 am (utc) on Nov. 11, 2004]
As of this moment absolutely nothing has changed, except Google is showing signs of more broken-ness.
I'm guessing this will be the first of a three-part process.
If so, it will certainly steal some thunder from the MSN debut (at least in this neck of the woods).
One thing is for sure: the quality of search results will improve, and to me that is good news.
And for the record I like Microsoft and Google. The race will benefit everyone that is looking for higher quality search results.
Less than a week ago it suddenly reported 18 million results, and tonight it is up to 22 million results.
There are a LOT of 9 Nov and 10 Nov 2004 Fresh tags appearing in the last few hours (even for pages that have not had fresh tags for months).
For pages that have not been re-cached for a long time, a LOT of them are showing a cache date of 31 Dec 1969 23:59:59 GMT (which is the Unix Epoch minus 1 second).
Has anyone else seen this or have any clues as to why they are suddenly so different?
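That odd cache date checks out as a Unix timestamp of exactly -1, i.e. one second before the epoch, which suggests a sentinel or uninitialized value rather than a real re-cache. A quick sketch (assuming Python is handy) confirms the arithmetic:

```python
from datetime import datetime, timezone

# Unix timestamp -1 is one second before the epoch
# (1 Jan 1970 00:00:00 UTC) and formats as the date being reported.
cache_date = datetime.fromtimestamp(-1, tz=timezone.utc)
print(cache_date.strftime("%d %b %Y %H:%M:%S GMT"))
# → 31 Dec 1969 23:59:59 GMT
```

In other words, any page whose cache record carries a missing or zeroed timestamp minus one would display exactly this date.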
I have several thousand extra pages (6,340 to be exact) in this new larger index. Problem is, they are CGI files that have been blocked from day one via robots.txt. The whole cgi-bin has always been blocked (3 years), and up until now Googlebot adhered to the robots.txt file (which is a valid robots.txt). It sucks that they have gone where they shouldn't have, digging up completely useless pages...
Now I guess the only way to get them out is to contact them... but with my luck they will throw out the whole domain in their haste... hehe. I guess I will leave it alone, since it looks worse on them than it does on me.
edited: for clarity
[edited by: The_Contractor at 4:32 pm (utc) on Nov. 12, 2004]
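For anyone who wants to double-check that their robots.txt really does block the whole cgi-bin, a minimal sketch using Python's standard robotparser (the paths here are hypothetical) would look like:

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt blocking the entire cgi-bin, as described above.
robots_txt = """User-agent: *
Disallow: /cgi-bin/
"""

rp = RobotFileParser()
rp.parse(robots_txt.splitlines())

# A compliant Googlebot should refuse anything under /cgi-bin/
# while remaining free to fetch normal pages.
print(rp.can_fetch("Googlebot", "/cgi-bin/search.cgi"))  # False
print(rp.can_fetch("Googlebot", "/index.html"))          # True
```

If a check like this returns False for the CGI URLs, the robots.txt is valid and the blocking rule is doing its job; any listing of those URLs would then come from somewhere other than a crawl of the pages themselves.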
Not on my site, at least. Google has unlimited access to the site - and it is a static site. I've seen my site double in size in the big G - to over 8,000 pages (the site itself is only 4,000 pages).
For those of you seeing more pages of your site than there should be... any chance they have now included pages in the index they should not have?
I have several thousand extra pages (6,340 to be exact) in this new larger index. Problem is, they are CGI files that have been blocked from day one via robots.txt. The whole cgi-bin has always been blocked (3 years), and up until now Googlebot adhered to the robots.txt file (which is a valid robots.txt).
I am seeing similar results for my sites and I have cgi-bin blocked.
However, I don't think that the content of the pages is actually indexed. None of my cgi-bin pages are cached, and when I search for text that is on these pages, they are not returned. I think that Google has merely logged the existence of the page in the database, hence the increase in numbers.
I am seeing similar results for my sites and I have cgi-bin blocked.
However, I don't think that the content of the pages is actually indexed. None of my cgi-bin pages are cached, and when I search for text that is on these pages, they are not returned. I think that Google has merely logged the existence of the page in the database.
Yeah, except the whole idea of blocking bots is so they don't go anywhere they are not supposed to. I'd like to hear from Google on this one, since I really don't need them keeping a publicly viewable inventory of my Perl files, and I'm sure others feel the same way.
For those of you seeing more pages of your site than there should be... any chance they have now included pages in the index they should not have?
Yep hundreds. All blocked by robots.txt
Only listed as URL-only entries - as Google obviously knows they exist but cannot crawl the pages. So I guess Gbot is obeying robots.txt.
But I'm not sure why Google thinks listing URL-only pages adds value to their database.