| 4:09 am on Apr 9, 2008 (gmt 0)|
Just to clarify, if you request this:
...you get results from /subportal2/ and the titles are mashed up with results from /subportal3/
I must say that's an odd bug. If you've verified that your server is returning the right information to a browser, then you could load up Firefox with the UserAgentSwitcher add-on. Plug in the googlebot user agent and see if someone's been tinkering with your server, trying to return different results to googlebot and making a hash of it.
Similarly, if you do any IP-based delivery, give that a health check-up as well.
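If you'd rather script that check than use the browser add-on, a minimal sketch in Python follows. The Googlebot string is the one Google publishes; the browser string and URL are placeholders, so swap in your own pages:

```python
import urllib.request

# The Googlebot UA string as Google publishes it; the browser UA and
# the URL below are placeholders for illustration only.
GOOGLEBOT_UA = ("Mozilla/5.0 (compatible; Googlebot/2.1; "
                "+http://www.google.com/bot.html)")
BROWSER_UA = "Mozilla/5.0 (Windows NT 10.0; Win64; x64)"

def fetch_as(url, user_agent):
    """Fetch url while presenting the given User-Agent header."""
    req = urllib.request.Request(url, headers={"User-Agent": user_agent})
    with urllib.request.urlopen(req) as resp:
        return resp.read()

# Usage -- compare the two responses; any difference means the server
# is varying output by user agent (cloaking, intentional or not):
# normal = fetch_as("http://www.example.com/subportal1/", BROWSER_UA)
# as_bot = fetch_as("http://www.example.com/subportal1/", GOOGLEBOT_UA)
# print("same" if normal == as_bot else "responses differ by user agent")
```

If the two bodies differ, something between the request and the response (the CMS, a plugin, or an intruder) is treating Googlebot specially.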
Other than that, it certainly could be buggy data at Google right now - there's more of it than usual, it seems, during this Dewey thing, whatever it is.
| 4:47 pm on Apr 9, 2008 (gmt 0)|
No, I am sure it's not anybody messing with the code / page... I control all of that. It's nothing that would be happening because of Googlebot being the user agent...
If I do site:www.example.com/subportal1 I don't get the home page of that portal, which is very weird because it's a major page and the rest of the site comes up.
I have 80,000 pages indexed in Google, so why would some of the home pages of the subsites not be coming up? Or, in their place, other subsites coming up? Does Google think the home pages are duplicates?
To reiterate: I am searching for one thing and getting a response that's a combination of two different web pages (a title from one and a URL from another), neither of which matches the search I was doing!
| 4:54 pm on Apr 9, 2008 (gmt 0)|
So these are regular searches, not site: operator results - correct?
If so, it does sound like an issue with Google's current results. There's been a lot of disappearing and reappearing home pages in recent weeks. Maybe that problem extends to directory index pages, too.
| 6:31 pm on Apr 9, 2008 (gmt 0)|
Yes, just regular searches. For instance, if I search for our mayor, I get our neighborhood website.
If I do "show all results from" I see lots of pages from the mayor's website in the results, but not the homepage, and the first result is always the homepage from a different subsite.
It seems pretty consistent about always picking the neighborhood subsite. I am going to put a block for that in our robots.txt and see if, by excluding it, everything goes back to normal. Then I can add it back in. Not sure if that will work, but I am desperate.
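A block like that is just the standard robots.txt form - the folder name here is a stand-in for the actual subsite path:

```
# Stand-in path: replace with the real subsite folder.
User-agent: *
Disallow: /neighborhood-subsite/
```

Note that Disallow stops crawling, not necessarily indexing, so already-indexed URLs can linger in results for a while.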
| 10:18 pm on Apr 9, 2008 (gmt 0)|
I am seeing odd stuff going on at the moment. A search like:
site:domain.com "some text to find"
is also returning several pages that are NOT from domain.com in the results.
The change has happened in the last 24 to 36 hours, or less.
| 2:38 am on Apr 10, 2008 (gmt 0)|
>is also returning several pages that are NOT from domain.com in the results.
doesn't that sound familiar? Like from before 302s were "fixed"?
| 3:48 pm on Apr 10, 2008 (gmt 0)|
Just an update - the robots.txt attempt didn't fix anything.
This is really brutal. I'd almost rather Google not return results in these cases than return the wrong ones.
| 8:55 pm on Apr 10, 2008 (gmt 0)|
|the robots.txt attempt didn't fix anything. |
If you just changed your robots.txt yesterday, then it's way too soon to know if it helped or not.
| 1:30 am on Apr 12, 2008 (gmt 0)|
If I am putting new content up that I don't want to be indexed, I make sure that the disallow is placed in the robots.txt file at least two weeks before the content is posted.
| 10:48 pm on Apr 25, 2008 (gmt 0)|
Hey all, this is still happening. To reiterate again: the wrong subfolders are coming up even when I specifically search for the titles. Also, the title of the page that comes up in the SERPs does not match the URL in the SERP.
[edited by: tedster at 11:31 pm (utc) on April 25, 2008]
| 8:20 pm on May 21, 2008 (gmt 0)|
Hey, I am still having this problem a month later... I took some of tedster's advice. I also reduced the number of 301s I am using overall.
Google is still insisting on forwarding all SERPs to one of my subsites, even when that page contains none of the keywords in my search.
Really mystified here; before this problem appeared, nothing had changed for 3-4 years.
| 8:05 pm on Jun 6, 2008 (gmt 0)|
Still having the issue, going to try undoing a couple more 301s... maybe Google is punishing me for having 301s for Googlebot but not for users? I only do this in a couple of cases to reduce the number of duplicate entries.
both redirect to:
which is the same page
| 8:32 pm on Jun 6, 2008 (gmt 0)|
pholmstr, why don't other user agents get the same 301 redirect? That seems to be the safest thing to do.
| 8:36 pm on Jun 6, 2008 (gmt 0)|
Users and bots should not hit a redirect if they follow a link from within your own site.
| 9:05 pm on Jun 6, 2008 (gmt 0)|
Our CMS is structured in such a way that it doesn't know the difference between /index.html and /index.html?page=23434, if page 23434 is the home page.
I know it's not best practice for SEO, but it's just how it happens to work.
So I had the 301s set up to reduce URL duplication, and to try to ensure the URL that was kept was the simple, clean one.
I got rid of those 301s, so maybe Google will stop punishing us for them (if that's what's happening).
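For reference, the kind of canonicalizing redirect I had in mind is roughly this mod_rewrite sketch (hypothetical rules, not my exact config) - it 301s the query-string duplicate of the home page onto the clean URL for every visitor, not just Googlebot:

```
# Hypothetical sketch of a canonicalizing 301: send the query-string
# duplicate of the home page to the clean URL for every user agent.
# 23434 is the CMS's id for the home page, as described above.
RewriteEngine On
RewriteCond %{QUERY_STRING} ^page=23434$
RewriteRule ^index\.html$ /index.html? [R=301,L]
```

The trailing `?` in the rewrite target strips the query string from the redirect; on newer Apache versions the `QSD` flag does the same thing.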
| 11:34 pm on Jun 6, 2008 (gmt 0)|
You are now exposing duplicate content. That's a killer, too.
The CMS needs a redesign from the ground up.
| 12:15 am on Jun 7, 2008 (gmt 0)|
|Users and bots should not hit a redirect if they follow a link from within your own site. |
In an ideal world, yes. The reality is that most significant websites have internal redirects. Their presence alone is not enough to cause a problem.