Forum Moderators: Robert Charlton & goodroi
Does Google really think these are 3 different pages? If so, why doesn't it remove 2 as duplicates? Most importantly, should I condense these 3 versions down to one? If so, what's the best way to do it?
In HTTP and in Google, www.example.com/ and www.example.com are equivalent, but www.example.com/foo/ is different from www.example.com/foo and www.example.com/Foo/. Also, www.example.com/foo/ is different from example.com/foo/
Google also treats www.example.com/index.html as www.example.com, even though the rules of HTTP do not.
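Rather than waiting for Google to merge them, you can collapse the index-file variant yourself. This is a minimal sketch for an Apache .htaccess file, assuming mod_rewrite is enabled; the filename index.html is just the example from above, substitute your own:

```apache
# Hypothetical .htaccess sketch (assumes Apache with mod_rewrite enabled).
# 301-redirect direct requests for /index.html to the bare directory URL,
# so only one version of the homepage can be indexed.
RewriteEngine On
# Match the original client request line, not internally rewritten URLs
RewriteCond %{THE_REQUEST} \s/index\.html[\s?]
RewriteRule ^index\.html$ / [R=301,L]
```

The RewriteCond against THE_REQUEST keeps the rule from looping when the server internally maps / back to index.html.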
If the three URLs continue to serve identical content then Google is likely to merge the three, removing two and giving the remaining URL the backlinks from the others. This does not happen instantly, and if you have a part of the page that changes frequently (e.g. latest news or today's date) then Google may find different content when it fetches the URLs at different times.
I suppose that my PR will improve if I fix this problem.
It would likely cause problems with some individual pages, though. For example, if Google had already winnowed out www.example.com/foo and kept www.example.com/Foo/, my changing the links to www.example.com/foo would result in the page essentially being de-indexed. Eventually, though, Googlebot would come across it again and all would be well.
Do you agree with these assumptions? If so, is it always advisable to go ahead and convert a mess like mine to a standard format?
Thanks
Is there a risk of duplicate content penalty due to such inconsistency? Is it worth the trouble cleaning up such a structure? Or maybe it's even necessary? I've asked this before elsewhere on WebmasterWorld, but sadly nobody seems to really know.
We have a couple hundred pages. Our homepage file is called default.htm. On a handful of internal pages, links back to the homepage were mistakenly called default.html or worse yet, index.htm.
When I discovered we got hit, I spun my wheels for weeks before figuring this out. I did a search, and sure enough Google had crawled the pages with the incorrect links and had indexed the phantom, non-existent pages as "real", all with identical content to our homepage.
I cannot prove this was the problem, but once I marked those phantom pages as "do not index", we were back in Google.
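Blocking the phantom pages works, but a 301 redirect hands their links and PR back to the real homepage as well. A hedged sketch for an Apache .htaccess, assuming the filenames from the post above (default.htm is the real file; default.html and index.htm are the phantom variants):

```apache
# Hypothetical .htaccess sketch (assumes Apache with mod_alias).
# Permanently redirect the mistyped homepage filenames to the real one
# so Google merges the phantom pages into it instead of indexing duplicates.
Redirect 301 /default.html /default.htm
Redirect 301 /index.htm /default.htm
```

After the redirects are in place, the internal links that used the wrong filenames should still be corrected, but visitors and bots following old links land on the right page in the meantime.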
But can my and Lou's cases be compared? In my case the file name of the page is always "foo.html", although the link varies.
I believe this situation is not uncommon and is therefore worth sorting out. I sincerely hope more webmasters will give their views.
I'm wondering what happens when inbound links vary. For example, one is www.site.com/Foo. Another is www.site.com/foo. If Google can successfully follow both, would it index them as two separate pages? If so, couldn't competitors mess with your site by splitting your PR?
The other problem, the capital letter in the folder name, is trickier. Have you recently moved servers, from Linux to Windows (or vice versa)? Linux servers treat all names as unique (/Folder/ is not /folder/ is not /FolDER/), whereas Windows treats /folder/, /Folder/, and /FolDER/ as the same folder, all served with a status of 200. If people link to you in different ways, then on a Windows server each variant will be served as a legitimate URL when it really should not be. The only way to fix this is to get the incorrect incoming links modified.
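If you are on Apache (or move to it), there is a server-side alternative to chasing down every incorrect link: fold all uppercase URLs into their lowercase form with a 301. This is a sketch only, and it assumes access to the main server config, since RewriteMap cannot be defined in .htaccess:

```apache
# Hypothetical httpd.conf sketch (assumes Apache with mod_rewrite;
# RewriteMap must live in server or virtual-host config, not .htaccess).
# 301-redirect any URL-path containing uppercase letters to its lowercase
# form, so /Foo/ and /foo/ stop competing as separate pages.
RewriteEngine On
RewriteMap lc int:tolower
RewriteCond %{REQUEST_URI} [A-Z]
RewriteRule (.*) ${lc:$1} [R=301,L]
```

Note this only makes sense if your real folder names are all lowercase; otherwise the redirect target would 404.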
If the two URLs serve the same page, they will:
1: Get treated as 2 pages by Google (unless proper rewrite rules or redirects are in place), and
2: Get hit with a dup content penalty (filter).
If they are not serving the same page, they are still treated by Google as 2 pages.
As for PR: if domain/Foo and domain/foo were supposed to be the same page and you have IBLs to each, then those pages' PR would be lower than it would have been if all the links went to a single page.
Other PR gotchas:
This can be a huge problem if a site gets split on the www/non-www forms of the domain and both forms have IBLs. That is in addition to the dup content problem.
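The usual fix for the www/non-www split is to pick one hostname and 301 the other onto it, so IBLs to both forms accumulate on a single page. A minimal .htaccess sketch, assuming Apache with mod_rewrite and using example.com as a placeholder domain:

```apache
# Hypothetical .htaccess sketch (assumes Apache with mod_rewrite;
# example.com is a placeholder for your own domain).
# Fold the non-www hostname into the www form with a 301 so backlinks
# and PR consolidate on one version of every URL.
RewriteEngine On
RewriteCond %{HTTP_HOST} ^example\.com$ [NC]
RewriteRule ^(.*)$ http://www.example.com/$1 [R=301,L]
```

The same two lines with the condition and target swapped would standardize on the non-www form instead; which one you pick matters less than picking only one.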
This one cascades through the sites that this site links to.
I wonder if we are victims of our own complex addressing, or of 302 redirects? We lost almost all Google traffic in February after years of being loved by them. We have just started using 301 redirects to consolidate multiple URLs rather than having them point to the same spot on the server; we thought that would not be considered dup content by Google, but it was getting doubly indexed, at least after Feb 2.
We still have a problem with site:oursite.com, which shows enormous numbers of our *tracking links listed as pages* on our site; these actually resolve to the external sites(!). Six weeks ago we changed robots.txt to allow bots to follow them, but that has not removed them.
This was quite a radical change for my site. So far, my Google traffic has gone from bad to worse. I'm still hopeful that it will improve once the PR gets sorted out.
At the moment, however, I'm having second thoughts. If Google places value on older links - and I think it does - some of those old links that I removed might have been important.
Anyway, thanks for your suggestions everyone.