Sample from my logs:
184.108.40.206 - - [08/Nov/2005:05:19:47 -0500] "GET /?&config=935&stage=7182 HTTP/1.1" 200 12565 "-" "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"
220.127.116.11 - - [08/Nov/2005:05:24:02 -0500] "GET // HTTP/1.1" 200 12565 "-" "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"
Should I wait it out, or try adding even more 301s to .htaccess? I'm already 301ing old .shtml URLs to .php, and www to non-www.
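For reference, the kind of rules described above would look roughly like this sketch (example.com is a placeholder for the actual domain, which isn't given in the thread):

```apache
RewriteEngine On

# Redirect www to non-www (placeholder domain)
RewriteCond %{HTTP_HOST} ^www\.example\.com$ [NC]
RewriteRule ^(.*)$ http://example.com/$1 [R=301,L]

# Redirect old .shtml pages to their .php equivalents
RewriteRule ^(.+)\.shtml$ /$1.php [R=301,L]
```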
With Jagger 3 my inner pages are finally out of supplemental, and the right URLs are listed... except my main page. It's still showing URL-only, with the www's. Could it be because of a duplicate penalty from these?
I've checked, and nowhere do I link like that; I ran a link checker and everything's fine.
Small rant to let off steam... Whatever happened to "design for visitors and forget the search engines"? It seems lately all I do is try to get Google to index properly. The other SEs can spider my site without needing their hands held the entire way. What happened to Google?!?
Someone has fed Google some trash directed at your site.
Who did the feeding I don't know.
But once Google has a URL, it goes out and asks the server: "Hey server, do you have anything for me at this URL?"
If the server answers 200 and hands something back, Google is happy and stashes what it got under that URL as content to be indexed.
It appears that servers answer 200 when they shouldn't, or that they don't come with clear setup instructions, or that they ship with poor default setups.
Up until recently Google ignored query strings, so it made little difference; now that it has stopped ignoring them, you get duplicate content as a result.
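If the front page takes no legitimate query parameters (an assumption; adjust if your scripts actually use them), one way to kill the query-string duplicates from the log sample is to 301 any such request back to the bare URL. A sketch, assuming mod_rewrite is available:

```apache
RewriteEngine On

# If any query string is present on a request for the root,
# 301 it back to the clean URL. The trailing "?" in the
# substitution discards the query string.
RewriteCond %{QUERY_STRING} .
RewriteRule ^$ /? [R=301,L]
```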
With the trailing slash, the duplicate-content issue comes into play again.
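For the specific "GET //" case in the logs, a hedged sketch of a fix: match the raw request line, since Apache merges duplicate slashes in the path before per-directory RewriteRule patterns see it.

```apache
RewriteEngine On

# Raw request line like "GET // HTTP/1.1": doubled slashes
# on the root. 301 it back to the canonical "/".
RewriteCond %{THE_REQUEST} ^[A-Z]+\s//+\s
RewriteRule ^ / [R=301,L]
```

As always with rewrite rules, test against your own logs before deploying; a wrong pattern here can loop.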
The same problem can occur in other situations as well.
You might wish to visit the Apache forum and go through the postings there.
Google needs to adapt, not you.