site dropped for duplication - what do you think about this strategy?


voodoo

4:38 pm on Jun 20, 2003 (gmt 0)

10+ Year Member



A fairly large site got dropped from the SERPs for a certain keyword. Appending the &filter=0 string to the Google search URL brings the site back.
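In case anyone wants to run the same check, the parameter just gets tacked onto the end of a normal Google results URL (keyword standing in for the affected term):

    http://www.google.com/search?q=keyword&filter=0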

It turns out the site had many subdomains with an exact copy of the main page. I don't have the details on why this was done, but it certainly wasn't to spam or get better results.

Now here is the question: what do you think would be the fastest way to solve this problem? Using a permanent (301) redirect on all these subdomains and pointing them to the main www.mysite.com might take 6-8 weeks to take effect. The only way to get an immediate removal of a site from the index (according to this page: [google.com...]) is to disallow indexing with robots.txt and fill out the removal form.
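In case it matters, the redirect I have in mind would look roughly like this in an Apache .htaccess file (a sketch only, assuming mod_rewrite is available, with mysite.com standing in for the real domain):

    RewriteEngine On
    # Send any host other than www.mysite.com to the same path on www.mysite.com
    RewriteCond %{HTTP_HOST} !^www\.mysite\.com$ [NC]
    RewriteRule ^(.*)$ http://www.mysite.com/$1 [R=301,L]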

If all the subdomains are removed from the index, will this be enough for the site to reappear (since there will no longer be any copies)? Or will they have to wait until the next crawl? Is the duplicate filter dynamic (applied at search time), or does it get updated during a crawl (or at some other time)?

Any other ideas?

voodoo

Jenstar

4:21 am on Jun 21, 2003 (gmt 0)

WebmasterWorld Senior Member 10+ Year Member



Welcome to WebmasterWorld, voodoo!

The Google removal page states that the removal form should only be used in urgent cases. I'm not sure this would qualify as urgent, especially since you would need to submit each URL or directory individually. I have heard several people advise against using it unless all other means of removal fail, or the pages need to be removed for legal reasons.

Because deepbot no longer exists in its previous form, Google will probably become aware of the change quite quickly. You should use a robots.txt file to disallow Googlebot, and also put the noindex meta tag on the individual pages (Googlebot does not request the robots.txt file on each and every visit, so the meta tag covers pages it fetches before it rereads robots.txt). Hopefully, it will sort itself out by the next update.
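Concretely, on each duplicate subdomain that combination would look something like this (a rough sketch, with sub.mysite.com standing in for the actual subdomains):

    # robots.txt at http://sub.mysite.com/robots.txt
    User-agent: Googlebot
    Disallow: /

and in the <head> of each duplicated page:

    <meta name="robots" content="noindex">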

The duplicate content filter that seems to be catching everyone right now is brand new with this update (or at least it didn't catch this many people in previous updates). GoogleGuy has also hinted that they are still changing the algo behind this filter, so no one knows precisely how it will work in its "final" version, so to speak.