What is a Supplemental Result?
A supplemental result is a result for a webpage that does not fully meet Google's guidelines. This may be for a variety of reasons. Once a result has been identified as supplemental it will carry no PageRank until the problem is put right. Eventually it will be removed altogether from the index.
Can a Supplemental Result come Back?
Yes, absolutely. Solve the problem and the page can re-enter the normal index.
Why are Google Doing This?
It's my belief that Google is searching for, and wanting to give weight to, originality and good site structure. Lots of people, including myself, have suffered the annoyance of having content stolen off their webpage and, to add insult to injury, have seen their stolen content rank higher than them. If Google is able to identify the original source of content, then scrapers, cached proxy servers and SEOs who copy other sites can be filtered from the results and the original content, no matter who you are, is preserved. In addition, Google wants to keep its users safe from malicious sites that infest your computer. OK, so there's some collateral damage, but won't the internet be a better place if they get this right?
Possible Causes for Supplemental
1. Server Setup. Lots of people have written about this, but in simple terms it's about serving your content only one way. So look at your server setup and, if you are serving content more than one way, solve it in .htaccess or use meta tags or robots.txt to prevent duplicate content being crawled.
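As a sketch of the .htaccess approach (assuming Apache with mod_rewrite enabled, and example.com standing in for your domain), a 301 redirect can collapse the non-www hostname onto one canonical version:

```apache
# Hypothetical example: 301-redirect example.com to www.example.com
# so each page is served only one way (assumes Apache + mod_rewrite)
RewriteEngine On
RewriteCond %{HTTP_HOST} ^example\.com$ [NC]
RewriteRule ^(.*)$ http://www.example.com/$1 [R=301,L]
```

The same pattern works the other way round if you prefer the non-www hostname; the point is to pick one and redirect the other.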
2. Meta Description. The meta description, if used, should be accurate and describe the page. Ideally, use text from the meta description within the content of the page.
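For example (a hypothetical page, with illustrative wording), the description reuses phrasing that actually appears in the page copy:

```html
<!-- Hypothetical example: description text mirrors wording used on the page itself -->
<meta name="description" content="How to serve content one way and avoid duplicate content in Apache">
```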
3. Malware. Sites serving malware look to be a thing of the past. Users of Google will be kept safe and warned about them.
4. Orphaned Pages. Make sure you keep your pages strongly linked. If you orphan a page it will go supplemental.
5. Site Depth. If you have pages five levels deep from the root then it's a given the site is not very usable, so think carefully about your site structure and ensure your pages are strongly linked.
6. Link Text. Try to describe each page uniquely and accurately in its link text. If you have a thousand pages all using the same link text pointing to parts of the site, it's quite possible those links will be ignored. Google's filter against Google bombing also means it's important the words in your link text are used within the content of the page you are linking to.
7. Unique Content. If you're creating a page, make sure it's of unique use to Google. They don't want thousands of pages saying "write your review here", nor do they want thousands of versions of the same news story brought in off an RSS feed from the BBC. If you're going web 2.0, make sure you're offering something unique and keep the repetitive content away from search. You could, for instance, use a folder for review submissions and just tell Google not to crawl it. Make sure you haven't mirrored someone else's content. If you've copied someone else then you should remove or rewrite the content. Unfortunately, this is also where collateral damage comes in. Some sites have their unique content scraped and copied, and people set up proxy servers to cache their pages. These are the tactics of so-called Black Hat SEO experts. Google is getting better and better at catching these guys and doing a great job of protecting new content, but I wish they would be more forthcoming with help for webmasters that have "already" had their content copied. More details on this below.
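The "tell Google not to crawl it" step can be sketched in robots.txt (the /reviews/submit/ folder name is just an assumption for illustration):

```text
# Hypothetical robots.txt: keep a repetitive review-submission folder out of the crawl
User-agent: *
Disallow: /reviews/submit/
```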
8. Duplicate Content through Scrapers
Many people will set up robots that crawl your site with the sole intention of stealing unique content. They take content off loads of sites and then typically use it to form directory content and to promote their site or sell advertising. There are others that do this to cause your site damage on behalf of competitors. Other methods scrapers use include running site:www.yoursite.com queries on search engines, using information from sitemap.xml, and setting up fake sitemap generators to lure you in and copy your content as the sitemap is created. If your content is scraped and published elsewhere, your pages could end up supplemental. Google and other search engines are doing some good work in protecting the site:www.yoursite.com command.
9. Duplicate Content through Proxy Servers
Used to hide IP addresses, proxy servers are one of the worst culprits for causing duplicate content. They will copy thousands of pages and then set up links to the proxy, or through search, so the proxy page gets crawled and your page gets sent supplemental. Google is doing some good work with this, and the technique is starting to fail.
10. Poor PageRank
If the site has poor PageRank and little trust, then many pages may appear supplemental at first.
Anyway, I think I'll stop at 10...
Supplemental Results [webmasterworld.com] - what exactly are they?
Duplicate Content [webmasterworld.com] - get it right or perish
Duplicate Content [webmasterworld.com] - comments from Google's Adam Lasnik
Thin Affiliate Pages [webmasterworld.com] - with comments from Google's Adam Lasnik
Vbulletin [webmasterworld.com] & Wordpress [webmasterworld.com] - avoiding duplicate content
Why "www" & "no-www" Are Different [webmasterworld.com] - the canonical root issue
Domain Root vs. index.html [webmasterworld.com] - another kind of duplicate
Other sources were Matt's blog, the discussion and comments regarding sitemap.xml, and IncredBill's comments on protecting website content in numerous posts.
Here are a couple more possible reasons:
11. PDF Files. If you're linking to a PDF on your site that wasn't created by you, use rel="nofollow" or block it in robots.txt, as Google will pick it up as duplicate content.
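A sketch of the link-level option (the file path and anchor text are hypothetical):

```html
<!-- Hypothetical example: stop Google following a link to a third-party PDF -->
<a href="/files/third-party-guide.pdf" rel="nofollow">Third-party guide (PDF)</a>
```

Alternatively, a Disallow rule for the folder holding the PDFs in robots.txt keeps them from being crawled at all.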
12. Secure Server. I have seen a few problems where the secure server was showing duplicate content or exposing the contents of customer orders. Again, this is of little use to Google, so remember to deal with it in robots.txt.
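One common approach (a sketch, not the only option) is to serve a separate robots.txt on the https:// host that blocks crawling of the secure side entirely — robots.txt is per-host, so the secure host needs its own file:

```text
# Hypothetical robots.txt served only at https://www.example.com/robots.txt
# - keeps order pages and other secure content out of the crawl
User-agent: *
Disallow: /
```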
[edited by: Keniki at 8:50 pm (utc) on May 12, 2007]