Supplemental Results - the Plus Side
Keniki
msg:3338228
6:33 pm on May 12, 2007 (gmt 0)

Lots of websites are experiencing supplemental results (including my own), so I thought I would put down some ideas and thoughts for debate. It's just my opinion, so feel free to tell me if you think I'm talking crap.

What is a Supplemental Result?
A supplemental result is a result for a webpage that does not fully meet Google guidelines. This may be for a variety of reasons. Once a result has been identified as supplemental, it will carry no PageRank until the problem is put right. Eventually it will be removed from the index altogether.

Can a Supplemental Result come Back?
Yes, absolutely: solve the problem and the page can re-enter the normal index.

Why are Google Doing This?
It's my belief Google is searching for, and wanting to give weight to, originality and good site structure. Lots of people, including myself, have suffered the annoyance of having content stolen off their webpage and, to add insult to injury, have seen the stolen content rank higher than them. If Google is able to identify the original source of content, then scrapers, cached proxy servers and SEOs who copy other sites can be filtered from the results, and the original content, no matter whose it is, is preserved. In addition, Google wants to keep its users safe from malicious sites that infest your computer. OK, so there's some collateral damage, but won't the internet be a better place if they get this right?

Possible Causes for Supplemental
1. Canonical Issues
Lots of people have written about this, but in simple terms it's about serving your content only one way. So look at your server setup, and if you are serving content more than one way, solve it in .htaccess, or use meta tags or robots.txt to prevent duplicate content being crawled (see the .htaccess sketch after this list).
2. Meta Description
The meta description, if used, should be accurate and describe the page. Ideally, use text in the meta description that also appears within the content of the page (see the sketch after this list).
3. Malicious Code
Sites serving malware look to be a thing of the past. Users of Google will be kept safe and warned about them.
4. Orphaned Content
Make sure you keep your pages strongly linked. If you orphan a page, it will go supplemental.
5. Site Structure
If you have pages five levels deep from the root, it's a given that the site is not very usable, so think carefully about your site structure and ensure your pages are strongly linked.
6. Link Text
Try to describe each page uniquely and accurately in its link text. If you have a thousand pages all using the same link text pointing to parts of the site, it's quite possible those links will be ignored. Google's filter against Google bombing also means it's important that the words in your link text are used within the content of the page you are linking to.
7. Duplicate Content
If you're creating a page, make sure it's of unique use to Google. They don't want thousands of pages saying "write your review here", nor do they want thousands of versions of the same news story brought in off an RSS feed from the BBC. If you're going Web 2.0, make sure you're offering something unique, and keep the repetitive content away from search. You could, for instance, use a folder for review submissions and just tell Google not to crawl it (see the robots.txt sketch after this list). Make sure you haven't mirrored someone else's content; if you've copied someone else, you should remove or rewrite the content. Unfortunately, this is also where collateral damage comes in. Some sites have their unique content scraped and copied, and people set up proxy servers to cache their pages. These are the tactics of so-called Black Hat SEO experts. Google is getting better and better at catching these guys and is doing a great job of protecting new content, but I wish they would be more forthcoming with help for webmasters that "already" have had their content copied. More details on this below.
8. Duplicate Content through Scrapers
Many people will set up robots that crawl your site with the sole intention of stealing unique content. They take content off loads of sites and then typically use it to form directory content, to promote their site, or to sell advertising. There are others that do this to damage your site on behalf of competitors. Other methods scrapers use include running site:www.yoursite.com queries on search engines, using information off sitemap.xml, and setting up fake sitemap generators to lure you in and copy your content as the sitemap is created. If your content is scraped and published elsewhere, then your pages could end up supplemental. Google and other search engines are doing some good work in protecting the site:www.yoursite.com command.
9. Duplicate Content through Proxy Servers
Used to hide IP addresses, proxy servers are one of the worst culprits for causing duplicate content. They will copy thousands of pages and then set up links to the proxy, or push it through search, so the proxy page gets crawled and your page gets sent supplemental. Google is doing some good work on this, and the technique is starting to fail.
10. Poor PageRank
If the site has poor PageRank and little trust, then many pages may appear supplemental at first.

Anyway, I think I'll stop at 10...
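For the canonical issue in point 1, here's a minimal .htaccess sketch, assuming Apache with mod_rewrite and a hypothetical example.com domain; it 301-redirects the bare domain to the www hostname so the content is only served one way:

    # Turn on the rewrite engine
    RewriteEngine On
    # If the request arrived on the bare (non-www) domain...
    RewriteCond %{HTTP_HOST} ^example\.com$ [NC]
    # ...send a permanent redirect to the www hostname
    RewriteRule ^(.*)$ http://www.example.com/$1 [R=301,L]

The permanent (301) status is the important part: a temporary 302 would leave both hostnames competing in the index.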
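For point 2, the tag in question goes in the page <head>; the wording here is hypothetical:

    <meta name="description" content="Hand-made blue widgets: sizes, prices and delivery times.">

The idea is that the same words also appear in the visible body copy of the page.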
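And for point 7, a robots.txt sketch that keeps a repetitive section out of the crawl; the /submit-review/ folder name is just an example:

    # Applies to all well-behaved crawlers
    User-agent: *
    # Keep the boilerplate "write your review here" pages out of the crawl
    Disallow: /submit-review/

Remember that robots.txt lives at the root of the host and matches paths by prefix.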

 

g1smd
msg:3338267
7:24 pm on May 12, 2007 (gmt 0)

That's a reasonable summary to get started with, and many of the more intricate details are covered in depth in multiple previous threads.

There are several prior threads that also ought to be referenced at this point.

tedster
msg:3338271
7:29 pm on May 12, 2007 (gmt 0)

Here are some related threads from the Hot Topics area [webmasterworld.com] that we keep pinned to the top of the Google Search forum's index page:

Supplemental Results [webmasterworld.com] - what exactly are they?
Duplicate Content [webmasterworld.com] - get it right or perish
Duplicate Content [webmasterworld.com] - comments from Google's Adam Lasnik
Thin Affiliate Pages [webmasterworld.com] - with comments from Google's Adam Lasnik
Vbulletin [webmasterworld.com] & Wordpress [webmasterworld.com] - avoiding duplicate content
Why "www" & "no-www" Are Different [webmasterworld.com] - the canonical root issue
Domain Root vs. index.html [webmasterworld.com] - another kind of duplicate

Keniki
msg:3338299
8:38 pm on May 12, 2007 (gmt 0)

Sorry, I should have referenced my sources in the original post. But there are some there I hadn't read yet, so thanks Tedster :)

Other sources were Matt's blog, the discussion and comments regarding sitemap.xml, and IncredBill's comments on protecting website content in numerous posts.

Here are a couple more possible reasons:

11. PDF Files. If you're linking to a PDF on your site and it wasn't created by you, use rel="nofollow" or block it in robots.txt, as Google will pick it up as duplicate content.

12. Secure Server. I have seen a few problems where the secure server was showing duplicate content, or even the contents of customer orders. Again, this is of little use to Google, so remember to deal with it in robots.txt (see the sketch below).
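A sketch covering both points, assuming Apache and hypothetical paths. Since robots.txt is fetched separately for each protocol and host, a common trick for point 12 is to serve a disallow-everything file on the HTTPS side:

    # robots.txt served on the normal http host:
    User-agent: *
    # Point 11: keep third-party PDFs out of the crawl
    Disallow: /downloads/

    # robots_ssl.txt, served as robots.txt on the https host:
    User-agent: *
    # Point 12: block the entire secure host
    Disallow: /

and an .htaccess rule to swap the file in on the secure port:

    RewriteEngine On
    # When robots.txt is requested over SSL...
    RewriteCond %{SERVER_PORT} ^443$
    # ...serve the blocking file instead
    RewriteRule ^robots\.txt$ /robots_ssl.txt [L]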

[edited by: Keniki at 8:50 pm (utc) on May 12, 2007]

g1smd
msg:3338341
10:44 pm on May 12, 2007 (gmt 0)

14. Millions of "You are Not Logged In" pages in forums, for the URLs that you use to start a new thread, reply to a post, send a PM, or edit a profile, etc. (one way to block these is sketched below).
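A minimal robots.txt sketch for keeping those action URLs out of the crawl, assuming hypothetical vBulletin-style script names; check the actual URLs your forum software generates:

    User-agent: *
    # Action URLs that only show "You are Not Logged In" to crawlers
    Disallow: /newthread.php
    Disallow: /newreply.php
    Disallow: /private.php
    Disallow: /profile.php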

[edited by: g1smd at 10:57 pm (utc) on May 12, 2007]

Keniki
msg:3338344
10:50 pm on May 12, 2007 (gmt 0)

Nice one, g1smd :)

You missed 13, so I guess you're superstitious. Let's make 13 a cured reason ;)

13. Hijacking via 302 and meta refresh. Happy to say this is no longer something that can affect your site. Good job, Google!

steveb
msg:3338361
11:50 pm on May 12, 2007 (gmt 0)

"A supplemental result is a result for a webpage that does not fully meet Google guidelines."

That has nothing whatever to do with supplementals.
