
Google SEO News and Discussion Forum

This 182-message thread spans 7 pages; this is page 6 of 7.
June 27th, August 17th, What's happening?
Pages are vanishing and reappearing -- even whole sites
DeROK (5+ Year Member) posted 3:15 am on Aug 22, 2006 (gmt 0)

< a continued discussion from these threads:
[webmasterworld.com...]
[webmasterworld.com...]
[webmasterworld.com...]
[webmasterworld.com...] >

-----------------------------------------------

Hey everyone,

I've been reading through the June 27th/August 17th threads, and I was wondering if somebody could clarify what's actually going on?

Like many of you on the board, my site got trashed in the SERPs on June 27th, only to recover a month later. At the time, I thought I had incurred a penalty, and I went to painstaking lengths to remove even the most minute possible violations. I thought that correcting those problems was the reason I recovered.

So needless to say, I was pretty upset when I got trashed again around the 17th when I knew my site was in total compliance with Google's guidelines. After visiting this forum, I now see that I was not the only one who has been experiencing this type of problem.

Here are my questions. If any of you can shed some light on these, I would really appreciate it.

1. Why is this happening? It seems like some kind of update, but why are certain sites getting trashed when others are standing firm?

2. Can I expect a recovery similar to the one I had in July?

3. Is there anything I can do to fix this, or am I completely at the mercy of Google on this one?

Thanks for your time!

[edited by: tedster at 6:25 am (utc) on Aug. 22, 2006]

 

trinorthlighting (WebmasterWorld Senior Member, 5+ Year Member) posted 10:52 pm on Aug 24, 2006 (gmt 0)

g1smd,

A factor you are forgetting is how often a keyword is searched on Google. That plays into the formula a lot.


g1smd (WebmasterWorld Senior Member, Top Contributor of All Time, 10+ Year Member) posted 11:04 pm on Aug 24, 2006 (gmt 0)

Amazing. I post an example, and the supplemental cleanup has breezed through a number of DCs and taken away many of the results that I wanted to show in the first (cosnam) example. Bad timing.

There were 5 or 6 duplicates for each of the two example searches, for the last 4 or 5 months. Now, just a few hours later, most of them are gone.

g1smd (WebmasterWorld Senior Member, Top Contributor of All Time, 10+ Year Member) posted 11:18 pm on Aug 24, 2006 (gmt 0)

Reposted censored version of post: WebmasterWorld does not allow any examples.

.

Try this on gfe-gv.google.com or on another datacentre that has "standard" results, and then try it again on gfe-eh.google.com which is the one that has the updated supplemental results.

.

First, look for: ZZZZZZZ AAAAAAA BBBBBBB

Notice that there are multiple results: these are duplicate content and most are marked as supplemental. The URLs differ by just a parameter or two (pf is "print friendly"; and tx is "text only"). Notice also the duplication caused by differing capitalisation (darn IIS).

The site should be using a meta robots noindex tag on all but one URL version of the page, but they do not do so. That would solve the "duplicate content" part of the problem - and would remove all but one URL version from the index.
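To make that fix concrete, here is a rough sketch (Python, purely for illustration - this is not the site's actual code, and the helper name is mine) of how a dynamic page could emit a meta robots noindex tag whenever the non-canonical pf or tx parameters are present:

from urllib.parse import urlparse, parse_qs

NON_CANONICAL_PARAMS = {"pf", "tx"}   # print-friendly and text-only variants duplicate the canonical page

def robots_meta_tag(requested_url):
    """Return a meta robots tag: noindex for parameter variants, index otherwise."""
    params = parse_qs(urlparse(requested_url).query)
    if NON_CANONICAL_PARAMS & params.keys():
        return '<meta name="robots" content="noindex,follow">'
    return '<meta name="robots" content="index,follow">'

print(robots_meta_tag("http://www.example.com/page.asp?id=7"))        # canonical version: index
print(robots_meta_tag("http://www.example.com/page.asp?id=7&pf=1"))   # print-friendly copy: noindex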

The spelling mistake "ZZZZZZZ" (instead of "YYYYYYY") was removed from the live pages many months ago (back around April, I think), yet these URLs still appear for that old search query. So not only are there supplemental results for URLs that were removed from the main index for being duplicate content; there are also supplemental results that simply represent a previous version of a page, allowing a page to rank for words that are no longer on that page.

Notice that the cache is newer and does NOT include the incorrect spelling. The pages rank for a word that is not on the live page and not in the Google cache. The word only appears in the snippet, and only appears in the snippet when the word was in the search query.

.

Next, search for YYYYYYY AAAAAAA BBBBBBB

Now you see the exact same URLs listed again. Again, several are marked as supplemental results because they are duplicate content. Those URLs differ by just a parameter value, but all lead to the same content.

Notice one important thing: these are the exact same URLs as returned for the earlier search. Notice that where some of the URLs were supplemental results for the "ZZZZZZZ" query, now the exact same URL is NOT supplemental for the "YYYYYYY" query.

Look at the canonical URL (the one without extra parameters). The "supplemental result" in this case is for the older "ZZZZZZZ" content, allowing surfers to still find your site long after that particular word was deleted from the page, while the current version of the page sits in the normal index and appears for a "YYYYYYY" search. So, as far as everything goes, this page is in the normal index. It does not have a problem (ignore the supplemental; it is Google being helpful to searchers, and it will go away after a year).

What you want to happen is for there to be one normal result for the canonical URL for the page (the URL without any added parameters would be the canonical one), and that URL will always show as supplemental when you search for words from the previous content that was on that page. The duplicate URLs should not be appearing in the index. The webmaster should be designing the site so that the other URL variants cannot be indexed. A quick application of meta robots noindex tags to the other URL versions would fix the problem.

So, this is why I keep saying to not count supplemental results for canonical URLs. They are an artifact from your site, kept by Google to allow people to find your site for search terms that are based on older versions of your content. They clean these up only after holding on to them for a year or more.

This is also why I keep saying that "URLs go Supplemental, not Sites". The Supplemental tag is handed to URLs on a case-by-case basis, not on a site or domain basis.

So, count only normal results. Look to get all current content fully indexed. Make sure that old URLs that 301 redirect or 404 error really do return the right status codes: and ignore the fact that those URLs show up as supplemental results for some queries. Google will clean them away in their own time. They are not important. For URLs that redirect or error, once they are tagged as Supplemental they are not considered to be duplicate content; they are just archived artifacts that will eventually disappear. They are additional ways that surfers can find your site.
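If you want to check that quickly, here is a small sketch (Python, just for illustration; the URLs and expected codes below are made up) that fetches the raw status code for a list of old URLs without following redirects:

import http.client
from urllib.parse import urlparse

def fetch_status(url):
    """Return the raw HTTP status code for a URL without following redirects."""
    parts = urlparse(url)
    conn = http.client.HTTPConnection(parts.netloc, timeout=10)
    conn.request("HEAD", parts.path or "/")
    status = conn.getresponse().status
    conn.close()
    return status

checks = {
    "http://www.example.com/old-page.html": 301,   # moved page: should 301 to the new URL
    "http://www.example.com/deleted.html": 404,    # removed page: should return 404
}

for url, expected in checks.items():
    actual = fetch_status(url)
    verdict = "OK" if actual == expected else "CHECK THIS"
    print(url, "expected", expected, "got", actual, verdict)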

Sidenote: in fact, for one .com site at #2, which for the last year had been redirecting both its .co.uk and .net domains to the .com (both had been partially indexed and dumped to supplemental just over a year ago), those other URLs always showed in the search results at #8 and #11, so the site really had three entries in the top 11. The #8 and #11 entries were just redirects for the last year, and Google cleaned them up in the very recent supplemental update last week, after almost exactly one year of listing them.

If you do have "real" "duplicate content", the same content reachable by multiple URLs, then you must work to get all the duplicates out of the index using meta robots noindex or robots.txt directives.

I have just done this with a 40 000 thread forum that was exposing more than 750 000 URLs to Google. I can confirm all of the effects that I have mentioned in this post. Now the forum has just the 40 000 threads indexed, each with one canonical URL (the 200 000 alternative thread URLs have been deindexed; it was a vBulletin forum), and just a few thousand pages of thread indexes showing too. All of the other URLs (some 450 000 pages that just say "Error. You are not logged in") are gone from the index. Those were simply URLs that a registered user would use to reply to a post, start a thread, send a PM, and many other actions that only a registered user should be performing. Search engines should not even be accessing those pages, but most forum software makes no attempt to stop them. So, the forum has gone from 750 000 indexed URLs - with 680 000 of them being junk, and mostly marked as supplemental - to just 45 000 indexed URLs, all of which are proper content (threads) or index listings (thread lists) of that content.
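As a sketch of that herding (Python for illustration; the script names below are typical vBulletin-style action pages, so treat the exact list as an assumption and adapt it to your own forum), the snippet decides which forum URLs deserve indexing and prints robots.txt Disallow lines for the rest:

from urllib.parse import urlparse

ACTION_SCRIPTS = {            # reply/compose/login pages: no search engine needs these
    "newreply.php", "newthread.php", "private.php",
    "login.php", "register.php", "sendmessage.php",
}
INDEXABLE_SCRIPTS = {"showthread.php", "forumdisplay.php"}   # real content and thread lists

def should_index(url):
    """True only for thread pages and forum listings."""
    script = urlparse(url).path.rsplit("/", 1)[-1]
    return script in INDEXABLE_SCRIPTS

# Emit robots.txt Disallow lines for everything that should never be crawled.
print("User-agent: *")
for script in sorted(ACTION_SCRIPTS):
    print("Disallow: /" + script)

print(should_index("http://forum.example.com/showthread.php?t=123"))   # True
print(should_index("http://forum.example.com/newreply.php?t=123"))     # False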

.

So, there are several types of supplemental results:

1. Results representing old-content versions of a URL, where the same URL appears in the normal index for other (newer) search terms. These get cleaned up by Google after a year. Ignore them.

2. Results that are "duplicate content". Get the site architecture sorted so that each page of content exposes only one indexable URL. These alternative URLs will be dropped very quickly if they are not supplemental. However, you must wait a year for the alternative URLs to be dropped if they are already reported as supplemental results. In some cases the "normal" URL will be dropped, only to reappear as a supplemental result a few weeks later. Don't worry about those, they are not harming anything. They will be dropped eventually. It takes about a year.

3. Supplemental results for pseudo-duplicate content. These are cases where the page content is different, but not different enough for Google, or you have repeated the title and/or meta description across multiple pages. This is a "special case" of duplicate content. The fix is in your own hands; get more unique content on the page, and make sure that every page has a unique title and a unique meta description. Again the pages will show as supplemental results for some search terms (the older content, after editing) and as normal results for other search terms (the newer content, after editing).

.

I also see a number of sites that have a large number of Supplemental Results caused by PageRank issues. These are nearly always down to one of several things.

1. The internal pages link back to /index.html, sending the PR there, but Google chose to list www.domain.com as the canonical URL. Make sure that every page of the site links back to http://www.domain.com/ in exactly that format.

2. Poor site architecture. Google recommends that you join Sitemaps. I recommend that you implement breadcrumbs and run Xenu LinkSleuth over your site. Make sure that all internal links are normal HTML links and that the site is easy to navigate. Think of your users as well as the search engines.

3. One other type of duplicate content that I didn't properly mention is non-www and www URLs for the same "page". Fix that by using a site-wide 301 redirect, one that preserves the originally requested folder and filename in the redirected URL (a rough sketch follows this list). The redirected URL will show as a supplemental URL for a year, but ignore it; it is NOT causing a problem.

4. Aligned to that is where you own multiple domains. Again, get a site-wide 301 redirect in place to redirect everything to one domain. Don't serve the same content at multiple domains.

5. No-one linking to the site. If no-one else thinks the site is worth linking to, Google might relegate most of the pages to supplemental.
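Here is the rough sketch promised in point 3: a plain function (Python for illustration; in practice this usually lives in the server configuration as a rewrite rule, and the hostnames here are hypothetical) that 301s every alias hostname to the one canonical host while preserving the requested path and query string:

from urllib.parse import urlsplit, urlunsplit

CANONICAL_HOST = "www.example.com"
ALIAS_HOSTS = {"example.com", "www.example.co.uk", "www.example.net"}

def canonical_redirect(requested_url):
    """Return (301, location) for alias hostnames, or None if already canonical.

    The originally requested path and query string are preserved in the target.
    """
    parts = urlsplit(requested_url)
    if parts.hostname in ALIAS_HOSTS:
        return 301, urlunsplit(("http", CANONICAL_HOST, parts.path, parts.query, ""))
    return None

print(canonical_redirect("http://example.com/widgets/blue.html?page=2"))
# -> (301, 'http://www.example.com/widgets/blue.html?page=2')
print(canonical_redirect("http://www.example.com/widgets/blue.html"))
# -> None: already the canonical hostname

The same logic covers point 4: just add every extra domain you own to the alias set.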

egomaniac (10+ Year Member) posted 11:25 pm on Aug 24, 2006 (gmt 0)

thms said:

If what you say is true, then I could link to my competitor site and append a dummy query string to the URL thus creating a duplicate content page on my competitor's site, like:

[competitor-domain.com...]

so the SEs will see a duplicate page and remove both index.php and index.php?a=1?

First of all I wouldn't say it happened to me if it did not. So of course it was true when it happened. Whether it is still true today is always an unknown with an unpublished SE algo... but I digress.

In my situation, the first page had a #1 ranking for a modestly competitive keyword phrase. Once the second page got put up, then the first page lost its #1 ranking. Once the second page got removed, the first page recovered its #1 ranking. This happened a couple of years ago, back in the days of monthly updates.

As for your example about a competitor linking to another site with a URL that tries to fake a copy of a page, I don't see how that would cause a problem. Somehow G knows the difference, which is borne out by the fact that such links exist by the thousands in the form of affiliate links.

The bottom line is that duplicate content filtering was put in place a few years back to keep unscrupulous webmasters from creating large sites with lots of PR power by simply replicating pages hundreds or thousands of times. I personally think the threshold for duplicate content is somewhere between 90% and 100% duplication of the entire HTML content, but who knows for sure. The fact that many pages (such as articles) exist out there and rank well while having exact body-content duplicates, yet differing page templates, bears out my theory, I believe.

Bewenched (WebmasterWorld Senior Member, 5+ Year Member) posted 12:12 am on Aug 25, 2006 (gmt 0)

Definitely some odd results showing up today on 72.14.207.104.

Doing a search for site:www.mydomain.com inurl:/subcategory shows something like 431 results.

No biggie right?

Well... I got to the bottom of the screen, and clicking on any of the 1 2 3 4 5 6 7 8 9 10 links takes me to a page that says there are only two pages (1 2) and a link for omitted results.

So I click on omitted results and it shows 925 results at the top of the screen. Very strange for it to show 10 pages, yet clicking on any of them forces you to click through to the omitted results.

Bewenched (WebmasterWorld Senior Member, 5+ Year Member) posted 12:15 am on Aug 25, 2006 (gmt 0)

thms said:
If what you say is true, then I could link to my competitor site and append a dummy query string to the URL thus creating a duplicate content page on my competitor's site

Or worse... link to them under SSL (https://www.competitor.com). If they don't have that disabled for those pages, it will give duplicate content penalties.

Happened to us! They are viewed as different sites by Google, and there is no way to search for it with inurl:https.

g1smd (WebmasterWorld Senior Member, Top Contributor of All Time, 10+ Year Member) posted 12:20 am on Aug 25, 2006 (gmt 0)

>> ...says there are only two pages 1 2 and a link for omitted results... <<

bewenched: this is warning you about "duplicate content". It might just be too-similar page titles and/or meta descriptions, or it might mean that your on-page content is too light.

It is the same effect that Matt Cutts has already talked about [threadwatch.org] in recent days.

Bewenched (WebmasterWorld Senior Member, 5+ Year Member) posted 12:50 am on Aug 25, 2006 (gmt 0)

this is warning you about "duplicate content". It might just be too-similar page titles and/or meta descriptions, or it might mean that your on-page content is too light.

It is the same effect that Matt Cutts has already talked about in recent days.

These are the pages that got linked to under SSL https://www.mydomain.com

All the pages had unique content. Some were products, so a lot of the main description may have been similar, but the application was completely different.

Part of the problem is that 4 months ago we had a dropdown menu for some major navigation at the top of some of our pages, and for whatever reason Google decided the options in a dropdown list are copy on the page and used them as the page description. They listed them in the results description like "option1, option2, option3, option4, option5", etc., but there was no comma in our dropdown.

Needless to say once I saw those results coming up on MSN I immediately took the menu off. Yahoo never listed them this way.

g1smd (WebmasterWorld Senior Member, Top Contributor of All Time, 10+ Year Member) posted 12:55 am on Aug 25, 2006 (gmt 0)

They obviously felt that those words were of better quality than whatever you had in your meta description at the time. Make sure to review the meta description on every page of your site again.

Bewenched (WebmasterWorld Senior Member, 5+ Year Member) posted 1:09 am on Aug 25, 2006 (gmt 0)

I'm real tempted to take off all meta description tags from the site and make the bot do all the work.

Regarding the SSL problem: if anyone needs a "work-around" to prevent that from happening in ASP, sticky mail me and I'll get you the code.
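The ASP code itself is not posted here, but the general idea looks something like this sketch (Python purely for illustration, and the secure-path prefixes are made up): if a page that should only live on http is requested over https, send a 301 back to the http URL (a meta robots noindex on the https copy works too).

from urllib.parse import urlsplit, urlunsplit

SECURE_PREFIXES = ("/checkout", "/account")   # pages that legitimately live on https

def fix_scheme(requested_url):
    """Return (301, http URL) when a normal page is fetched over https, else None."""
    parts = urlsplit(requested_url)
    if parts.scheme == "https" and not parts.path.startswith(SECURE_PREFIXES):
        return 301, urlunsplit(("http", parts.netloc, parts.path, parts.query, ""))
    return None

print(fix_scheme("https://www.example.com/widgets/blue.html"))
# -> (301, 'http://www.example.com/widgets/blue.html')
print(fix_scheme("https://www.example.com/checkout/cart"))
# -> None: genuinely secure page, leave it alone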

trinorthlighting (WebmasterWorld Senior Member, 5+ Year Member) posted 1:20 am on Aug 25, 2006 (gmt 0)

I would not take meta tags off; other search engines use them, plus you do not know what the bot will pull off the page.

Has your Google traffic dropped off?

fjpapaleo (10+ Year Member) posted 1:59 am on Aug 25, 2006 (gmt 0)

re: duplicate meta tags

What about product pages that use "sort by price", "sort by size", etc.? I think this might be part of my problem, although I see plenty of other sites doing this and not showing any ill signs.
The URLs are different but the title and description are the same. Since the pages are dynamic, there's no way of adding a no-index tag. Any ideas on this?

Bewenched (WebmasterWorld Senior Member, 5+ Year Member) posted 2:39 am on Aug 25, 2006 (gmt 0)

Trinorthlighting,
Yes, Google traffic has dropped dramatically over the last few months.

Thank goodness we don't rely totally on free SERPs. The funny thing is, since Google has been dropping, we've finally started getting a lot more natural search results and a lot more spidering from Yahoo and MSN... even Ask has been coming around.

Bewenched (WebmasterWorld Senior Member, 5+ Year Member) posted 3:19 am on Aug 25, 2006 (gmt 0)

Since the pages are dynamic there's no way of adding a no-index tag

You should be able to do an IF... THEN and set them to noindex if you are using a custom application.

Halfdeck (5+ Year Member) posted 10:04 am on Aug 25, 2006 (gmt 0)

In cases where your pages are missing a META description tag, or it's shorter than ~50 characters, Google uses signals (e.g. H1, P) to guess where content begins, and starts snippetizing from there.

But say the Heading is wrapped in HREF - Google will often ignore it (standalone HREF lying outside of a paragraph is not considered content). Or say if you add an HR between H1 and P. Google will skip over that (HR tells Google H1 and P are unrelated, therefore Google delves deeper looking for the next H tag). Other separator signals include DIV, SPAN, and TABLE, where TABLE seems to do the best job of getting Google lost.

When Googlebot doesn't find anything useful on the first run, it goes back up top and starts over, except this time it indiscriminately picks up anything and everything from text in IMG ALT, NOSCRIPT, OPTION, and HREF (i.e. menu link text).

To prevent that from happening, I either use unique META description tags between 60~150 chars, or like I do on one of my blogs, drop META tags altogether, and structure pages in a way that makes content easy for Googlebot to find.
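As a rough illustration of that kind of check (Python, with made-up page data; in practice you would pull the descriptions from your CMS, database, or a crawl), this flags descriptions that are missing, shorter than the ~60 character mark, or duplicated across URLs:

from collections import defaultdict

# Made-up page data for the sketch.
pages = {
    "/widgets/blue.html":  "Blue widgets in all sizes, with UK delivery and a two-year guarantee.",
    "/widgets/red.html":   "Widgets",    # far too short
    "/widgets/green.html": "",           # missing
    "/widgets/old.html":   "Blue widgets in all sizes, with UK delivery and a two-year guarantee.",  # duplicate
}

MIN_LEN, MAX_LEN = 60, 150

by_description = defaultdict(list)
for url, desc in pages.items():
    desc = desc.strip()
    if not desc:
        print(f"{url}: MISSING meta description")
    elif len(desc) < MIN_LEN:
        print(f"{url}: description too short ({len(desc)} chars)")
    elif len(desc) > MAX_LEN:
        print(f"{url}: description too long ({len(desc)} chars)")
    if desc:
        by_description[desc].append(url)

for desc, urls in by_description.items():
    if len(urls) > 1:
        print("DUPLICATE description shared by:", ", ".join(urls))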

[edited by: Halfdeck at 10:06 am (utc) on Aug. 25, 2006]

trinorthlighting (WebmasterWorld Senior Member, 5+ Year Member) posted 12:44 pm on Aug 25, 2006 (gmt 0)

I have seen cases where a person would have a category or navigation box on the left or top side of their site and no meta tags. Guess what the descriptions were in Google for all their pages?

Category A Category B Category C etc....

Use meta tags whenever you can, and use them wisely. Write the descriptions for your customers.

Northstar (5+ Year Member) posted 1:37 pm on Aug 25, 2006 (gmt 0)

Bewenched:

Today I'm noticing the same problem you are having. After reading the replies to your post, I realized I do have an advertisement on each page that has a drop-down menu. I wonder if Google was picking up on that and causing duplicate content issues. I have removed this ad, so hopefully it fixes the problem if that was the cause. I don't know why the Google bot would skip over my meta tags and index this advertisement drop-down menu. I do have dynamically produced meta tags on every page. The meta tags are different on every page and over 80 characters long.

[edited by: Northstar at 1:41 pm (utc) on Aug. 25, 2006]

texasville (WebmasterWorld Senior Member, 5+ Year Member) posted 3:09 pm on Aug 25, 2006 (gmt 0)

>>>>> 5. No-one linking to the site. If no-one else thinks the site is worth linking to, Google might relegate most of the pages to supplemental. <<<<< from g1smd...

And that is a MAJOR FLAW of Google. How are you supposed to get links to a page about "blue widgets for sale by our company"? It is not really going to draw links organically, particularly for an independent retailer. So G doesn't put those pages in the regular index, which means Google doesn't really have an accounting of what your site is about, or directs surfers only to your index page.
G does this because G only understands academic or MFA pages.
Google needs to practice what it preaches. Constantly we are being told to build our sites for the surfers and everything else will take care of itself. Now I am calling on Google to build its search engine for surfers. Practice what you preach.
Don't force us to buy links to get our pages properly indexed.

simonmc (5+ Year Member) posted 3:26 pm on Aug 25, 2006 (gmt 0)

Hey Texas, only google is allowed to make money from the internet. Didn't you know?

maschu (10+ Year Member) posted 3:39 pm on Aug 25, 2006 (gmt 0)

Our English index page was #1 for several years (2-word keyword, 150 million pages). On 2005/08/05 the page dropped to position #12-14 (with google.com hl=en or any other language tag, except hl=de).
Google.com hl=de shows me the old rankings. Our .com domain, hosted in Germany, has about 20% of its backlinks from German sites. Do you see any connection with the 27th/27th/17th problem?

g1smd (WebmasterWorld Senior Member, Top Contributor of All Time, 10+ Year Member) posted 5:19 pm on Aug 25, 2006 (gmt 0)

>> I'm real tempted to take off all meta description tags <<

Just a couple of pages back I wrote this:

"An omitted meta description causes roughly the same problems as an identical meta description."

"In the omitted case, Google uses the first words on the page: often from the nav bar, and therefore often identical too."

It is worth paying attention to that detail. I have proved it time and time again on dozens of sites.

.

>> What about product pages that use "sort by price", "sort by size", etc.? I think this might be part of my problem, although I see plenty of other sites doing this and not showing any ill signs.
The URLs are different but the title and description are the same. Since the pages are dynamic, there's no way of adding a no-index tag. Any ideas on this? <<

If they are dynamic you can add whatever you want on a URL by URL basis. That is the power of a dynamic site. Either tweak the meta description for different sorts, or set it up so that only the "sort by price" pages are indexed, but still have easy access to the other sort options to keep the visitor happy.
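A rough sketch of that approach (Python for illustration; the "sort" parameter name and the category text are assumptions, not anyone's real site): only the default "sort by price" view stays indexable, the other sort orders get a meta robots noindex, and the meta description varies with the sort.

from urllib.parse import urlparse, parse_qs

DEFAULT_SORT = "price"   # the one sort order allowed into the index

def listing_head_tags(requested_url, category):
    """Build the robots and description tags for a sorted product listing."""
    query = parse_qs(urlparse(requested_url).query)
    sort = query.get("sort", [DEFAULT_SORT])[0]
    if sort == DEFAULT_SORT:
        robots = '<meta name="robots" content="index,follow">'
    else:
        # Alternate sorts stay reachable for visitors but out of the index.
        robots = '<meta name="robots" content="noindex,follow">'
    description = '<meta name="description" content="%s sorted by %s.">' % (category, sort)
    return robots + "\n" + description

print(listing_head_tags("http://www.example.com/widgets?sort=size", "Blue widgets"))
print(listing_head_tags("http://www.example.com/widgets", "Blue widgets"))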

.

HalfDeck: some of the things that you mention are invalid HTML, but they are commonly seen in the page code of some sites. So, avoiding some types of invalid HTML markup is another key.

You can't have a heading inside a link, but you can have a link inside a heading. I build all pages using headings, paragraphs, lists, tables, and forms; all content is inside these blocks.

maschu (10+ Year Member) posted 10:03 pm on Aug 25, 2006 (gmt 0)

>> Our English index page was #1 for several years (2-word keyword, 150 million pages). On 2005/08/05 the page dropped to position #12-14. <<

Correction: sorry, the drop date was 2006/08/05, not 2005.

Bewenched (WebmasterWorld Senior Member, 5+ Year Member) posted 4:57 pm on Aug 26, 2006 (gmt 0)

Northstar said:
I do have an advertisement on each page that has a drop-down menu. I wonder if Google was picking up on that and causing duplicate content issues. ... The meta tags are different on every page and over 80 characters long.

In our case Google completely ignored the unique meta descriptions and chose the dropdown, which really makes no sense. I'm very surprised they spider form fields like that. What a waste. Our navigation was very easy for our users, and now they have to scroll and select through two different pages, making it much less user friendly.

It would be so much better for Google to just index the pages in our Sitemaps and quit spidering so many dynamic pages.

g1smd (WebmasterWorld Senior Member, Top Contributor of All Time, 10+ Year Member) posted 10:56 am on Aug 27, 2006 (gmt 0)

Herd the bot. Tell it what you want indexed, and what you don't, with nofollow attributes on links and meta robots noindex tags on pages. It's all in your control.

I have just done this for a site that was exposing 750 000 URLs to Google, of which more than 600 000 did not need to be indexed - and now they are not.

Redirect all non-canonical URLs with a 301 redirect pointing to the correct URL to make sure that each piece of content only gets indexed once.

g1smd (WebmasterWorld Senior Member, Top Contributor of All Time, 10+ Year Member) posted 11:48 am on Aug 27, 2006 (gmt 0)

More thoughts: [webmasterworld.com...]

Northstar (5+ Year Member) posted 3:48 pm on Aug 27, 2006 (gmt 0)

So it's Aug. 27th and still no change in Google. I wonder when we will see the next update?

wanderingmind (WebmasterWorld Senior Member, 10+ Year Member) posted 4:06 pm on Aug 27, 2006 (gmt 0)

There was a data refresh on August 16th, and my site went down again, so it's unlikely we will see another refresh today.

g1smd, about meta descriptions: my site, which was hit during a few alternate refreshes, had identical descriptions on, say, around 5% of pages. Would that lead to a flip-flop in refreshes?

I have gone ahead and changed them, but wouldn't changing so many pages in one shot lead to further triggers at Google? I suppose one has to do it, but I'm just checking whether it will lead to some problems. Also, if it doesn't lead to problems, would it be fixed, and rankings back, after the next crawl or refresh?

wanderingmind (WebmasterWorld Senior Member, 10+ Year Member) posted 4:10 pm on Aug 27, 2006 (gmt 0)

Oh, and would the identical meta tag problem on a few pages lead to a loss of rank across the site? My loss of rank during the last refresh is across the site, even hitting pages with their own meta tags.

Tomseys (10+ Year Member) posted 5:03 pm on Aug 27, 2006 (gmt 0)

My sites are still missing from the index, though the back links have been updated. Very frustrating.

g1smd (WebmasterWorld Senior Member, Top Contributor of All Time, 10+ Year Member) posted 9:34 pm on Aug 27, 2006 (gmt 0)

If you have meta description repetition then fix it.
Matt Cutts pointed to that as a problem only days ago.

I don't think you would be hit for making a mass change if that change makes sense to do - and it does here.

Ride45 (10+ Year Member) posted 10:44 pm on Aug 27, 2006 (gmt 0)

g1smd,
Webmasters will always perceive "going supplemental" as a penalty.
In the literal sense, it should not be a bad thing or a penalty. Google is simply "cleaning house" and removing pages that are just cluttering the main index, or not needed in the main index because the content/pages are not unique enough.
I had over 40K pages in the main index, and yeah, I was always wondering why; the 40K pages just didn't need to be there.
So the fact that the number has been cut down to only 3,500 pages in the main index, with the others all going supplemental, is not a bad thing in my opinion.

Yes, for me the reality is a penalty - I no longer rank in the top 5 for competitive keywords where I had sat comfortably for the last 3 years. Now I am on page 5 - I run an authority site, yet 4+ pages of sites rank higher than I do - and it's been about 5 days (my panic threshold). So it just so happens there is a penalty at the same time as going supplemental. Yet if I do all of your suggestions from the last 2 years, then, if all works, with one deep crawl, 40K pages should return to the index, the penalty would be lifted, and I would return to the top 5 in the SERPs?
