It seems nearly all the data centers are showing some form of bigdaddy results now. I am not sure if there is a 100% bigdaddy/not bigdaddy distinction anymore.
The only two DCs I see that are showing different results are 22.214.171.124 and 126.96.36.199 and those results seem to be from late December, early January, at least in the sectors I monitor.
[edited by: tedster at 5:31 am (utc) on Mar. 25, 2006]
My problem these days is that Google can't decide who the content owner is. I had a well-paid copywriter write unique content for my site pages; a few days later a scraper site is showing for my content, pushing me into the omitted results... go figure... rrrr
Sounds like the scraper was able to get your content indexed first due to having a higher-PR link shooting into the site. File a complaint (DMCA) with Google, Yahoo & MSN.
I had a huge problem with this in the past.
Google is sailing very close to conflict of interest on this. I have had content ripped off by the fake "index scrapers" ... you know, send out a bot and put the results of url, page name, description meta into your index of alabama.widgets.com or ilovesex.widgets.com (one of your 100 other subdomain sites) that contain ZERO original content but DO contain AdSense links.
But hey! Not only does Google not apply their supposed penalties for no original content or duplicate content to you, they take your advertising and rotate your ads onto the same pages you're ripping off.
Google didn't create scraper sites but they gave them the biggest boost ever with AdSense and they appear to allow them to operate under different rules (Hey! they generate ad clicks for Google so that makes it all right, huh?)
Wish it was the case... my site is a strong PR6, crawled daily, and all second-level pages are PR6 too. The scraper site doesn't even have PR... and I showed for the terms before the scraper got them, but afterwards he got the top... :( Way to go, Google!
Of course, "cheating" is a relative term, but if you meet Google's threshold, you are cheating.
Guess what? I think the same thing: I am being penalized for something. I asked Google and they said it's not a penalty...
The site is 100% white hat: no link trading, no spam, full of original content, 100% hard coded, no tricks, nothing. I've been in this business for over 6 years and I know almost every black hat tactic out there; I've never used one and never will.
Perhaps this is why I am being penalized!
Well bobmark, I think your best bet is to make a robots.txt and ban all the bots that you do not want hitting your site. At least that is the way that will save you the most money & aggravation in the long run.
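For what it's worth, a minimal robots.txt along these lines blocks one bot while leaving everyone else alone. "BadScraperBot" is just a placeholder; substitute the actual User-agent strings of the bots you want to ban (and keep in mind that badly behaved scrapers often ignore robots.txt entirely):

```
# Ban one specific unwanted bot ("BadScraperBot" is a placeholder name)
User-agent: BadScraperBot
Disallow: /

# Everyone else may crawl the whole site
User-agent: *
Disallow:
```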
Court battle with Google isn't for the meek.
Walkman, trustrank? The only problem with that theory is that his main site is a 7... while he says 6, he is passing the 6 to his child pages, so the site must be at 7 by Google's internal PR score. I doubt a low-trust site would receive PR that high... if there is a trust rank...
TrustRank...SEO double speak... it seems to me.
And further (maybe it's related, maybe not): if my "trust rank" is so low, why on earth would Google use my site to define half of the keywords that are related to my industry? When using define:keyword, my site is on top for most of the words.
Try "define:shabbat". Why am I there if not "trusted"?
Some people flip out which is understandable but to have you guys even communicating with us on the issue and fixing it in under 30 days is just fantastic to some of us old schoolers. Thanks again and keep up the great work!
Well, this word (6,000,000 results) is just an example; I have other words with 18,000,000 results and more. And below me on the definition pages are sites that are definitely authorities in my area (and they are optimized for these keywords). The only thing I can think of is that the web definition does not have anything to do with "trustrank", if there is such a thing...
I said "in the past," Seo1. My personal problem was solved long ago, but the practice keeps on for others, and last time I looked Google hadn't changed its guidelines to say "duplicate content, zero original content, etc. are just fine as long as you generate some AdSense clicks."
Maybe they should; at least it would be honest.
Google has cached another 60k pages, up from 30k yesterday.
Results are returning, but as yet no traffic, as they are in poor positions. I think we're at No. 90 for a phrase we previously held at No. 1, and our unique content phrases are right down at the bottom also.
These still show the old pages and supplementals.
My hope is that these will be removed shortly, but who knows what's in play.
These show completely different results some of the time.
Always remember to hit refresh a few times when you search. A refresh often brings a completely different result set to the one you see first.
Always explore what you get with a normal search and with &filter=0 on the end of the search URL because that can also give very different results again.
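For anyone who hasn't tried the &filter=0 trick, it's nothing more than an extra query parameter on the normal search URL. A quick sketch (the query string here is only an example):

```python
from urllib.parse import urlencode

def google_search_url(query, unfiltered=False):
    """Build a Google search URL; filter=0 turns off the duplicate filter."""
    params = {"q": query}
    if unfiltered:
        # This is the &filter=0 being discussed above
        params["filter"] = "0"
    return "http://www.google.com/search?" + urlencode(params)

print(google_search_url("example widgets"))
# http://www.google.com/search?q=example+widgets
print(google_search_url("example widgets", unfiltered=True))
# http://www.google.com/search?q=example+widgets&filter=0
```

Comparing the two result sets side by side is often the quickest way to see what the duplicate filter is hiding.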
The link says "In order to show you the most relevant results, we have omitted some entries very similar to the 10 already displayed. If you like, you can repeat the search with the omitted results included".
Clicking that link will show you pages that were omitted for being too similar. Just having the same title and/or meta description is enough for pages not to be listed.
There is no suggestion that the pages that are then shown are going to be purely supplemental results. They sometimes are, but very often they are not. This is purely a link to omitted results.
I don't think supplementals are new..
That Google doesn't display them any longer is new, though.
Google has always had two indexes. The main (or forward) index contains those sites most likely to be returned for the average search query.
The auxiliary (supplemental) index is made up of the rest of the pages, and it is called upon only when requested (by clicking the link) or when the main index does not contain enough scored documents/resources to satisfy the user's search query properly.
The below is from Google:
3. Why is my site labeled "Supplemental"?
Supplemental sites are part of Google's auxiliary index. We're able to place fewer restraints on sites that we crawl for this supplemental index than we do on sites that are crawled for our main index. For example, the number of parameters in a URL might exclude a site from being crawled for inclusion in our main index; however, it could still be crawled and added to our supplemental index.
The index in which a site is included is completely automated; there's no way for you to select or change the index in which your site appears. Please be assured that the index in which a site is included does not affect its PageRank.
Also reading this page will help you understand Google a bit better.
Hope this helps
While this may show some supplementals, it is certainly not meant for that function. You're kidding yourself if you think these are all the supplementals you have to worry about.
You need to work much harder to find your supplementals, but it can be done.
First you need to be sure Google has indexed all your pages.
Then count your regular results and supplementals and see if that total equals all of the pages of your site and Google's count of your site.
Remember too that what you see in results today is at times what was done weeks or months ago.
I hope that makes sense.
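The counting exercise above is simple arithmetic, but writing it down makes the point. A rough sketch, with made-up numbers:

```python
def supplemental_gap(site_pages, regular_results, supplemental_results):
    """Pages on the site that appear in neither the regular nor the
    supplemental results, i.e. pages Google hasn't indexed at all."""
    return site_pages - (regular_results + supplemental_results)

# Hypothetical example: a 500-page site with 350 regular
# and 120 supplemental results showing
missing = supplemental_gap(500, 350, 120)
print(missing)  # 30 pages unaccounted for
```

If the gap is large, your first problem is getting indexed at all; only then does the regular-vs-supplemental split tell you anything.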
Your example site of someone wrongfully gone supplemental contained the following-
1) Same title on all pages (as mentioned)
2) Same meta description text on all pages
3) Text in main nav is done in graphics (with no alt text to boot)
Was this an example of a standard supplemental site or an example of how to create a supplemental site?
I guess the moral of the story here is: before crying wolf, read the Google Webmaster Guidelines.