>>The reason in our opinion is that it is not an update - it is the introduction of a new filter, and the subsequent application of a penalty to all sites that attract that filter.<<
If that were the case, then it should have been very easy for GoogleGuy or Matt Cutts to explain or deal with instead of keeping silent. They would just need to post ONE LINE... something like this:
Folks! We are not updating; we have just applied a few filters.
What happened to the Google index? It is showing supplemental results for pages that do exist, have no duplicate content, and are well interlinked.
I heard that if you have the same title and description on several pages, Google will consider it duplicate content; maybe that could be your case.
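For what it's worth, giving each page its own pair is easy enough. A hypothetical example (the widget page is made up, not from anyone's actual site) would put in each page's head:

<title>Blue Widget Installation Guide</title>
<meta name="description" content="Step-by-step instructions for installing blue widgets.">

rather than the same boilerplate title and description repeated across the whole site.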
In my sector this is definitely not the case. There are sites of hundreds of pages with the same titles and descriptions, one in particular ranking #1 for a keyphrase I follow.
Having just searched on "GoogleGuy" (not using Google BTW) and read a number of his posts, the opposite appears to be the case: he seems ready to discuss "updates" but not so ready to discuss "filters" or other algorithmic details.
And surely no rational update would fail to rank us highly for processes we have invented. An out-of-control, unbalanced penalty filter, though? That could.
Whatever it is, we are now moving beyond Google, having come to terms with this madness. If they want to exclude a site like ours, so be it.
No, all pages have different titles and descriptions related to the page content; they are plain HTML pages, with no dynamic pages used.
It's like they have agreed to be silent on this 'update'.
Look at how long this thread is...
All the SEO boards are the same: the authority members are not attending. Do they not know, or are they perhaps busy correcting their own SERPs?
GG and MC, where are you when there is an issue to resolve? I go to MC's blog and see nothing but diversions.
No one has to admit to the screw-up; just tell us whether it's an update, a roll-back, or a clean-up.
I've noticed a similar pattern.
He doesn't comment (at least not until years after the fact) on "real" things, like bugs (unless it's to deny them), filters, the sandbox, or anything else that really matters. So when a lot of noise appears and he doesn't comment, it's just as good as a confirmation from him. If it weren't the case, he'd just come out and deny it.
>>The reason in our opinion is that it is not an update - it is the introduction of a new filter, and the subsequent application of a penalty to all sites that attract that filter.<<
It isn't that simple. Consider:
The #1 result for one of my major keyphrases (where a guidebook publisher and I have been playing leapfrog with the #1 and #2 positions for much of 2005) is now from an "open source" site with very little content on the topic that wasn't even in the top 10 results until now. If a filter were all that had changed, the site wouldn't have risen to the top unless the sites immediately below it had been filtered from the index (which isn't the case).
Granted, that's just one example of a change that can't be explained by a filter, but it is an example.
There's a tendency here for members to assume that one factor or another is responsible for all their troubles or joys. IMHO, blaming a white-hat site's loss of rankings (or disappearance from the index) on a filter is likely to be an oversimplification when Google's "100 factors" can interact in so many different ways.
I ran into this on Matt Cutts's blog.
The only things we can think of (since we don't use any black hat SEO) are:
1.) We use titles and descriptions in our subsections to introduce the contents of our articles; these are the same as the titles and descriptions at the top of our articles and related articles, as well as the meta title and description.
I am monitoring many sites that use this same structure, and they have all been dropped way down in the SERPs. I have the feeling that these sites also suffer from a great deal of scraping; therefore, Google has dropped the real sites along with the scraper sites. It can no longer tell the difference between the real site and the scraper sites.
I agree that this smells like a stinking sitewide spam filter, and I resent the fact that a site with 248 original 300-word essays would be considered spam by anyone. Add to that 2,500 individual backlinks, and I should be collecting my dough in peace. The funny thing is, my revenue is not down anywhere near as much as it could be based on the traffic figures. At least there is still the Yahoo and MSN traffic. And bless MSN's little heart, because they are at least sending a visitor every 5 minutes or so, whereas Google has slowed down to about 1 every 8 hours.
On a search engine you often get only the title, so for an article the title should be there.
If the title of your article is NOT keywords, then why is it the title of the article?
And how can you not put a title at the top of the page?
The reason this helps you with Google is that they used to reward good design. If they are penalizing it now, then the whole world is upside down.
I can confirm this. If we use the filter command, everything returns to normal. Pages that were down in the hundreds return to the top ten. What's up? Anyone have any good ideas?
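For anyone who wants to check this themselves: assuming the "filter command" here means the &filter=0 parameter that switches off Google's duplicate/similar-results filtering, you just append it to the search results URL, for example:

http://www.google.com/search?q=your+keyphrase&filter=0

(your+keyphrase is a placeholder; substitute the query you are tracking).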
Being that my site is nowhere to be found in the index at this time, do you think it would be even more damaging to block my images folder from Google, or better yet do a frame-breaker? All of my Google hits are coming to my images, and it's really just fanning the flames of my frustration to see all of that bandwidth being consumed without a second look at my worthwhile content.
I also want to use the removal tool to wipe out my old domain, but from what I'm reading and learning, the removal tool doesn't work either.
I think this year I will send my kids out trick-or-treating as Google, being that it's the scariest monster I know of, ROTFLMAO!
- Do not use the removal tool. It is better to 404 the pages or use the "noindex" meta-tag (see the sketch after this list). The removal tool just hides things; they will reappear in the index in six months to haunt you. I also have the sneaking feeling that even if you use the removal tool, these pages can still count against you.
- Make sure to 301 your non-www to www (or vice versa).
- Check for bad outgoing links on your site. Bad actors often buy up expired domains and turn what was a good neighborhood into a bad one.
- Use Xenu to look for broken links, etc., and correct these issues.
- Look for any blatant issues. These can be found in the Google webmaster guidelines.
If all of the above is correct and your site seems clean, wait it out.
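To make the first two points concrete, here is a minimal sketch for an Apache server (example.com is a placeholder; adjust to your own setup). The "noindex" meta-tag is one line in each page's head:

<meta name="robots" content="noindex">

and the non-www to www 301 is a few lines in .htaccess:

# 301 every non-www request to the www hostname (mod_rewrite)
RewriteEngine On
RewriteCond %{HTTP_HOST} ^example\.com$ [NC]
RewriteRule ^(.*)$ http://www.example.com/$1 [R=301,L]

This is only one way to do it; if your host doesn't run Apache with mod_rewrite, the equivalent goes in your server config instead.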
This has nothing to do with your optimization. Google is just screwed up; there's no logical pattern to point to. Searching with Google now, you can find SERPs with old cloaking techniques, old banned spammer sites, and also a lot of dynamic pages with broken links or no relevant content.
This means the Google Guidelines really have nothing to do with this current "update", or whatever you want to call it...
I wish to thank Danny Sullivan for adding to the comments of the SearchEngineWatch blog a reference to my Open Letter today to Google Chief Eric Schmidt (with a direct link to our current thread):
Roundup Of Google Size Announcement Coverage
"Open Letter To Google Chief Eric Schmidt is from WebmasterWorld member reseller who has issues with Google's claims of being larger in terms of "unduplicated" pages. My story above gives examples of how some duplicate pages are already in Google. The complaint in partiular covers the fact that while duplicate pages exist, de-duplication efforts may also remove the original documents from Google, rather than mirrors."
Now for a biased opinion:
To say that my site is out and this other junk is considered higher "quality" is almost like saying that a filet mignon is of lower quality than a Mc hamburger. To keep the analogy going, the filet was prepared according to the quality and health guidelines, while the hamburger was not cooked at all, thrown on the floor, spat on, smashed onto the bun, and served up under a pile of trash fixins.
Yep, the removal console is quite dangerous. I made a change in robots.txt after removing the old stuff, and after that change Googlebot came along crawling the duplicates again.
This will not happen again. This time I am using robots.txt and .htaccess to block the dupe files...
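In case it helps anyone else, this is the shape of what I mean; /dupes/ is just an example path, use whatever directory your duplicate files actually live in. In robots.txt:

User-agent: *
Disallow: /dupes/

and, since robots.txt only asks politely, a rule in .htaccess so the files are really gone even for a bot that ignores it:

# Return 410 Gone for anything under /dupes/ (Apache mod_alias)
RedirectMatch gone ^/dupes/

The 410 tells Googlebot the pages are permanently gone, which should keep them from reappearing.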