Well it took about a month and a half, but that offending site I mentioned is no longer there.
site:www.domain.com returns zero results.
Looks like I spoke too soon. It is now about 40 days later and the site is back with the same old tricks - <font color=white>
This is disappointing - I want a quality Google, not this garbage.
The SEO site was caught, and removed from Google's index. But only for a week. Then it re-appeared, all cleaned up.
I just can't understand how they could get back that soon. Meanwhile all their supporting sites still use the very same techniques, and so do most of their customers.
If Brett is right, and Google cannot catch this automagically, then Matt Cutts and his crew need more resources.
The sad thing is, the blackhats get nothing but some free and untargeted traffic, the surfers leave the Internet (some for good), while the rest of us pay the price.
The good news is, what goes around comes around.
It's just a personal annoyance. They were removed and now they are back with the exact same trick.
A similar thing has happened with a site in my sector. It came back a few weeks ago after having been removed post Jagger. It had done nothing to fix the blatant spam (interlinking mirrors, masses of bought links).
However, just today I noticed that its PR is back at 0 as part of the directory PR update.
I am fairly confident that it will be goodbye to some spammy friends once this PR/backlink update is complete.
Let's count the ways:
- inline css.
- external css.
- on the page.
- on the page in nested tables.
- in javascript.
- in javascript that calls a css.
- in a css error that causes ie to render wrong.
- intentional gfx overlays, anyone?...
hehe.. that's just a start
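Just to make the first couple on that list concrete (purely as an illustration of the trick, not a how-to): on a white page, either of these puts a block of keywords in front of the crawler that a visitor will never see.

  <!-- inline css -->
  <p style="color:#ffffff">keyword keyword keyword</p>

  <!-- external css -->
  <p class="promo">keyword keyword keyword</p>
  /* in the stylesheet: */
  .promo { color: #fff; }

And the <font color=white> mentioned earlier in the thread is the same idea with even less effort.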
If Brett is right, and Google cannot catch this automagically, then Matt Cutts and his crew need more resources.
Also, it seems amusing that on the one hand, G seems to have the bandwidth and processing power of God, in terms of what it's able to do when indexing and understanding the www, yet on the other hand, it doesn't have enough resources to figure out hidden text.
Sure, there are many ways text can be hidden when you're inspecting code ... but is visual inspection (by machine) not an option? Especially comparing what the page actually renders against a simple pass through the code and the visible text on the page?
Or (perhaps more likely), does G simply not want to expend much energy going after hidden text?
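For what it's worth, a crude version of that machine "visual inspection" isn't hard to sketch. The script below is just my own illustration (nothing Google has described): it walks the DOM and flags any element whose computed text color matches the effective background color behind it. It would catch plain white-on-white, but not offsets, overlays, or CSS written out by javascript - which is probably the real answer: every extra hiding trick on the list above needs another rule.

  <script>
  // climb ancestors to find the first non-transparent background color
  function effectiveBackground(el) {
    while (el && el !== document.documentElement) {
      var bg = getComputedStyle(el).backgroundColor;
      if (bg && bg !== "transparent" && bg !== "rgba(0, 0, 0, 0)") return bg;
      el = el.parentElement;
    }
    return getComputedStyle(document.body).backgroundColor;
  }

  // flag elements whose text color matches the background behind them
  // (a crude heuristic, purely for illustration)
  document.querySelectorAll("body *").forEach(function (el) {
    var text = el.textContent.trim();
    if (!text) return;                        // nothing visible to hide
    if (getComputedStyle(el).color === effectiveBackground(el)) {
      console.log("possible hidden text:", el.tagName, text.slice(0, 60));
    }
  });
  </script>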
Sure, it's true we shouldn't pay much attention to competitors, and it's equally true that we probably all belong to a large "free-search-engine-traffic-addicts-anonymous" group ... given that, our concerns are still reasonable, if perhaps misguided.