Now my sites have been through the wringer over the past 16 months or so and his have not been so much as touched. He has held the same position for our "Town Name Web Designer" keywords since I got knocked off.
The only thing we do differently in our designing processes is that I actively SEO my work and he leaves his to luck, time, call it what you will ... not even a title tag, no incoming links ... nada!
My question is this ... Is Google targeting all branches of SEO techniques, not just the dodgy ones?
If there are SEO techniques that don't involve some form of artificiality, they aren't common enough to be concerned about. So, at any time, Google could add a new item that can be DETECTED (and therefore treated as a negative factor in the genuine-page-rank algorithm). Very likely, among all the SEO tricks you used, at least one of them is now being negatively weighted.
The question isn't whether a given trick is "dodgy" enough. The question is always whether the presence of particular TRACKS (perhaps typical of a given TRICK) indicates poorer quality sites on Google's test searches, in the opinion of Google's QA reviewers.
So a particular coding practice could suddenly become a detriment to your site, because low-quality-page-generators all got together to start using it (or, for that matter, because all the low-quality-page-generators in this month's QA searches happen to use it.)
It's a natural effect of random human efforts to subvert a fixed member of a class of algorithms, combined with necessarily incomplete analysis in the choice of which member of the class is (on average) least successfully subverted--today.
I have a site where the content is logically structured, and the only serious SEO is to give it perfect keyword-rich titles - nothing weird, but perfect. No reciprocal linking at all. And with every data refresh, this site gets into trouble. Goes up, down, then up, then down...
It's making me seriously think about my titles. Should I deliberately tone them down? Or would I just be doing the wrong thing and digging my own grave?!
These are sites that have been designed from the bottom up for the visitor, not the search engines.
Many sites that 'consciously' design this way and still fail forget that Google can be affected by non-SEO measures: code bloat is a classic example; poor internal navigation is another; inappropriate use of JS and Flash is yet another ... and there are many, many more "non-seo" problems that can cause a site to fail.
Sites also fail through simple misunderstandings of 'best practice'. For example, many people (including me) advise "get listed in quality directories"; some interpret that as needing to submit to 400 directories, most of which, on close inspection, are no more than MFA sites or link exchanges, or worse ... and there are many, many more "seo best intention" problems that can cause a site to fail.
"Build A Better Site" has come to be a cliche - but it works, even for people who have never heard of SEO. For those that have, it's important to understand the why of SEO as well as the how, to get the best for your site.
And then factors like everflux or rotating datacenters come into play. That means the whole multidimensional corpus begins to spin, twist, and get pounded. To me this means that there is a pure core in the center, where you are on the safe side. The more you try to optimize for the "fuzzy edges" of this corpus, the better your ranking may be, but also the closer you get to the "outside" where you run into the filters. And the exact boundary may change every second, so you have to make sure your website is "inside" the very second the spider visits. I think this makes SEO somewhat impossible.
Probably there is also a sort of "black hole" in the very center, where pure Flash animations, pages without a title, or sites without inbound links are located. Targeting this is SEF chapter one, not SEO.
However, what does not fit into this picture is Google tracking user behaviour. Before this, you could roughly double your visits by doubling the number of pages. We all know that there are many attempts to gain revenue by fooling thousands if not millions of people: if only 10 in a million fall for it, you have earned a few cents. Considering the wasted time of the other 999,990, this is simply immoral.
Thus I'm happy to see that this obviously does not work any more. Whenever your percentage of click-throughs stays below a certain level, your ranking will tank more and more for that page.
So what we formerly knew as SEO, has somewhat mutated to a mixture of usability and good old customer-service. Personally, I like that.
Me too, but then I ask you ... why do thousands of sites still get away with the most blatant spamming techniques? I know a company that I worked for a few years back, and they are using huge amounts of crammed hidden text. They have never had a warning, nor a penalty ... and for one particular two-word keyphrase, Google rewards them with the number 1 position (out of half a billion competing results). I'm fairly sure they would have been reported, as they're in a top sector ... so, again, why have they not been hit?
The only thing they haven't done over the years is change a thing. Their cache shows 15th August 2006 and they have dominance over their particular market keywords.
All I have is a title, description & keyword tags, and I've been struggling to return for 16 months ... oh, where's my cat, I'm gonna beat it ;-)
All the Best
Little sites will often over-optimize too soon -> that's a red flag (too many links too soon, with most reciprocal or paid for; too many pages; poor adsense-to-content ratio; etc.)
Little sites will often game search engines by constantly tweaking H1s, titles, keyword density, etc., while larger sites appear more stable, likely because they have gone through alpha -> beta -> live deployments and therefore require little adjustment once indexed, presenting to both users and search engines a stable and very well thought-out information architecture.
I say this because large sites can get away with spammy stuff, while little guys seem to get nailed.
Old little sites get hit too. The only common factor that pulls you out of the danger zone is a bang (authority) factor!
The fact that some sites can be marginally SEO'd and get nuked, while others can cheat to their hearts' content and ride the SERPs, is the point.
1. Pick a program that creates pages that are W3C compliant. (This helps with crawlability.)
2. Write unique content for each page, add all meta tags and title tags, and check the page to see that it's W3C compliant.
3. Add one link from one of our other sites mentioning the new site.
4. Sit back and add one page of content a day.
We always rank well and solidly and never have fluctuations during "data refreshes".
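Step 2 of the checklist above can be partly automated. A minimal sketch in Python, using only the standard library's `html.parser` (the class name, the `check_page` helper, and what counts as a "problem" are my own illustrative choices, not something from the post):

```python
from html.parser import HTMLParser

class PageCheck(HTMLParser):
    """Collects the <title> text and the meta description from one HTML page."""
    def __init__(self):
        super().__init__()
        self.title = None
        self.meta_description = None
        self._in_title = False

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "title":
            self._in_title = True
        elif tag == "meta" and a.get("name", "").lower() == "description":
            self.meta_description = a.get("content")

    def handle_endtag(self, tag):
        if tag == "title":
            self._in_title = False

    def handle_data(self, data):
        if self._in_title:
            self.title = (self.title or "") + data

def check_page(html):
    """Return a list of problems found; an empty list means the page passes."""
    p = PageCheck()
    p.feed(html)
    problems = []
    if not (p.title and p.title.strip()):
        problems.append("missing <title>")
    if not p.meta_description:
        problems.append("missing meta description")
    return problems
```

Full W3C validation would still need the actual validator; this only catches the "not even a title tag" class of omission mentioned earlier in the thread.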
For an older site that we SEO, when we have to clean it up, we start from the bottom level and work our way to the top. This method seems to work best for us.
While there's always the odd site that seems to 'get away with it', most don't manage it forever, and many not for long at all.
And luckily for us (and Google), most of the sites that cheat are such rubbish that no-one would buy from them anyway! I've long argued that if most spammers turned their skills to honest site building, they'd probably make much more money.
Yes, I'm sure there are exceptions; but that's the point - they are exceptions. For most sites, good old-fashioned honest hard work is the most successful approach, and probably always will be.
sounds great until you are penalized without any
> 3. Add one link from one of our other sites mentioning the new site.
> 4. Sit back and add one page of content a day.
> We always rank well and solidly and never have fluctuations during "data refreshes"
I can't imagine what keywords you are going after where you can start off with only one link and your site(s) rank well.
Frequent title changes are not good. By this I mean that when you come up with a title for a page, and it gets picked up by Google, just let it stay there for a couple of months before you modify it again. Even though fresh content is important, 'fresh' titles (I have found) are not.
Come up with a good title, and stick to it.
SEO is not something that is going to get a site in trouble--bad SEO is, however. I do SEO for a living and inevitably talk to other SEOs who seem to have no clue what is going on and are unintentionally hurting their clients' sites. SEO doesn't necessarily mean doing things to get around the algorithm; it means preparing a site to make it easy for the search engines to know what the site is about. It is really as simple as that. Even though Google (and the other engines) often have ridiculous SERPs, taking the simple and slow approach is always going to be OK.
I'd absolutely agree with your points, but I doubt the term "SEO" is really applicable to that activity. You cover the mentioned chapter one of SEF and then turn towards "Good old fashioned honest hard work" as Quadrille put it. Indeed "the most successful approach."
Technically, SEO is dead; what remains is an empty marketing concept. If it helps to get new customers for an existing company: pecunia non olet (money doesn't stink). But if SEO gives young students the illusion that they might become millionaires by SEOing instead of reading books, we should all send a big WARNING.
> why do still thousands of sites get away with the most blatant spamming techniques?
Because even google is not perfect. It's just a couple of human beings trying to make the world a little bit better, like most of us. Leave that part to the google engineers, and listen to the end of "The End" on Abbey Road.
> great point ride.
Indeed. Large sites have far more means to test and evaluate the fuzzy edges; and this is justified, because generally quite a lot of "good old fashioned honest hard work" is necessary to establish a large site (or even a whole network). And if you have invested that much work, you normally act more carefully, because you cannot risk tripping the filters. As long as the site stays "inside" sufficiently, it will remain in the index. And this is OK, because sooner or later the makers of such large sites will realize the potential of conversion rates and user needs, so that even if it began as a grey-hat project, the large site might one day be of considerable value to its visitors.
Many sites are going back and forth for no apparent reason, so don't go blaming x or y (I did that, and I was wrong). It seems that having Wikipedia, DMOZ and plenty of blog links serves as a vaccine; otherwise, there's no guarantee.
My site is very old (started in 1995, and on its current domain since 1999); has thousands of pages of content; has thousands upon thousands of high-quality volunteer inbound links; does not participate in link swapping schemes; and avoids SEO tactics. Yet it was severely punished by the July 27th update.
What I think we are seeing is Google accepting a very high rate of collateral damage for only very modest management of SEO-spammy sites, and, as has been pointed out, many of the worst SERP spammers are getting away with doing whatever they want.
I honestly believe that the more Google tries to focus on sites that try grey- to black-hat SEO, the less success they have at dealing with SERP spam in general.
I think in the grand scheme of the future of search, Google's approach will prove to have been a blip. Absolutely the wrong approach, but was useful at the time.
Where we NEED to go is for search engines to UNDERSTAND content.
There's a good article on C|Net today about new developments in search engines, using AI techniques:
A great example of what is being missed in today's search engines is ignorance of noise words. Typically, they are just thrown out. An example used in the article is "books by children". Most search engines will throw out the "by", and just use keywords anyway. The search engine has no idea that you are searching for books by children - just that you are searching for keywords "books" and "children". Do this search on Google, and you will see what I mean - you will simply get links to pages about books FOR children.
This is silly. Though a full understanding of content using AI techniques is the holy grail, a lot could be done today using heuristics. A lot of common noise words convey an awful lot of meaning when combined with keywords. A search engine could make use of this so that when you search for books BY children, that is what you get. Some search results could be 1000% better (defined by me as: you get some useful results, instead of NO useful results...) simply by programming in some specific cases where the algorithm would choose to keep the noise word.
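The "keep the meaningful noise word" heuristic described above could be sketched in a few lines of Python. The STOPWORDS and RELATIONAL word lists here are illustrative assumptions of mine, not anything from the article:

```python
# Words a typical engine discards before matching.
STOPWORDS = {"the", "a", "an", "of", "in", "on", "by", "for", "and", "to"}

# Stopwords that carry relational meaning between two keywords,
# so they are worth keeping: "books BY children" != "books FOR children".
RELATIONAL = {"by", "for", "about", "without"}

def tokenize_query(query):
    """Drop stopwords, except relational ones sitting between two terms."""
    terms = query.lower().split()
    kept = []
    for i, t in enumerate(terms):
        if t not in STOPWORDS:
            kept.append(t)
        elif t in RELATIONAL and 0 < i < len(terms) - 1:
            kept.append(t)  # keep the noise word: it changes the intent
    return kept
```

With this, "books by children" and "books for children" tokenize to different queries, while filler like "the" and "of" still disappears.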
One good point that the article makes is that current search engines force us to alter our language and the way we think to accommodate search engines. This is completely backwards to the way it should be, and search WILL evolve so that it accommodates language and the way we think.
The question for Googleites (can we still say that? :) ) is: will Google evolve, or be left in the dust? The article suggests that Google and other mainstream search engines will slowly evolve, but what is really needed is an entirely new architecture.
I think that, if Google want to be the best, they're gonna have to listen to reports of spam, check them with human eyes ... not their duff-bots and finally ban the cheats for life ... no warnings and also anything else that is registered in the same name or address.
Thanks to everyone for getting involved in this post ... the forum controllers changed my title (it was a lot more witty ;-)).
Have you checked for scrapers and hijackers? They can cause effects that look like a Google penalty when they aren't.
This is a very good point. Site scrapers and content hijackers are becoming extremely aggressive. In one case, I had one site scraper from one IP make over 16,000 requests to my site over the last couple of days. The bot was detected early on and its requests were denied, but it is a good example of how aggressive they are getting.
Given how prolific site scrapers are becoming, it is very important to monitor one's logs and to build comprehensive bad-bot detection and banning routines.
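A minimal sketch of such a per-IP banning routine in Python (the class name, request threshold, and window size are made-up numbers for illustration; a real deployment would hook something like this into the web server or a log tailer):

```python
import time
from collections import defaultdict, deque

class BotThrottle:
    """Bans any IP that exceeds max_requests within a sliding time window."""
    def __init__(self, max_requests=300, window_seconds=60):
        self.max_requests = max_requests
        self.window = window_seconds
        self.hits = defaultdict(deque)   # ip -> timestamps of recent requests
        self.banned = set()

    def allow(self, ip, now=None):
        """Record one request from ip; return False if it should be denied."""
        if ip in self.banned:
            return False
        now = time.monotonic() if now is None else now
        q = self.hits[ip]
        q.append(now)
        # Drop timestamps that have aged out of the window.
        while q and now - q[0] > self.window:
            q.popleft()
        if len(q) > self.max_requests:
            self.banned.add(ip)          # e.g. the 16,000-request scraper
            return False
        return True
```

This catches the blunt, high-volume scraper described above; slower, distributed scrapers need more signals (user-agent, request patterns) than a simple rate count.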
If there was ever a time when an engine was actually working to address the content issue, it would be now. Whether they are having problems at the other end is irrelevant (that's the game); Google have come a lot closer to ranking real content than any other SE.
This program is impractical and would be ineffective. Spammers are already regularly giving false identification to the national registrars. And ... I don't have that common a name, but there are about a half-dozen people who share it and have a significant web presence. Talk about collateral damage!
Unlike the current scheme, where, despite some claims made here, there really isn't any collateral damage at all: just regularly shifting collateral benefits (as the positions shift, different websites get temporary placement to which none of them had any just or permanent claim.)