
Google SEO News and Discussion Forum

Does all SEO work get you penalised now - not just dodgy tricks?
colin_h




msg:3051127
 5:46 am on Aug 18, 2006 (gmt 0)

I have a friend who works as a web designer in the same town as me. For many years we have had a mutual respect for each other, and because of this we both have links to each other's sites.

Now my sites have been through the wringer over the past 16 months or so, and his have not been so much as touched. He has held the same position for our "Town Name Web Designer" keywords since I got knocked off.

The only thing we do differently in our designing processes is that I actively SEO my work and he leaves his to luck, time, call it what you will ... not even a title tag, no incoming links ... nada!

My question is this ... Is Google targeting all branches of SEO techniques, not just the dodgy ones?

 

hutcheson




msg:3051154
 6:17 am on Aug 18, 2006 (gmt 0)

Google gives lists of artificial page rank promotion schemes that have been actively targeted (cloaking, hidden text, mutual-admiration-society link exchanges, etc.), but they've always said "any kind of artificial page rank inflation scheme" is risky.

If there are SEO techniques that don't involve some form of artificiality, they aren't common enough to be concerned about. So, at any time, Google could add a new item that can be DETECTED (and therefore treated as a negative factor in the genuine-page-rank algorithm). Very likely, among all the SEO tricks you did, at least one of them is now being negatively weighted.

The question isn't whether a given trick is "dodgy" enough. The question is always whether the presence of particular TRACKS (perhaps typical of a given TRICK) indicates poorer quality sites on Google's test searches, in the opinion of Google's QA reviewers.

So a particular coding practice could suddenly become a detriment to your site because low-quality page generators all got together and started using it (or, for that matter, because all the low-quality page generators in this month's QA searches happen to use it).

It's a natural effect of random human efforts to subvert a fixed member of a class of algorithms, combined with Google's necessarily incomplete analysis when choosing which member of that class is (on average) least successfully subverted today.

wanderingmind




msg:3051307
 9:38 am on Aug 18, 2006 (gmt 0)

It may not be all SEO tricks. But even normal low-intensity SEO could get caught...

I have a site where stuff is logically structured, and the only serious SEO is to give perfect keyword-rich titles - nothing weird, but perfect. No reciprocal linking at all. And with every data refresh, this site gets into trouble. Goes up, down, then up, then down...

It's making me seriously think about my titles... Should I deliberately tone them down? Or would I just be doing the wrong thing and digging my own grave?!

Quadrille




msg:3051329
 10:10 am on Aug 18, 2006 (gmt 0)

There are many, many sites around that do no 'active seo' and do fine, thank you.

These are sites that have been designed from the bottom up for the visitor, not the search engines.

Many sites that 'consciously' design this way and still fail forget that Google can be affected by non-SEO measures; code bloat is a classic example, poor internal navigation is another, inappropriate use of JS and Flash is another ... and there are many, many more "non-SEO" problems that can cause a site to fail.

Sites also fail through simple misunderstandings of 'best practice'. For example, many people (including me) advise "get listed in quality directories", and some people interpret that as needing to submit to 400 directories, most of which, on close inspection, are no more than MFA sites or link exchanges, or worse ... and there are many, many more "SEO best intention" problems that can cause a site to fail.

"Build A Better Site" has become a cliché - but it works, even for people who have never heard of SEO. For those who have, it's important to understand the why of SEO as well as the how, to get the best for your site.

[edited by: Quadrille at 10:12 am (utc) on Aug. 18, 2006]

soapystar




msg:3051348
 10:27 am on Aug 18, 2006 (gmt 0)

hutcheson hits it bang on. It's not whether what you're doing is a trick or not, but whether it coincides with what you find on spam sites. It's called profiling, and it's effective if you decide that large collateral damage is acceptable. You counter the collateral damage by upping the weighting for authoritative sites and some other twiddling that helps large sites. What you get is what we have now: large sites can get away with huge spamming, while little sites with little bang can be totally dropped for having their titles match their h1 tags that match their anchors.
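To make that concrete, here is a toy sketch in Python of the profiling idea: detectable "tracks" add up to a spam-likeness score, and a perceived-authority score offsets the penalty. All feature names, weights and numbers are illustrative assumptions of mine, not anything Google has published.

```python
# Hypothetical "tracks" that also show up on spam sites, with made-up weights.
SPAM_TRACK_WEIGHTS = {
    "title_equals_h1": 0.4,        # title, h1 and inbound anchors all identical
    "anchor_equals_title": 0.3,
    "hidden_text": 1.5,
    "reciprocal_link_farm": 0.8,
}

def rank_score(base_relevance, tracks, authority):
    """Relevance minus a spam-likeness penalty, softened by perceived authority."""
    spam_likeness = sum(SPAM_TRACK_WEIGHTS.get(t, 0.0) for t in tracks)
    # A high authority ("bang") score offsets the penalty, which is why big
    # sites get away with the same tracks that sink small sites.
    penalty = max(0.0, spam_likeness - authority)
    return base_relevance - penalty

small = rank_score(1.0, {"title_equals_h1", "anchor_equals_title"}, authority=0.1)
big   = rank_score(1.0, {"title_equals_h1", "anchor_equals_title", "hidden_text"}, authority=2.0)
print(small, big)   # 0.4 0.8 -- the big site outranks the small one despite more tracks
```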

Oliver Henniges




msg:3051420
 11:58 am on Aug 18, 2006 (gmt 0)

SEO means search engine optimization (optimizing websites for the algorithms of search engines, to be precise). The optimum is the peak of a mathematical curve, as I understand the definition. However, considering the many, many potential factors involved in the algos, this is not just a two-dimensional curve but a multi-dimensional space, with the 'optimum space' presumably looking something like the (n-1)-dimensional surfaces where the first derivative vanishes.
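Put loosely into symbols (my own sketch of that metaphor, not anything Google has published): if the ranking score is a function f of n ranking factors, each single condition $\partial f / \partial x_i = 0$ cuts out an (n-1)-dimensional surface, and the overall optimum lies where the whole gradient vanishes:

$$
\nabla f(x_1, \dots, x_n) \;=\; \left( \frac{\partial f}{\partial x_1}, \dots, \frac{\partial f}{\partial x_n} \right) \;=\; 0
$$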

And then factors like everflux or rotating datacenters come into play. That means the whole multidimensional corpus begins to spin, twist and get pounded. To me this means there is a pure core in the center where you are on the safe side. The more you try to optimize for the "fuzzy edges" of this corpus, the better your ranking may be, but the closer you also get to the "outside", where you run into the filters. And the exact boundary may change every second, so you have to make sure your website is "inside" the very second the spider visits. I think this makes SEO somewhat impossible.

Probably there is also a sort of "black hole" in the very center, where pure Flash animations, pages without a title, or sites without inbound links end up. Addressing that is SEF (search engine friendliness) chapter one, not SEO.

However, what does not fit into this picture is Google tracking user behaviour. Before that, you could roughly double your visits by doubling the number of pages. We all know there are many attempts to gain revenue by fooling thousands if not millions of people: if only 10 in a million fall into the trap, you've earned a few cents. Considering the wasted time of the other 999,990, this is simply immoral.

Thus I'm happy to see that this obviously does not work any more. Whenever the click-through percentage for a page stays below a certain level, its ranking tanks more and more.

So what we formerly knew as SEO has somewhat mutated into a mixture of usability and good old customer service. Personally, I like that.

colin_h




msg:3051450
 12:21 pm on Aug 18, 2006 (gmt 0)

<<So what we formerly knew as SEO has somewhat mutated into a mixture of usability and good old customer service. Personally, I like that.>>

Me too, but I ask you then ... why do thousands of sites still get away with the most blatant spamming techniques? I know a company that I worked for a few years back, and they are using huge amounts of crammed hidden text. They have never had a warning, nor a penalty ... and for one particular two-word keyphrase, Google rewards them with the number 1 position (out of half a billion competitors). I'm fairly sure they would have been reported, as they're in a top sector ... so, again, why have they not been hit?

The only thing they haven't done over the years is change a thing. Their cache shows 15th August 2006 and they have dominance over their particular market keywords.

All I have are title, description & keyword tags, and I've been struggling to return for 16 months ... oh, where's my cat, I'm gonna beat it ;-)

All the Best

Col :-)

soapystar




msg:3051505
 1:07 pm on Aug 18, 2006 (gmt 0)

Because the filters are micro-based, so large-scale spamming, such as large networks, works very nicely. Also because, to take certain types of sites out of the danger zone, they weight for things such as perceived authority, which you can still fake. So if you score in that sector, you avoid being dragged into the filters and appear to get away with the same things that take down sites without the same bang factor.

Ride45




msg:3051590
 2:23 pm on Aug 18, 2006 (gmt 0)

I think it's a matter of when and how you do your SEO, not necessarily what you do.
I say this because large sites can get away with spammy stuff, while little guys seem to get nailed.

Little sites will often over-optimize too soon -> that's a red flag (too many links too soon, most of them reciprocal or paid for; too many pages; poor AdSense-to-content ratio; etc.)

Little sites will often game search engines by constantly tweaking H1s, titles, keyword density, etc., while larger sites appear more stable because they have likely gone through alpha -> beta -> live deployments and therefore require few adjustments once indexed, presenting both users and search engines with a stable and well-thought-out information architecture.

jakegotmail




msg:3051605
 2:29 pm on Aug 18, 2006 (gmt 0)

great point ride.

soapystar




msg:3051731
 3:52 pm on Aug 18, 2006 (gmt 0)

I say this because large sites can get away with spammy stuff, while little guys seem to get nailed.

Old little sites get hit too. The only common factor that pulls you out of the danger zone is a bang (authority) factor!

The fact that some sites can be marginally SEO'd and get nuked, while others can cheat to their hearts' content and ride the SERPs, is the point.

trinorthlighting




msg:3051796
 4:31 pm on Aug 18, 2006 (gmt 0)

Here is what we do to SEO when we build a new site.

1. Pick a program that creates pages that are W3C compliant. (This helps with crawlability.)

2. Write unique content for each page, add all meta tags and title tags, and check the page to see that it's W3C compliant (see the sketch after this post).

3. Add one link from one of our other sites mentioning the new site.

4. Sit back and add one page of content a day.

We always rank well and solidly, and never have fluctuations during "data refreshes".

For an older site that we SEO, when we have to clean it up, we start from the bottom level and work our way to the top. This method seems to work best for us.
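Here is a minimal sketch, in Python, of steps 1, 2 and 4: generate each page with its own title and meta tags from unique copy, then run it through the W3C validator before publishing. The template, the build_page helper and the example values are illustrative assumptions, not our actual program.

```python
PAGE_TEMPLATE = """<!DOCTYPE html>
<html lang="en">
<head>
  <meta charset="utf-8">
  <title>{title}</title>
  <meta name="description" content="{description}">
  <meta name="keywords" content="{keywords}">
</head>
<body>
  <h1>{title}</h1>
  {content}
</body>
</html>
"""

def build_page(title, description, keywords, content):
    """Return one page with its own unique title and meta tags (step 2)."""
    return PAGE_TEMPLATE.format(
        title=title, description=description,
        keywords=keywords, content=content,
    )

# One new page of content a day (step 4); check each against the W3C
# validator (https://validator.w3.org/) before publishing (steps 1-2).
page = build_page(
    title="Blue Widgets - Acme Widget Co",
    description="Hand-made blue widgets, shipped worldwide.",
    keywords="blue widgets, widget shop",
    content="<p>Unique copy written for this page only.</p>",
)
print(page)
```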

Quadrille




msg:3051832
 4:54 pm on Aug 18, 2006 (gmt 0)

Sounds a pretty sensible approach to me!

While there's always the odd site that seems to 'get away with it', most don't manage it forever, and many not for long at all.

And luckily for us (and Google), most of the sites that cheat are such rubbish that no one would buy from them anyway! I've long argued that if most spammers turned their skills to honest site building, they'd probably make much more money.

Yes, I'm sure there are exceptions; but that's the point - they are exceptions. For most sites, good old-fashioned honest hard work is the most successful approach, and probably always will be.

twebdonny




msg:3051950
 6:37 pm on Aug 18, 2006 (gmt 0)

>>> For most sites, good old-fashioned honest hard work is the most successful approach <<<

Sounds great until you are penalized without any reason.

walkman




msg:3051951
 6:39 pm on Aug 18, 2006 (gmt 0)

Many sites are going back and forth for no apparent reason, so don't go blaming x or y (I did that, and I was wrong). It seems that having Wikipedia, DMOZ and plenty of blog links serves as a vaccine; otherwise, there's no guarantee.

Jane_Doe




msg:3051981
 7:00 pm on Aug 18, 2006 (gmt 0)

Add one link from one of our other sites mentioning the new site.

4. Sit back and add one page of content a day.

We always rank well and solidly, and never have fluctuations during "data refreshes".

I can't imagine what keywords you are going after where you can start off with only one link and your site(s) rank well.

Kufu




msg:3051986
 7:05 pm on Aug 18, 2006 (gmt 0)

Here is one thing I've noticed:

Frequent title changes are not good. By this I mean that when you come up with a title for a page, and it gets picked up by Google, just let it stay there for a couple of months before you modify it again. Even though fresh content is important, 'fresh' titles (I have found) are not.

Come up with a good title, and stick to it.

SEO is not something that is going to get a site in trouble--bad SEO is, however. I do SEO for a living and inevitably talk to other SEOs who seem to have no clue what is going on and are unintentionally hurting their clients' sites. SEO doesn't necessarily mean doing things to get around the algorithm, but it does mean preparing a site to make it easy for the search engines to know what the site is about. It is really as simple as that. Even though Google (and the other engines) often have ridiculous SERPs, taking the simple and slow approach is always going to be OK.

Oliver Henniges




msg:3052008
 7:26 pm on Aug 18, 2006 (gmt 0)

> Here is what we do to SEO when we build a new site.

I'd absolutely agree with your points, but I doubt the term "SEO" is really applicable to that activity. You cover the above-mentioned chapter one of SEF and then turn towards "good old-fashioned honest hard work", as Quadrille put it. Indeed "the most successful approach."

Technically, SEO is dead; what remains is an empty marketing concept. If it helps to win new customers for an existing company: pecunia non olet. But if SEO gives young students the illusion that they might become millionaires by SEOing instead of reading books, we should all send out a big WARNING.

> why do thousands of sites still get away with the most blatant spamming techniques?

Because even Google is not perfect. It's just a couple of human beings trying to make the world a little bit better, like most of us. Leave that part to the Google engineers, and listen to the end of "The End" on Abbey Road.

> great point ride.

Indeed. Large sites have far more means to test and evaluate the fuzzy edges, and this is justified, because generally quite a lot of "good old-fashioned honest hard work" is necessary to establish a large site (or even a whole network). And if you have invested that much work, you normally act more carefully, because you cannot risk falling into the filters. As long as the site stays sufficiently "inside", it will remain in the index. And this is OK, because sooner or later the makers of such large sites will realize the potential of conversion rates and user needs, so that even a project begun as grey-hat might one day be of considerable value to its visitors.

KenB




msg:3052010
 7:27 pm on Aug 18, 2006 (gmt 0)

Many sites are going back and forth for no apparent reason, so don't go blaming x or y (I did that, and I was wrong). It seems that having Wikipedia, DMOZ and plenty of blog links serves as a vaccine; otherwise, there's no guarantee.

If DMOZ and Wikipedia were a vaccine, my site would be totally immune to everything. Every language edition of Wikipedia provides hundreds of links to my site (all voluntary and not solicited by me). DMOZ also loves my site.

My site is very old (started in 1995 and on its current domain since 1999); has thousands of pages of content; has thousands upon thousands of high-quality volunteer inbound links; does not participate in link-swapping schemes; and avoids SEO tactics. Yet it was severely punished by the July 27th update.

What I think we are seeing is Google accepting a very high rate of collateral damage for only a very modest containment of spammy SEO sites, and, as has been pointed out, many of the worst SERP spammers are getting away with doing whatever they want.

I honestly believe that the more Google tries to focus on sites that use grey- to black-hat SEO, the less success they have at dealing with SERP spam in general.

Bewenched




msg:3052021
 7:34 pm on Aug 18, 2006 (gmt 0)

totally dropped for having their titles match their h1 tags that match their anchors.

Why would having the title in the title bar and the heading of the page in an h1 be so bad just because they're the same? That's exactly the way it should be, right?

Your title is your title.

wmuser




msg:3052038
 7:46 pm on Aug 18, 2006 (gmt 0)

Good points Ride45

trinorthlighting




msg:3052107
 8:48 pm on Aug 18, 2006 (gmt 0)

Adding one link and letting it go works. As we build content, Google starts to trickle traffic to us. After that we get the natural links.

jtara




msg:3052117
 8:55 pm on Aug 18, 2006 (gmt 0)

IMO, Google has historically relied far too much on a few simple (or simplistic) concepts - keywords and links, specifically. While Google's treatment of links (supposedly) revolutionized search, it produces an experience that is still frustrating for users and is an easy target for "optimization".

I think in the grand scheme of the future of search, Google's approach will prove to have been a blip. It was absolutely the wrong approach, but it was useful at the time.

Where we NEED to go is for search engines to UNDERSTAND content.

There's a good article on C|Net today about new developments in search engines, using AI techniques:

[news.com.com...]

A great example of what is being missed by today's search engines is their ignorance of noise words. Typically, they are just thrown out. An example used in the article is "books by children". Most search engines will throw out the "by" and just use the keywords anyway. The search engine has no idea that you are searching for books by children - just that you are searching for the keywords "books" and "children". Do this search on Google and you will see what I mean - you will simply get links to pages about books FOR children.

This is silly. Though a full understanding of content using AI techniques is the holy grail, a lot could be done today using heuristics. A lot of common noise words convey an awful lot of meaning when combined with keywords. A search engine could make use of this so that when you search for books BY children, that is what you get. Some search results could be 1000% better (defined by me as: you get some useful results instead of NO useful results) simply by programming in some specific cases where the algorithm would choose to keep the noise word.
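Here is a toy sketch, in Python, of that kind of heuristic: keep a noise word when it sits between two content words and belongs to a short list of meaning-bearing prepositions. The word lists and the rule are illustrative assumptions of mine, not how any real engine actually works.

```python
STOP_WORDS = {"a", "an", "the", "of", "for", "by", "in", "to"}
# Prepositions whose presence between two content words usually changes meaning.
MEANINGFUL_WHEN_BETWEEN = {"by", "for", "about", "without"}

def query_terms(query):
    """Return the terms to match, keeping meaning-bearing noise words."""
    words = query.lower().split()
    kept = []
    for i, word in enumerate(words):
        if word not in STOP_WORDS:
            kept.append(word)                      # ordinary keyword
        elif (word in MEANINGFUL_WHEN_BETWEEN
              and 0 < i < len(words) - 1
              and words[i - 1] not in STOP_WORDS
              and words[i + 1] not in STOP_WORDS):
            kept.append(word)                      # e.g. the "by" in "books by children"
    return kept

print(query_terms("books by children"))    # ['books', 'by', 'children']
print(query_terms("the history of rome"))  # ['history', 'rome']
```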

One good point the article makes is that current search engines force us to alter our language and the way we think to accommodate search engines. This is completely backwards from the way it should be, and search WILL evolve so that it accommodates our language and the way we think.

The question for Googleites (can we still say that? :) ) is will Google evolve, or be left in the dust? The article suggests that Google and other mainstream search engines will slowly evolve, but what is really needed is an entirely new architecture.

Kufu




msg:3052124
 9:04 pm on Aug 18, 2006 (gmt 0)

jtara

Just to throw a wrench into your comments there: how about searching for "books by children" (in quotation marks)?

I don't disagree that the meaning of the content is ignored, but your example is not a very good one. :)

[edited by: Kufu at 9:04 pm (utc) on Aug. 18, 2006]

colin_h




msg:3052146
 9:16 pm on Aug 18, 2006 (gmt 0)

I agree with everyone on this forum ...

I think that, if Google want to be the best, they're gonna have to listen to reports of spam, check them with human eyes ... not their duff-bots ... and finally ban the cheats for life - no warnings - and also ban anything else registered in the same name or address.

Thanks to everyone for getting involved in this post ... the forum controllers changed my title (it was a lot more witty ;-)).

Col ;-)

Lorel




msg:3052170
 9:41 pm on Aug 18, 2006 (gmt 0)

Colin,

Have you checked for scrapers and hijackers? They can cause effects that look like a Google penalty when they aren't.

KenB




msg:3052209
 10:27 pm on Aug 18, 2006 (gmt 0)

Colin,

Have you checked for scrapers and hijackers? They can cause effects that look like a Google penalty when they aren't.

This is a very good point. Site scrapers and content hijackers are becoming extremely aggressive. In one case, a scraper on a single IP made over 16,000 requests to my site over the last couple of days. The bot was detected early on and its requests were denied, but it is a good example of how aggressive they are getting.

Given how prolific site scrapers are, it is very important to monitor one's logs and to build comprehensive bad-bot detection and banning routines.
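A minimal sketch of that kind of log-based detection, assuming a combined-format access log whose first field is the client IP; the threshold, file name and helper are illustrative examples, not my actual routines.

```python
import re
from collections import Counter

# First field of a typical Apache/nginx access-log line is the client IP.
LOG_LINE = re.compile(r"^(\d+\.\d+\.\d+\.\d+) ")

REQUESTS_LIMIT = 5000   # anything above this from one IP looks like a scraper

def suspicious_ips(log_path, limit=REQUESTS_LIMIT):
    """Return {ip: request_count} for IPs exceeding the request limit."""
    counts = Counter()
    with open(log_path) as log:
        for line in log:
            match = LOG_LINE.match(line)
            if match:
                counts[match.group(1)] += 1
    return {ip: n for ip, n in counts.items() if n > limit}

if __name__ == "__main__":
    for ip, n in suspicious_ips("access.log").items():
        # Feed these into a deny list (.htaccess, firewall, etc.) after review.
        print(f"{ip} made {n} requests")
```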

jomaxx




msg:3052230
 10:55 pm on Aug 18, 2006 (gmt 0)

But Colin, that situation you described in post 1 wasn't related to SE spamming, was it? Is it even clear that you should have the #1 position instead of that other designer?

Bennie




msg:3052280
 11:51 pm on Aug 18, 2006 (gmt 0)

jtara, do you remember what the SERPs looked like before Google? Blip-Blip-Blip-Blop?

If there was ever a time when an engine was actually working to address the content issue, it would be now. Whether they are having problems at the other end is irrelevant (that's the game); Google has come a lot closer to ranking real content than any other SE.

hutcheson




msg:3052285
 11:53 pm on Aug 18, 2006 (gmt 0)

>if Google want to be the best, they're gonna have to listen to reports of spam, check them with human eyes ... not their duff-bots ... and finally ban the cheats for life - no warnings - and also ban anything else registered in the same name or address.

This program is impractical and would be ineffective. Spammers are already regularly giving false identification to the national registrars. And ... I don't have that common a name, but there are about a half-dozen people who share it and have a significant web presence. Talk about collateral damage!

Unlike the current scheme, where, despite some claims made here, there really isn't any collateral damage at all: just regularly shifting collateral benefits (as the positions shift, different websites get temporary placement to which none of them had any just or permanent claim).
