Quality According to Google - Official "Guidance" on Panda Update
netmeg




msg:4308894
 7:02 pm on May 6, 2011 (gmt 0)

Finally a post from Webmaster Central about what Google thinks about quality:

[googlewebmastercentral.blogspot.com...]

 

indyank




msg:4309163
 4:20 am on May 7, 2011 (gmt 0)

How much quality control is done on content?
Does this article have spelling, stylistic, or factual errors?
Was the article edited well, or does it appear sloppy or hastily produced?


The first question covers the other two.

Does this article contain insightful analysis or interesting information that is beyond obvious?
Does the article provide original content or information, original reporting, original research, or original analysis?
Does the page provide substantial value when compared to other pages in search results?


Another set of similar questions.


Are the pages produced with great care and attention to detail vs. less attention to detail?
Does this article provide a complete or comprehensive description of the topic?


One more set of similar questions.

In how many different ways has the same question been reworded? This article seems to be a good example of what it defines as poor quality.

The only good point is the acceptance that it is difficult to measure quality, but a half-baked algo to address Panda is no good, and the post seems to have been hastily written. The algo fails the Panda objectives.

[edited by: indyank at 4:31 am (utc) on May 7, 2011]

tedster




msg:4309165
 4:30 am on May 7, 2011 (gmt 0)

What we've seen so far in the algorithm is just a beginning. For me, that was the main take-away from the article. And even that isn't really new - Matt Cutts already commented that algorithmic assessment of website quality is going to be a year-long focus at Google.

Demand Media is taking some very serious steps with its eHow property (reference: [webpronews.com]) that sound like they're along the lines that this blog article recommends. Anyone serious about their website as a business should be doing the same, IMO. Google is not going to reverse this change, although they certainly will improve on it.

indyank




msg:4309166
 4:30 am on May 7, 2011 (gmt 0)

In my comment above, I have grouped similar sentences to illustrate different sets of examples.

I have just grouped what I felt are similar questions into separate blocks. Those blocks are picked from the original Google blog post. I have added very little of my own text.

But it does convey what I am trying to say.

If Google were to treat a similar kind of article as low quality because the words have been copied from an article on another page, then an algo designed for Panda's objectives is not a good measure of quality.

walkman




msg:4309168
 5:04 am on May 7, 2011 (gmt 0)

And there's the problem I see. My site that was hit hard really has very little to do with articles, and certainly isn't a site that anyone would need to worry about using a credit card on. Imagine a place that allows people to freely download original works of art, for instance. While the site has some text on it, it isn't meant to present factual information of anything, or present even one side of anything, much less two. It doesn't ask anyone to spend any money on the site, so there's no need to trust it with a credit card. There's actually very little in that post that has anything to do with my site that was hit, and yet it was. Whatever, Google.

So imagine the manual raters that provided the seed info rating your site or mine. Almost nothing applies to my site; Panda apparently was designed to take out the MFA and content farm sites.


I doubt spelling matters; YouTube, after all, went way up in the rankings. And I don't need to tell you about the quality of the comments or most of their videos.

But I really think Google could do better to communicate the quality of sites within WMT without compromising their "secret sauce". Not all affected sites deserve to be treated with contempt. Here's hoping.

What gets me and cracks me up at the same time is this: these are the same people that ranked eHow, Mahalo and the dozens of clones high for many, many years. In fact Amit, Matt Cutts and friends are reason #1 why all content farms flourished, as Google bragged about search improvements with the same confidence as with Panda. So we have to try not to take it personally.

Affected webmasters are panicked and quick fixes will not work straight away. The factors are more balanced and conservative than this.

Why though? Only a timed penalty or time needed to recalculate any possible quality score (?) makes sense. It makes no sense to keep going down when you are lowering the number of pages, adding content, fixing internal links and when it seems like no 'average' webmaster is coming out of Panda. We're on week 9 and Google has taken the pages many times. Something is up.

Robert Charlton




msg:4309170
 5:20 am on May 7, 2011 (gmt 0)

indyank - I'm not exactly sure of the point you are making, and please forgive in advance if I'm misunderstanding you.

Some of Amit Singhal's questions do begin with similar wording patterns....

Does this article contain...
Does the article provide...
Does the page provide...

Are you suggesting that this similarity of wording in a list is perhaps equivalent to eHow's excessively repetitive article titles, in which eHow was targeting minute variations of longtail queries?

eHow's titles might often fall into mechanistic repetitions and go something like this....

How can I tell if my dog has fleas?
How can I tell if my dog has fleas and lice?
How can I tell if my spotted dog has fleas?
How can I tell if my spotted dog has fleas and lice?
How can I tell if my dog has been scratching at its fleas and flea bites?

Note, in contrast, that each of Singhal's questions has an additional 10 words, on average, beyond the opening interrogative clause to differentiate the substance of the question. I think that Singhal's points are quite nuanced, quite a bit different from the sameness of eHow's clustered topics.

Apart from that, Singhal's questions are simply questions in a list on a page, and each question would probably support a unique essay or long discussion, perhaps a book, about aspects of site content and construction.

They are not a list of article titles, obviously playing games with an algorithm, as eHow's questions very often are. Singhal addresses that with this one particular point...

Does the site have duplicate, overlapping, or redundant articles on the same or similar topics with slightly different keyword variations?

walkman




msg:4309172
 5:30 am on May 7, 2011 (gmt 0)

Some of Amit Singhal's questions do begin with similar wording patterns....

These are the questions that were given to manual raters to determine the "good" sites, if I'm not mistaken. Based on the results, Google designed Panda.

indyank




msg:4309176
 5:57 am on May 7, 2011 (gmt 0)

Robert, isn't there an overlap in those questions?

Example:

How much quality control is done on content?

Wouldn't an answer to the above question also cover answers to the following two questions?

Does this article have spelling, stylistic, or factual errors?
Was the article edited well, or does it appear sloppy or hastily produced?

[edited by: indyank at 6:18 am (utc) on May 7, 2011]

Robert Charlton




msg:4309177
 6:03 am on May 7, 2011 (gmt 0)

These are the questions that were given to manual raters to determine the "good" sites, if I'm not mistaken. Based on the results, Google designed Panda.

walkman - I'm not sure whether your comment was addressed to me or to indyank, but I'll take this as an opportunity to point out that you've partially answered your own question about what's taking so long for site changes to register.

IMO, Panda two is a recalibration run for Panda one. These calibration runs take a long time. Do you remember how long after MayDay it took for changes to register? Ditto with factoring in Universal results.

I suspect that there have been a lot of chicken and egg steps that haven't been described to us, but Panda one was built around a set of correlations that Google observed when it looked at the questions that were, as you say, given to the manual raters.

The Google algo has always been in part an effort to build upon human perceptions of relevance, popularity, importance, authority, trust, etc etc etc.

Now, they've given us a new list of "quality" related factors. Making some guesses, I'd say that these factors have been given engineering equivalents; searcher responses were measured; and perhaps manual changes in the algorithm were made. First on test beds or with selected sites... then system-wide. Ultimately, if not yet, this will be a self-calibrating system, undoubtedly with manual observation of various query areas and site types that Google monitors more closely than others.

On a set of databases as large as Google's, just the latency factors in various sets of indexes would take a while to register. Beyond that, though, the longer Google runs its calibration mode, the more accurate its user satisfaction information will be.

I suspect that Google will be continually recalibrating via searcher behavior, but this is machine intensive and slow, so ultimately they'll nail down some factors, as much as they nail down anything, into mathematical formulations.

After a while, they start folding in more factors. I'm expecting a Google social layer to be coming in before too long. As many of us observed in the minus 950 era, Google started with single word queries and then got into longer and less competitive stuff. And they've cycled it over and over and over. I'm seeing that they're doing the same thing now. The first few phases of any big change are likely to take a while.

indyank




msg:4309179
 6:12 am on May 7, 2011 (gmt 0)

Walkman, the algo is not uniformly applied to all sites. Some parts of the algo are not applied to forums, and that might include spelling errors and a few other things. It might even be that a few sites were totally excluded from Panda evaluation.

I also get the feeling that the 404 errors which I reported earlier, for links that didn't exist on the domain, were part of this algo, used to determine whether a particular evaluation needs to be done or not.

For example, I had earlier reported seeing 404 errors for a certain link i.e.

domain.com/forums/

The above doesn't exist on the affected site, yet GWT reported finding a link to it on all navigational pages, and crawling it returned a 404.

This check might have been part of this algo to determine whether there is a forum attached to the site.

Robert Charlton




msg:4309181
 6:21 am on May 7, 2011 (gmt 0)

Isn't there an overlap in those questions?

indyank - I don't think it's productive to try to split hairs and play gotcha with an engineer's cursory description of an ever-evolving and necessarily secret algorithm. It's more productive to try to understand the intent of what Google is after and to look self-critically at whether your site is delivering. Yes, an awareness of the algo helps, but so does an awareness of what searchers want. I think that's what Google is trying to get across to us.

It also helps to understand design, marketing, network theory, information architecture and what constitutes useful information, and on and on and on. This isn't, though, about how many contextual links you can get away with because Wikipedia uses them.

No algorithm is ever going to be perfect, which is why they're constantly changing. IMO, eHow was taking advantage of several loopholes in the algo, and they did it well... apparently through Panda one.

walkman




msg:4309198
 7:35 am on May 7, 2011 (gmt 0)

Now, they've given us a new list of "quality" related factors. Making some guesses, I'd say that these factors have been given engineering equivalents; searcher responses were measured; and perhaps manual changes in the algorithm were made. First on test beds or with selected sites... then system-wide. Ultimately, if not yet, this will be a self-calibrating system, undoubtedly with manual observation of various query areas and site types that Google monitors more closely than others.

I missed the MayDay discussions. I remember my earnings went down quite a bit, but I was still making serious money and traffic stayed the same. I figured I'd lost a good keyword and would gain a new one in the next update, so I would have been here insulting other people's sites ;)

First, I don't give Google as much credit as some do; put another way, it's impossible to do all the things they say they do, or that we think they do. Exhibit A: I still see plenty of people making a killing with bought links, and that's 10 years after PR was introduced and 7-8 years after Google declared war on paid links. So when Google says they can measure whether an article mentions different points of view, I laugh.

Google already has its formula for what a 'good site' is, based on x and y... b, c... and a. These questions were there from the beginning, so Google took site-specific stats (template, links, content, spelling, total number of 'bad' pages, etc.) and assigned each site a score. I have no doubt that each site has a specific score at any point. Unlike PR, it *may* not be calculated on the spot, of course, due to being resource intensive. Now, for the user part, Google may need to measure user reaction (even harder when traffic is cut), but for content, templates and the bad/good page ratio, Google just needs to grab the new pages. That alone should boost things a bit, assuming the site was fixed for the better. No one came back on 'Panda 2' IIRC; sites just went down more, despite drastic measures.

You are saying that Goog is waiting until it gets even better data, since the longer the data collection, the better. Maybe, but they'd be ignoring a helluva lot of pain they caused. I repeat: not all sites deserved to go down, or down that much; some are innocent things that are only 'bad' because Google said so retroactively. Traffic has gone up and down for me 10%-30% all the time, and I never really complained. But to release a surprise algo and leave it like this with zero explanation is not fair. Testing is fine, but maybe they should have done it in stages, not trashed sites completely and then said, "we're learning."

However, the fact that 9 weeks later Google told us what it expects shows that they realized something is wrong, at least in their communication department.

IMO, Panda 2 was "make sure eHow is punished since it's embarrassing us," but that's another story.

RedCardinal




msg:4309219
 8:49 am on May 7, 2011 (gmt 0)

An interesting Tweet from Matt Cutts in response to a question about how long it takes to get out of Panda:

short version is that it's not data that's updated daily right now. More like when we re-run the algorithms to regen the data.


[twitter.com]

walkman




msg:4309220
 9:15 am on May 7, 2011 (gmt 0)

RedCardinal, it's 5:13 AM here :)
What does it mean? Do they specifically have to re-run the algo for the new data to show up?

jinxed




msg:4309221
 9:16 am on May 7, 2011 (gmt 0)

Nice find, RedCardinal - thanks

dazzlindonna




msg:4309223
 9:33 am on May 7, 2011 (gmt 0)

short version is that it's not data that's updated daily right now. More like when we re-run the algorithms to regen the data.


So we've all not been crazy after all. Call it what you will, but that's equivalent to a timed penalty in my book. Thanks for pointing it out RedCardinal. That's one less thing to be in the dark about.

tristanperry




msg:4309228
 9:56 am on May 7, 2011 (gmt 0)

Brilliant spot RedCardinal. The algo must use a fair bit of CPU power then if it's only re-run periodically? (Further implying that latent semantic analysis is a bigger part of Panda, IMO).

walkman




msg:4309229
 10:04 am on May 7, 2011 (gmt 0)

Brilliant spot RedCardinal. The algo must use a fair bit of CPU power then if it's only re-run periodically? (Further implying that latent semantic analysis is a bigger part of Panda, IMO).

Unless it's by design to teach the mere mortals a lesson

Pjman




msg:4309231
 10:29 am on May 7, 2011 (gmt 0)

Call it what you will, but that's equivalent to a timed penalty in my book.


Yes, that is the final outcome. Question is....

How often will they be re-evaluated? Quarterly?

tristanperry




msg:4309237
 11:21 am on May 7, 2011 (gmt 0)

Unless it's by design to teach the mere mortals a lesson

LOL! Fair point though. Thinking about it, Matt Cutts did recently say that they have the server power to take down most of the web if they wanted (in a webmaster video), so it might not be a case of CPU power and could be more a case of a penalty.

Reno




msg:4309244
 12:10 pm on May 7, 2011 (gmt 0)

Matt Cutts did recently say that they have the server power to take down most of the web if they wanted (in a webmaster video)

Did not see that. If it's an accurate portrayal of what he actually said, it's offensive in the extreme (and admittedly, we're reading this out of context). It's a veiled threat, the kind of thing one would expect from a cheap thug. "Don't mess with us, we're bigger than you, we'll break your legs if necessary." The fact that someone would even harbor such a thought speaks volumes, IF, as I said, that is anywhere close to being the quote.

........................

scooterdude




msg:4309246
 12:26 pm on May 7, 2011 (gmt 0)

Great thread

I wonder if Panda is a series of lists that are generated by a combination of editor marks and algo processing of site data. A bit like PageRank generation.


They can only run it perhaps every two months or so, thanks to the size of their databases; latency is a scary thing, and the unintended consequences can be quite time-consuming to fix.

Perhaps the score boosts some sites and degrades others, one of several other factors, but at a similar or superior weight to PageRank.

just guessing

indyank




msg:4309252
 12:45 pm on May 7, 2011 (gmt 0)

They have already run the evaluation again at least once during Panda 2.

Personally, I do feel that they have run it one more time after Panda 2 as well.

AlyssaS




msg:4309253
 1:06 pm on May 7, 2011 (gmt 0)

Brilliant spot RedCardinal. The algo must use a fair bit of CPU power then if it's only re-run periodically? (Further implying that latent semantic analysis is a bigger part of Panda, IMO).


+1!

I think we've seen just one major re-run, and that was on April 11th. And I think they are running it again right now - they seem to have lifted the filters on April 26th, and my competitor, duplicate guy, reappeared. He's still there 12 days later, so whatever they are doing, it obviously needs time as well as CPU. Maybe they need to observe user data over a period of two weeks? Anyone's guess.

At least we now know that we'll have these monthly jolts and re-jiggings to look forward to for the rest of the year...

tristanperry




msg:4309254
 1:06 pm on May 7, 2011 (gmt 0)

Did not see that. If it's an accurate portrayal of what he actually said, it's offensive in the extreme (and admittedly, we're reading this out of context). It's a veiled threat, the kind of thing one would expect from a cheap thug. "Don't mess with us, we're bigger than you, we'll break your legs if necessary." The fact that someone would even harbor such a thought speaks volumes, IF, as I said, that is anywhere close to being the quote.

[youtube.com...] - 56 seconds in

It isn't said in a nasty way (I think more of a 'hey this is a fun nerdy fact', IMO) - I may have unintentionally taken it out of context with my post - but I know what you mean lol.

g1smd




msg:4309255
 1:11 pm on May 7, 2011 (gmt 0)

If spelling and grammar are important, most of the Chinese counterfeit, fake and knockoff goods sites should sink like a stone.

pageoneresults




msg:4309261
 1:41 pm on May 7, 2011 (gmt 0)

55 responses so far to this topic. Not one has mentioned...

Our advice for publishers continues to be to focus on delivering the best possible user experience on your websites and not to focus too much on what they think are Google’s current ranking algorithms or signals. Some publishers have fixated on our prior Panda algorithm change, but Panda was just one of roughly 500 search improvements we expect to roll out to search this year. In fact, since we launched Panda, we've rolled out over a dozen additional tweaks to our ranking algorithms, and some sites have incorrectly assumed that changes in their rankings were related to Panda.


Google have already moved on from Panda. It's here to stay. It's being improved as we move forward. If you've not been able to recover as of this date, there's a good chance you won't be recovering anytime soon. If I had a site hit by Panda, I'd be hiring another set of eyes to take a close look at everything. Existing site owners typically have blinders on and miss the bigger picture. I've seen sites hit by Panda and have seen the owners complaining. One look at the sites and you can clearly see why they were hit.

While this announcement from Google may cover a large percentage of those hit by Panda, it doesn't cover the remaining percentage that don't fit the initial mold. Maybe they'll come out with a second set of guidelines that expand on this further. Based on my basic research into those hit by Panda, my list of guidelines might look something like this. And remember, the bots are deaf and blind.

  • How many cookies are being delivered?
  • How fast does the site load?
  • How fast does the site load perceptually?
  • How many HTTP Requests are being made?
  • How many of those HTTP Requests are third party?
  • How many Round Trips are being made?
  • How many of those Round Trips are third party?
  • How much does the page weigh after rendering?
  • Can you use the site in a Text Browser e.g. Lynx?
  • Can you use the site with images turned off?
  • Do the documents validate? HTML? CSS? Mobile?


The above is just a short list. It's something I've used for years when performing document quality audits for that which you "cannot see". Everything discussed thus far has revolved around that which you can "see". What about all the other stuff behind the scenes? Do you think that may be a factor? Take the above list and use it to perform an audit on those hit by Panda that don't fit the "first" set of guidelines being referenced in this topic.
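For anyone who wants to put rough numbers on a couple of the items in that list, here's a minimal Python sketch (standard library only). It only counts the scripts, stylesheets and images referenced in the raw HTML and how many come from third-party hosts, so it misses anything loaded later by scripts or CSS; the URL is just a placeholder.

from html.parser import HTMLParser
from urllib.parse import urljoin, urlparse
from urllib.request import urlopen

PAGE = "https://www.example.com/"  # placeholder - the page you want to audit

class ResourceCounter(HTMLParser):
    """Collects script, stylesheet and image URLs referenced by the page."""
    def __init__(self):
        super().__init__()
        self.resources = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "script" and attrs.get("src"):
            self.resources.append(attrs["src"])
        elif tag == "link" and (attrs.get("rel") or "").lower() == "stylesheet":
            if attrs.get("href"):
                self.resources.append(attrs["href"])
        elif tag == "img" and attrs.get("src"):
            self.resources.append(attrs["src"])

html = urlopen(PAGE).read()
counter = ResourceCounter()
counter.feed(html.decode("utf-8", errors="replace"))

host = urlparse(PAGE).netloc
third_party = [u for u in counter.resources
               if urlparse(urljoin(PAGE, u)).netloc not in ("", host)]

print("HTML size:             %.1f KB" % (len(html) / 1024.0))
print("Referenced resources:  %d" % len(counter.resources))
print("Third-party resources: %d" % len(third_party))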

Can you use the site with images turned off?


I call this my SEO Sniff Test. I find more problems in this mode than anything else. Many sites are unusable when images are off. That means the site is unusable in Lynx too. Google make very specific references to this in their guidelines, even mentioning Lynx as a tool to check with.
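A quick way to automate part of that sniff test is to flag images with missing or empty alt text, since those render as blanks in Lynx or with images turned off. A minimal sketch, again standard-library Python with a placeholder URL; note that purely decorative images may legitimately carry empty alt text, so treat the output as candidates, not verdicts.

from html.parser import HTMLParser
from urllib.request import urlopen

PAGE = "https://www.example.com/"  # placeholder

class AltChecker(HTMLParser):
    """Records the src of every <img> whose alt attribute is missing or empty."""
    def __init__(self):
        super().__init__()
        self.flagged = []

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            attrs = dict(attrs)
            if not (attrs.get("alt") or "").strip():
                self.flagged.append(attrs.get("src") or "(no src)")

checker = AltChecker()
checker.feed(urlopen(PAGE).read().decode("utf-8", errors="replace"))

for src in checker.flagged:
    print("missing or empty alt:", src)
print("%d image(s) would render as blanks with images off" % len(checker.flagged))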

aristotle




msg:4309268
 2:00 pm on May 7, 2011 (gmt 0)

PageOne - Thanks, that's a nice list. But what do you mean by:
How much does the page weigh after rendering?

dazzlindonna




msg:4309270
 2:09 pm on May 7, 2011 (gmt 0)

pageoneresults: And if the site's PURPOSE is to share images? That sniff test suddenly begins to smell itself.

scooterdude




msg:4309271
 2:16 pm on May 7, 2011 (gmt 0)

@Pageoneresults

After a similar post, I spent weeks forcing my sites to validate 100%. Sure, I'm glad I did: they load massively faster and look much better, but...

I am not certain how it's impacting ranking.
Here's to hoping :)

pageoneresults




msg:4309281
 2:49 pm on May 7, 2011 (gmt 0)

Thanks, that's a nice list. But what do you mean by: How much does the page weigh after rendering?


Once the page has rendered in the browser, what is the total weight? For example, many of those hit by Panda are what I refer to as UA Abusive. I see sites making 300+ HTTP Requests per page, weighing in at 2.5MB+ after rendering. That's huge from my perspective and something to carefully consider optimizing.

Now, take those 300+ HTTP Requests, add Round Trip Times, and you really get UA Abuse. Take each redirect and add another round trip. In one example site, there were approximately 450 round trips being made per document. Mind you, many of those get cached, but I've seen some of them override those cache settings and force content to be re-cached each time the doc is requested.

All this time, you've got the server responding to each and every HTTP Request. Put the site under heavy load and the browsing experience becomes painful to say the least.
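To get a rough feel for those numbers on your own pages, here's a small sketch using the third-party requests library. The resource URLs are placeholders (in practice you'd collect them with a parser like the one sketched earlier in the thread); it sums transferred bytes and counts redirect hops, which is only an approximation of what a real browser sees, since it ignores DNS, TLS handshakes and anything loaded by scripts.

import requests

# Placeholder list - the page itself plus the resources it references.
resources = [
    "https://www.example.com/",
    "https://www.example.com/css/style.css",
    "https://cdn.example.net/widget.js",
]

total_bytes = 0
total_redirects = 0
for url in resources:
    resp = requests.get(url, timeout=10)
    total_bytes += len(resp.content)
    total_redirects += len(resp.history)  # each redirect adds a round trip

print("requests: %d, extra redirect round trips: %d, weight: %.0f KB"
      % (len(resources), total_redirects, total_bytes / 1024.0))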

And if the site's PURPOSE is to share images? That sniff test suddenly begins to smell itself.


Ah, I dig image optimization, one of my fortes! My SEO Sniff Test works wonders in this area. But there is a whole other set of factors to look at when dealing with image optimization. Do you allow hotlinking? When images are off, is proper alt text displayed? And is it styled? I've seen alt text rendered in #eeeeee against #ffffff; that's not really working in your favor from a user perspective. Image naming. Image sizing and optimization. It's a rather long list and makes for a great topic, of which there are many floating about. ;)

After a similar post, I spent weeks forcing my sites to validate 100%. Sure, I'm glad I did: they load massively faster and look much better, but I am not certain how it's impacting ranking. Here's to hoping.


There's no hoping in this instance. What you've done is what everyone should be doing out of the box. People spend more time writing broken code, and then fixing it, than anything else. You cannot perform any type of site audit unless the underlying code is well formed, and validation is the first step in that process. After validation comes the extraction of semantics. I've seen plenty of valid documents fail the semantics test.

Most folks will argue that validation has zero impact. What they fail to realize is that validation plays a major role when dealing with a variety of user-agents. You SHOULD never leave the user-agent guessing, ever! While error recovery routines are robust, there are certain parsing errors that just can't be recovered from correctly.

Think about the time involved in those error recovery routines. Then think about the output once those routines have run. A simple test for this is to run HTML Tidy on your documents and see what you end up with. I've seen comments from Google that they run a Tidy routine during their document indexing. They have to. Their bot is designed to index machine-readable grammar. If the code is not well formed, I don't think it can be interpreted correctly, hence the Tidy routines during indexing, and one of the reasons folks are able to get away with the crap code they produce.
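If you want to script that Tidy check over your documents, here's a minimal sketch, assuming the HTML Tidy command-line tool is installed. Tidy exits 0 for clean markup, 1 when there are warnings and 2 when there are errors, and the -q -e flags report problems without rewriting the file; "page.html" is a placeholder.

import subprocess
import sys

# "page.html" is a placeholder - point this at the document you want to check.
result = subprocess.run(["tidy", "-q", "-e", "page.html"],
                        capture_output=True, text=True)

if result.returncode == 0:
    print("markup is clean")
else:
    level = "warnings" if result.returncode == 1 else "errors"
    print("tidy reported %s:" % level)
    print(result.stderr)

sys.exit(result.returncode)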

brinked




msg:4309283
 3:09 pm on May 7, 2011 (gmt 0)

I really need to chime in here.

First of all, that is a great blog post by Google and very, very helpful if you know how to read between the lines. I recommend everyone read it at least 5 times.

Now, this is not their secret sauce, but they do give us some tips as to what Panda is about, and much of it I have speculated about here on these forums.

Let's take the main points from this list: the realistic ones, not ones such as "Is this article written by an expert or enthusiast who knows the topic well, or is it more shallow in nature?" - you can all but throw that one out the window. Here are the points everyone should be paying attention to.

- Does the site have duplicate, overlapping, or redundant articles on the same or similar topics with slightly different keyword variations?
We have discussed this in the Panda threads. Having the same or similar articles with slightly different phrasing can be a major factor with Panda. Everyone should read this point at least 10 times; I overlooked it until I read the article for the third time, and then the wheels started turning. If you think this point doesn't apply to you, really check your site and make sure it doesn't (a rough way to spot-check is sketched below).
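One rough way to spot-check your own site for this, using only the Python standard library: compare article titles (or body text) pairwise and flag anything that looks like a slight keyword variation of something else. The titles below are made-up examples and the 0.85 threshold is just a starting guess; tune both to your own content.

from difflib import SequenceMatcher
from itertools import combinations

# Made-up example titles - replace with your own article titles or text.
titles = [
    "How can I tell if my dog has fleas?",
    "How can I tell if my dog has fleas and lice?",
    "Choosing a flea treatment for cats",
]

THRESHOLD = 0.85  # similarity ratio above which two titles look like variations

for a, b in combinations(titles, 2):
    ratio = SequenceMatcher(None, a.lower(), b.lower()).ratio()
    if ratio >= THRESHOLD:
        print("%.2f  %r  ~  %r" % (ratio, a, b))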

- Would you be comfortable giving your credit card information to this site?
This one is not as important to me, as all my ecommerce sites are doing fine, and guess what? They all use copied manufacturers' descriptions. So to all those who say duplicate content/manufacturer descriptions are being pandalised: look elsewhere for your problem source. But to everyone out there with an ecommerce site, do make sure your site is SECURE, meaning that all your SSL certificates are valid and all the pages that should be secure are in fact secure.

- Are the topics driven by genuine interests of readers of the site, or does the site generate content by attempting to guess what might rank well in search engines?

This falls in line with over-optimizing for Google. It's tempting to put your keywords everywhere, but don't do it... just don't.

- Does the article provide original content or information, original reporting, original research, or original analysis?

I theorized on these forums about many sites that write about the same story. It didn't get many responses, but this further brings my theory to light. Let's take your typical gossip/entertainment site. They all mostly write about the same thing. Every time Lindsay Lohan is arrested, every gossip site has the same story, written differently but with the same point. Be original and write about something nobody else is writing about (Google's words, not mine).

- Is the content mass-produced by or outsourced to a large number of creators, or spread across a large network of sites, so that individual pages or sites don’t get as much attention or care?

A lot of people have talked about syndication. I don't agree with Google doing this, but from this point it sounds like they do not want you to spread your content to other sources. Big thumbs down to Google on this one for not giving proper credit to the source.

- Does this article have an excessive amount of ads that distract from or interfere with the main content?

Here is the point I have been pushing big time on here. I think I was one of the first to push this idea in the "sites that don't fit the mold" thread. If you have too many ads on your site, it takes away from your users' experience. Do not use deceptive ads or ads that overwhelm your users.

As for Matt Cutts saying that pandalised sites can get released when the algo refreshes, that is great news for everyone. Everyone should assume that it will take 3-4 months, so keep working on your sites and cover all of your bases. If you have one small quality issue on your site, fix it; you never know what might help you break free of Panda.
