What SEO "facts" fall into what category? What kind of testing would you do to move a given idea up a notch or two on an SEO "trustability" scale of Myth, Opinion, Probable, and True?
1. Google evaluates the content section of a page differently from the rest of the template. I say "True".
I've seen this effect several times in the power of a link in the body of an article compared to a link somewhere in a "related links" section. Links in the body content rock the house.
2. Google is using human editorial input to affect the SERP. I say that's "Probable".
I sure can't think of a test that would prove this one. The big fuss over eval.google.com is several years old, and Google has now filed a patent on how to integrate editorial input into the algorithm.
3. Using a dedicated IP address helps in ranking. I say that's "Opinion".
In fact, I have moved domains from shared IP to a dedicated IP and seen no obvious change. I think it depends on the company you keep when you share an IP. If you do your own hosting, this could be tested by starting a new domain on an IP address that already has a few banned sites -- assuming you have managed to get a few domains banned somewhere along the line.
4. Seeing any urls tagged as Supplemental Result means there is a problem. I say that's a "Myth".
In fact, g1smd has shared a lot of work in this area - and while seeing urls tagged as Supplemental MAY BE a problem, there are many reasons for Supplementals. In fact, a given url can appear as both Supplemental (with an older cache date) and as a regular result (with a more recent cache date). This kind of thing is NOT a problem.
WHAT IS PROOF?
One true counter-example is enough, logically, to disprove any proposition. However, being sure you actually have a counter-example can be a challenge with something as complex as today's Google Search. It's a good idea to get a handle on formal logic if you want to untangle the mass of information available about Google. Here's a good resource on that: [nizkor.org...]
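To make the counter-example idea concrete: a universal claim like "every Supplemental url indicates a problem" is an all() over your observations, and a single contrary observation refutes it. A toy sketch in Python - the "observations" here are invented purely to illustrate the logic, not real data:

```python
# One counter-example is enough to refute a universal claim.
# These observations are made up for illustration only.
observations = [
    {"url": "a.example/page1", "supplemental": True, "has_problem": True},
    {"url": "b.example/page2", "supplemental": True, "has_problem": False},
]

# The universal claim: every Supplemental url has a problem.
claim_holds = all(o["has_problem"] for o in observations if o["supplemental"])

# Any single counter-example refutes it.
counter_examples = [o["url"] for o in observations
                    if o["supplemental"] and not o["has_problem"]]
```

The hard part, as noted above, is being confident your counter-example really is one - that no hidden factor explains it away.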
So what do others think? Any SEO myths you want to debunk? Tested "truths" you want to share? Testing ideas you want to propose?
How about common fallacies in logic when thinking about SEO?
[edited by: tedster at 5:29 pm (utc) on Oct. 31, 2006]
Supplemental Results that return a "200 OK" when you do a search for words that form a part of the current page content are the ones with a problem.
Supplemental Results that represent the previous version of the content at a URL, or which represent a URL that is now a redirect, or is 404, or is on an expired domain, are not a problem.
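That rule of thumb can be turned into a rough self-check: fetch each url that Google tags Supplemental and look at its HTTP status. Here's a sketch of that diagnostic - the urls are placeholders, and the classification only encodes the rule stated above, nothing Google has confirmed:

```python
# Rough self-check: a page that is still live (200 OK) but stuck in the
# Supplemental index is the case worth investigating.
from urllib.request import Request, urlopen
from urllib.error import HTTPError, URLError

def status_of(url):
    """HTTP status code for a URL, or None if it could not be reached."""
    try:
        with urlopen(Request(url, method="HEAD")) as resp:
            return resp.getcode()
    except HTTPError as err:
        return err.code
    except URLError:
        return None

def classify(status):
    """Apply the rule above: live content tagged Supplemental is the problem
    case; redirects, 404s, and gone pages are expected to go Supplemental."""
    if status == 200:
        return "problem: live page tagged Supplemental"
    return "expected: old, redirected, or removed content"

# Example (placeholder url - substitute the ones Google shows as Supplemental):
# classify(status_of("http://example.com/some-supplemental-url"))
```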
>> GoogleGuy: "the supplemental results are a new experimental feature to augment the results for obscure queries. This is a new technology that can return more results for queries that, for example, have a small number of results."
Those extra results are of several main types:
- Many are simply for any URLs that are duplicate content of the stuff already listed as normal results.
- They are also for URLs that have been redirecting, or are 404, or have domains that have expired sometime in the last year or so, and which have old content that matched your search term.
- They are also for URLs where Google has stored the current page content as a normal result and the previous content of the page as a Supplemental result.
- The last type are pages that have been deemed "unimportant" as they have low PR, few inbound links, and live somewhere on the periphery of the web.
The first three types of Supplemental Results allow you to see old content, content that no longer exists live on the web, via the Google cache.
[edited by: g1smd at 9:01 pm (utc) on Oct. 29, 2006]
[edited by: engine at 2:30 pm (utc) on Oct. 31, 2006]
[edit reason] added link [/edit]
I'll jump in
Having non W3C compliant code will harm your site - Myth.
Having html errors will harm your site - True.
One is standards... no reason to suddenly start penalising older sites with great content just because they didn't update their code.
The other may prevent a spider actually being able to crawl your pages properly.
The sorts of things I mean are having two <head>s accidentally, deleting a closing </html>, etc.
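Those are exactly the sorts of breakages a parser can catch mechanically. A minimal Python sketch using only the standard library - the two checks mirror the examples above (duplicate <head>, missing </html>); real validators check far more:

```python
# Minimal structural check of the kind a crawler's parser might trip over.
from html.parser import HTMLParser

class StructureCheck(HTMLParser):
    """Counts <head> start tags and tracks whether </html> was seen."""
    def __init__(self):
        super().__init__()
        self.head_count = 0
        self.html_closed = False

    def handle_starttag(self, tag, attrs):
        if tag == "head":
            self.head_count += 1

    def handle_endtag(self, tag):
        if tag == "html":
            self.html_closed = True

def check_page(markup):
    """Return a list of structural problems found in the markup."""
    checker = StructureCheck()
    checker.feed(markup)
    problems = []
    if checker.head_count > 1:
        problems.append("duplicate <head>")
    if not checker.html_closed:
        problems.append("missing </html>")
    return problems
```

A page with two <head>s and no </html> comes back with both problems flagged; a well-formed page comes back clean.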
I have one page that ranks very highly and gets plenty of traffic for a commonly misspelt word.
The word is used nowhere on the page and is not used in any link to the page. The only place the word is used is in the filename - I had a keyboard with a faulty 'e' 8 years ago when I created the page.
I rank #1 for that misspelt keyword in Google (and combinations of that word with other words that actually are included on the page) but not in any other search engine (and I receive lots of traffic for it).
Make of this what you will...
1. Google evaluates the content section of a page differently from the rest of the template. - Probable
I've seen major movements when I changed text in the first 200 words - not the "content section". But there might be something that Google does differently to different parts of the page.
2. Google is using human editorial input to affect the SERP - Probable
I don't know anything about this - close to impossible to prove...
3. Using a dedicated IP address helps in ranking. Opinion
I have no evidence to support this. I do have a few sites on shared IPs doing just as well as on dedicated ones...
4. Seeing any urls tagged as Supplemental Result means there is a problem. Opinion
I guess it depends what the "problem" is. If the "problem" is having too much "obscure" information, then, yes. If the site is small, it's not about some "obscure" topic, and most of it is Supplemental, then there is a "problem".
5. Having non W3C compliant code will harm your site - Myth.
Look at the SERPs
<editor's note: the W3C topic sparked a side discussion which I moved here [webmasterworld.com]>
6. Having html errors will harm your site - Opinion.
It depends on what kind of errors. If it's really screwed up, then True.
[edited by: tedster at 2:54 am (utc) on Oct. 30, 2006]
1. Google evaluates the content section of a page differently from the rest of the template. - True
And I believe they do this to prevent duplicate content type issues.
2. Google is using human editorial input to affect the SERP - Myth
Unless you're talking about spam. I just think it's easier to automate everything.
3. Using a dedicated IP address helps in ranking. Myth
I think there are other signs of quality that go beyond IP addresses.
4. Seeing any urls tagged as Supplemental Result means there is a problem. Half True
Even Matt Cutts says he wouldn't worry if there were some, but I think an indexing problem cropped up several months ago and resulted in some very real Supplemental problems.
5. Having non W3C compliant code will harm your site - Myth
That's just plain silly.
6. Having html errors will harm your site - Myth
There are lots of sites - including big ones - that do not validate. Some day this may make a difference, as it becomes more common to deliver content to device types other than a PC browser.
7. Google Page Rank affects Traffic levels Directly - Fact
Although it's not quite as simple as PR anymore.
I'll add my input, obvious though they may be or otherwise:
1) Using a single, relevant <H1> tag prominently assists SERPs - I say true
2) Spending time on coaxing inbound and reciprocal links is a major factor - I say true
Although I think this is the one area that will change in time, as it's (arguably) too easy to manipulate a result that isn't indicative of quality.
3) Locating your server in the country of your target audience helps SERPS - I say true
Personally, though, I don't think this is a logical assumption for Google to make on our behalf.
4) Pagerank is useless - I say Myth
...but I believe it's misunderstood and is intended as an indicator of how strong your site navigation is, not how authoritative your site is.
Hmmmm...well that would strongly contradict what MC says regarding supplementals...
>> - The last type are pages that have been deemed "unimportant" as they have low PR, few inbound links, and live somewhere on the periphery of the web.
Everyday more sites are losing urls out to supplemental hell for this reason. Coming soon to a site near you. Especially if it is a small commercial site...and I am not talking about affiliates.
This is something I have been saying since BigDaddy hit. And it is going to get even worse.
Page times-out during loading: True-probable
If a site times out during loading it might affect rankings in G (depending on the site's other standings). This goes back to user experience - if a site times out it will not make users happy.
[edited by: Tastatura at 12:35 am (utc) on Oct. 30, 2006]
>> 1. Google evaluates the content section of a page differently from the rest of the template.
TRUE. I remember reading how Google takes out everything else that is repeated and leaves the core of each page. It was one of the founders, I believe.
>> 2. Google is using human editorial input to affect the SERP.
Probably. They might check pages that are borderline or pages to get a seed.
>> 3. Using a dedicated IP address helps in ranking.
Myth. Most sites do not have dedicated IPs, so why give those who do an advantage? It probably started with guys having 400 spammy sites on a few IPs. They got banned or penalized for different reasons; the IP was just a coincidence.
>> 4. Seeing any urls tagged as Supplemental Result means there is a problem.
Most likely there is a problem, especially if there are too many of them, or worse, all of them.
Google is using human editorial input to affect the SERP
Has to. This is fundamental to data mining. You gotta have a set of "good" results (rank order in this case) in order to tell the machine "find other things that are 'good' like these". I would be shocked to learn they do not also do the reverse for spam: "find other things that 'stink' like these pages". Human input is needed for training the machine what the difference between "good" and "bad" is.
OTOH, if you're saying that Google is using humans to individually tweak rank order for specific queries, then I seriously doubt that happens enough to notice, possibly not at all.
1. Get a bunch of people to find and rate really good websites and really spammy websites for certain searches
2. Make their rating into a parameter
3. Look at what the algo says the top results "should" be for a particular search
4. See if that search is in one of the topic areas that has an editorial rating
5. If so, look to see if there is some relationship to either the good guy list or the bad guy list
6. Shift the search rankings according to whatever parameter the editors generated.
7. Serve the shifted results to the user.
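Purely to make the proposed steps concrete, here is a toy sketch of steps 3 through 7. Every url, score, and weight in it is invented; nothing is known about how Google would actually combine such a parameter:

```python
# Toy model of steps 3-7: shift algorithmic scores by a human-generated
# editorial parameter. All urls, scores, and weights are made up.

# Output of steps 1-2: the raters' verdicts condensed into one number per site.
editorial_rating = {
    "goodsite.example": +0.3,   # on the "good guy" list
    "spamsite.example": -0.5,   # on the "bad guy" list
}

def rerank(results, query_topic, rated_topics, weight=1.0):
    """results: list of (url, algo_score) pairs for one query (step 3).
    The shift is applied only if the query's topic area was actually
    covered by the raters (step 4)."""
    if query_topic not in rated_topics:                                # step 4
        return sorted(results, key=lambda r: r[1], reverse=True)
    shifted = [(url, score + weight * editorial_rating.get(url, 0.0))  # steps 5-6
               for url, score in results]
    return sorted(shifted, key=lambda r: r[1], reverse=True)           # step 7

results = [("spamsite.example", 0.9), ("goodsite.example", 0.7)]
reranked = rerank(results, "widgets", rated_topics={"widgets"})
```

With those made-up numbers, the editorially flagged spam site drops below the rated "good" site even though the raw algorithmic score said otherwise; queries outside the rated topic areas pass through untouched.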
Editorial Input patent [webmasterworld.com]
Pretty difficult to reverse engineer that one, no? How about this one, from the monster itself -- the Historical Information patent?
1. Backlinks have less influence when they first appear. Probable
I haven't isolated this factor in a test, but I see plenty of suggestive evidence. Like a site that went out asking for one way links in a very intense way, saw a pop in their rankings within a couple weeks, but then more pop over the next two months -- even though they had moved their link monkeys over to a different jungle gym.
Don't know if that's true or not, but it sure "feels" like it to me. Google can get pretty good data about user dis/satisfaction with particular listings when they turn on their "click tracking". They could also get some data by combining search behavior with AdSense display data, but that would technically violate the "AdSense never affects rankings" rule.
This is a real hard effect to prove, since changes designed to better satisfy users (improving content, more descriptive SERP listing) pretty much have to affect lots of other Google algorithm variables.
I've long suspected that the piles of fascinating data the Google toolbar gleans could be used to produce better SERPs. If I were Google, I would, and I can think of half a dozen toolbar metrics that could be indicative of a good content-rich site. It makes little sense that Google wouldn't use that data. But do they? One Google rep I confronted would neither confirm nor deny anything, but slyly conceded that my theories were "interesting" and agreed that toolbar data "could be used that way".
Google ranks a nearly or even vaguely relevant page higher for a keyword / key phrase when inbound hyperlinks use that keyword / key phrase as anchor text, than it ranks an otherwise equal, content-rich page for the same term.
industrialwidgets.com has two areas
blue industrial widgets
orange industrial widgets
What turns up in Google's SERPs for "blue industrial widgets"?
Why, it is the page for orange industrial widgets!
Now, that is not so bad when "widgets" and "industrial" are the two most important / relevant elements, but when "blue" and "orange" are the most important relevance-wise, imagine how stupid the search returns look! This is also why forum and blog spam redirects rank so high.
I believe that Google does not like new sites that appear to be over optimised with KWs in the domain name, title, description, H1, H2, H3 and anchor text. I also believe that the data below backs this up.
Two years ago I launched three sites within about four weeks of each other. The very same SEO techniques were used in all of them. One of them was commercial and heavily optimised for the KWs in the domain name www.mauve-widgets.co.uk. Another, non-commercial, was also heavily optimised for a poet's name, www.johndoe.org.uk. The third, also non commercial and heavily optimised, was for a private sports club, which used the domain name, www.mytownmysportclub.co.uk.
All three sites were quickly indexed by Google. All of them were launched with only a couple of IBLs. The third mentioned site started ranking first although it is a minority sport and did not get much traffic it has climbed steadily ever since. The second started ranking a month or two later but it did not get a high position. It has however steadily climbed the rankings ever since and is now top five for the poet's name.
The first site, www.mauve-widgets.co.uk, did not see any significant Google traffic for about 15 months. It then started attracting traffic for some of the lesser terms on the site and it has stayed there ever since. It is still off the radar for the main KWs (those in the domain name). It has reasonable content and it genuinely offers something for nothing (a free service for those looking for mauve widgets). To me this is a strong indicator that some sort of OOP (over-optimisation penalty) is in play.
This morning for the purpose of this exercise I ran a check and for the term mauve widgets the site is at position 601 in Google.com and 602 in Google.co.uk. On a search for "mauve widgets" (in quotes) the site is not listed on either and after two years I doubt that it ever will be. This to me is proof that Google has penalised my site for the main KWs.
Google takes into account hundreds of interrelated factors to determine the SERPs, meaning that something which works for you may not be reproducible on another site due to the inter-relationship of the other factors on that site.
Just my opinion
#:3139299 - Google takes out everything else that is repeated
What exactly does "takes out" and "repeated" mean?
More specifically, is something classed as repeated when it is identical, or when it is similar? I presume this applies to a navigation bar. A navigation bar can be made slightly different on each page simply by not having a page linking to itself.
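Nobody outside Google knows where it draws the identical/similar line, but a common textbook approach to detecting "repeated" content is shingling: break text into overlapping word runs and measure overlap. A toy sketch, with made-up navigation text - not a description of Google's actual method:

```python
# Shingle-based similarity: one common (textbook) way to measure whether
# two blocks of text are "repeated". Nothing here is confirmed about Google.

def shingles(text, k=3):
    """Set of k-word shingles (overlapping word runs) from the text."""
    words = text.lower().split()
    return {tuple(words[i:i + k]) for i in range(len(words) - k + 1)}

def similarity(a, b, k=3):
    """Jaccard overlap of the two shingle sets: 1.0 = identical, 0.0 = disjoint."""
    sa, sb = shingles(a, k), shingles(b, k)
    if not sa and not sb:
        return 1.0
    return len(sa & sb) / len(sa | sb)

# Made-up nav bars: identical except one link swapped out.
nav_a = "home products blue widgets orange widgets contact us"
nav_b = "home products blue widgets orange widgets about us"
```

On that model, a nav bar that merely drops the self-link on each page would score very high but below 1.0 - "similar" rather than "identical" - so the answer to the question above depends entirely on where the threshold sits.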
Added: I agree with M_Bison.
[edited by: Patrick_Taylor at 11:34 am (utc) on Oct. 30, 2006]
my 2 cents...