| 8:46 pm on Oct 29, 2006 (gmt 0)|
>> seeing urls tagged as Supplemental MAY BE a problem <<
Supplemental Results that return a "200 OK" when you do a search for words that form a part of the current page content are the ones with a problem.
Supplemental Results that represent the previous version of the content at a URL, or which represent a URL that is now a redirect, or is 404, or is on an expired domain, are not a problem.
>> GoogleGuy: "the supplemental results are a new experimental feature to augment the results for obscure queries. This is a new technology that can return more results for queries that for example have a small number of results.
Those extra results are of several main types:
- Many are simply for any URLs that are duplicate content of the stuff already listed as normal results.
- They are also for URLs that have been redirecting, or are 404, or have domains that have expired sometime in the last year or so, and which have old content that matched your search term.
- They are also for URLs where Google has stored the current page content as a normal result and the previous content of the page as a Supplemental result.
- The last type are pages that have been deemed "unimportant" as they have low PR, few inbound links, and live somewhere on the periphery of the web.
The first three types of Supplemental Results allow you to see old content, content that no longer exists live on the web, via the Google cache.
[edited by: g1smd at 9:01 pm (utc) on Oct. 29, 2006]
[edited by: engine at 2:30 pm (utc) on Oct. 31, 2006]
[edit reason] added link [/edit]
| 8:49 pm on Oct 29, 2006 (gmt 0)|
I'll jump in
Having non W3C compliant code will harm your site - Myth.
Having html errors will harm your site - True.
One is standards... there is no reason to suddenly start penalising older sites with great content just because they didn't update their code.
The other may prevent a spider from actually being able to crawl your pages properly.
The sorts of things I mean are having two <head>s accidentally, deleting a closing </html>, etc.
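To illustrate the distinction, here is a minimal sketch of the kind of gross structural error that can trip up a crawler's parser, as opposed to mere validation warnings. The checks and function name are purely illustrative; a real validator does far more:

```python
import re

def basic_structure_check(html: str) -> list[str]:
    """Flag gross structural errors like those mentioned above:
    a duplicated <head> or a missing </html>. Illustrative only."""
    problems = []
    if len(re.findall(r"<head[\s>]", html, re.I)) > 1:
        problems.append("multiple <head> elements")
    if "</html>" not in html.lower():
        problems.append("missing closing </html>")
    return problems

page = "<html><head></head><head></head><body>hi</body>"
print(basic_structure_check(page))  # → ['multiple <head> elements', 'missing closing </html>']
```

A page failing these checks may still render fine in a forgiving browser, which is why the problem often goes unnoticed.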
| 9:02 pm on Oct 29, 2006 (gmt 0)|
Great thread Tedster, cheers for starting it. I can see this thread being a huge help to many people.
| 9:05 pm on Oct 29, 2006 (gmt 0)|
I'll probably get flamed for this:
I have one page that ranks very highly and gets plenty of traffic for a commonly misspelt word.
The word is used nowhere on the page and is not used in any link to the page. The only place the word is used is in the filename - I had a keyboard with a faulty 'e' 8 years ago when I created the page.
I rank #1 for that misspelt keyword in google (and combinations of that word with other words that actually are included on the page) but not in any other se (and I receive lots of traffic for it).
Make of this what you will...
| 10:36 pm on Oct 29, 2006 (gmt 0)|
Google Page Rank affects Traffic levels Directly - Myth
Over the last 6 months I made a site and carried out a linking programme specifically designed to inflate PageRank. The results when I got to PR 5?
No increase in traffic. Full stop.
| 10:38 pm on Oct 29, 2006 (gmt 0)|
Hi here is my opinion:
1. Google evaluates the content section of a page differently from the rest of the template. - Probable
I've seen major movements when I changed text in the first 200 words - not the "content section". But there might be something that Google does differently to different parts of the page.
2. Google is using human editorial input to affect the SERP - Probable
I don't know anything about this - close to impossible to prove...
3. Using a dedicated IP address helps in ranking. Opinion
I have no evidence to support this. I do have a few sites on shared IPs - doing just as well as on dedicated...
4. Seeing any urls tagged as Supplemental Result means there is a problem. Opinion
I guess it depends what the "problem" is. If the "problem" is having too much "obscure" information, then, yes. If the site is small, it's not about some "obscure" topic, and most of it is Supplemental, then there is a "problem".
5. Having non W3C compliant code will harm your site - Myth.
Look at the SERPs
<editor's note: the W3C topic sparked a side discussion
which I moved here [webmasterworld.com]>
6. Having html errors will harm your site - Opinion.
It depends on what kind of errors. If it's really screwed up, then True.
[edited by: tedster at 2:54 am (utc) on Oct. 30, 2006]
| 11:08 pm on Oct 29, 2006 (gmt 0)|
Okay, my 2 cents...
1. Google evaluates the content section of a page differently from the rest of the template. - True
And I believe they do this to prevent duplicate content type issues.
2. Google is using human editorial input to affect the SERP - Myth
Unless you're talking about spam. I just think it's easier to automate everything.
3. Using a dedicated IP address helps in ranking. Myth
I think there are other signs of quality that go beyond IP addresses.
4. Seeing any urls tagged as Supplemental Result means there is a problem. Half True
Even Matt Cutts says he wouldn't worry if there were some, but I think an indexing problem cropped up several months ago and resulted in some very real Supplemental problems.
5. Having non W3C compliant code will harm your site - Myth
That's just plain silly.
6. Having html errors will harm your site - Myth
There are lots of sites - including big ones - that do not validate. Some day this may make a difference as it becomes more common to deliver content to more device types, not just a PC browser.
7. Google Page Rank affects Traffic levels Directly - Fact
Although it's not quite as simple as PR anymore.
| 11:15 pm on Oct 29, 2006 (gmt 0)|
Good idea for a thread tedster :-)
I'll add my input, obvious though they may be or otherwise:
1) Using a single, relevant <H1> tag prominently assists in the SERPs - I say true
2) Spending time on coaxing inbound and reciprocal links is a major factor - I say true
Although I think this is the one area that will change in time, as it's (arguably) too easy to manipulate towards a result not indicative of quality.
3) Locating your server in the country of your target audience helps SERPS - I say true
Although, personally, I don't think this is a logical assumption for Google to make on our behalf.
4) Pagerank is useless - I say Myth
...but I believe it's misunderstood and is intended as an indicator of how strong your site navigation is, not how authoritative your site is.
| 11:37 pm on Oct 29, 2006 (gmt 0)|
>>>>>4) Pagerank is useless - I say Myth
...but I believe it's misunderstood and is intended as an indicator of how strong your site navigation is, not how authoritative your site is. <<<<<<
Hmmmm...well that would strongly contradict what MC says regarding supplementals...
>>>- The last type are pages that have been deemed "unimportant" as they have low PR, few inbound links, and live somewhere on the periphery of the web.<<<<<
Every day more sites are losing URLs to supplemental hell for this reason. Coming soon to a site near you. Especially if it is a small commercial site... and I am not talking about affiliates.
This is something I have been saying since BigDaddy hit. And it is going to get even worse.
| 11:45 pm on Oct 29, 2006 (gmt 0)|
Links inside a <noscript> or <noframes> element get followed. True
Links inside a <noscript> or <noframes> element pass PR and other backlink influences. True only sometimes
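That followed/not-always-counted distinction is easy to picture from a crawler's side: nothing stops a parser from reading inside those elements. A rough sketch (class name and tuple format are my own invention) of how a crawler could collect such links while flagging their context:

```python
from html.parser import HTMLParser

class LinkExtractor(HTMLParser):
    """Collect hrefs, noting whether each sits inside a
    <noscript>/<noframes> element. Illustrative sketch only."""
    def __init__(self):
        super().__init__()
        self.depth = 0   # current nesting level of noscript/noframes
        self.links = []  # list of (href, inside_fallback_element)

    def handle_starttag(self, tag, attrs):
        if tag in ("noscript", "noframes"):
            self.depth += 1
        elif tag == "a":
            href = dict(attrs).get("href")
            if href:
                self.links.append((href, self.depth > 0))

    def handle_endtag(self, tag):
        if tag in ("noscript", "noframes") and self.depth:
            self.depth -= 1

p = LinkExtractor()
p.feed('<a href="/a">x</a><noscript><a href="/b">y</a></noscript>')
print(p.links)  # → [('/a', False), ('/b', True)]
```

With the context flag attached, the engine is free to follow both links but weight them differently, which would match the "true only sometimes" behaviour above.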
| 12:35 am on Oct 30, 2006 (gmt 0)|
Page load time affects rankings: myth / opinion
Apparently G doesn't care how long it takes for your page to load (not sure about other SEs).
Page timing out during loading affects rankings: True / probable
If a site times out during loading it might affect rankings in G (depending on the site's other standings). This goes back to user experience - if a site times out, it will not make users happy.
[edited by: Tastatura at 12:35 am (utc) on Oct. 30, 2006]
| 1:25 am on Oct 30, 2006 (gmt 0)|
>> 1. Google evaluates the content section of a page differently from the rest of the template.
TRUE. I remember reading how Google strips out everything that is repeated across pages and keeps the core of each page. It was one of the founders who said it, I believe.
>> 2. Google is using human editorial input to affect the SERP.
Probably. They might check pages that are borderline or pages to get a seed.
>> 3. Using a dedicated IP address helps in ranking.
Myth. Most sites do not have dedicated IPs, so why give those who do an advantage? It probably started with guys having 400 spammy sites on a few IPs. They got banned or penalized for different reasons; the IP was just a coincidence.
>> 4. Seeing any urls tagged as Supplemental Result means there is a problem.
Most likely there is a problem, especially if there are too many of them, or worse, all of them.
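On point 1 above, the "strip out what is repeated, keep the core" idea can be sketched very simply. This toy version (the function name and line-based comparison are my own simplification; real template-detection systems work on DOM blocks, not text lines) removes lines shared verbatim by every page on a site:

```python
def strip_shared_lines(pages: list[str]) -> list[str]:
    """Remove lines that appear verbatim on every page (nav bars,
    footers), keeping each page's unique 'core' content. Toy sketch."""
    common = set.intersection(*(set(p.splitlines()) for p in pages))
    return ["\n".join(line for line in p.splitlines() if line not in common)
            for p in pages]

pages = ["Home | About\nBlue widgets rock\n(c) 2006",
         "Home | About\nOrange widgets rule\n(c) 2006"]
print(strip_shared_lines(pages))  # → ['Blue widgets rock', 'Orange widgets rule']
```

If something like this runs before duplicate-content comparison, only the unique cores get compared, which would explain why shared templates don't automatically trigger duplicate flags.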
| 2:56 am on Oct 30, 2006 (gmt 0)|
|Google is using human editorial input to affect the SERP |
Has to. This is fundamental to data mining. You gotta have a set of "good" results (rank order in this case) in order to tell the machine "find other things that are 'good' like these". I would be shocked to learn they do not also do the reverse for spam: "find other things that 'stink' like these pages". Human input is needed for training the machine what the difference between "good" and "bad" is.
OTOH, if you're saying that Google is using humans to individually tweak rank order for specific queries, then I seriously doubt that happens enough to notice, possibly not at all.
| 3:14 am on Oct 30, 2006 (gmt 0)|
I had the recent "Editorial Input patent" in mind. Not sure how to test whether this is currently all theoretical or actually in place. My gut says it's being used.
|1. Get a bunch of people to find and rate really good websites and really spammy websites for certain searches |
2. Make their rating into a parameter
3. Look at what the algo says the top results "should" be for a particular search
4. See if that search is in one of the topic areas that has an editorial rating
5. If so, look to see if there is some relationship to either the good guy list or the bad guy list
6. Shift the search rankings according to whatever parameter the editors generated.
7. Serve the shifted results to the user.
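The seven steps above could be sketched, very roughly, as a re-ranking pass over the algorithmic results. Everything here is hypothetical - the function name, the ratings format, and the fixed shift amount are my inventions for illustration, not anything from the patent:

```python
def rerank(results, ratings, shift=3):
    """Re-order algorithmic results using editor ratings: rated-'good'
    URLs are boosted, rated-'bad' URLs are demoted. Hypothetical sketch."""
    def score(item):
        rank, url = item
        if ratings.get(url) == "good":
            return rank - shift  # move up the page
        if ratings.get(url) == "bad":
            return rank + shift  # push down
        return rank              # unrated: leave in place

    return [url for _, url in sorted(enumerate(results), key=score)]

results = ["a.com", "b.com", "c.com", "d.com"]
ratings = {"d.com": "good", "a.com": "bad"}
print(rerank(results, ratings))  # → ['d.com', 'b.com', 'c.com', 'a.com']
```

Note that unrated URLs simply keep their algorithmic positions - which is part of what would make a scheme like this so hard to detect from the outside.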
Editorial Input patent [webmasterworld.com]
Pretty difficult to reverse engineer that one, no? How about this one, from the monster itself -- the Historical Information patent?
1. Backlinks have less influence when they first appear. Probable
I haven't isolated this factor in a test, but I see plenty of suggestive evidence. Like a site that went out asking for one way links in a very intense way, saw a pop in their rankings within a couple weeks, but then more pop over the next two months -- even though they had moved their link monkeys over to a different jungle gym.
| 3:32 am on Oct 30, 2006 (gmt 0)|
quick observation: Google and math can do a LOT of things, but whether they can be implemented in such massive scale is another question. Google might have to pick and choose what to do given the resources needed to crunch the numbers.
| 5:12 am on Oct 30, 2006 (gmt 0)|
x. Google moves you up in the SERPs if diverse real users are finding the answer to their searches in your listing.
Don't know if that's true or not, but it sure "feels" like it to me. Google can get pretty good data about user dis/satisfaction with particular listings when they turn on their "click tracking". They could also get some data by combining search behavior with AdSense display data, but that would technically violate the "AdSense never affects rankings" rule.
This is a real hard effect to prove, since changes designed to better satisfy users (improving content, more descriptive SERP listing) pretty much have to affect lots of other Google algorithm variables.
| 5:25 am on Oct 30, 2006 (gmt 0)|
Just an opinion:
Google uses whois records to detect sites and networks of sites.
| 7:42 am on Oct 30, 2006 (gmt 0)|
Google uses toolbar data to gauge quality and affect rank. "Opinion"
I've long suspected that the piles of fascinating data the Google toolbar gleans could be used to produce better SERPs. If I were Google, I would, and I can think of half a dozen toolbar metrics that could be indicative of a good content-rich site. It makes little sense that Google wouldn't use that data. But do they? One Google rep I confronted would neither confirm nor deny anything, but slyly conceded that my theories were "interesting" and agreed that toolbar data "could be used that way".
| 7:46 am on Oct 30, 2006 (gmt 0)|
Google ranks a nearly or even vaguely relevant page with a keyword / key phrase hyperlink higher for that keyword / key phrase than it ranks an otherwise equal content rich page for that keyword / key phrase.
industrialwidgets.com has two areas
blue industrial widgets
orange industrial widgets
What turns up in Google's SERPs for "blue industrial widgets"?
Why, it is the page for orange industrial widgets!
Now, that is not so bad when "widgets" and "industrial" are the two most important / relevant elements, but when "blue" and "orange" are the most important relevance-wise, imagine how stupid the search returns look! This is also why forum and blog spam redirects rank so high.
| 8:28 am on Oct 30, 2006 (gmt 0)|
Be careful - Google has trouble judging which of three keywords in a phrase makes the key distinction - True
| 8:51 am on Oct 30, 2006 (gmt 0)|
Google penalises sites that it sees as being over optimised. strongly held opinion with data to back this up below ;)
I believe that Google does not like new sites that appear to be over optimised with KWs in the domain name, title, description, H1, H2, H3 and anchor text. I also believe that the data below backs this up.
Two years ago I launched three sites within about four weeks of each other. The very same SEO techniques were used in all of them. One of them was commercial and heavily optimised for the KWs in the domain name www.mauve-widgets.co.uk. Another, non-commercial, was also heavily optimised for a poet's name, www.johndoe.org.uk. The third, also non commercial and heavily optimised, was for a private sports club, which used the domain name, www.mytownmysportclub.co.uk.
All three sites were quickly indexed by Google. All of them were launched with only a couple of IBLs. The third site started ranking first; although it is a minority sport and did not get much traffic, it has climbed steadily ever since. The second started ranking a month or two later, but it did not get a high position. It has, however, steadily climbed the rankings ever since and is now top five for the poet's name.
The first site, www.mauve-widgets.co.uk, did not see any significant Google traffic for about 15 months. It then started attracting traffic for some of the lesser terms on the site and it has stayed there ever since. It is still off the radar for the main KWs (those in the domain name). It has reasonable content and it genuinely offers something for nothing (a free service for those looking for mauve widgets). To me this is strong indicator that some sort of OOP is in play.
This morning for the purpose of this exercise I ran a check and for the term mauve widgets the site is at position 601 in Google.com and 602 in Google.co.uk. On a search for "mauve widgets" (in quotes) the site is not listed on either and after two years I doubt that it ever will be. This to me is proof that Google has penalised my site for the main KWs.
| 9:11 am on Oct 30, 2006 (gmt 0)|
Is your conclusion that only overtly commercial sites suffer from this OOP? Is that why the other two similarly optimized sites rank well and not the mauve-widgets?
| 9:34 am on Oct 30, 2006 (gmt 0)|
BeeDeeDubbleU, maybe there simply is a lot of competition for your commercial site? Established, authority sites?
| 9:47 am on Oct 30, 2006 (gmt 0)|
Keyword in the domain name helps ranking - True
Hyphenated domain names harm ranking - False
Fireproof jacket is now *ON*
| 10:27 am on Oct 30, 2006 (gmt 0)|
Google takes into account hundreds of interrelated factors to determine the SERPs, meaning that something that works for you may not be reproducible on another site due to the interplay of the other factors on that site.
Just my opinion
| 11:30 am on Oct 30, 2006 (gmt 0)|
|#:3139299 - google takes out everything else that is repeated |
What exactly do "takes out" and "repeated" mean?
More specifically, is something classed as repeated when it is identical, or when it is similar? I presume this applies to a navigation bar. A navigation bar can be made slightly different on each page simply by not having a page linking to itself.
Added: I agree with M_Bison.
[edited by: Patrick_Taylor at 11:34 am (utc) on Oct. 30, 2006]
| 12:35 pm on Oct 30, 2006 (gmt 0)|
adding outbound links to relevant sites makes a BIG difference in SERP results - TRUE
links to other internal pages from your own page content is a good way to build links... (ie don't just rely on the navigation menu) - TRUE
| 12:47 pm on Oct 30, 2006 (gmt 0)|
Internal anchor text makes a difference - TRUE.
Pagerank is not important as a ranking factor - FALSE.
non-compliant html causes you to rank badly - generally FALSE
following W3C guidelines helps you to rank better - TRUE
Having Adsense on your site (oldie) helps you rank better - FALSE
Having lots of affiliate links on your site lowers your ranking - OPINION
Keywords in filenames can be a ranking factor - TRUE.
Unique IP address helps in ranking - generally FALSE.
Google takes up far too much of our time and resources - TRUE.
my 2 cents...
| 12:56 pm on Oct 30, 2006 (gmt 0)|
|Google takes up far too much of our time and resources - TRUE |
That's because we are preoccupied with what is TRUE and FALSE - TRUE