1. The patent article suggests that pages at the top of the results are regarded as more popular than lower-ranked pages because of their increased traffic. Wouldn't that be a reversed vicious circle (a positive upward spiral, so to speak), since highly ranked pages naturally get more traffic than others?
In this context, the possibility of Google analysing the logfiles of all or some pages is mentioned. Wouldn't that overwhelm Google's capacity if they started analysing the logfiles of millions of webpages?
Wouldn't that also be easy to manipulate? I'm sure webmasters won't have any trouble programming tools that generate tons of traffic.
2. Concerning the aging delay discussions: I have a four-week-old webpage that's on the first page for a keyword with 100,000 search results. Wouldn't that contradict the aging delay theory (new sites not getting good results for competitive keywords for the first 9-12 months)?
3. An SEO myth I keep hearing more and more: content in tables might not be so good for Google anymore, whereas sites built with style sheets supposedly come across as a bit more relevant (because you can still make the page look good without having to use tables)?
Looking forward to some words of wisdom on these thoughts!
2) 4 weeks could be the freshie bonus. Wait another 2 weeks. ;)
3) The more text in comparison to HTML tags, the better. CSS gives cleaner code than tables, and a more logical structure.
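For what it's worth, here's a rough Python sketch of that text-to-tags idea. This is purely a back-of-the-envelope illustration; nobody outside the engines knows what ratio, if any, they actually compute:

import re

def text_to_markup_ratio(html: str) -> float:
    # Strip everything that looks like a tag; keep the visible text.
    text = re.sub(r"<[^>]+>", "", html)
    # Fraction of the document that is text rather than markup.
    return len(text) / max(len(html), 1)

# A table-heavy page spends more of its bytes on tags:
print(text_to_markup_ratio("<table><tr><td>hello</td></tr></table>"))  # ~0.13
print(text_to_markup_ratio("<p>hello</p>"))                            # ~0.42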
An SEO myth I keep hearing more and more: content in tables might not be so good for Google anymore, whereas sites built with style sheets supposedly come across as a bit more relevant (because you can still make the page look good without having to use tables)?
That does sound like a myth, if only because search engines index content, not code.
An SEO myth I keep hearing more and more: content in tables might not be so good for Google anymore, whereas sites built with style sheets supposedly come across as a bit more relevant (because you can still make the page look good without having to use tables)?
Search engines remove all HTML tags. They give different weightings to different tags, but in the end it is all text, so in that respect this is a myth... BUT using CSS instead of tables gives better control over the order and structure of the content, which is an important benefit imho.
For example, a page using tables could lose meaning if structured incorrectly, e.g.
"<tr><td> product 1 title</td><td>product 2 title</td></tr>
<tr><td> product 1 description</td><td>product 2 description</td></tr>"
After removing the HTML, this will be read by the search engine as
"product 1 title product 2 title product 1 description product 2 description"
Thus the titles and descriptions of products 1 and 2 get jumbled together.
However, with CSS you have better control.
I know you can get around these issues if you lay out the tables carefully, but there is a lot to be said for the simplicity of CSS.
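To make the reading-order point concrete, here's a small Python sketch of the tag-stripping described above (an illustration only, not how any engine actually parses pages; the 'product' class name is made up):

import re

# The two-column table layout from the post above.
table_markup = (
    "<tr><td>product 1 title</td><td>product 2 title</td></tr>"
    "<tr><td>product 1 description</td><td>product 2 description</td></tr>"
)

# Source-ordered markup: each product stays together in the source;
# a stylesheet (not shown) would place the two divs side by side.
css_markup = (
    "<div class='product'>product 1 title product 1 description</div>"
    "<div class='product'>product 2 title product 2 description</div>"
)

def linearize(html: str) -> str:
    # Drop the tags and collapse whitespace, roughly what
    # "removing all HTML tags" leaves behind.
    return " ".join(re.sub(r"<[^>]+>", " ", html).split())

print(linearize(table_markup))
# product 1 title product 2 title product 1 description product 2 description

print(linearize(css_markup))
# product 1 title product 1 description product 2 title product 2 description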
Google Patent: Using Usage Statistics in Search [webmasterworld.com]
If so, notice Import Export's comments at the end of that thread about tracking and A/B testing.
Added:
Concerning the aging delay discussions: I have a four-week-old webpage that's on the first page for a keyword with 100,000 search results. Wouldn't that contradict the aging delay theory (new sites not getting good results for competitive keywords for the first 9-12 months)?
Going by what I can gather from posts by members here, such exceptions seem to have come from webmasters with existing, older sites that are very heavily trafficked; if they link to new sites from those, the new sites can start out with heavy traffic from day one.
Wouldn't that be a reversed vicious circle
Only if Google engineers are really dense. If not, they could arrange to do things like only inspecting traffic for SERPs on page 2 or lower to see if those results might merit a boost. That could have the effect of pushing off page 1 results that are serving searchers poorly.
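Purely as a thought experiment, that kind of check might look something like the Python sketch below. Every name and threshold here is invented for illustration; nobody outside Google knows whether anything like this runs:

RESULTS_PER_PAGE = 10

def deep_click_candidates(serp_stats, min_impressions=1000, ctr_threshold=0.05):
    # serp_stats: list of (url, position, clicks, impressions) tuples
    # for a single query. Returns URLs sitting on page 2 or lower that
    # attract an unusually high share of clicks for so deep a position.
    candidates = []
    for url, position, clicks, impressions in serp_stats:
        if position <= RESULTS_PER_PAGE:
            continue  # leave page 1 alone, per the reasoning above
        if impressions < min_impressions:
            continue  # too little data to draw any conclusion
        if clicks / impressions > ctr_threshold:
            # Deep results normally see almost no clicks; searchers
            # hunting this one down may mean it deserves a second look.
            candidates.append(url)
    return candidates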
I have multiple times seen a URL go to the "plausibly viewable range" (e.g., pages 3-10) and then directly pop from there to page 1. My belief is that these tended to be examples where my URL was a pearl of detailed, complete information swimming in a sea of not-too-helpful-to-users competing pages. I suspect that when Google sees a statistically significant number of searchers going all the way to, say, page 5 to find the answer they need, some ranking significance is attached to that.