If you have a Webmaster Tools account, you can see that Google is keeping track of what amounts to "impressions" -- what search terms are returning your domain's URLs on the SERP, whether they are clicked on or not.
So what about a URL that never gets tapped for any SERP at all, over some period of time - its dance card is empty. Could having too many of these in a domain be a negative algorithm factor?
Also I've got to say thanks to the GWT team - it is nice to get the information.
To me, the more interesting question would be, "If they are using this information, where are they using it?"
They could certainly use that information on specific popular searches. They could even use it for less popular searches within the same theme. Would it make sense for them to use it in the very long tail, you know, those searches that are the equivalent of supplemental pages?
And now that you have brought up the possibility, get ready for the posts of those concerned that this one factor is suddenly *the* important factor when it comes to their ranking. They are going to start going to libraries and searching and clicking on their results to try and boost their positions.
Yeah, Google is smart, but if one thinks about it... what would be the next step? How would one take advantage of this situation? The next step is to "get as many clicks on your listing as possible"! Sounds very familiar, but nobody is doing this!
Example: Ask any SEO if it's a good idea to put "Click here for ..." in the title of the page?!
I am sure in a couple of years the "rules" of getting to the top of free SERPs will be almost the same as getting to the top of AdWords...
not really, tedster. they only see those hitting the back button
Guess I wasn't clear enough, sorry. What I'm talking about has nothing to do with clicks or measuring traffic in any way. It's just whether a certain URL shows up at some position or other in a real search result - in other words, "organic impressions", no matter if that result gets clicked on or not.
That kind of data definitely does not require a GWT account for Google to measure it. It only requires a GWT account for you to get that kind of feedback.
If you have a lot of URLs in the index that are never tapped for any search at all -- and say that the number of such URLs is far beyond the mean for all websites -- then is that poor showing a negative reflection on your domain as a whole? I've never heard anyone anywhere talk about this factor. But it's clear as a bell that Google records it, or else they could not give a GWT report on it. Whether (and how) they use that data is another story.
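The "empty dance card" measure can be sketched in a few lines. To be clear, this is pure speculation: the function names, the data shape, and the mean/tolerance numbers are all invented for illustration, and nothing here reflects how Google actually scores anything.

```python
# Hypothetical sketch: flag a domain whose share of zero-impression
# URLs sits far above an assumed cross-site mean. All thresholds are
# invented; this is an illustration of the idea, not Google's method.

def zero_impression_ratio(impressions_by_url):
    """Fraction of indexed URLs that never appeared in any SERP."""
    total = len(impressions_by_url)
    if total == 0:
        return 0.0
    unseen = sum(1 for count in impressions_by_url.values() if count == 0)
    return unseen / total

def looks_anomalous(impressions_by_url, site_mean=0.2, tolerance=2.0):
    """True if this site's ratio exceeds `tolerance` times the assumed mean."""
    return zero_impression_ratio(impressions_by_url) > site_mean * tolerance

site = {"/": 1200, "/widgets": 340, "/old-page-1": 0, "/old-page-2": 0}
print(zero_impression_ratio(site))  # 0.5
```

Whether a ratio like that would count against a domain, or just mark it for a closer look, is exactly the open question of the thread.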
Also, in one session, Matt made it clear that ownership of other domains that are involved in something highly dodgy might raise at least a question about an apparently clean domain owned by the same person. In fact, he read out some other domain names to one site owner and basically asked what they were doing with those domains -- it was a kind of warning that many in the room took note of.
If you were to place a page about a topic on the web and provide links to it, it may take some time for a search request to come along for which the S/E would even consider the page worthy of being included in a SERP.
If, however, the page should appear in a commonly searched-for SERP, ranked high, and then doesn't get clicked, you'd have something to think about if you were the S/E.
Likewise, if people still click on the page's link even though it appears in position 734, the S/E might wish to do a bit of thinking on the matter.
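Both of those anomalies -- a high-ranked listing nobody clicks, and a deep listing that still draws clicks -- amount to comparing the observed click-through rate against what you'd expect for the position. A speculative sketch, with an entirely invented expected-CTR curve and cutoffs:

```python
# Toy anomaly detector for the two cases described above. The CTR
# curve and the factor-of-5 bands are made-up numbers, not real data.

def expected_ctr(position):
    # Crude invented curve: CTR decays quickly with SERP position.
    return max(0.3 / position, 0.001)

def anomaly(position, impressions, clicks):
    """Return a label if observed CTR is far from the expected curve."""
    if impressions == 0:
        return None
    ctr = clicks / impressions
    exp = expected_ctr(position)
    if ctr < exp / 5:
        return "high rank, ignored"
    if ctr > exp * 5:
        return "low rank, still clicked"
    return None

print(anomaly(1, 1000, 5))     # high rank, ignored
print(anomaly(734, 100, 10))   # low rank, still clicked
```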
Now I'm going back to the sidelines to watch folks go insane participating in the datacenter watch threads.
Matt made it clear that ownership of other domains that are involved in something highly dodgy might raise at least a question about an apparently clean domain owned by the same person.
This implies to me Google is definitely using domain registration data, since IP isn't enough to determine domain ownership, though you can go a long way I suppose by analyzing IPs and linking patterns.
Right now Google is ranking my site high for elements of the url that have nothing to do with the content of the site. Needless to say no-one is clicking on my site in such search results. On the other hand people are clicking on my pages for certain relevant keywords, even though they rank way down for them.
But imagine how complex it must be to analyse all this data for billions of pages and searches, and then adjust the algorithm to improve SERPs.
If your SERP-less pages reached a certain threshold as a percentage of total pages, then you could also see how it would suggest some pages are over-optimized and the site as a whole is less useful for the user.
Analyze a little click data from GWT and all of a sudden that site gets a penalty.
Then take into account that G knows your other domains, and they happen to deal with many unrelated topics, and those sites may, or may not, have a similar "empty dance card" page percentage (a similar footprint); then a whole network of sites gets weighted.
This is all obviously gross extrapolation, but there is a strange logic to the whole thing.
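Continuing the gross extrapolation in code: if several domains tied to one owner show a similar zero-impression percentage, you could call that a shared footprint. The spread threshold and the sample data below are entirely made up.

```python
# Hypothetical network-footprint check: do a set of domains share
# a suspiciously similar ratio of never-impressed pages? The 0.05
# spread is an arbitrary assumption for illustration only.

def footprints_similar(ratios, spread=0.05):
    """True if all sites' zero-impression ratios fall within `spread`."""
    ratios = list(ratios)
    return max(ratios) - min(ratios) <= spread

network = {"siteA.com": 0.62, "siteB.net": 0.60, "siteC.org": 0.64}
print(footprints_similar(network.values()))  # True
```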
I see what you're getting at Ted, interesting way of looking at it.
Gee Ted, maybe it's all about identifying a site's footprint. ;)
"an increasing number of site-wide measures" or words to that effect.
I never understood the argument that "Google ranks pages, not sites" based solely on the fact that PR is based on pages. There are so many quality indicators that only make sense on a site wide basis.
I still have problems with that aspect of what appears to go on from time to time.
But imagine how complex it must be to analyse all this data
I think there is no doubt that Google has ways to look at a lot more than single pages. The question is, are they doing it on a widespread basis, or just when something alerts them to possible problems with individual pages or sites?
I would expect a site to have a fairly large serpless-vs-serped footprint if it is new, or even if it's old and has added a lot of new "stuff".
Exactly. Now identify "an increasing number of site-wide measures" and think of:
- page age (maybe even number of significant changes)
- domain age (to that owner)
- site-wide ranking and clickthroughs
I am sure others could be added.
You can see how a profile emerges, and the value (to G) of adding more site/domain signals.
If a disproportionate number of pages on your site are unserped and unvisited, then G could use other factors to decide whether that means your site is less or more relevant.
Ah, but you see, being relevant doesn't require that you be an old, well-linked, or even technically clean site.
That is but one aspect of the never ending battle that is currently being fought.
However, those automagic aspects of shelving pages in file 13 or putting them on page 1, slot 1 can't automagically take such information into account.
Even detailed click data could prove subject to the same manipulation as other aspects of the world wide wobbly -- or is it still wait?
Very true, but if you are old and well linked, then the majority of the time you would be seen as relevant. G is just playing the odds, using site-wide factors to possibly increase their odds of identifying relevant sites/pages.
Like I say automagic, is automagic, and nothing about automagic implies correctness or anything else.
Oh don't get me wrong jatar_k, I understand proxy indicators, and a whole lot more.
However I came from a world where the very last $0.01 was accounted for, all of the time, every time.
Many of us have noticed how the longer our sites rank well for long tail searches in a theme, the better our ranking on the more general searches.
So what if, rather than using the data as tedster postulates, they are using your historical ranking in more specific searches to adjust your ranking in less specific searches.
Spending some months ranking for "fuzzy green widgets" will then help you for searches on "fuzzy widgets" and "green widgets". Once those crawl to the top and stay there for a few months, "widgets" will start working its way up in the SERPs.
This does happen; the question is, is this the reason that it happens, and does it affect other things as well?
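The postulated mechanism -- sustained rankings on more specific phrases feeding broader ones -- can be sketched as a toy scoring function. Everything here (the phrase-containment test, the month cap, the data) is conjecture layered on conjecture, not a known algorithm.

```python
# Toy sketch of long-tail rankings feeding a broader head term:
# phrases that strictly contain the broad query's words contribute
# points proportional to how long they have ranked, capped at a year.

def long_tail_points(history, broad_query, cap=12):
    """history: {phrase: months_ranked}. Sum capped months over all
    phrases strictly more specific than the broad query."""
    broad = set(broad_query.split())
    return sum(
        min(months, cap)
        for phrase, months in history.items()
        if broad < set(phrase.split())  # strict superset = more specific
    )

history = {"fuzzy green widgets": 6, "green widgets": 3, "blue gadgets": 9}
print(long_tail_points(history, "widgets"))  # 9
```

Note that "blue gadgets" contributes nothing, matching the thread's observation that the effect stays within a theme.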
I think you are seeing that over time Google understands that your content actually covers various aspects of a topic.
I've also noticed that there is frequently a lag between ranking (even slightly) on the most common phrases and beginning to pick up a lot of long-tail coverage, followed at last by higher rankings on the more common phrases. But whether the cause is in fact the long tail, or just the amount of time it takes to fully calculate the effects of all the ranking factors, I can't say.
A lot of folks will say that they checked their rankings and nothing changed except traffic levels, some later on have remarked that eventually traffic levels returned.
It acts like everything is following a periodic waveform.
Quick click-backs for another SERP choice would be a URL specific signal of low-quality, but I'm just pondering what else there might be.
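That quick click-back idea is easy to sketch: a click that returns to the SERP within a few seconds counts against the URL. The 5-second cutoff and the event format below are arbitrary assumptions, and there is no claim Google computes anything like this.

```python
# Hypothetical quick-click-back rate per URL. An event is
# (url, dwell_seconds); dwell of None means the user never
# came back to the SERP. The 5-second cutoff is invented.

def pogo_rate(events, cutoff_seconds=5):
    """Map each URL to its fraction of clicks that bounced quickly."""
    by_url = {}
    for url, dwell in events:
        clicks, quick = by_url.get(url, (0, 0))
        is_quick = dwell is not None and dwell < cutoff_seconds
        by_url[url] = (clicks + 1, quick + (1 if is_quick else 0))
    return {url: quick / clicks for url, (clicks, quick) in by_url.items()}

events = [("/a", 2), ("/a", None), ("/b", 120)]
print(pogo_rate(events))  # {'/a': 0.5, '/b': 0.0}
```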