|What might be accidental "low-quality signals"? |
If people don't like it, they don't bookmark it, they don't print it out, they don't explore other parts of the website, and they don't come back. Those are examples of "low-quality signals". And I would bet that they cause a lot more ranking problems than any "accidental" signals.
|If people don't like it, they don't bookmark it, they don't print it out, they don't explore other parts of the website, and they don't come back. |
In each case: How can g### tell?
There have been long discussions about bounce rates (corollary to "explore other parts of the website"). If you've got a specific question, and the question was answered on the first page you got to, doesn't that mean the page is good and the search results were reliable?
If the user needs to go to another page, the search engine has no way of knowing whether it's because they are interested in the site - or simply that the anchor text sent out a stronger signal than the page that really has the information, so they started out on the wrong page.
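To make the ambiguity concrete, here is a toy sketch (with entirely made-up session data) showing why a conventional bounce-rate metric cannot tell a satisfied one-page visit apart from a dissatisfied one - both count as a bounce:

```python
# Hypothetical session data for illustration only: a "satisfied" bounce
# (question answered on page one) and a "gave up" bounce look identical
# to a metric that only counts single-page sessions.

sessions = [
    {"pages_viewed": 1, "found_answer": True},   # answered on the first page
    {"pages_viewed": 1, "found_answer": False},  # gave up immediately
    {"pages_viewed": 3, "found_answer": True},   # explored the site
]

# Conventional bounce rate: share of single-page sessions.
bounces = sum(1 for s in sessions if s["pages_viewed"] == 1)
bounce_rate = bounces / len(sessions)

print(f"Bounce rate: {bounce_rate:.0%}")  # → Bounce rate: 67%
```

Both of the first two sessions contribute equally to that 67%, which is exactly why bounce rate on its own is a poor quality signal.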
|In each case: How can g### tell? |
Chrome. Toolbar data. Clickstream data from their own free wi-fi. Clickstream data purchased from other ISPs. I'm sure there are more possibilities but these are off the top of my head.
Bing also talked about watching these kinds of user signals - including "does the visitor scroll".
Time on site may indicate a visitor's willingness to hang around. Longer articles, if well written, might help.
I noticed many Yahoo sports articles (baseball, drafted my fantasy team last week) recently started showing completely unrelated but currently popular videos at the very bottom of each page, perhaps in an effort to keep visitors on the page just a little longer?
I also noticed that they stopped using multi-page buttons and now display the article on a single page, with a lot of eye-catching unrelated links sprinkled in - possibly to help lower the bounce rate?
If anyone should know what search engines look for it would be the news section on a search engine site, right?
I'd like to think rankings were 100% visitor reaction/behavior driven, but they're not. In terms of SEO, however, what do spam sites have in common that I can actually avoid? Is there anything worth actively trying to fix, or should I just write good content?
Other potentially negative signals that I rarely hear mentioned could be:
1. Having an established RSS feed with few subscribers (Feedburner) or readers (Google Reader)
2. Sending out email newsletters (or any mass emails) that have a very low open rate (GMail)
I never thought about this.
For years, I have had 15 subscribers to my RSS feed.
Recently, I just got one more.
Should I toss it ASAP?
After reading your comment, I am leaning heavily in that direction.
Especially since it is not a "new stuff" feed - it is merely a directory of all 20 subcategories on my site.
In trying to keep an open mind, it appears that what I have perceived as a positive feature may, in fact, be a red flag waving in the breeze. I don't want that.
Well, this is just conjecture at the moment - certainly nothing is proven as far as I know. But a couple of thoughts do come to mind:
1. If you're not using Google's Feedburner service but only offering a direct RSS feed, I can't see how Google would have access to subscriber data.
2. If all you offer is a directory rather than a regular feed, then you do not really NEED that RSS feed anyway - you could easily offer it in another format.
My inclination would be to drop the RSS, because it doesn't sound like dropping it can do any harm in your situation. If by some chance dropping it seems to help, please share that information with us.
tedster, I always figured G did guesstimate subscriber data. The number of subscribers Google shows in /webmasters is completely different than the number of subscribers listed in Feedburner. And I mean they are off by a few thousand.
Pretty sure I've read a quote from MC stating they don't use these sorts of metrics in the algos.
I mean, on one of my sites, if the user bounces it means they've found what they're looking for. If they go back to the SERPs it might indicate that they haven't, but you have to remember that everyone's browsing habits are different - i.e. some users might open several results in tabs and then browse them one by one until they find what they want.
|the number of subscribers Google shows in /webmasters is completely different than the number of subscribers listed in Feedburner. And I mean they are off by a few thousand. |
Interesting - is the number you see in WMT higher than Feedburner? That would make sense, since there are certainly other ways to subscribe to a feed.
|Pretty sure I've read a quote from MC stating they don't use these sorts of metrics in the algos. |
If you mean "bounces" in terms of the conventional analytics measures, yes. However, a "fast click" where the visitor QUICKLY bounces back to the SERP for another choice is a different situation. Still, I agree that it is a very nuanced metric, whether it is currently in use or not. Yahoo used it for years, and apparently Bing does today. I'm pretty sure Google records it, whether it's currently in active use or not.
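The "fast click" idea can be sketched in a few lines. This is purely hypothetical - the function name, the categories, and the 30-second threshold are my own assumptions for illustration, not anything a search engine has disclosed:

```python
from typing import Optional

# Assumed threshold for a "fast click" back to the SERP; real systems
# (if they use this at all) do not publish their numbers.
FAST_CLICK_SECONDS = 30

def classify_click(clicked_at: float, returned_at: Optional[float]) -> str:
    """Classify a SERP click by how long the visitor stayed on the result
    before returning to the results page (times in seconds)."""
    if returned_at is None:
        return "no_return"        # stayed on the site or left elsewhere: neutral/positive
    dwell = returned_at - clicked_at
    if dwell < FAST_CLICK_SECONDS:
        return "fast_click"       # quick bounce back: possible dissatisfaction
    return "long_dwell_return"    # read for a while, then came back: ambiguous

print(classify_click(100.0, 105.0))  # fast_click
print(classify_click(100.0, 400.0))  # long_dwell_return
print(classify_click(100.0, None))   # no_return
```

The point of the three buckets is the one made above: only the quick return is a plausibly negative signal, while an ordinary bounce or a long-dwell return tells you very little on its own.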
Regarding Feedburner vs WMT -
Years ago, I tried for 2 weeks to get Feedburner to work - it never happened. So, I just put up the feed myself. Sixteen subscribers according to Google WMT. Wow.
On the other hand, I queried Google -
I get 99,000 results. Is this significant, in spite of what GWT reports?
I still don't know what to do.
Links to the feed? That's not actual subscribers, so I don't know what to make of it either.
Is the space you typed between "link:" and "www.mysite.com" a typo? There should be no space after the colon if you intend to use a special search operator.
No space produces zero results.
That's why I added the space.
Zero results is your answer - no one is linking to the URL for your RSS feed. When you add the space, you are running a regular query that includes the word [link].
There is all kinds of data available from third party sources, some of which Google may well access. Sites like Alexa, Compete, SpyFu and so on - even with all their inaccuracies - still provide a decent guesstimate of all types of user engagement. And a lot of what Google is using, IMO, is this kind of secondary indicator of what they are calling "quality".