Thanks for the juicy morsel.
Has Google ever commented on whether or not they aggregate logged-in user data for ranking purposes?
Thanks for the link, tedster. I just skimmed it quickly but bookmarked it to return to. I didn't really find any aha moments in there, though. What's written is already reflected in their public-facing SERPs, which leave a lot to be desired. Or, to put it another way: if their theories are so good, why are their results so bad?
|A more sophisticated way to do this is to look at the number of people who bounce off a web site and then click on a different search result for the same search query. |
But that statement from within the article caught my eye. It's an indicator of one aspect of what's wrong with (much of) their logic. He puts emphasis on it being a "more sophisticated way"... yikes.
If I search for something important and find it on the first result I click, I still back-click to the results to visit additional sites, backing up and confirming the first one I read. If, after reading the fifth result, I decide I don't need any more confirmation that the original one is valid, I don't click back. I just go directly to the first one, which I still have open in another tab. And chances are the last site I visited was not the best choice. But the quoted statement above leads me to understand they would give more weight to the last site I visited because I didn't go back?
I've thought the same thing over and over, Seven. Some good tidbits there, tedster.
I can see how Panda could apply to this, though I'm not the best at wording it properly. We know Panda is about content farms. Another characteristic of a content farm that Google doesn't mention (or at least I don't see any emphasis on it) is the level of interest, easily seen in time on page. But I feel sorry for those other four sites in Seven's example. One of them may provide the best material out there, but our dependence today on finding the general consensus is hurting that site.
Incidentally I don't open new windows, and I highly doubt the public does either. I keep clicking back and forth. I guess I'm a good example of Joe Public in this case.
Make any sense?
As I've suspected before, they put a lot of thought into what ecommerce is doing; two of the six metrics listed above apply only to ecommerce.
According to this thinking price shoppers can have a heavy effect on a site's perceived quality.
|Incidentally I don't open new windows, and I highly doubt the public does either. I keep clicking back and forth. I guess I'm a good example of Joe Public in this case. |
Make any sense?
Sure it makes sense, we all have our own habits for organizing ourselves.
I think it's important to remember this isn't an article about how Google does web search. It's just an article identifying some search issues. These could apply to a large ecommerce site doing its own internal search.
What if you open a batch of search results in different tabs and then read them all without ever closing or re-opening the original search page? What information does g### collect? How do they know how long you spent on each page?
I would hate to think that any significant part of anyone's algorithm is based on 1998-vintage browsers that only let you have one window open at a time. You're reading either this page or that page; no other options.
|Still no clue as to where Google would get this data - especially since we now have a public statement that Google rankings do not use any data from the Chrome browser |
I believe that whoever wrote that post believes that Google doesn't use the data collected from Chrome... but in reality, I'm somewhat certain that they do based on data that I have. I have a strong hunch that they also use data from Analytics.
The first time I heard anyone from a search engine talk about using click-backs to the SERP, it was Duane Forrester from Bing, not someone from Google. Every search engineer I've talked with at conferences has described click-backs as a noisy signal.
So there needs to be a lot more involved in extracting a strong signal from a click-back pattern - time on page before the click-back takes place would be a possible factor. Other possible signals might be "did the user scroll the page before clicking back?" and other potential engagement signals.
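Purely as an illustration of the kind of combination being described, here is a minimal sketch that treats a click-back as a negative vote only when it is both fast and scroll-free. The function name, threshold, and logic are all invented assumptions, not anything a search engine has confirmed.

```python
# Illustrative only: combining dwell time and scrolling into a less
# noisy click-back judgment. The 10-second threshold is an assumption.

def clickback_looks_bad(dwell_s, scrolled, threshold_s=10):
    """Treat a click-back as a negative vote only when the visitor
    left fast AND never scrolled; anything else is ambiguous noise."""
    return dwell_s < threshold_s and not scrolled

print(clickback_looks_bad(3, False))   # True  - fast bounce, no scroll
print(clickback_looks_bad(3, True))    # False - scrolled, maybe found the answer
print(clickback_looks_bad(45, False))  # False - dwelled for a while
```

The point of requiring both conditions is exactly the noise reduction tedster describes: any single signal on its own misclassifies too many ordinary visits.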
I find the longer the page, the longer the average time on page.
I agree the signal can be noisy, but it could still be possible to get some decent relative data.
And any click back that happens within 1 second, consistently, isn't a good sign, but probably is a valid signal.
If I were programming the algo, I'd set the signal based on:
Avg. Time on Page/Page Length
You'd want to estimate the percentage of the page the user read. I'm assuming most users read an entire page if it's quality from top to bottom. I bounce back to Google as soon as I see junk/lose trust in the page.
I think you can cut down on the noise of the signal based on the ratio of time on page to page length. That way those sites that are concise aren't penalized for being concise.
I'm sure if Google collected enough data for any set of results for any keyword, it could find a range of bounce averages and anything outside that could be considered a red flag, or subject to more scrutiny.
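The ratio idea above can be sketched in a few lines. Everything here is a hypothetical reading of this thread's suggestion - the reading speed, the outlier band, and the scoring are invented numbers, not a known algorithm.

```python
# Hypothetical sketch of the time-on-page / page-length signal discussed
# above. All constants and names are illustrative assumptions.

def engagement_score(time_on_page_s, word_count, words_per_second=4.0):
    """Ratio of observed dwell time to estimated reading time.

    Normalizing by page length means a concise page isn't
    penalized for producing short visits.
    """
    estimated_read_s = max(word_count / words_per_second, 1.0)
    return time_on_page_s / estimated_read_s

def flag_outliers(scores, low=0.25, high=4.0):
    """Flag pages whose score falls outside a plausible band.

    A consistent near-instant click-back (score near 0) is the
    'within 1 second' red flag mentioned above.
    """
    return [s < low or s > high for s in scores]

visits = [(5, 1200), (240, 1000), (90, 400)]  # (seconds, words)
scores = [engagement_score(t, w) for t, w in visits]
print(flag_outliers(scores))  # [True, False, False] - only the 5s visit is flagged
```

Note how the 90-second visit to a short 400-word page scores fine, which is the "don't penalize concise sites" property the ratio is meant to provide.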
|How people perceive a page is a big indicator of the quality of the page. |
Is there truth to that observation? A low quality page with a smart web design, great use of fonts and graphics and an easy to scan layout can be perceived to be a high quality web page. The opposite can be perceived as a low quality web page, even though the content might be original, high quality and useful. How a web page is perceived owes a lot to superficial factors outside of actual quality.
If it's enough for the web page to be perceived as quality, with the result that a Google user is satisfied, then the result is that the factors determining what will rank relate to perceived user satisfaction, not quality. Every metric listed, including conversion score, does not necessarily relate to quality or actual satisfaction for the user.
None of the metrics quoted above necessarily indicates the quality of the page. They only indicate how well the web page was engineered to motivate a site visitor toward a specific action, in addition to other superficial factors outside of actual quality.
|A low quality page with a smart web design, great use of fonts and graphics and an easy to scan layout can be perceived to be a high quality web page. The opposite can be perceived as a low quality web page, even though the content might be original, high quality and useful. |
Surely that's begging the question? Does quality of content = quality of page? Is HTML just a fancy word processor? Or is a page's quality made up of the totality of all its features-- including but not limited to text content?
To use a slightly over-worn analogy, if you went to a bookstore and asked them for the 'best' book to help you learn flower-arranging, you may have an expectation that they would be providing an 'expert' answer of some kind, based on a deeper knowledge of the contents of the available books than might be expected of a layman.
Being offered a book on the basis that 'no-one has complained about this book' or 'this is the most popular' might be satisfactory, but lacks expertise. As far as search engines go, over-reliance on "layman's" opinions would be the makings of a mediocre search engine, not a brilliant one. But you wouldn't have a lot of complaints ;)
|if you went to a bookstore and asked them for the 'best' book to help you learn flower-arranging |
Or, then again, the bookstore might point you to a book by the world's leading authority on flower arranging, featuring the most thorough and best-researched text. Meanwhile there are other books with fewer facts and less detailed instructions-- but with clearer pictures, a better overall structure, copious diagrams and a more useful Index. All the things that Recognized Authorities might dismiss as irrelevant fluff if they start from the position that Only Text Matters.
|Every metric quoted above does not necessarily indicate the quality of the page. |
But Google's algorithms are said to use a combination of those metrics in arriving at user engagement or page quality.
Can they not get whatever data they want from Analytics?
@lucy24: your flower arranging book analogy resonates with something I've been pondering today. This past weekend I visited a flea market and bought a book about rhododendrons. It is a monumental volume of some 500+ pages, large format, beautifully printed and bound in the early 1970s, the way they never did it since. It is clearly the author's life achievement and contains references to the results of 40+ years (!) of studies and experiments. It contains everything you ever wanted to know about rhododendrons, and then 400+ more pages. My own interest in rhododendrons, compared to what I can find in this book, is only a passing fancy - I just wanted to know why mine are dying :) and whether I can do something to prevent that.
Anyhow, the reason I bring it up is that I only paid $3 for the book. It has no catchy dust jacket (not lost - it just never had one, the way encyclopaedias didn't need them back then), most of the illustrations are hand-drawn in black ink (very nicely), and all the photos in the book are black-and-white.
So, if you were in the business of "organizing the world's information", this would be the book you'd be showing to your customers (visitors). Your average modern consumer might be put off by the B&W photos, so the UX metrics would not be very good - time with the book, number of pages opened, that sort of thing. But the information contained there is top notch; you would really want to show this book.
On the other hand, if you were Barnes & Noble and had a primary interest in selling books, you would not bring this type of book forward - you would first show a book with a glossy dust cover and a beautiful color photo, regardless of what's actually inside.
I guess, the parallel to Google is that we're still trying to wrap our heads around why it ranks sites the way it does thinking that they are still organizing the world's information. This is clearly no longer the main objective. They have a business to run and shareholders to report to.
As soon as you start picturing Google as B&N and not as a virtual Library of Congress, then it all makes sense:
* User experience rules! - feature what's already popular with users.
* Domain (host) crowding - sell more of what already sells well
* Trust sales (umm.. sorry, visits) data more than user reviews (if they buy more of what they hate most - who cares, sell more of that anyway)
So, anyway, Google has decided that they've collected enough of the world's information; now they are in the business of retailing access to it. Now it all makes sense to me...
Google has publicly stated they do not use data from Analytics to alter search results for individual sites.
@bluntforce, I have heard these denials from Google before. I have also heard them deny using bounce. But as tedster has pointed out here, they were using bounce. My eyes are a little more open to what's going on in the world today. Not everyone tells the truth. I find it very difficult to believe that Google is not going to use the wealth of data in Analytics to affect the search engine results. I find it extremely hard to believe that they will not use the information from Chrome in their index.
A friend of mine has a quote "don't pay attention to what they say, pay attention to what they do."
I'm fairly sure they don't use Analytics data to alter search results for individual sites. Of course they use aggregate data to explore potential algorithm changes.
Chrome needs to be a stand-alone product; it wouldn't make a lot of sense to have it "secretly" sending data. The potential backlash would destroy a product that now has a noticeable market share.
|But as Tedster has pointed out here, they were using bounce. |
More exactly, there's reason to think that Google uses a fast-bounce metric (bouncing quickly back to the SERP). Anything more than that is my guesswork, and I'm sorry if I misled anyone into thinking it was a proven fact. Just the fast bounce, or the click-back rate factoring in the time involved, doesn't require any Analytics data leaking into the algorithm.
What we know for sure and what is our best guess are two different things.
I DO encourage SEOs to use a kind of adjusted bounce rate from their analytics, one that accounts for time-on-page as well as the bounce. Hold on to your traffic with content that clearly works for them - not just content that ranks. If you see evidence that content isn't working for your visitors, dig in and fix it.
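One simple way to read tedster's suggestion: count a session as a bounce only if it was a single-page visit *and* the visitor left quickly. This sketch is my own interpretation; the 30-second threshold and the session shape are assumptions, not anything from an analytics vendor.

```python
# Sketch of an "adjusted" bounce rate: a single-page visit only counts
# as a bounce if the visitor also left quickly. Threshold is an assumption.

def adjusted_bounce_rate(sessions, min_dwell_s=30):
    """sessions: list of (pages_viewed, time_on_page_seconds) tuples."""
    if not sessions:
        return 0.0
    bounces = sum(1 for pages, dwell in sessions
                  if pages == 1 and dwell < min_dwell_s)
    return bounces / len(sessions)

sessions = [(1, 5), (1, 120), (3, 40), (1, 10)]
print(adjusted_bounce_rate(sessions))  # 0.5 - only the two fast single-page visits count
```

Under a classic bounce definition this sample would show 75% (three single-page visits); the adjusted version credits the visitor who read one page for two minutes.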
|Martin Ice Web|
|How many people add products to cart after visiting this page? |
How will they do it? I have a custom webshop I coded myself. I don't use addtobasket or anything like that! How will they know the click is not for an offer request? Or for the leaflet?
Or is this all the same?
I think the algo is based on so many guesses that we don't have to wonder why the SERPs got this bad.
|Martin Ice Web|
|Hold on to your traffic with content that clearly works for them - not just content that ranks. If you see evidence that content isn't working for your visitors, dig in and fix it. |
Do I get this right: I should write content that Google likes, because Google thinks users will like it?
What about building sites for users, not for Google?
Martin Ice Web, I thought he meant write for your visitors - "content that clearly works for them", where "them" = your visitors.