
Google SEO News and Discussion Forum

From a Former Google Research Scientist
tedster
msg:4501266
 5:34 pm on Sep 28, 2012 (gmt 0)

Here's an interesting article from former Google research scientist Ashutosh Garg. He left Google in 2008 after more than four years on staff. I always appreciate a look at what Google is (or was) working on behind the scenes. In this case, it sounds like Ashutosh might know at least something about Panda.

A Mathematical Model for Assessing Page Quality

Behavioral score of page: How people perceive a page is a big indicator of the quality of the page. This can be measured by analyzing user behavior. Some of the factors that are traditionally used are:

1. Conversion score
2. Bounce rate
3. Number of page views
4. Number of repeat visitors to this page
5. How many people add products to cart after visiting this page?
6. Average amount of time that is spent on this page.

[stonetemple.com...]
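Purely as illustration, here's a minimal sketch of how behavioral metrics like the six above might be blended into a single score. The metric names, weights, and normalization ceilings are my own assumptions, not anything from the article or from Google:

```python
# Illustrative only: one way metrics like these might be combined
# into a single behavioral score. The metric names, weights, and
# normalization ceilings are assumptions, not anything Google-confirmed.

def behavioral_score(page: dict) -> float:
    """Weighted blend of normalized behavioral metrics, 0.0 to 1.0."""
    # (metric key, weight, value treated as "full marks")
    factors = [
        ("conversion_rate",  0.25, 0.10),    # 10% conversion -> 1.0
        ("page_views",       0.15, 10_000),
        ("repeat_visitors",  0.15, 1_000),
        ("add_to_cart_rate", 0.10, 0.20),
        ("avg_time_on_page", 0.15, 180.0),   # seconds
    ]
    score = sum(w * min(page.get(k, 0.0) / ceil, 1.0) for k, w, ceil in factors)
    # Bounce rate counts inversely: lower bounce, higher score.
    score += 0.20 * (1.0 - min(page.get("bounce_rate", 1.0), 1.0))
    return score

print(behavioral_score({
    "conversion_rate": 0.03, "bounce_rate": 0.55, "page_views": 4200,
    "repeat_visitors": 300, "add_to_cart_rate": 0.08, "avg_time_on_page": 95,
}))  # ~0.39 with these made-up numbers
```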

Still no clue as to where Google would get this data - especially since we now have a public statement that Google rankings do not use any data from the Chrome browser [webmasterworld.com].

 

Panthro
msg:4501270
 5:42 pm on Sep 28, 2012 (gmt 0)

Thanks for the juicy morsel.

Sand
msg:4501290
 6:42 pm on Sep 28, 2012 (gmt 0)

Has Google ever commented on whether or not they aggregate logged-in user data for ranking purposes?

SevenCubed
msg:4501296
 7:07 pm on Sep 28, 2012 (gmt 0)

Thanks for the link, tedster. I just skimmed it quickly but bookmarked it to return to. I didn't really find any "aha" stuff in there, though. What's written is already reflected in their public-facing SERPs, which leave a lot to be desired. Put another way: if their theories are so good, why are their results so bad?

A more sophisticated way to do this is to look at the number of people who bounce off a web site and click on a different search result for the same search query.


But that statement from within the article caught my eye. It's an indicator of one aspect of what's wrong with (much of) their logic. He puts emphasis on it being a "more sophisticated way"... yikes.

If I search for something important and find it in the first result I click, I still go back to the results to visit additional sites, backing up and confirming the first one I read. If, after reading the fifth result, I decide I don't need any more confirmation that the original one is valid, I don't click back. I just go directly back to the first one, which I still have open in another tab. And chances are the last site I visited was not the best choice. But the quoted statement above leads me to understand they would give more weight to the last site I visited because I didn't go back?
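To make the quoted "more sophisticated way" concrete, here's a rough sketch of how "bounced off one result, then clicked a different result for the same query" might be detected in SERP click logs. The log format, example domains, and 60-second threshold are invented for illustration; nothing here is a known Google implementation:

```python
# Illustrative sketch of pogo-stick detection from SERP click logs.
# The log format and 60-second threshold are invented assumptions.

from collections import defaultdict

# Each event: (user_id, query, clicked_url, timestamp_seconds)
clicks = [
    ("u1", "widget repair", "site-a.example", 0),
    ("u1", "widget repair", "site-b.example", 25),   # back within 25s: pogo
    ("u2", "widget repair", "site-a.example", 0),    # no further click: kept
]

def pogo_counts(events, threshold=60):
    """Count clicks followed by another click on the SAME query
    within `threshold` seconds -- a crude pogo-stick signal."""
    by_session = defaultdict(list)
    for user, query, url, ts in events:
        by_session[(user, query)].append((ts, url))
    pogo = defaultdict(int)
    for session in by_session.values():
        session.sort()
        for (t1, url1), (t2, _) in zip(session, session[1:]):
            if t2 - t1 <= threshold:
                pogo[url1] += 1   # user came back and chose another result
    return dict(pogo)

print(pogo_counts(clicks))  # {'site-a.example': 1}
```

Note that the confirmation-seeking behavior described above, clicking several results in a row for the same query, would count as a pogo-stick against every earlier result, which is exactly the miscounting being objected to.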

LostOne
msg:4501335
 8:36 pm on Sep 28, 2012 (gmt 0)

I've thought the same thing over and over, Seven. Some good tidbits there, tedster.


I can see how Panda could apply to this, though I'm not the best at wording it properly. We know Panda is about content farms. Another characteristic of a content farm not mentioned by Google (or at least I don't see any emphasis on it) is user interest, which is easily seen in time on page. But I feel sorry for those other four sites Seven gave as examples. One of them may provide the best material out there, but our dependence today on finding the general consensus is hurting that site.

Incidentally I don't open new windows, and I highly doubt the public does either. I keep clicking back and forth. I guess I'm a good example of Joe Public in this case.

Make any sense?

timwilliams
msg:4501390
 11:25 pm on Sep 28, 2012 (gmt 0)

As I've suspected before, they give a lot of thought to what ecommerce is doing: two of the six factors listed above apply only to ecommerce.

According to this thinking, price shoppers can have a heavy effect on a site's perceived quality.

SevenCubed
msg:4501392
 11:32 pm on Sep 28, 2012 (gmt 0)

Incidentally I don't open new windows, and I highly doubt the public does either. I keep clicking back and forth. I guess I'm a good example of Joe Public in this case.

Make any sense?


Sure, it makes sense; we all have our own habits for organizing ourselves.

tlainevool
msg:4501406
 12:54 am on Sep 29, 2012 (gmt 0)

I think it's important to remember this isn't an article about how Google does web search. It's just an article identifying some search issues. These could apply to a large ecommerce site doing its own internal search.

lucy24
msg:4501441
 3:00 am on Sep 29, 2012 (gmt 0)

What if you open a batch of search results in different tabs and then read them all without ever closing or re-opening the original search page? What information does g### collect? How do they know how long you spent on each page?

I would hate to think that any significant part of anyone's algorithm is based on 1998-vintage browsers that only let you have one window open at a time. You're reading either this page or that page; no other options.

Andem
msg:4501442
 3:05 am on Sep 29, 2012 (gmt 0)

Still no clue as to where Google would get this data - especially since we now have a public statement that Google rankings do not use any data from the Chrome browser


I believe that whoever wrote that post believes that Google doesn't use the data collected from Chrome... but in reality, I'm somewhat certain that they do, based on data that I have. I have a strong hunch that they also use data from Analytics.

tedster
msg:4501458
 5:01 am on Sep 29, 2012 (gmt 0)

The first time I heard anyone from a search engine talk about using click-backs to the SERP, it was Duane Forrester from Bing, not someone from Google. Every search engineer I've talked with at conferences has described click-backs as a noisy signal.

So there needs to be a lot more involved in extracting a strong signal from a click-back pattern. Time on page before the click-back takes place would be one possible factor; "did the user scroll the page before clicking back?" and other engagement signals are further candidates.
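To make that concrete, here is a hedged sketch of how click-backs might be bucketed by dwell time and scroll behavior before being trusted as a signal. The bucket boundaries are my own guesses, not published Google values:

```python
# Hypothetical dwell-time buckets for click-backs; the boundaries
# below are guesses for illustration, not published Google values.

def classify_clickback(dwell_seconds: float, scrolled: bool) -> str:
    """Label a click-back by how 'noisy' it likely is as a signal."""
    if dwell_seconds < 5:
        return "fast-bounce"          # strongest negative hint
    if dwell_seconds < 30 and not scrolled:
        return "weak-negative"        # short visit, no engagement
    if scrolled or dwell_seconds >= 120:
        return "likely-satisfied"     # engagement before returning
    return "ambiguous"                # too noisy to use on its own

for case in [(2, False), (20, False), (45, True), (60, False)]:
    print(case, "->", classify_clickback(*case))
```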

potentialgeek
msg:4501874
 2:26 am on Sep 30, 2012 (gmt 0)

I find the longer the page, the longer the average time on page.

I agree the signal can be noisy, but it could still be possible to get some decent relative data.

And any click back that happens within 1 second, consistently, isn't a good sign, but probably is a valid signal.

If I were programming the algo, I'd set the signal based on:

Avg. Time on Page/Page Length

You'd want to estimate the percentage of the page the user read. I'm assuming most users read an entire page if it's quality from top to bottom. I bounce back to Google as soon as I see junk/lose trust in the page.

I think you can cut down on the noise of the signal based on the ratio of time on page to page length. That way those sites that are concise aren't penalized for being concise.

I'm sure that if Google collected enough data for any set of results for any keyword, it could find a range of bounce averages, and anything outside that range could be considered a red flag, or subject to more scrutiny.
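A sketch of that ratio idea: normalize time on page by an estimate of how long the page takes to read, then flag pages that fall well outside the range for the keyword's cohort. The 200-words-per-minute reading speed, the example domains, and the half-the-median cutoff are my assumptions:

```python
# Sketch of the time-on-page / page-length idea. The 200-wpm reading
# speed and the half-the-median cutoff are assumptions, not facts.

from statistics import median

WORDS_PER_MINUTE = 200.0  # assumed average reading speed

def read_ratio(avg_time_s: float, word_count: int) -> float:
    """Estimated fraction of the page a visitor actually read."""
    expected_s = word_count / WORDS_PER_MINUTE * 60.0
    return avg_time_s / expected_s

def flag_outliers(cohort: dict) -> list:
    """Flag pages whose read ratio falls under half the cohort median --
    a crude stand-in for 'outside the normal range for this keyword'."""
    cutoff = 0.5 * median(cohort.values())
    return [url for url, ratio in cohort.items() if ratio < cutoff]

# Hypothetical pages ranking for the same keyword:
cohort = {
    "concise.example":  read_ratio(40, 150),    # short page, short visit
    "longform.example": read_ratio(300, 1200),  # long page, long visit
    "thin.example":     read_ratio(3, 900),     # long page, 3-second visit
}
print("flagged:", flag_outliers(cohort))  # flagged: ['thin.example']
```

Note that the concise page isn't flagged; normalizing by length is exactly what keeps short-but-good pages from looking like bounces.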

martinibuster
msg:4501912
 5:47 am on Sep 30, 2012 (gmt 0)

How people perceive a page is a big indicator of the quality of the page.


Is there truth to that observation? A low quality page with a smart web design, great use of fonts and graphics and an easy to scan layout can be perceived to be a high quality web page. The opposite can be perceived as a low quality web page, even though the content might be original, high quality and useful. How a web page is perceived owes a lot to superficial factors outside of actual quality.

If it's enough for the web page to be perceived as quality, such that a Google user is satisfied, then the factors determining what will rank relate to perceived user satisfaction, not quality. Every metric listed, including conversion score, does not necessarily relate to quality or actual satisfaction for the user.

Every metric quoted above does not necessarily indicate the quality of the page. It only indicates how well the web page was engineered to motivate a site visitor toward a specific action, in addition to other superficial factors outside of actual quality.

lucy24
msg:4501961
 11:17 am on Sep 30, 2012 (gmt 0)

A low quality page with a smart web design, great use of fonts and graphics and an easy to scan layout can be perceived to be a high quality web page. The opposite can be perceived as a low quality web page, even though the content might be original, high quality and useful.

Surely that's begging the question? Does quality of content = quality of page? Is HTML just a fancy word processor? Or is a page's quality made up of the totality of all its features-- including but not limited to text content?

Andy Langton
msg:4501975
 12:11 pm on Sep 30, 2012 (gmt 0)

To use a slightly over-worn analogy: if you went to a bookstore and asked for the 'best' book to help you learn flower arranging, you might expect an 'expert' answer of some kind, based on a deeper knowledge of the contents of the available books than might be expected of a layman.

Being offered a book on the basis that 'no-one has complained about this book' or 'this is the most popular' might be satisfactory, but lacks expertise. As far as search engines go, over-reliance on "layman's" opinions would be the makings of a mediocre search engine, not a brilliant one. But you wouldn't have a lot of complaints ;)

lucy24
msg:4501990
 1:33 pm on Sep 30, 2012 (gmt 0)

if you went to a bookstore and asked them for the 'best' book to help you learn flower-arranging

Or, then again, the bookstore might point you to a book by the world's leading authority on flower arranging, featuring the most thorough and best-researched text. Meanwhile there are other books with fewer facts and less detailed instructions-- but with clearer pictures, a better overall structure, copious diagrams and a more useful Index. All the things that Recognized Authorities might dismiss as irrelevant fluff if they start from the position that Only Text Matters.

indyank
msg:4502046
 4:26 pm on Sep 30, 2012 (gmt 0)

Every metric quoted above does not necessarily indicate the quality of the page.


But Google's algorithms are said to use a combination of those metrics in arriving at user engagement or page quality.

The_Fox
msg:4502745
 11:51 pm on Oct 1, 2012 (gmt 0)

Can they not get whatever data they want from Analytics?

1script
msg:4503079
 5:25 pm on Oct 2, 2012 (gmt 0)

@lucy24: your flower arranging book analogy resonates with something I've been pondering today. This past weekend I visited a flea market and bought a book about rhododendrons. It is a monumental volume of some 500+ pages, large format, beautifully printed and bound in the early 1970s, the way they never did since. It is clearly the author's life achievement and contains references to the results of 40+ years (!) of studies and experiments. It contains everything you ever wanted to know about rhododendrons and then 400+ more pages. My own interest in rhododendrons compared to what I can find in this book is only a passing fancy - I just wanted to know why mine are dying :) and if I can do something to prevent that.

Anyhow, the reason I bring it up is that I only paid $3 for the book. It has no catchy dust jacket (not lost; it just didn't have one, the way encyclopedias didn't need them back then), most of the illustrations are hand-drawn in black ink (very nicely), and all the photos in the book are black-and-white.

So, if you were in the business of "organizing the world's information", this would be the book you'd be showing to your customers (visitors). Your average modern consumer might be put off by B&W photos, hence UX parameters would not be very good - time with the book, number of pages opened, that sort of thing. But the information contained there is top notch; you would really want to show this book.

On the other hand, if you were Barnes & Noble and had a primary interest in selling books, you would not bring this type of book forward - you would first show a book with a glossy dust cover and a beautiful color photo, regardless of what's actually inside.

I guess the parallel to Google is that we're still trying to wrap our heads around why it ranks sites the way it does, thinking that they are still organizing the world's information. That is clearly no longer the main objective. They have a business to run and shareholders to report to.

As soon as you start picturing Google as B&N and not as a virtual Library of Congress, then it all makes sense:

* User experience rules! - feature what's already popular with users.
* Domain (host) crowding - sell more of what already sells well
* Trust sales (umm.. sorry, visits) data more than user reviews (if they buy more of what they hate most - who cares, sell more of that anyway)

So, anyway, Google has decided that they've collected enough of the world's information; now they are in the business of retailing access to it. Now it all makes sense to me...

bluntforce
msg:4503203
 10:03 pm on Oct 2, 2012 (gmt 0)

@The_Fox
Google has publicly stated they do not use data from Analytics to alter search results for individual sites.

Google does use cookies to personalize results for up to 180 days when a person is not logged in to Google. When a user is logged in, there is undoubtedly more data accessible to Google. Up to 30% of my search engine referrals do not have a keyphrase (logged-in users), so I'd guess Google does obtain a pretty good representative data sample from its users.

RP_Joe
msg:4506253
 2:12 am on Oct 10, 2012 (gmt 0)

@bluntforce, I have heard these denials from Google before. I have also heard they have denied using bounce. But as Tedster has pointed out here, they were using bounce. My eyes are a little more open to what's going on in the world today. Not everyone tells the truth. I find it very difficult to believe that Google is not going to use the wealth of data in Analytics to affect the search engine results. I find it extremely hard to believe that they will not use the information from Chrome in their index.

A friend of mine has a quote: "Don't pay attention to what they say; pay attention to what they do."

bluntforce
msg:4506278
 2:54 am on Oct 10, 2012 (gmt 0)

I'm fairly sure they don't use Analytics data to alter search results for individual sites. Of course they use aggregate data to explore potential algorithm changes.

Chrome needs to be a stand-alone product; it wouldn't make a lot of sense to have it "secretly" sending data. The potential backlash would destroy a product that now has a noticeable market share.

tedster
msg:4506284
 3:08 am on Oct 10, 2012 (gmt 0)

But as Tedster has pointed out here, they were using bounce.

More exactly, there's reason to think that Google uses a fast bounce metric (bouncing quickly back to the SERP). Anything more than that is my guesswork, and I'm sorry if I misled anyone into thinking it was a proven fact. Just the fast bounce, or click-back rate factoring in the time involved, doesn't require any Analytics leaking into the algorithm.

What we know for sure and what is our best guess are two different things.

I DO encourage SEOs to use a kind of adjusted bounce rate from their analytics, one that accounts for time-on-page as well as the bounce. Hold on to your traffic with content that clearly works for them - not just content that ranks. If you see evidence that content isn't working for your visitors, dig in and fix it.
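For anyone who wants to try that, a minimal sketch of an adjusted bounce rate computed from exported session data, rather than inside any analytics product. The session tuple format and the 15-second threshold are assumptions for illustration:

```python
# Sketch of an adjusted bounce rate: a single-page visit only counts
# as a bounce if the visitor also left quickly. The session tuples and
# 15-second threshold are assumptions, not a Google Analytics API.

def adjusted_bounce_rate(sessions, min_engaged_seconds=15):
    """sessions: list of (pages_viewed, time_on_site_seconds)."""
    bounces = sum(
        1 for pages, seconds in sessions
        if pages == 1 and seconds < min_engaged_seconds
    )
    return bounces / len(sessions)

sessions = [(1, 4), (1, 90), (3, 200), (1, 8), (2, 45)]
# Classic bounce rate would call 3 of 5 sessions bounces (60%);
# the adjusted version only counts the two fast exits (40%).
print(adjusted_bounce_rate(sessions))  # 0.4
```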

Martin Ice Web
msg:4506358
 9:13 am on Oct 10, 2012 (gmt 0)

How many people add products to cart after visiting this page?


How will they do it? I have a custom web shop I coded myself. I don't use addtobasket or anything like that! How will they know the click is not for an offer request? Or for the leaflet? Or is this all the same?

I think the algo is based on so many guesses that we don't have to wonder why the SERPs got this bad.

Martin Ice Web
msg:4506383
 10:08 am on Oct 10, 2012 (gmt 0)

Hold on to your traffic with content that clearly works for them - not just content that ranks. If you see evidence that content isn't working for your visitors, dig in and fix it.


Do I get this right: I should write content that Google likes, because Google thinks users will like it? What about making sites for users, not for Google?

diberry
msg:4507504
 4:15 pm on Oct 12, 2012 (gmt 0)

Martin Ice Web, I thought he meant write for your visitors - "content that clearly works for them" where "them" = your visitors.
