Optimal Bounce Rates Different for Long Tail (is there a bounce floor)?
5:46 pm on Aug 2, 2015 (gmt 0)

Full Member from US 

5+ Year Member Top Contributors Of The Month

joined:Oct 9, 2009
posts:301
votes: 6


Okay, I don't have any pages that AREN'T targeting the long tail - just some more than others. And I don't have any objective way of determining the degree of long-tailishness of a search term, just my own intuition.

But given that, I've noticed that pages that are MORE long tail and rank well for their terms tend to have a low bounce rate, and as traffic increases, that bounce rate climbs, topping out at maybe double its original level.

The well-ranking pages that are LESS long tail have always had a much higher bounce rate, even though they cover more in-depth topics and would intuitively bounce less. Even in earlier data, both pre- and post-original Panda, when their traffic was lower and comparable to that of the more long-tail pages, their bounce rate was still twice what my very long tail pages managed at their best.

Generally, I'd expect bounce rate to increase with traffic, but not for long-tailishness to be a factor. Is it?

Are my results specific to my pages, or part of an overall trend?

Is it typical for the long tail searches to have a lower bounce rate than the popular searches, all told?

For pages ranking highly, how likely is it that one can significantly increase long-tail search traffic while maintaining, or even lowering, the bounce rate?

What other conclusions might be drawn?

A possibly relevant data point in my case is that these less-long-tail pages are not on sites I manage, but on larger UGC sites.
6:48 pm on Aug 13, 2015 (gmt 0)

Moderator This Forum from US 

WebmasterWorld Administrator robert_charlton is a WebmasterWorld Top Contributor of All Time 10+ Year Member Top Contributors Of The Month

joined:Nov 11, 2000
posts:11902
votes: 294


Moved from another forum. Kicking this up.
8:55 pm on Aug 13, 2015 (gmt 0)

Senior Member from US 

WebmasterWorld Senior Member themadscientist is a WebmasterWorld Top Contributor of All Time 5+ Year Member Top Contributors Of The Month

joined:Apr 14, 2008
posts:2910
votes: 62


Bounce rate as a standalone (or even webmaster-usable) stat isn't indicative of much -- not really enough to even worry about, imo. Coupled with "time on page" it says a bit more, but bounce rate in isolation isn't something to manage.

I've had pages with a 90%+ bounce rate sit in the top 5 for years (until I let the site go). That sounds outrageous to someone who thinks bounce rate is a standalone factor, but they don't know the page had an average visit time of 6+ minutes unless I tell them. So I know people found what they were looking for, which was instructions on how to do something. The fact that the visitor found it on the page they landed on isn't a "negative"; it actually means I did my job, the search engine(s) did their job, and the visitor found what they were looking for on the page in the results -- win, win, win.

Beyond that, I can easily get a bounce rate of very close to 0 by opening another page via AJAX when someone lands on a page, with my stat keeping firing on both the landing page and the page I tell their browser to open asynchronously (a rough sketch of the trick is at the end of this post). It's that simple. So trying to manage a number like that is a bit pointless, imo, because the "real information search engines could use" is more along the lines of:

Time Results Were Generated & Shown to User.
Time User Clicked on a Link.
Did User Return to the Results or Search Again?

If They Returned or Searched Again:
Time Between User Left Results and User Returned to Results or Searched Again.
Average Returns to Results and "Same Action User Took" for the Query or Closely Related Queries from Other Sites.

If User Returned to Results:
What Did They Do After Returning?
Search Again, Close Page, Leave Page "Blurred" in an Open Tab, etc.

If User Searched Again:
What Did They Search for?
The Same Thing, Something Completely Different, Something Tangentially Related, etc.

And on, and on, and on...

You won't ever know anything about what happened before or after a bounce, so all you know is "when they landed on your site and how long they were on the page". That means trying to manage "the bounce-rate number", rather than concentrating on giving the visitor what they're looking for and a good experience, isn't really doing anything except giving you a "feel good" about a number that in isolation is very close to useless, imo.
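For what it's worth, here's a minimal sketch of that AJAX trick, assuming a hypothetical first-party analytics endpoint at /collect (the endpoint and payload are made up; the point is that any second asynchronous hit per visit defeats single-hit bounce counting):

// TypeScript, runs in the browser on page load
function sendHit(page: string): void {
  // navigator.sendBeacon posts asynchronously without blocking the page
  navigator.sendBeacon("/collect", JSON.stringify({ page, ts: Date.now() }));
}

// 1. Record the real landing page...
sendHit(window.location.pathname);
// 2. ...then immediately record a second, "virtual" page, so this visit
//    is never counted as a single-hit bounce.
sendHit(window.location.pathname + "#virtual");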

[edited by: TheMadScientist at 9:00 pm (utc) on Aug 13, 2015]

8:59 pm on Aug 13, 2015 (gmt 0)

Senior Member

WebmasterWorld Senior Member aristotle is a WebmasterWorld Top Contributor of All Time 5+ Year Member Top Contributors Of The Month

joined:Aug 4, 2008
posts:3225
votes: 228


Most likely the biggest factor in determining bounce rate is how well-matched the traffic is.

Virtually all pages get some mismatched traffic, but how much depends on the search terms that people use when looking for info about the topic in question. Generally it's easier for a search engine to find pages that match a longer, more specific term, whereas a shorter, less-specific term doesn't pin it down as precisely.

Another factor is how much weight the search engine gives to relevance. Google used to give more weight to relevance than it does now, so overall there's more mismatched traffic now.
9:47 am on Aug 14, 2015 (gmt 0)

Senior Member

WebmasterWorld Senior Member 5+ Year Member

joined:Mar 9, 2010
posts:1806
votes: 9


Time Results Were Generated & Shown to User.
Time User Clicked on a Link.
Did User Return to the Results or Search Again?

If They Returned or Searched Again:
Time Between User Left Results and User Returned to Results or Searched Again.
Average Returns to Results and "Same Action User Took" for the Query or Closely Related Queries from Other Sites.

If User Returned to Results:
What Did They Do After Returning?
Search Again, Close Page, Leave Page "Blurred" in an Open Tab, etc.

If User Searched Again:
What Did They Search for?
The Same Thing, Something Completely Different, Something Tangentially Related, etc.

And on, and on, and on...


@Madscientist, I have been hearing about these signals for ages, but do you strongly believe that Google's search system keeps track of that information and uses it in its algos?
2:30 pm on Aug 14, 2015 (gmt 0)

Senior Member from US 

WebmasterWorld Senior Member themadscientist is a WebmasterWorld Top Contributor of All Time 5+ Year Member Top Contributors Of The Month

joined:Apr 14, 2008
posts:2910
votes: 62


Keep track of? Absolutely.

Between queries made, personalization & query revision (changing SERPs based on previous searches to predict the next desired result), and a "block results from [blah.com]" link in the results after a fast click-back, they have to track it all. Besides, they keep every bit of data they can find on visitors, even when the visitor is not on their site.

It's also not as much to track as it sounds like:
Search query === We know they do -- Goes without saying
What a visitor clicked === We know they do -- Have to for personalization
Time between click and returning to the results === We know they do -- Block results from link for "quick visits"

The rest is just processing those 3 data points for a large group of people/queries.
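To make "processing those 3 data points" concrete, here's a toy TypeScript sketch of turning a click log into a per-query, per-URL fast click-back rate. The record shape and the 10-second threshold are my own assumptions for illustration, not anything Google has published:

interface ClickRecord {
  query: string;       // what was searched
  url: string;         // what the visitor clicked
  clickedAt: number;   // ms timestamp of the click
  returnedAt?: number; // ms timestamp of a return to the results, if any
}

const SHORT_CLICK_MS = 10_000; // assumed threshold for a "fast click-back"

function shortClickRates(log: ClickRecord[]): Map<string, number> {
  const clicks = new Map<string, number>();
  const shorts = new Map<string, number>();
  for (const r of log) {
    const key = `${r.query} :: ${r.url}`;
    clicks.set(key, (clicks.get(key) ?? 0) + 1);
    if (r.returnedAt !== undefined && r.returnedAt - r.clickedAt < SHORT_CLICK_MS) {
      shorts.set(key, (shorts.get(key) ?? 0) + 1);
    }
  }
  // fraction of clicks that bounced straight back, per (query, URL)
  const rates = new Map<string, number>();
  for (const [key, n] of clicks) {
    rates.set(key, (shorts.get(key) ?? 0) / n);
  }
  return rates;
}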

As far as using it goes, how much they do, and who they use it for and from, is another question that depends on what parsing/processing the data says. Personally, I think I'd look first at data from logged-in, known "regular searchers" rather than "one-off" searchers as a starting point for applying it to the results shown, because using data from logged-in, known searchers cuts down on the noise and on the influence of bots pretending to click a link to influence rankings.
3:35 pm on Aug 15, 2015 (gmt 0)

Senior Member

WebmasterWorld Senior Member aristotle is a WebmasterWorld Top Contributor of All Time 5+ Year Member Top Contributors Of The Month

joined:Aug 4, 2008
posts:3225
votes: 228


Mad Scientist --
I don't see how bounce rate can ever be a reliable ranking factor in Google's algorithm, because in some cases a high bounce rate could be Google's own fault. As I mentioned earlier, if Google sends a lot of mismatched traffic to a site, that alone will produce a high bounce rate -- and that would be Google's fault for sending so much mismatched traffic.
5:51 pm on Aug 15, 2015 (gmt 0)

Senior Member from US 

WebmasterWorld Senior Member themadscientist is a WebmasterWorld Top Contributor of All Time 5+ Year Member Top Contributors Of The Month

joined:Apr 14, 2008
posts:2910
votes: 62


In the case of mismatched traffic, as Google, you would want to remove the mismatched page from the results it's showing in, so that would be a very good use for the data if you could get that indication from it. That's not even "punishing" the page or the site it's on; it's just making the results better for the searcher by removing something that shouldn't have been there in the first place. Plus, the page in question only loses traffic that didn't want to see/find it anyway, so there's no real loss there.
7:12 pm on Aug 15, 2015 (gmt 0)

Senior Member

WebmasterWorld Senior Member aristotle is a WebmasterWorld Top Contributor of All Time 5+ Year Member Top Contributors Of The Month

joined:Aug 4, 2008
posts:3225
votes: 228


Well, Google doesn't have any way to know whether a high bounce rate is caused by mismatched traffic or by some other reason. That's why I said that bounce rate can never be a reliable ranking factor.
7:55 pm on Aug 15, 2015 (gmt 0)

Senior Member from US 

WebmasterWorld Senior Member themadscientist is a WebmasterWorld Top Contributor of All Time 5+ Year Member Top Contributors Of The Month

joined:Apr 14, 2008
posts:2910
votes: 62


Use it as a standalone factor? You're correct.

Use it in any way or combined with other factors?
You're incorrect, but I don't feel like arguing beyond saying that, so I'll let martinibuster do it for me -- please see his posts in that thread for the position on bounce rate being used: [webmasterworld.com...]
8:56 pm on Aug 15, 2015 (gmt 0)

Senior Member

WebmasterWorld Senior Member aristotle is a WebmasterWorld Top Contributor of All Time 5+ Year Member Top Contributors Of The Month

joined:Aug 4, 2008
posts:3225
votes: 228


Well, Mad Scientist -- I looked at the thread you referenced but didn't see any mention of mismatched traffic. And as for being "incorrect": if you don't fully understand something yourself, then you shouldn't try to make that judgement about someone else.
9:08 pm on Aug 15, 2015 (gmt 0)

Senior Member from US 

WebmasterWorld Senior Member themadscientist is a WebmasterWorld Top Contributor of All Time 5+ Year Member Top Contributors Of The Month

joined:Apr 14, 2008
posts:2910
votes: 62


I do understand how it could be used, and how I would use it if I had the data, but the explanation is too long to bother with. So you, and everyone else for that matter, can think "mismatched traffic" means bounce rate combined with other variables cannot be used, but you'd be wrong. Anyone who reads martinibuster's posts can easily conclude it's likely coupled with other factors and in use to some extent, "mismatched traffic" or not.

As far as mismatched traffic goes: when you're playing a game of "whack-a-mole" or "one of these things is not like the other" with 1,000,000,000,000+ URLs that could be shown for a query, plus the behavior tied to those URLs when they're shown in the results, you don't worry about whether a user's behavior is due to a mismatched term, a cluttered page, pop-up windows/divs/overlays, etc. You make decisions on patterns of behavior across the URLs in a specific result set, and when the pattern for one URL says "this URL doesn't 'fit' the query/result set like the rest do", you remove it. How you get to "this URL does not fit the query" again takes too long to explain, and I'm fairly certain it wouldn't matter if I took the time anyway, because your mind is already closed on the subject, so it's pointless to try further, imo.
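Purely as an illustration of the "doesn't fit like the rest" idea (the metric and cutoff are invented, and this is certainly not Google's actual method): take some behavior metric for every URL in one result set -- say the fast click-back rate from the earlier sketch -- and flag any URL whose behavior sits far outside its peers:

// flag URLs whose metric is more than zCutoff standard deviations
// above the mean for this result set
function outlierUrls(rates: Map<string, number>, zCutoff = 2.0): string[] {
  const values = [...rates.values()];
  if (values.length === 0) return [];
  const mean = values.reduce((a, b) => a + b, 0) / values.length;
  const sd = Math.sqrt(
    values.reduce((a, b) => a + (b - mean) ** 2, 0) / values.length
  );
  if (sd === 0) return []; // all URLs behave alike; nothing sticks out
  const flagged: string[] = [];
  for (const [url, rate] of rates) {
    if ((rate - mean) / sd > zCutoff) flagged.push(url);
  }
  return flagged;
}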
5:33 am on Aug 16, 2015 (gmt 0)

Moderator from US 

WebmasterWorld Administrator martinibuster is a WebmasterWorld Top Contributor of All Time 10+ Year Member Top Contributors Of The Month

joined:Apr 13, 2002
posts:14557
votes: 370


I think this point needs to be investigated:

Generally it's easier for a search engine to find pages that match a longer, more specific term...


My understanding is that a ranking algorithm tries to understand the various meanings of a phrase, relate that to what a searcher is trying to accomplish (education, research, a purchase, researching a purchase, finding a phone number), and then match it to a site that is likely to satisfy that goal. Sometimes there are multiple meanings because of differences in user intent -- the motivation behind the search, what the user is trying to accomplish. This is calculated by studying user interactions with the search engine itself. The algorithm is then trained to replicate these successful outcomes.

But what happens if there is less data to draw from? I think this is what happens with some long-tail phrases. The algorithm defaults to simple pattern matching -- matching keywords -- because it can't categorize the query. It could also be the fault of the user for not properly articulating the query. That's something else to think about.
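To illustrate that fallback only (the names, intents, and threshold here are invented, not Google's code): use intent-based ranking when a query has enough interaction data behind it, and drop back to plain keyword matching when it doesn't:

type Intent = "educate" | "research" | "purchase" | "find-contact";

const MIN_OBSERVATIONS = 100; // assumed data-sufficiency threshold

// choose a ranking mode from how much interaction data exists for the query
function rankingMode(intentCounts: Map<Intent, number> | undefined): "intent" | "keyword" {
  if (!intentCounts) return "keyword";
  let total = 0;
  for (const n of intentCounts.values()) total += n;
  return total >= MIN_OBSERVATIONS ? "intent" : "keyword";
}

// the fallback: naive keyword overlap between the query and the page text
function keywordScore(query: string, pageText: string): number {
  const terms = new Set(query.toLowerCase().split(/\s+/));
  const words = pageText.toLowerCase().split(/\s+/);
  let hits = 0;
  for (const w of words) if (terms.has(w)) hits++;
  return hits / Math.max(words.length, 1);
}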
12:09 pm on Aug 16, 2015 (gmt 0)

Senior Member

WebmasterWorld Senior Member aristotle is a WebmasterWorld Top Contributor of All Time 5+ Year Member Top Contributors Of The Month

joined:Aug 4, 2008
posts:3225
votes: 228


As far as I know, none of us here has full knowledge and understanding of exactly how Google uses the data it collects.

And oftentimes a person who says that someone else is "incorrect" actually reveals their own lack of understanding.
6:59 pm on Aug 16, 2015 (gmt 0)

Senior Member from US 

WebmasterWorld Senior Member themadscientist is a WebmasterWorld Top Contributor of All Time 5+ Year Member Top Contributors Of The Month

joined:Apr 14, 2008
posts:2910
votes: 62


// if "oftentimes" isn't "always", and implying a thing is the same as
// saying it outright, then this is the pot calling the kettle black
if ($oftentimes != $always && $using_other_words + $implications === $doing_the_same_thing - $just_saying_it) {
    $black = $pot + $kettle;
    $double_standard_applied = TRUE;
    $worth_discussing = FALSE;
    exit;
}
9:59 pm on Aug 16, 2015 (gmt 0)

Moderator from US 

WebmasterWorld Administrator martinibuster is a WebmasterWorld Top Contributor of All Time 10+ Year Member Top Contributors Of The Month

joined:Apr 13, 2002
posts:14557
votes: 370


The Google Algorithm Black Box is a myth. What Google does is taught in universities around the world. It's called Information Retrieval.

The problem is a lack of understanding that comes from relying too much on what Illyes and Cutts have said. A self-described SEO guru I won't name actually compiled everything Mueller etc. have said about Panda and called it leaked info. That's naive and ignorant beyond belief.

The Google Algo is not mysterious. The black box is a myth.

[edited by: Robert_Charlton at 1:18 am (utc) on Aug 17, 2015]
[edit reason] edited per poster request [/edit]

6:54 am on Aug 17, 2015 (gmt 0)

Senior Member

WebmasterWorld Senior Member 5+ Year Member

joined:Mar 9, 2010
posts:1806
votes: 9


What a visitor clicked === We know they do -- Have to for personalization
Time between click and returning to the results === We know they do -- Block results from link for "quick visits"


Hmm... they might also be using cookies to do the things quoted above, and they might not have to persist that data in their DBs unless they're processing/cooking it for ranking purposes...

But yes, the block-result action, if taken, might well be persisted, since it actually confirms that the user doesn't like that page/site; when there are several such users, the page/site might start to perform badly...
10:47 am on Aug 17, 2015 (gmt 0)

Moderator from US 

WebmasterWorld Administrator martinibuster is a WebmasterWorld Top Contributor of All Time 10+ Year Member Top Contributors Of The Month

joined:Apr 13, 2002
posts:14557
votes: 370


Indy, good post. Now take the data from millions of such poor experiences and identify characteristics of those sites (layout, grammar, word count, etc.), and you have the ability to predict the likelihood of a site not satisfying user intent. Google doesn't have to wait for a user to click a SERP; a machine can be trained to predict these outcomes.
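A toy sketch of that kind of prediction, with invented features and weights standing in for whatever a real system would learn from millions of labeled sessions:

interface PageFeatures {
  wordCount: number;        // total words on the page
  adDensity: number;        // fraction of the viewport taken by ads (0..1)
  readabilityScore: number; // e.g. 0..100, higher is easier to read
}

// weights as if fitted by logistic regression on past poor experiences
const W = { bias: -1.0, wordCount: -0.0005, adDensity: 3.0, readability: -0.02 };

// probability the page fails to satisfy user intent, no SERP click needed
function pDissatisfied(f: PageFeatures): number {
  const z =
    W.bias +
    W.wordCount * f.wordCount +
    W.adDensity * f.adDensity +
    W.readability * f.readabilityScore;
  return 1 / (1 + Math.exp(-z)); // logistic link
}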
11:54 am on Aug 17, 2015 (gmt 0)

Preferred Member

Top Contributors Of The Month

joined:Sept 12, 2014
posts:380
votes: 65


Why does any of this matter? After all, if a page is relevant, accurate, and helpful to the user and still has a high bounce rate, then your options are limited to removal, reworking, or doing nothing. Why would you remove a page that is helpful? And if a page is already good, anything you add would just be fluff.

Too many sites have developed a strategy of making the user search a page, and even click through to other pages, to find what they're looking for, all in an effort to control the bounce rate. The end result is a site the user avoids in the future because of this effort.

Of course, if the bounce rate is high because of page quality, you have more important issues to deal with.
8:15 pm on Aug 17, 2015 (gmt 0)

Full Member

10+ Year Member Top Contributors Of The Month

joined:June 3, 2005
posts:298
votes: 12


What I believe Google could easily know is this: if the majority of searchers view the number 1 site, "bounce" off it quickly (short dwell time), and then find the information at the number 2 site, then the number 2 site is better.

Many surfers sign up to give Google this data and much more (bounce rate, dwell time, etc.) via their Google account and Chrome (but not Google Analytics, for obvious reasons).