| 3:32 am on Feb 26, 2010 (gmt 0)|
|I suspect Google doesn't actually calculate page N + 1 until you request it |
You may be right. They may have created a preliminary data-set, but not filled it out.
This reminds me of some funny stuff I've been noticing -- doing site: operator queries plus a keyword. I'll see "about 180 pages" at the top, but at the bottom there are only two pages to choose from (10 results per page). Somewhere in the process they already knew that there were only 17 results (in this case) that they were going to serve. But the top of the page didn't get the message.
| 4:10 am on Feb 26, 2010 (gmt 0)|
|But the top of the page didn't get the message. |
Interesting thought, tedster. I've seen what you mention. It's interesting to think about the page being constructed on the fly from data pulled from different places, and ending up out of sync.
| 4:33 am on Feb 26, 2010 (gmt 0)|
|I'll see "about 180 pages" at the top, but at the bottom there are only two pages to choose from (10 results per page)... there were only 17 results (in this case) ... |
I haven't looked lately, but when this happens, is it off by an order of magnitude? It could be as simple as the estimate being off by a 0 somewhere, and it's not important enough to go find out where the 0 is being added to the top of the page. That would mean the initial estimate was 18, and the actual number of results they put together when you requested the second set was 17... only off by 1.
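To make the "extra 0" idea concrete, here's a purely hypothetical sketch (none of this is Google's actual code, and the numbers are the ones from this thread): if the displayed estimate were built by string concatenation rather than arithmetic, a stray character would turn 18 into 180, while the paginator at the bottom works from the real result count.

```python
# Hypothetical sketch of the "stray 0" glitch (not Google's real code):
# a formatting bug appends "0" to the estimate shown at the top of the
# page, while the page links at the bottom use the real count.
estimate = 18                                   # honest initial estimate
header = "about " + str(estimate) + "0 pages"   # buggy concatenation: "about 180 pages"

actual_results = 17                             # what the second request actually built
pages_shown = -(-actual_results // 10)          # ceiling of 17 / 10 = 2 pages of links

print(header)       # about 180 pages
print(pages_shown)  # 2
```

The point is just that the header and the paginator can come from different code paths, so one can be wrong while the other is right.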
I'm sure I've noticed something along those lines on a site or two I've coded and thought, 'Yeah, I'll track that down in the 3,000 lines of PHP I'm working on right now instead of waiting until everything is getting updated or the important stuff is finished... NOT!' LOL.
Really, been there, done that and the funny thing is those glitches always seem like the least important part to me, but always seem to be the first thing people ask about...
Them: 'Hey, why isn't that little part right?'
Me: (Shaking my head thinking to myself it takes 3000 lines of code to make this page happen and it's faster than most static sites and you really think it's important there's an extra 0 there for some reason? I know why they invented preg_replace() ... ) 'Oh, I'll get that in a minute...'
Anyway, just thought I'd share some of the fun of these little glitches from the side of the people writing the code. It might not be why the count is off, but it could be...
[edited by: TheMadScientist at 4:54 am (utc) on Feb 26, 2010]
| 4:38 am on Feb 26, 2010 (gmt 0)|
I'll keep the order of magnitude question in mind while I work - it certainly was in today's case. Good question!
| 6:32 am on Feb 26, 2010 (gmt 0)|
This is filed in my brain as very old knowledge, so I don't remember why I think this, nor do I know if it is still applicable. But my belief has been that when Google counts far more matching pages than are displayed, and fewer than the standard 900-plus are shown, it is because pages containing multiple occurrences of a query match were pulled more than once from the index during the retrieval process. The duplicates get sorted out during the ranking process, leaving fewer pages to be displayed in the SERPs.
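Here's a tiny sketch of that idea with made-up data (purely illustrative; this is not how Google's index actually works): if a page is pulled once per matching index entry, the raw hit count overstates the final deduplicated list.

```python
# Illustrative only: "raw_hits" stands in for pages pulled from the index,
# where a page matching the query in several places is pulled several times.
raw_hits = ["pageA", "pageB", "pageA", "pageC", "pageA", "pageB"]
estimated_count = len(raw_hits)     # the inflated number reported at the top: 6

# Deduplication during "ranking" keeps the first occurrence of each page.
seen = set()
deduped = [p for p in raw_hits if not (p in seen or seen.add(p))]
actual_count = len(deduped)         # what actually gets served: 3

print(estimated_count, actual_count)  # 6 3
```

Under this guess, the top-of-page count is computed before dedup and the served list after, which would produce exactly the mismatch described earlier in the thread.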
| 7:44 am on Feb 26, 2010 (gmt 0)|
I always use the 100 results per page option; I turn it on on every computer I put my hands on.
| 8:49 am on Feb 26, 2010 (gmt 0)|
|I always use the 100 results per page option; I turn it on in every computer I put my hands on. |
Those who use this option for rank checking should keep in mind how Google clusters. If you have, say, a #1 result and a #100 result for a query, with 100 results per page Google will cluster them as #1 and #2, which might give you a false sense of confidence. ;)
| 3:18 pm on Feb 26, 2010 (gmt 0)|
Surely it is only SEOs and spammers who go looking beyond page 4?
Uncommon, but I do see visitors who have gone through 10 or 15 pages.
|I'll see "about 180 pages" at the top, but at the bottom there are only two pages to choose from (10 results per page). |
I haven't noticed this for a while, but it always used to confuse me.
| 4:17 pm on Feb 26, 2010 (gmt 0)|
|I'd say the "average" user (if there is such a thing) probably doesn't go past page 3. |
When I'm looking for a certain file or have a difficult programming question about older languages, I'll go through tons of pages.
| 5:34 pm on Feb 26, 2010 (gmt 0)|
I never go past page 2 of the SERPs in web search, whether I'm checking keyword rank or a common query. But in image search I can go up to page 5-10.
One thing I find rather strange is the total results count, like the example searches below:
a.) kw1 kw2 kw3 -- total results: more than 100 million.
b.) kw1 kw2 -- total results: around 30 million.
Isn't b.) supposed to give more results than a.)?
| 6:06 pm on Feb 26, 2010 (gmt 0)|
hasimsg, yes - I've been noticing that more in recent weeks, too. Or you add a + sign before a word and the total results go up.
It's some artifact of how the SERP is constructed - possibly that some SERPs (shorter phrases that use no advanced operators) are partially cached - that is, the identifiers for the individual URLs to be included are pre-cached. The recent appearance of more of these discrepancies signals some kind of shift in Google's technical approach, but the specifics are not clear.
| 8:24 pm on Feb 26, 2010 (gmt 0)|
In the case of: kw1 kw2 kw3
the page #1 result also shows the keywords in reverse order (kw3 kw2 kw1). I guess that's why the total results ran to so many pages.
I noticed that behavior on the three-keyword SERP because those words happen to have meaning even in reverse order.