Most results have something like this as their last line:
"www.yyy.org/blah/blah.html - 15k - Cached - Similar pages"
But the last result has:
"www.yyy.org/blah/blah2.html - 7k - Supplemental Result - Cached - Similar pages"
Anyone else seen this before?
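(Side note for anyone scripting their own SERP checks: here's a minimal Python sketch, assuming the results really do follow the plain-text layout quoted above. The marker text is just what appears on the page, not any documented format.)

# Minimal sketch: flag the "Supplemental Result" marker in result lines,
# assuming the plain-text layout quoted above (not a documented format).
def is_supplemental(result_line):
    return "Supplemental Result" in result_line

for line in [
    "www.yyy.org/blah/blah.html - 15k - Cached - Similar pages",
    "www.yyy.org/blah/blah2.html - 7k - Supplemental Result - Cached - Similar pages",
]:
    print("SUPPLEMENTAL" if is_supplemental(line) else "normal", "-", line)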
This is purely a guess, but could this be a page that was found by freshbot but hasn't been fully incorporated into the Google databases yet?
(I'd be happy to sticky the particulars of the search)
Google augments results for difficult queries by searching a supplemental collection of more web pages. Results from this index are marked in green as "Supplemental."
So, now there are two kinds of web pages: normal, and other pages that form a 'supplemental collection'?
Dcheney, was your query 'difficult'?
Let's hope GG drops by to shine some light...
* It's an experimental feature
* It augments search results for hard-to-answer queries, by searching a supplemental collection of web pages -- in addition to its main index of 3.3 billion web pages. Results from this index are marked "Supplemental" because they originate from a separate, experimental index that is only used to answer the most obscure and infrequent queries.
* I've [Gary Price] asked a few follow-up questions including: how do they determine what goes in this index and what is an obscure query? Stay tuned.
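To make the mechanics concrete, here is a toy Python sketch of the two-index fallback described above. The trigger condition ("too few main-index hits") is pure guesswork on my part; Google hasn't said what makes a query "obscure", so treat every name and threshold here as an assumption.

# Toy sketch of the two-index fallback described above. The trigger
# condition is an assumption, not anything Google has confirmed.
MAIN_INDEX = {"common query": ["page1.html", "page2.html"]}
SUPPLEMENTAL_INDEX = {"obscure query": ["old-page.html"]}

def search(query, min_results=5):
    # (url, is_supplemental) pairs; the flag would drive the green label
    hits = [(url, False) for url in MAIN_INDEX.get(query, [])]
    if len(hits) < min_results:  # query looks "obscure": do the extra work
        hits += [(url, True) for url in SUPPLEMENTAL_INDEX.get(query, [])]
    return hits

print(search("obscure query"))  # -> [('old-page.html', True)]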
I was looking for information on AltaVista's Enterprise search - all the supplemental results are pages that are gone, and now redirect to FAST's site.
It could be that there are some dead pages that Google has decided are important enough to keep indexed, so we can access the cached copy, and/or visit where they used to be.
1) On our recent post about Google size estimates being off, a Google spokesperson tells us what we for the most part already knew: "We looked at your queries and the bottom line is that Google's estimator is an estimate, not an exact number...we're working on making it more accurate." Like I said a few days ago, those of you who use Google page estimates as a way of determining popularity need to be very careful.
This was his recent post [resourceshelf.com]; check Example 2!
But he did not investigate deeply enough, as I posted here [webmasterworld.com], and got no answer at all.
Here are all the links and numbers [multiforum.info] (in French, but it is mostly digits and links)!
I searched for: google "supplemental result"
The only result has the text next to it.
I've never seen it before, even though you have pointed out in the interpret section that it looks like it has been there for a while.
Google must be doing something if all of a sudden this many people have noticed this change.
Hope that helps,
As a user, how am I supposed to interpret these results? Are they less relevant? Or is Google telling me "your query returned 5 pages, and because it's such a difficult query we'll show you another page you might want to try"?
As a SE watcher, I wonder how G determines whether a page should go into the supplemental collection or not?
Sorry for the dead link in msg # 9. It pointed to an earlier posting with the same title as this one, and was deleted by an admin.
Thanks for the reply. I suppose it's a trade secret, but you did skip the rather obvious question: what criteria determine whether a spidered page is placed into the main index vs. the supplemental index?
Obviously one generally wouldn't want their content placed into the second index, since it will never be returned unless it's an "obscure" search.
For a site like my own it's not as big a deal, since the majority of successful searches are for specific, and often fairly obscure, proper names.
Think of this as icing on the cake. If there's an obscure search, we're willing to do extra work with this new experimental feature to turn up more results. The net outcome is more search results for people doing power searches.
My guess: whenever Google has some spare spider power, and time up their sleeves, they go find some extra pages at big important sites (like blogging sites). The data is supplementary because they never know when these pages might be indexed again, if ever. They know the data is old, so they don't include it in the main results, unless it looks like it could help.
I thought Google's goal was to index the entire web. If they do that then any query can be answered, there's no need for a "supplemental" index or some strange idea of an "obscure" query.
If I search for "snark jubjub -boojum" and there's only one match, so be it. As long as the page is indexed, I don't care if Google thinks it's "obscure".
With my user hat on I can't think of any good reason for differentiating between the main and supplemental index. The Google algo is designed to put "most relevant" answers at the top regardless.
So if it's not for the users' benefit it must be for Google's own benefit.
Sounds like an excuse for hiving off a chunk of the main index. Is this another pointer to capacity problems and people trying to invent ways round them?
I have also seen examples that lead me to believe that supplemental results are orphaned pages that no longer have any backlinks.
As for still-valid orphaned pages, I think this is a good thing. But for 301 or 404 pages, I don't see any value to the user - those pages should disappear forever, as soon as possible.
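If you want to test that theory yourself, here's a quick sketch (Python standard library; the URL is a made-up placeholder) that sorts a hand-collected list of supplemental URLs by raw HTTP status. It deliberately does not follow redirects, so a 301 shows up as a 301:

import http.client
from urllib.parse import urlparse

def status_of(url):
    # http.client does not follow redirects, so a 301 is reported as a 301
    parts = urlparse(url)
    conn = http.client.HTTPConnection(parts.netloc, timeout=10)
    conn.request("HEAD", parts.path or "/")
    return conn.getresponse().status

for url in ["http://www.example.com/old-page.html"]:  # hypothetical URL
    code = status_of(url)
    label = {200: "still valid", 301: "redirected", 404: "gone"}.get(code, "other")
    print(code, label, url)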
You can imagine a lot of useful criteria, including that we saw a url during the main crawl but didn't have a chance to crawl it when we first saw it.
This doesn't make sense to me. It implies that supplemental results are fresher, but it seems to me that they're more stale. Plus, lately I've seen new pages being added to the main index in a matter of days. So if a page misses out on being indexed, why would it not just wait a few more days to be indexed? That's the whole point of fresh listings.
Hi valeyard, and welcome to WebmasterWorld! The supplemental results are above and beyond the pages that we already search. So we're not taking away any docs--in fact, we're searching even more docs than before.