Every url crawled by a search engine is saved somewhere. It might get buried, unused, forgotten... but it is highly unlikely that it will ever actually be deleted. It lives in the "eternal index". On the bright side, this perhaps enables webmasters to use the Last-Modified header so that robots re-retrieve only freshly minted pages. There is a certain logic to this "eternal index": if a web site is down, or a page is temporarily unavailable, the search engine can still serve a result. On the dark side, there are the dreaded supplemental results, and that dark side is not restricted to google.
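For what it's worth, the Last-Modified angle is just HTTP's ordinary conditional GET: the robot sends an If-Modified-Since header, and the server answers 304 Not Modified when nothing has changed, so only fresh pages cost a full fetch. A minimal sketch in Python (the serve() function and its plumbing are my own illustration, not any particular server's API):

    import os
    from datetime import datetime, timezone
    from email.utils import formatdate, parsedate_to_datetime

    def serve(path, if_modified_since=None):
        # Report when the file last changed, in HTTP date format.
        mtime = os.path.getmtime(path)
        last_modified = formatdate(mtime, usegmt=True)
        if if_modified_since is not None:
            try:
                since = parsedate_to_datetime(if_modified_since)
                if datetime.fromtimestamp(mtime, timezone.utc) <= since:
                    # Unchanged since the robot's last visit: no body needed.
                    return 304, {"Last-Modified": last_modified}, b""
            except (TypeError, ValueError):
                pass  # unparseable date: fall through to a full response
        with open(path, "rb") as f:
            return 200, {"Last-Modified": last_modified}, f.read()

The point is simply that a 304 costs neither side the body of the page.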
If a search engine is instructed not to index a url, that url won't be included in the "regular index". If it is instructed not to cache a url, it won't show a "cached page" link for it.
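Those two instructions are ordinary robots meta tags in the page head. A toy helper, sticking with Python (the helper itself is hypothetical; the tag syntax is the standard one):

    def robots_meta(index=True, archive=True):
        # Build the robots meta tag for a page's <head>.
        directives = []
        if not index:
            directives.append("noindex")    # keep the url out of the regular index
        if not archive:
            directives.append("noarchive")  # suppress the "cached page" link
        if not directives:
            return ""  # no tag needed: indexing and caching are the defaults
        return '<meta name="robots" content="%s">' % ",".join(directives)

    print(robots_meta(index=False, archive=False))
    # <meta name="robots" content="noindex,noarchive">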
Any crawled url that later ceases to exist, or ceases to have any link pointing to it, may be removed from the "regular index". Changing a page's url allows duplicates to "exist", and serving both the www and non-www hostnames allows duplicates to "exist" as well.
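One common cure for the www/non-www duplicate is a permanent redirect to a single hostname, so only one copy of each url ever gets crawled. A rough WSGI sketch, again in Python, with www.example.com standing in for your canonical host:

    def canonical_host(app, canonical="www.example.com"):
        # WSGI middleware: 301 every other hostname to the canonical one.
        def wrapper(environ, start_response):
            host = environ.get("HTTP_HOST", "")
            if host and host != canonical:
                location = "http://%s%s" % (canonical, environ.get("PATH_INFO", "/"))
                start_response("301 Moved Permanently", [("Location", location)])
                return [b""]
            return app(environ, start_response)
        return wrapper

A 301 (rather than a 302) tells the crawler the move is permanent, so the duplicate should eventually drop out instead of lingering.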
Any web page that becomes buried, forgotten, duplicated, or sufficiently defective for any reason is a candidate to enter "supplemental results". If a page has a doppelganger in "supplemental results", both will fall in. A search engine would always prefer, if possible, to answer a query from the "regular index" rather than delve into that deep "eternal index".
Some web pages have inexplicably emerged from "supplemental hell" and others have inexplicably fallen in.
Some suggestions have been:
be happy that you have any results
adopt orphaned web pages
remove duplicate content
META noindex tag
META noarchive tag
make META description tags unique
start using google sitemap.xml (a minimal generator sketch follows this list)
stop using google sitemap.xml
use removal tool
don't use removal tool
get a new domain
don't get a new domain
never make a mistake that might become a supplemental result
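On the sitemap suggestion above: the file itself is trivial to generate. A minimal sketch (the urls and dates are made up for illustration; the namespace shown is the sitemaps.org schema):

    from xml.sax.saxutils import escape

    def sitemap_xml(entries):
        # entries: list of (url, lastmod) pairs -> minimal sitemap.xml text
        lines = ['<?xml version="1.0" encoding="UTF-8"?>',
                 '<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">']
        for loc, lastmod in entries:
            lines.append("  <url><loc>%s</loc><lastmod>%s</lastmod></url>"
                         % (escape(loc), lastmod))
        lines.append("</urlset>")
        return "\n".join(lines)

    print(sitemap_xml([("http://www.example.com/", "2006-01-15")]))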
Consensus on what works:
GoogleGuy on Supplemental Results [webmasterworld.com]
Play it again Sam. LOL
I don't understand this.
We have a competitor whose site was last cached by Google in mid-January. MSN and Yahoo have updated the cached date.
But our competitor is ranking higher than ever for the most important single words. In fact they are number one for most of them in Scandinavia.
They are absolutely not buried or forgotten.
What you say suggests that the old domain records should be classed in a "supplemental" cache, and that the same (duplicate) pages served from my new domain would then get cast into the pit of supplemental oblivion.
But Google is currently showing 373 pages for my domain, more than it ever has before.
Am I misunderstanding this issue?