All my previous sites have been indexed within a couple of days by building backlinks from different older sites, social bookmarking, submitting a sitemap, and so on; you know the drill.
The last site of mine, created two weeks ago, still doesn't return any results in Google for site:example.com. When I log into Webmaster Tools, the sitemap report shows that almost all of my URLs (25) have been submitted, but next to indexed URLs is one big 0.
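For anyone who wants to double-check the "submitted" count themselves, it just comes from the entries in your sitemap file. Here is a minimal sketch that counts the <loc> entries in a sitemap document; the sample XML is invented for illustration, not the poster's real file:

```python
# Count the <url><loc> entries in a sitemap to compare against the
# "submitted URLs" figure Webmaster Tools reports.
import xml.etree.ElementTree as ET

SITEMAP_NS = "{http://www.sitemaps.org/schemas/sitemap/0.9}"

def count_sitemap_urls(xml_text: str) -> int:
    """Return the number of <url><loc> entries in a sitemap document."""
    root = ET.fromstring(xml_text)
    return len(root.findall(f"{SITEMAP_NS}url/{SITEMAP_NS}loc"))

# A made-up two-URL sitemap, just to show the shape of the file.
sample = """<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url><loc>http://example.com/</loc></url>
  <url><loc>http://example.com/about</loc></url>
</urlset>"""

print(count_sitemap_urls(sample))  # 2
```

Submitted is only what Google has been handed, though; indexed is what it has chosen to serve, which is exactly the gap being described.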
Anyone else experiencing the same thing?
then the SERPs in the top 10 would be really shuffling around
Indexing a page (a visit by the bot) is not the same as returning it in the SERPs.
Google not indexing new sites as it used to
Sorry for being a semantic 'nut' on this, but Google's results are the index.
Users can always search our full index, but sometimes we can serve up even fresher pages as an extra nicety. :)
I'd expect that things will be back to their normal level of everflux by New Orleans. But we do have incremental indexing after all, so it's normal to expect a certain amount of change to the index every day or so (aka everflux).
In fact, everflux is a pretty good analogy. If you go back to summer 2003, update Fritz was the beginning of the transition from a monthly update to an incremental index. It caused a lot of comments, because plenty of people were happy with an index that only changed once a month.
I'm happy to confirm it's a new index.
I've been poking through the new index myself. I found one link from April, but almost all the links I found were newer.
Let's see, what else? I guess this index answers many of the questions about deepbot vs. freshbot.
The difference between the "deep crawl" and the "fresh crawl" was much more apparent this time last year when we were only pushing a new deep index about once a month.
SJ index is not old: Critter, the SJ index isn't an older index. You can verify that by doing a topical query such as SARS. The results are fresher in SJ than they are in our regular index.
As I interpret the quotes you posted, "index" refers to the collection of indexed pages, though not the entire collection: only the part of Google's total collection that forms the base from which search results are pulled. In other words, "the index" mentioned in those quotes is only the part of the "total index" that constitutes the "active/current index" (for lack of better words). The total collection of pages Google holds is larger than the active index, since it may contain several historical versions of each page, as well as duplicate pages and whatnot.
They do not index all the pages they spider, but the results people see are [in] the index... I posted this in another thread, and I think it sheds some light on the terminology. If you want to know why results would [could] be called the index, think database: they 'index' what they return as results. If it's in the index, it's in the results somewhere. If it's not in the index, it's not shown, but it may be in the underlying data from spidering.
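The database analogy above can be sketched in a few lines. This is a toy model, not Google's actual pipeline: every fetched page lands in a raw crawl store, but only pages that pass some selection rule (here, a made-up duplicate filter) get entries in the inverted index that queries run against.

```python
# Toy crawl store: everything the spider fetched, including a duplicate.
# All URLs, text, and the selection rule are invented for illustration.
crawl_store = {
    "http://example.com/":      "welcome to our widget shop",
    "http://example.com/dup":   "welcome to our widget shop",  # duplicate
    "http://example.com/about": "about the widget shop team",
}

def build_index(store):
    """Index only one copy of each distinct document (a toy selection rule)."""
    index, seen = {}, set()
    for url, text in store.items():
        if text in seen:  # duplicates stay in the crawl store, not the index
            continue
        seen.add(text)
        for word in text.split():
            index.setdefault(word, set()).add(url)
    return index

def search(index, word):
    """Only indexed pages can ever appear in results."""
    return sorted(index.get(word, set()))

index = build_index(crawl_store)
print(search(index, "widget"))
```

Note that http://example.com/dup exists in `crawl_store` but never appears in any result, which is the point being made: spidered but not indexed means held in the underlying data, yet invisible in the SERPs.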