Forum Moderators: open
Other than that: yep, pretty much! Though don't confuse PR with SERPs. They're different things. PR is (very basically) a numeric representation of the value of a page, determined by its inbound links, while Search Engine Rank Position is the actual location of the page in any given search. Interconnected, but completely different things.
(1) Page is fetched
(2) Page appears in 'cache'
(3) Page is analysed/indexed
(4) PR of page is calculated (PR0 if duplicate content)
(5) PR of page propagates to toolbar PR servers
It isn't very clear what the timing is on all these, and there may be other complications, such as partial indexing, approximate estimation of PR, etc.
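The PR calculation in step (4) is, at its core, the power-iteration algorithm from the original PageRank paper. A minimal sketch, using a tiny hypothetical 4-page link graph (the graph, function name, and parameters are illustrative assumptions, not Google's actual code):

```python
# Minimal PageRank sketch via power iteration (hypothetical example graph).
# PR(p) = (1 - d)/N + d * sum(PR(q)/outdegree(q)) over pages q linking to p.

def pagerank(links, d=0.85, iterations=50):
    """links: dict mapping each page to the list of pages it links out to."""
    pages = list(links)
    n = len(pages)
    pr = {p: 1.0 / n for p in pages}          # start with a uniform distribution
    for _ in range(iterations):
        new = {}
        for p in pages:
            # sum the PR each inbound linker passes along, split among its out-links
            inbound = sum(pr[q] / len(links[q]) for q in pages if p in links[q])
            new[p] = (1 - d) / n + d * inbound
        pr = new
    return pr

graph = {
    "A": ["B", "C"],
    "B": ["C"],
    "C": ["A"],
    "D": ["C"],
}
scores = pagerank(graph)
print(sorted(scores, key=scores.get, reverse=True))  # pages ranked by PR
```

With enough iterations the scores converge, which is one reason step (4) takes time and may only be approximated between full recalculations.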
George
And just for my own curiosity: How on earth can Google 'cache' billions of internet pages?
Doesn't that mean they have a 'back-up' of the whole (spiderable) internet? Where do they keep it?
Thanks again,
Proust
Every 2 days (I think due to my meta tag: <meta name="REVISIT-AFTER" content="2 DAYS">).
Most here do not think that this meta tag or most meta tags do anything. I for one include them on the index page just in case. Perhaps your observation is accurate.
Spidering and crawling are the same function.
Good questions. Keep asking them.
jb
See
[www-db.stanford.edu...]
although that of course describes the original much smaller prototype, with about 24 million pages.
But yes, Google has huge numbers of machines with big disks.
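A back-of-envelope estimate shows why "big disks" suffice. All figures here are assumptions for illustration (an index of roughly 3 billion pages, as Google claimed around that time, and an assumed average compressed page size of 10 KB), not Google's real numbers:

```python
# Rough cache-size estimate (all inputs are assumptions, not real figures).
pages = 3_000_000_000    # assumed index size, circa 2003
avg_page_kb = 10         # assumed average compressed HTML size per page

total_tb = pages * avg_page_kb / 1024 / 1024 / 1024
print(f"~{total_tb:.0f} TB")  # → ~28 TB
```

A few tens of terabytes, spread across thousands of commodity machines, is entirely feasible even with 2003-era hardware.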
Reading other threads on this board, I'd be curious to know what the point is of doing searches on www2 and www3.
Do surfers take that route?
(Or is it 'a trick' to see which datacenters know what)
Can we say it only matters what www.google shows in its SERPs?
And why is 'all this talk' called the "Florida update"?
What has Florida got to do with it? (Isn't it a Californian thing?)
Thank you