Forum Moderators: open
In other words, after the crawl when Google caches pages for the next update, are those the links we will see regardless of changes throughout the month?
The best chance to ensure inclusion in the next update is to post during the update, since Googlebot and minty cease to crawl during that period.
In other words, after the crawl when Google caches pages for the next update, are those the links we will see regardless of changes throughout the month?
Yes, except when minty does a refresh crawl through your site, but that is only temporary.
Apart from the freshbot, I am seeing a 2 month lag for results to show. So something that is crawled today won't show until March.
I guess they use the freshbot as an offset for this. Is this accurate with other people's observations?
I am referring to links showing up in the "links to" results - it seems to take two months.
I have noticed on occasion that Googlebot skips a page (or possibly misses a change), but not too often.
I suspect the difference in observation might be the frequency at which Googlebot returns.
Over the last month (December) I got a bunch of new incoming links. They do not show up in the backlinks on the Google Toolbar now, after the most recent Google update, and my position actually slid down a couple of places. Is it really the case that the links we gain this month don't register until two months later?
Jabzebedwa
You should expect to wait one or two updates [webmasterworld.com] for backlinks to appear. New pages in Google's Fresh listings seem to get a ranking that's heavily influenced by their backlinks, though.
What do you think?
crawl4.googlebot.com [07/Jan/2003:08:41:47 -0500] GET / HTTP/1.0 "-" 200 6316
crawl1.googlebot.com [07/Jan/2003:17:36:45 -0500] GET / HTTP/1.0 "-" 200 6316
crawl5.googlebot.com [08/Jan/2003:01:41:31 -0500] GET /amy/index.php HTTP/1.0 "-" 200 5214
crawl8.googlebot.com [08/Jan/2003:06:28:40 -0500] GET /amy/index.php HTTP/1.0 "-" 200 5214
crawl9.googlebot.com [08/Jan/2003:04:43:57 -0500] GET /family/index.php HTTP/1.0 "-" 200 4942
crawl9.googlebot.com [08/Jan/2003:11:10:07 -0500] GET /family/index.php HTTP/1.0 "-" 200 4942
crawl1.googlebot.com [08/Jan/2003:03:54:52 -0500] GET /geocaching/index.php HTTP/1.0 "-" 200 5475
crawl7.googlebot.com [08/Jan/2003:09:19:11 -0500] GET /geocaching/index.php HTTP/1.0 "-" 200 5475
crawl4.googlebot.com [08/Jan/2003:01:25:03 -0500] GET /house/index.php HTTP/1.0 "-" 200 4521
crawl2.googlebot.com [08/Jan/2003:09:19:10 -0500] GET /house/index.php HTTP/1.0 "-" 200 4521
crawl5.googlebot.com [08/Jan/2003:09:43:24 -0500] GET /house/pictures.php HTTP/1.0 "-" 200 5873
crawl7.googlebot.com [08/Jan/2003:05:03:34 -0500] GET /ian/index.php HTTP/1.0 "-" 200 6107
crawl4.googlebot.com [08/Jan/2003:11:07:06 -0500] GET /ian/index.php HTTP/1.0 "-" 200 6107
crawl3.googlebot.com [07/Jan/2003:22:04:27 -0500] GET /lamaze/index.php HTTP/1.0 "-" 200 5130
crawl7.googlebot.com [08/Jan/2003:11:06:02 -0500] GET /lamaze/index.php HTTP/1.0 "-" 200 5130
crawl4.googlebot.com [08/Jan/2003:08:15:02 -0500] GET /mike/aboutsite.php HTTP/1.0 "-" 200 8265
crawl4.googlebot.com [08/Jan/2003:11:09:38 -0500] GET /mike/aboutsite.php HTTP/1.0 "-" 200 8265
crawl1.googlebot.com [08/Jan/2003:04:39:42 -0500] GET /mike/index.php HTTP/1.0 "-" 200 6926
crawl4.googlebot.com [08/Jan/2003:06:50:56 -0500] GET /mike/index.php HTTP/1.0 "-" 200 6926
crawl7.googlebot.com [08/Jan/2003:04:39:56 -0500] GET /putnamfest/index.php HTTP/1.0 "-" 200 5240
crawl4.googlebot.com [08/Jan/2003:07:22:49 -0500] GET /putnamfest/index.php HTTP/1.0 "-" 200 5240
crawl7.googlebot.com [08/Jan/2003:01:51:51 -0500] GET /rendezvous/index.php HTTP/1.0 "-" 200 4528
crawl1.googlebot.com [08/Jan/2003:07:57:19 -0500] GET /rendezvous/index.php HTTP/1.0 "-" 200 4528
crawl4.googlebot.com [07/Jan/2003:08:41:42 -0500] GET /robots.txt HTTP/1.0 "-" 200 30
crawl1.googlebot.com [07/Jan/2003:17:36:42 -0500] GET /robots.txt HTTP/1.0 "-" 200 30
crawl7.googlebot.com [08/Jan/2003:11:06:01 -0500] GET /robots.txt HTTP/1.0 "-" 200 30
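If you want to sift your own access logs for visits like those above, here is a minimal Python sketch that tallies Googlebot requests per path. The field layout assumed in the regex is modeled on the excerpt here, and the `googlebot_hits` helper is my own invention - adjust the pattern to whatever format your server actually writes:

```python
import re
from collections import Counter

# Assumed layout: host, [timestamp], request line. Tolerates an optional
# quote before the method so common-log variants also match.
LINE_RE = re.compile(r'^(?P<host>[^\[\s]+)\s*\[[^\]]+\]\s*"?\w+\s+(?P<path>\S+)')

def googlebot_hits(lines):
    """Tally requests per path that came from *.googlebot.com hosts."""
    counts = Counter()
    for line in lines:
        m = LINE_RE.match(line)
        if m and m.group("host").endswith(".googlebot.com"):
            counts[m.group("path")] += 1
    return counts

sample = [
    'crawl4.googlebot.com [08/Jan/2003:01:25:03 -0500] GET /house/index.php HTTP/1.0 "-" 200 4521',
    'crawl2.googlebot.com [08/Jan/2003:09:19:10 -0500] GET /house/index.php HTTP/1.0 "-" 200 4521',
    'proxy.example.com [08/Jan/2003:09:20:00 -0500] GET /house/index.php HTTP/1.0 "-" 200 4521',
]
print(googlebot_hits(sample))  # only the two googlebot.com hits are counted
```

Note this trusts the reverse-DNS name recorded in the log; a host claiming to be googlebot.com isn't verified here.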
Welcome to WebmasterWorld. Brett has a good glossary at the link below:
If fresh dating is still being used (others are seeing it), it means that Google has changed its method of fresh dating - in other words, perhaps determining how much of the page has changed, or how recently.
My changes tend to be "frequent but irregular" and often involve changes affecting only 10% of the info on a page.
The problem is that it's hard to know when Googlebot has crawled the other site (unless you're manipulating links between your own sites). Plus, like ciml, I see a couple of deep crawls per month, so it's hard to know when exactly your pages will be picked up for the next dance.
Finally, it's possible that Freshbot crawls throughout the month may feed into the main database - so that if the crawl is done on the 12th and Freshbot shows up again on the 20th, the fresher version of the page could be used.
That's just a conjecture, but it's what I would do if I were Google - the crawl is usually done weeks in advance of the dance, and I'd be surprised if it takes THAT much time to build the next database. I think they build some extra time into the schedule, for example so that they can revisit sites that were down on the first visit.