Forum Moderators: open


linking for fresh "google" bot

1 site gets freshbot daily; 10 sites total; can't buy a freshbot. If I link the 1 to all 10

teeceo

12:35 am on Dec 20, 2002 (gmt 0)

10+ Year Member



Will that bring me a freshbot? I can link the one site that gets freshbot daily to the others; is this wise? No back linking, I know :).

teeceo.

ScottM

12:57 am on Dec 20, 2002 (gmt 0)

WebmasterWorld Senior Member 10+ Year Member



Probably won't work.

'Freshbot' seems to be on a 'PR' basis.

That being said, the links will help the other sites' PR somewhat, but not necessarily enough to achieve freshness. You could try, but it may not be enough.

teeceo

4:38 am on Dec 20, 2002 (gmt 0)

10+ Year Member



The thing is, two of the other sites are PR6 and the rest are PR5, while the site that gets freshbot daily is only PR5?

teeceo.

jimbeetle

5:06 am on Dec 20, 2002 (gmt 0)

WebmasterWorld Senior Member 10+ Year Member Top Contributors Of The Month



>>'Freshbot' seems to be on a 'PR' basis.

I'm not too sure about that. Our main site is PR<1 (reads 0). Freshbot visits on a regular basis and many of our pages have fresh tags -- if you can find them.

From the very little I know and have observed, freshbot does not do much deep crawling. It hits the page, checks to see if it has changed, then moves on. It does not look like it follows links -- nor does it seem to cache -- any of its findings. If it finds a fresh page it indexes it, puts a fresh date on it, and that's it.

I think that to get freshbot to do anything, your page already has to be in the index and it must have *changed*. (And there are always the exceptions of new pages.)

There was talk some years ago, when AV first started "freshness dates," about how to force a spider to re-index your page. Was it the header date or something else? At the time, some more informed people suggested that changing the overall page size would force the spider to re-index -- something along the lines of 75 bytes or so.

It still works.
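For what it's worth, the "change the page size" trick described above can be sketched in a few lines. This is purely illustrative (the `touch_page` helper, the comment format, and the ~80-byte padding are all invented here): refresh a hidden HTML comment on each update so the page's byte length and checksum differ between spider visits.

```python
import re
import time

def touch_page(html: str, padding: int = 80) -> str:
    """Replace (or add) a freshness comment, changing the page's byte size."""
    # Strip any previous freshness comment so they don't pile up.
    html = re.sub(r"<!-- fresh:.*?-->\n?", "", html)
    stamp = time.strftime("%Y-%m-%d %H:%M:%S")
    comment = f"<!-- fresh: {stamp} {'x' * padding} -->\n"
    # Insert just before </body> if present, otherwise append.
    if "</body>" in html:
        return html.replace("</body>", comment + "</body>")
    return html + comment

page = "<html><body><p>Same old stuff</p></body></html>"
updated = touch_page(page)
assert len(updated) > len(page)  # the byte size has changed
```

Whether a given spider actually keys on byte size is anyone's guess; the point is just that the change is mechanical and invisible to visitors.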

As for the linking part -- unless you're abusing it, it's fine. But to get freshbot to re-index those pages you have to do something to them. If they don't change, freshbot will ignore them. Hence the name.

Jim

coconutz

5:25 am on Dec 20, 2002 (gmt 0)

WebmasterWorld Senior Member 10+ Year Member



>>It does not look like it follows links -- nor does it seem to cache -- any of its findings.

I have a page that I updated last night (just prior to freshbot stopping by) and added links to 10 new pages I just created. Freshbot followed these links and the pages are in the index today with fresh dates, cached and with similar pages (30) listed for them.

jimbeetle

5:44 am on Dec 20, 2002 (gmt 0)

WebmasterWorld Senior Member 10+ Year Member Top Contributors Of The Month



>>(And there are always the exceptions of new pages).

I'm an old intelligence wimp (it's *probable* that the bad guys are invading the good guys -- even when we're watching them do it!), so I always try to qualify; there are just too many dang exceptions.

And -- the pages are NEW! They're FRESH. That's what freshbot wants.

Kind of like: Same old stuff, same old stuff, NEW, same old stuff, same old stuff, CHANGED. Bingo, you've got a new page and a changed page in freshbot.

So let's say this: freshbot is not a deep crawl. It checks pages that are already in its index. If it finds a changed page, it re-indexes it. If it finds a new link, it follows it and indexes that page.

Makes sense from a resources point of view.

sleet

8:18 am on Dec 20, 2002 (gmt 0)

10+ Year Member



>>'Freshbot' seems to be on a 'PR' basis.

I have a site that currently shows a grey PR bar (still trying to work out whether it has been penalised and, if so, why), and Freshbot has visited virtually every day this month.

vitaplease

8:52 am on Dec 20, 2002 (gmt 0)

WebmasterWorld Senior Member 10+ Year Member



Freshbot is for confusing WebmasterWorld members ;)

I look at it this way. Even if I do not fully understand its workings, this is how I would do it if I were Google.

I think Freshbot visits are split into two categories:

the most important pages, visited frequently; and the most topically hot pages, visited recently.

What are the most important pages that need frequent or recent spidering?

1. Pages whose content has recently become very important topically, according to searchers and the web community at large.

How to determine what those pages are? Best bet would be:

1.1 Pages that got fresh (recent) links from pages that are considered authoritative (high PageRank). To detect that, you need to spider those authoritative pages frequently ;)

1.2 Pages that turn up in the top 100 of the SERPs for search queries that are recently being searched for a lot compared to the recent average.

Take the generic search data Google has and, e.g., shows in a limited way in Zeitgeist. Google can detect which keyphrases are suddenly occurring more frequently. [wired.com]

2. Does this mean that on-page content should be changed frequently with the above set-up?

No, I would say not initially, but maybe for the authoritative fresh pages to remain fresh, they should have new or altered links on them once in a while?
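The two signals above can be made concrete with a toy priority score. Everything here is invented for illustration (the field names, the weight of 1.0 per surging query, the 2x surge threshold); it is just one way a crawler *could* combine "fresh links from high-PR pages" with "suddenly popular queries."

```python
def fresh_priority(page, surge_factor=2.0):
    """Score a page: fresh inbound links from high-PR pages (signal 1.1)
    plus search queries it ranks for that are suddenly surging (signal 1.2)."""
    score = 0.0
    for link in page["recent_inbound_links"]:
        score += link["source_pagerank"]                 # signal 1.1
    for q in page["ranking_queries"]:
        if q["recent_volume"] > surge_factor * q["average_volume"]:
            score += 1.0                                 # signal 1.2
    return score

hot_page = {
    "recent_inbound_links": [{"source_pagerank": 7}],
    "ranking_queries": [{"recent_volume": 500, "average_volume": 100}],
}
quiet_page = {"recent_inbound_links": [], "ranking_queries": []}
assert fresh_priority(hot_page) > fresh_priority(quiet_page)
```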

teeceo

8:43 am on Dec 21, 2002 (gmt 0)

10+ Year Member



I asked the other day if freshbot would jump from one site to the other if a link was added, and then I did just that and BAM: freshbot and a fresh cache. I linked my one site to all ten of my other sites (no crosslinking or backlinking), just one link from the front page, and that did the trick. Love you Google, googlebot, and even you GoogleGuy :). Have a great new year all, with the top dog of 2002, Google. Peace out.

teeceo.